mirror of https://github.com/facebook/rocksdb.git
5764 Commits
Author | SHA1 | Message | Date |
---|---|---|---|
Andrew Ryan Chang | af2a36d2c7 |
Record newest_key_time as a table property (#13083)
Summary: This PR does two things: 1. Adds a new table property `newest_key_time` 2. Uses this property to improve TTL and temperature change compaction. ### Context The current `creation_time` table property should really be named `oldest_ancestor_time`. For flush output files, this is the oldest key time in the file. For compaction output files, this is the minimum among all oldest key times in the input files. The problem with using the oldest ancestor time for TTL compaction is that we may end up dropping files earlier than we should. What we really want is the newest (i.e. "youngest") key time. Right now we take a roundabout way to estimate this value -- we take the value of the _oldest_ key time for the _next_ (newer) SST file. This is also why the current code has checks for `index >= 1`. Our new property `newest_key_time` is set to the file creation time during flushes, and the max over all input files for compactions. There were some additional smaller changes that I had to make for testing purposes: - Refactoring the mock table reader to support specifying my own table properties - Refactoring out a test utility method `GetLevelFileMetadatas` that would otherwise be copy/pasted in 3 places Credit to cbi42 for the problem explanation and proposed solution. ### Testing - Added a dedicated unit test for my `newest_key_time` logic in isolation (i.e. are we populating the property on flush and compaction) - Updated the existing unit tests (for TTL/temperature change compaction), which were comprehensive enough to break when I first made my code changes. I removed the test setup code which set the file metadata `oldest_ancestor_time`, so we know we are actually only using the new table property instead. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13083 Reviewed By: cbi42 Differential Revision: D65298604 Pulled By: archang19 fbshipit-source-id: 898ef91b692ab33f5129a2a16b64ecadd4c32432 |
|
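A minimal sketch of reading the new property back, assuming `newest_key_time` is surfaced as a `uint64_t` field on `TableProperties` as the PR describes (the field name is from the PR; its exact type here is an assumption):

```cpp
#include <iostream>
#include "rocksdb/db.h"

// Sketch: dump per-file table properties, including the newest_key_time
// field added by #13083 (field presence/type assumed per the PR text).
void PrintNewestKeyTimes(rocksdb::DB* db) {
  rocksdb::TablePropertiesCollection props;
  rocksdb::Status s = db->GetPropertiesOfAllTables(&props);
  if (!s.ok()) return;
  for (const auto& [file, tp] : props) {
    std::cout << file << " newest_key_time=" << tp->newest_key_time << "\n";
  }
}
```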
Peter Dillinger | a28cc4a38c |
Fix a leak of open Blob files (#13106)
Summary:
An earlier change (
|
|
Jay Huh | 1987313a94 |
TableProperties Serialization Follow Ups (#13095)
Summary: Follow ups from https://github.com/facebook/rocksdb/issues/13089 - Take `TableProperties` as `const &` instead of `std::shared_ptr<const TableProperties>` - Move TableProperties OptionsTypeMap definition to another place for other use outside of Remote Compaction - Add a test verify that the set of field serializations of TableProperties is complete Pull Request resolved: https://github.com/facebook/rocksdb/pull/13095 Test Plan: ``` ./options_settable_test --gtest_filter="*TablePropertiesAllFieldsSettable*" ``` I also intentionally tried adding a new field to `TableProperties`. If it's missed in the OptionsType map, the test detects the missing bytes set and successfully fails. Reviewed By: pdillinger Differential Revision: D65077398 Pulled By: jaykorean fbshipit-source-id: cf10560eb4a467ca523b11fd64945dbc86ac378f |
|
Peter Dillinger | e34087c524 |
Add a temporary hook for custom yielding in long-running op (#13103)
Summary: This is a simplified version of https://github.com/facebook/rocksdb/issues/13096, which called for a way to hook into long-running loops completely within RocksDB to change their thread priority (or similar). The current prime hook point is `DBIter::FindNextUserEntryInternal` likely because of iterating over tombstones. This is implemented using the weak symbol hack for ease of back-porting/patching, and while we get to know potential future requirements better for integration into the public API. (Consider potential relationships to `Env::GetThreadStatusUpdater()` and `TransactionDBMutexFactory`.) Pull Request resolved: https://github.com/facebook/rocksdb/pull/13103 Test Plan: Performance validated with db_bench and DEBUG_LEVEL=0: `./db_bench --benchmarks=fillseq,deleterandom,readseq[-X100] --value_size=1 --num=1000000` No consistent difference seen; variances likely in how DB / executable / memory were laid out. ``` With an empty hook: readseq [AVG 100 runs] : 1753018 (± 8850) ops/sec; 28.4 (± 0.1) MB/sec readseq [MEDIAN 100 runs] : 1763746 ops/sec; 28.6 MB/sec (recompile) readseq [AVG 100 runs] : 1789019 (± 10260) ops/sec; 29.0 (± 0.2) MB/sec readseq [MEDIAN 100 runs] : 1801849 ops/sec; 29.2 MB/sec Base: readseq [AVG 100 runs] : 1772196 (± 8240) ops/sec; 28.7 (± 0.1) MB/sec readseq [MEDIAN 100 runs] : 1780453 ops/sec; 28.9 MB/sec (recompile) readseq [AVG 100 runs] : 1777637 (± 7613) ops/sec; 28.8 (± 0.1) MB/sec readseq [MEDIAN 100 runs] : 1786657 ops/sec; 29.0 MB/sec With a functional hook (count number of calls into it): readseq [AVG 100 runs] : 1796733 (± 8854) ops/sec; 29.1 (± 0.1) MB/sec readseq [MEDIAN 100 runs] : 1804690 ops/sec; 29.3 MB/sec RocksDbThreadYield: 126915800 (recompile) readseq [AVG 100 runs] : 1775371 (± 10529) ops/sec; 28.8 (± 0.2) MB/sec readseq [MEDIAN 100 runs] : 1789046 ops/sec; 29.0 MB/sec RocksDbThreadYield: 126977000 Base: readseq [AVG 100 runs] : 1773071 (± 10657) ops/sec; 28.7 (± 0.2) MB/sec readseq [MEDIAN 100 runs] : 1783414 ops/sec; 28.9 MB/sec (recompile) readseq [AVG 100 runs] : 1750852 (± 10184) ops/sec; 28.4 (± 0.2) MB/sec readseq [MEDIAN 100 runs] : 1763587 ops/sec; 28.6 MB/sec ``` Reviewed By: george-reynya Differential Revision: D65235379 Pulled By: pdillinger fbshipit-source-id: 7829e4cc25a56d4c1801b8adf9c7f7aa49ab7aca |
|
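The weak-symbol hack mentioned above follows a well-known pattern; a sketch (the symbol name `RocksDbThreadYield` appears in the PR's test output, but the exact signature and wiring here are assumptions):

```cpp
// Sketch of the weak-symbol hook (GCC/Clang). Library-side code calls the
// hook only if some other translation unit provides a strong definition;
// otherwise the unresolved weak symbol compares equal to null.
extern "C" __attribute__((weak)) void RocksDbThreadYield();

inline void MaybeYieldInLongRunningLoop() {
  if (RocksDbThreadYield != nullptr) {
    RocksDbThreadYield();
  }
}

// Application side: defining the symbol activates the hook, e.g.
// extern "C" void RocksDbThreadYield() { std::this_thread::yield(); }
```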
leipeng | 8109046222 |
secondary instance: remove unnecessary cfds_changed->count() (#13086)
Summary: `cfds_changed->count(cfd)` is not needed, just blind insert. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13086 Reviewed By: hx235 Differential Revision: D64712400 Pulled By: cbi42 fbshipit-source-id: 4ef62aaa724c8397baa4ff350c16a7a8d04d7067 |
|
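For context, the change amounts to dropping a redundant lookup; `std::set`/`std::unordered_set` insertion is already a no-op for duplicates (illustrative sketch, not the actual diff):

```cpp
#include <unordered_set>

void MarkChanged(std::unordered_set<void*>* cfds_changed, void* cfd) {
  // Before: an extra hash lookup via count() guarded the insert.
  // if (cfds_changed->count(cfd) == 0) { cfds_changed->insert(cfd); }

  // After: insert() already ignores duplicates, so just blind-insert.
  cfds_changed->insert(cfd);
}
```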
Jay Huh | 57a8e69d4e |
Include TableProperties in the CompactionServiceResult (#13089)
Summary: In Remote Compactions, the primary host receives the serialized compaction result from the remote worker and deserializes it to build the output. Unlike Local Compactions, where table properties are built by TableBuilder, in Remote Compactions, these properties were not included in the serialized compaction result. This was likely done intentionally since the table properties are already available in the SST files. Because TableProperties are not populated as part of CompactionOutputs for remote compactions, we were unable to log the table properties in OnCompactionComplete and use them for verification. We are adding the TableProperties as part of the CompactionServiceOutputFile in this PR. By including the TableProperties in the serialized compaction result, the primary host will be able to access them and verify that they match the values read from the actual SST files. We are also populating `format_version` in the table_properties in TableBuilder. This has not been a big issue because the `format_version` is written to the SST files directly from `TableOptions.format_version`. When loaded from the SST files, it's populated directly by reading from the MetaBlock. This info has only been missing in the TableBuilder's Rep.props. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13089 Test Plan: ``` ./compaction_job_test ``` ``` ./compaction_service_test ``` Reviewed By: pdillinger Differential Revision: D64878740 Pulled By: jaykorean fbshipit-source-id: b6f2fdce851e6477ecb4dd5a87cdc62e176b746b |
|
Peter Dillinger | 3fd1f11d35 |
Fix race to make BlockBasedTableOptions effectively mutable (#13082)
Summary: Fix a longstanding race condition in SetOptions for `block_based_table_factory` options. The fix is mostly described in new, unified `TableFactoryParseFn()` in `cf_options.cc`. Also in this PR: * Adds a virtual `Clone()` function to TableFactory * To avoid behavioral hiccups with `SetOptions`, make the "hidden state" of `BlockBasedTableFactory` shared between an original and a clone. For example, `TailPrefetchStats` * `Configurable` was allowed to be copied but was not safe to do so, because the copy would have and use pointers into the object it was copied from (!!!). This has been fixed using relative instead of absolute pointers, though it's still technically relying on undefined behavior (consistent object layout for non-standard-layout types). For future follow-up: * Deny SetOptions on block cache options (dubious and not yet made safe with proper shared_ptr handling) Fixes https://github.com/facebook/rocksdb/issues/10079 Pull Request resolved: https://github.com/facebook/rocksdb/pull/13082 Test Plan: added to unit tests and crash test Ran TSAN blackbox crashtest for hours with options to amplify potential race (see https://github.com/facebook/rocksdb/issues/10079) Reviewed By: cbi42 Differential Revision: D64947243 Pulled By: pdillinger fbshipit-source-id: 8390299149f50e2a2b39a5247680f2637edb23c8 |
|
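With this fix in place, mutating a block-based table option through `SetOptions` is safe; a usage sketch (the nested option-string syntax is the documented way to address `block_based_table_factory` sub-options):

```cpp
#include <string>
#include "rocksdb/db.h"

// Sketch: dynamically change a BlockBasedTableOptions field via SetOptions.
rocksdb::Status SetBlockSize(rocksdb::DB* db, size_t block_size) {
  return db->SetOptions(
      db->DefaultColumnFamily(),
      {{"block_based_table_factory",
        "{block_size=" + std::to_string(block_size) + "}"}});
}
```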
Yu Zhang | 9c94559de7 |
Optimize compaction for standalone range deletion files (#13078)
Summary: This PR adds some optimization for compacting standalone range deletion files. A standalone range deletion file is one with just a single range deletion. Currently, such a file is used in bulk loading to atomically delete the old version of all data with one big range deletion while adding the new version of the data. These are the changes included in the PR: 1) When a standalone range deletion file is ingested via bulk loading, it's marked for compaction. 2) When picking input files during compaction picking, we attempt to only pick a standalone range deletion file when the oldest snapshot is at or above the file's seqno. To do this, the `PickCompaction` API is updated to take existing snapshots as an input. This is only done for the universal compaction + UDT disabled combination; for all other cases we save querying for existing snapshots and don't pass them. 3) At `Compaction` construction time, the input files will be filtered to examine if any of them can be skipped for the compaction iterator. For example, if all the data of the file is deleted by a standalone range tombstone, and the oldest snapshot is at or above such range tombstone, this file will be filtered out. 4) Every time a snapshot is released, we examine if any column family has standalone range deletion files that become eligible to be scheduled for compaction, and schedule one for it. Potential future improvements: - Add some dedicated statistics for the filtered files. - Extend this input filtering to L0 files' compactions cases when a newer L0 file could shadow an older L0 file Pull Request resolved: https://github.com/facebook/rocksdb/pull/13078 Test Plan: Added unit tests and stress tested a few rounds Reviewed By: cbi42 Differential Revision: D64879415 Pulled By: jowlyzhang fbshipit-source-id: 02b8683fddbe11f093bcaa0a38406deb39f44d9e |
|
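The bulk-loading pattern described above — one SST carrying a single range deletion, ingested before the new data — can be sketched with `SstFileWriter::DeleteRange` (the path is a placeholder; error handling is trimmed for brevity):

```cpp
#include "rocksdb/db.h"
#include "rocksdb/sst_file_writer.h"

// Sketch: create a standalone range deletion file and ingest it, so the old
// version of the data is atomically shadowed before the new data arrives.
rocksdb::Status IngestRangeDelete(rocksdb::DB* db,
                                  const rocksdb::Options& opts,
                                  const rocksdb::Slice& begin,
                                  const rocksdb::Slice& end) {
  rocksdb::SstFileWriter writer(rocksdb::EnvOptions(), opts);
  rocksdb::Status s = writer.Open("/tmp/range_del.sst");
  if (s.ok()) s = writer.DeleteRange(begin, end);  // the only entry
  if (s.ok()) s = writer.Finish();
  if (s.ok()) {
    rocksdb::IngestExternalFileOptions ifo;
    s = db->IngestExternalFile({"/tmp/range_del.sst"}, ifo);
  }
  return s;
}
```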
Changyu Bi | 8b38d4b400 |
Fix write tracing to check callback status (#13088)
Summary: We currently record write operations to the tracer before checking the callback in PipelinedWriteImpl and WriteImplWALOnly. For optimistic transaction DB, this means that an operation can be recorded to the tracer even when it's not written to the DB or WAL. I suspect this is the reason some of our optimistic txn crash tests are failing. The evidence is that the trace contains some duplicated entries and has more entries than the corresponding WAL. This PR moves the tracer logic to be after checking callback status. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13088 Test Plan: monitor crash test. Reviewed By: hx235 Differential Revision: D64711753 Pulled By: cbi42 fbshipit-source-id: 55fd1223538ec6294ce84a957c306d3d9d91df5f |
|
Levi Tamasi | c0be6a4b90 |
Support allow_unprepared_value for multi-CF iterators (#13079)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13079 The patch adds support for the new read option `allow_unprepared_value` to the multi-column-family iterators `CoalescingIterator` and `AttributeGroupIterator`. When this option is set, these iterators populate their value (`value()` + `columns()` or `attribute_groups()`) in an on-demand fashion when `PrepareValue()` is called. Calling `PrepareValue()` on the child iterators is similarly deferred until `PrepareValue()` is called on the main iterator. Reviewed By: jowlyzhang Differential Revision: D64570587 fbshipit-source-id: 783c8d408ad10074417dabca7b82c5e1fe5cab36 |
|
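A sketch of the deferred-value flow with the coalescing iterator (iterator factory per the current public headers; the `allow_unprepared_value` / `PrepareValue` behavior is as described in the PR):

```cpp
#include <vector>
#include "rocksdb/db.h"

// Sketch: iterate multiple CFs, populating value()/columns() only on demand.
void ScanCoalesced(rocksdb::DB* db,
                   const std::vector<rocksdb::ColumnFamilyHandle*>& cfhs) {
  rocksdb::ReadOptions ro;
  ro.allow_unprepared_value = true;  // defer value/columns population
  std::unique_ptr<rocksdb::Iterator> it = db->NewCoalescingIterator(ro, cfhs);
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    if (!it->PrepareValue()) break;  // on-demand load; also prepares children
    // it->value() / it->columns() are now valid for this key.
  }
}
```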
Jay Huh | 0ca691654f |
Fix Unit Test failing from uninit values in CompactionServiceInput (#13080)
Summary: # Summary There was a [test failure](https://github.com/facebook/rocksdb/actions/runs/11381731053/job/31663774089?fbclid=IwZXh0bgNhZW0CMTEAAR0YJVdnkKUhN15RJQrLsvicxqzReS6y4A14VFQbWu-81XJsSsyNepXAr2c_aem_JyQqNdtpeKFSA6CjlD-pDg) from an uninit value in the CompactionServiceInput ``` [ RUN ] CompactionJobTest.InputSerialization ==79945== Use of uninitialised value of size 8 ==79945== at 0x58EA69B: _itoa_word (_itoa.c:179) ==79945== by 0x5906574: __vfprintf_internal (vfprintf-internal.c:1687) ==79945== by 0x591AF99: __vsnprintf_internal (vsnprintf.c:114) ==79945== by 0x1654AE: std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > __gnu_cxx::__to_xstring<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, char>(int (*)(char*, unsigned long, char const*, __va_list_tag*), unsigned long, char const*, ...) (string_conversions.h:111) ==79945== by 0x5126C65: to_string (basic_string.h:6568) ==79945== by 0x5126C65: rocksdb::SerializeSingleOptionHelper(void const*, rocksdb::OptionType, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*) (options_helper.cc:541) ==79945== by 0x512718B: rocksdb::OptionTypeInfo::Serialize(rocksdb::ConfigOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >*) const (options_helper.cc:1084) ``` This was due to the `options_file_number` value not being set in the unit test. However, this value is guaranteed to be set in the normal path. It was just missing in the test path. Setting 0 as the default value for uninitialized fields in the `CompactionServiceInput` and `CompactionServiceResult` for now. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13080 Test Plan: Existing tests should be sufficient Reviewed By: cbi42 Differential Revision: D64573567 Pulled By: jaykorean fbshipit-source-id: 7843a951770c74445620623d069a52ba93ad94d5 |
|
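This class of bug is easy to keep out with in-class default initializers; an illustrative sketch (the field name is from the PR description, the struct shape here is an assumption):

```cpp
#include <cstdint>

// Sketch: zero-initialize serialized fields so any path that skips setting
// one (like the test did) never serializes garbage.
struct CompactionServiceInputSketch {
  uint64_t options_file_number = 0;  // was read uninitialized in the test
};
```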
Peter Dillinger | ac24f152a1 |
Refactor `table_factory` into MutableCFOptions (#13077)
Summary: This is setting up for a fix to a data race in SetOptions on BlockBasedTableOptions (BBTO), https://github.com/facebook/rocksdb/issues/10079 The race will be fixed by replacing `table_factory` with a modified copy whenever we want to modify a BBTO field. An argument could be made that this change creates more entanglement between features (e.g. BlobSource <-> MutableCFOptions), rather than (conceptually) minimizing the dependencies of each feature, but * Most of these things already depended on ImmutableOptions * Historically there has been a lot of plumbing (and possible small CPU overhead) involved in adding features that need to reach a lot of places, like `block_protection_bytes_per_key`. Keeping those wrapped up in options simplifies that. * SuperVersion management generally takes care of lifetime management of MutableCFOptions, so is not that difficult. (Crash test agrees so far.) There are some FIXME places where it is known to be unsafe to replace `block_cache` unless/until we handle shared_ptr tracking properly. HOWEVER, replacing `block_cache` is generally dubious, at least while existing users of the old block cache (e.g. table readers) can continue indefinitely. The change to cf_options.cc is essentially just moving code (not changing). I'm not concerned about the performance of copying another shared_ptr with MutableCFOptions, but I left a note about considering an improvement if more shared_ptr are added to it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13077 Test Plan: existing tests, crash test. Unit test DBOptionsTest.GetLatestCFOptions updated with some temporary logic. MemoryTest required some refactoring (simplification) for the change. Reviewed By: cbi42 Differential Revision: D64546903 Pulled By: pdillinger fbshipit-source-id: 69ae97ce5cf4c01b58edc4c5d4687eb1e5bf5855 |
|
Levi Tamasi | a44f4b8653 |
Couple of small improvements for (Iterator)AttributeGroup (#13076)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13076 The patch makes it possible to construct an `IteratorAttributeGroup` using an `AttributeGroup` instance, and implements `operator==` / `operator!=` for these two classes consistently. It also makes some minor improvements in the related test suites `CoalescingIteratorTest` and `AttributeGroupIteratorTest`. Reviewed By: jaykorean Differential Revision: D64510653 fbshipit-source-id: 95d3340168fa3b34e7ef534587b19131f0a27fb7 |
|
Jay Huh | f22557886e |
Fix Compaction Stats (#13071)
Summary: Compaction stats code is not so straightforward to understand. Here's a bit of context for this PR and why this change was made. - **CompactionStats (compaction_stats_.stats):** Internal stats about the compaction used for logging and public metrics. - **CompactionJobStats (compaction_job_stats_)**: The public stats at job level. It's part of Compaction event listener and included in the CompactionResult. - **CompactionOutputsStats**: Output stats only; resides in CompactionOutputs. It gets aggregated toward the CompactionStats (internal stats). The internal stats, `compaction_stats_.stats`, has the output information recorded from the compaction iterator, but it does not have any input information (input records, input files) until `UpdateCompactionStats()` gets called. We cannot simply call `UpdateCompactionStats()` to fill in the input information in the remote compaction (which is a subcompaction of the primary host's compaction) because the `compaction->inputs()` have the full list of input files and `UpdateCompactionStats()` takes the entire list of records in all files. `num_input_records` gets double-counted if multiple sub-compactions are submitted to the remote worker. The job level stats (in the case of remote compaction, it's subcompaction level stat), `compaction_job_stats_`, has the correct input records, but has no output information. We can use `UpdateCompactionJobStats(compaction_stats_.stats)` to set the output information (num_output_records, num_output_files, etc.) from the `compaction_stats_.stats`, but it also sets all other fields including the input information which sets all back to 0. Therefore, we are overriding `UpdateCompactionJobStats()` in the remote worker only to update the job level stats, `compaction_job_stats_`, with the output information of the internal stats. Basically, we are merging the aggregated output info from the internal stats and the aggregated input info from the compaction job stats. In this PR we are also fixing how we set `is_remote_compaction` in CompactionJobStats. - OnCompactionBegin event: if options.compaction_service is set, `is_remote_compaction=true` for all compactions except for trivial moves - OnCompactionCompleted event: if any of the sub_compactions were done remotely, the compaction level stats' `is_remote_compaction` will be true Other minor changes - num_output_records is already available in CompactionJobStats. No need to store it separately in CompactionResult. - total_bytes is not needed. - Renamed `SubcompactionState::AggregateCompactionStats()` to `SubcompactionState::AggregateCompactionOutputStats()` to make it clear that it's only aggregating output stats. - Renamed `SetTotalBytes()` to `AddBytesWritten()` to make it clearer that it's adding total written bytes from the compaction output. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13071 Test Plan: Unit Tests added and updated ``` ./compaction_service_test ``` Reviewed By: anand1976 Differential Revision: D64479657 Pulled By: jaykorean fbshipit-source-id: a7a776a00dc718abae95d856b661bcbafd3b0ed5 |
|
Changyu Bi | 8ad4c7efc4 |
Add an API to check if an SST file is generated by SstFileWriter (#13072)
Summary:
Some users want to check if a file in their DB was created by SstFileWriter and ingested into the DB. Files created by SstFileWriter record additional table properties
|
|
Changyu Bi | 787730c859 |
Add an ingestion option to not fill block cache (#13067)
Summary: add `IngestExternalFileOptions::fill_cache` to allow users to ingest files without loading index/filter/data and other blocks into block cache during file ingestion. This can be useful when users are ingesting files into a CF that is not available to readers yet. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13067 Test Plan: * unit test: `ExternalSSTFileTest.NoBlockCache` * ran one round of crash test with fill_cache disabled: `python3 ./tools/db_crashtest.py --simple blackbox --ops_per_thread=1000000 --interval=30 --ingest_external_file_one_in=200 --level0_stop_writes_trigger=200 --level0_slowdown_writes_trigger=100 --sync_fault_injection=0 --disable_wal=0 --manual_wal_flush_one_in=0` Reviewed By: jowlyzhang Differential Revision: D64356424 Pulled By: cbi42 fbshipit-source-id: b380c26f5987238e1ed7d42ceef0390cfaa0b8e2 |
|
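Usage is a single flag on the ingestion options (the option name `fill_cache` is per the PR):

```cpp
#include <string>
#include "rocksdb/db.h"

// Sketch: ingest a file without polluting the block cache, useful when the
// target CF is not serving reads yet.
rocksdb::Status IngestCold(rocksdb::DB* db, const std::string& file) {
  rocksdb::IngestExternalFileOptions ifo;
  ifo.fill_cache = false;  // skip loading index/filter/data blocks into cache
  return db->IngestExternalFile({file}, ifo);
}
```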
Levi Tamasi | ecc084d301 |
Support the on-demand loading of blobs during iteration (#13069)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13069 Currently, when using range scans with BlobDB, the iterator logic eagerly loads values from blob files when landing on a new entry. This can be wasteful in use cases where the values associated with some keys in the range are not used by the application. The patch introduces a new read option `allow_unprepared_value`; when specified, this option results in the above eager loading getting bypassed. Values needed by the application can be then loaded on an on-demand basis by calling the new iterator API `PrepareValue`. Note that currently, only regular single-CF iterators are supported; multi-CF iterators and transactions will be extended in later PRs. Reviewed By: jowlyzhang Differential Revision: D64360723 fbshipit-source-id: ee55502fa15dcb307a984922b9afc9d9da15d6e1 |
|
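The on-demand pattern for single-CF iteration looks like this sketch (the filter predicate `WantValue` is a hypothetical application-side function):

```cpp
#include "rocksdb/db.h"

bool WantValue(const rocksdb::Slice& key);  // hypothetical app-side filter

// Sketch: skip blob fetches for keys the application does not care about.
void ScanWithLazyBlobs(rocksdb::DB* db) {
  rocksdb::ReadOptions ro;
  ro.allow_unprepared_value = true;  // bypass eager blob loading
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    if (!WantValue(it->key())) continue;  // no blob I/O spent on this key
    if (!it->PrepareValue()) break;       // fetch the blob now
    // it->value() is now valid.
  }
}
```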
Jay Huh | da5e11310b |
Preserve Options File (#13074)
Summary: In https://github.com/facebook/rocksdb/issues/13025 , we made a change to load the latest options file in the remote worker instead of serializing the entire set of options. That was done under the assumption that OPTIONS files do not get purged often. While testing, we learned that this happens more often than we would like, so we want to prevent the OPTIONS file from getting purged anytime between when the remote compaction is scheduled and when the options are loaded in the remote worker. Like how we are protecting new SST files from getting purged using `min_pending_output`, we are doing the same by keeping track of `min_options_file_number`. Any OPTIONS file with a number greater than `min_options_file_number` will be protected from getting purged. Just like `min_pending_output`, `min_options_file_number` gets bumped when the compaction is done. This is only applicable when `options.compaction_service` is set. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13074 Test Plan: ``` ./compaction_service_test --gtest_filter="*PreservedOptionsLocalCompaction*" ./compaction_service_test --gtest_filter="*PreservedOptionsRemoteCompaction*" ``` Reviewed By: anand1976 Differential Revision: D64433795 Pulled By: jaykorean fbshipit-source-id: 0d902773f0909d9481dec40abf0b4c54ce5e86b2 |
|
Levi Tamasi | 55de26580a |
Small improvement to MultiCFIteratorImpl (#13075)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13075 The patch simplifies the iteration logic in `MultiCFIteratorImpl::{Advance,Populate}Iterator` a bit and adds some assertions to uniformly enforce the invariant that any iterators currently on the heap should be valid and have an OK status. Reviewed By: jaykorean Differential Revision: D64429566 fbshipit-source-id: 36bc22465285b670f859692a048e10f21df7da7a |
|
Yu Zhang | 2cb00c6921 |
Ingest files in separate batches if they overlap (#13064)
Summary: This PR assigns levels to files in separate batches if they overlap. This approach can potentially assign external files to lower levels. In the prepare stage, if the input files' key ranges overlap one another, we divide them up, in the user-specified order, into multiple batches, where the files in the same batch do not overlap with each other but key ranges could overlap between batches. If the input files' key ranges don't overlap, they always just make one default batch. During the level assignment stage, we assign levels to files one batch after another. It's guaranteed that files within one batch are not overlapping, so we assign a level to each file one after another. If the previous batch's uppermost level is specified, all files in this batch will be assigned to levels that are higher than that level. The uppermost level used by this batch of files is also tracked, so that it can be used by the next batch. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13064 Test Plan: Updated test and added new test Manually stress tested Reviewed By: cbi42 Differential Revision: D64428373 Pulled By: jowlyzhang fbshipit-source-id: 5aeff125c14094c87cc50088505010dfd2da3d6e |
|
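A condensed sketch of the batching step described above, under the assumption of byte-wise-comparable keys (the `FileRange` type and greedy split against the current batch are simplifications, not the actual RocksDB code):

```cpp
#include <string>
#include <vector>

struct FileRange { std::string smallest, largest; };  // simplified placeholder

// Sketch: split files (in user-specified order) into batches such that files
// within a batch never overlap; key ranges may overlap across batches.
std::vector<std::vector<FileRange>> MakeBatches(
    const std::vector<FileRange>& files) {
  std::vector<std::vector<FileRange>> batches;
  for (const auto& f : files) {
    bool overlaps = false;
    if (!batches.empty()) {
      for (const auto& g : batches.back()) {
        // Closed-interval overlap test on byte-wise-comparable keys.
        if (f.smallest <= g.largest && g.smallest <= f.largest) {
          overlaps = true;  // must start a new batch
          break;
        }
      }
    }
    if (batches.empty() || overlaps) batches.emplace_back();
    batches.back().push_back(f);
  }
  return batches;
}
```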
Jay Huh | dd76862b00 |
Add file_checksum from FileChecksumGenFactory and Tests for corrupted output (#13060)
Summary: - When `FileChecksumGenFactory` is set, include the `file_checksum` and `file_checksum_func_name` in the output file metadata - ~~In Remote Compaction, try opening the output files in the temporary directory to do a quick sanity check before returning the result with status.~~ - After offline discussion, we decided to rely on Primary's existing Compaction flow to sanity check the output files. If the output file is corrupted, we will still be able to catch it and not install it even after renaming it to cf_paths. The corrupted file in the cf_path won't be added to the MANIFEST and will be purged as part of the next `PurgeObsoleteFiles()` call. - Unit Test has been added to validate the above. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13060 Test Plan: Unit test added ``` ./compaction_service_test --gtest_filter="*CorruptedOutput*" ./compaction_service_test --gtest_filter="*TruncatedOutput*" ./compaction_service_test --gtest_filter="*CustomFileChecksum*" ./compaction_job_test --gtest_filter="*ResultSerialization*" ``` Reviewed By: cbi42 Differential Revision: D64189645 Pulled By: jaykorean fbshipit-source-id: 6cf28720169c960c80df257806bfee3c0d177159 |
|
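Enabling a checksum generator and reading the recorded checksums back is a two-step sketch (the factory and query calls are existing public RocksDB APIs; their propagation into remote compaction output metadata is what the PR adds):

```cpp
#include <memory>
#include "rocksdb/db.h"
#include "rocksdb/file_checksum.h"

// Sketch: opt into CRC32C file checksums (must be set before DB::Open).
rocksdb::Options MakeChecksumOptions() {
  rocksdb::Options options;
  options.file_checksum_gen_factory =
      rocksdb::GetFileChecksumGenCrc32cFactory();
  return options;
}

// Sketch: list the recorded checksum + function name for each live file.
void DumpChecksums(rocksdb::DB* db) {
  std::unique_ptr<rocksdb::FileChecksumList> list(
      rocksdb::NewFileChecksumList());
  db->GetLiveFilesChecksumInfo(list.get());
}
```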
Peter Dillinger | 351d2fd2b6 |
Make simple BlockBasedTableOptions mutable (#10021)
Summary: In theory, there should be no danger in mutability, as table builders and readers work from copies of BlockBasedTableOptions. However, there is currently an unresolved read-write race affecting SetOptions on BBTO fields. This should be generally acceptable for non-pointer options of 64 bits or less, but a fix is needed to make mutability general here. See https://github.com/facebook/rocksdb/issues/10079 This change systematically sets all of those "simple" options (and future such options) as mutable. (Resurrecting this PR perhaps preferable to proposed https://github.com/facebook/rocksdb/issues/13063) Pull Request resolved: https://github.com/facebook/rocksdb/pull/10021 Test Plan: Some unit test updates. XXX comment added to stress test code Reviewed By: cbi42 Differential Revision: D64360967 Pulled By: pdillinger fbshipit-source-id: ff220fa778331852fe331b42b76ac4adfcd2d760 |
|
Yu Zhang | 8592517c89 |
Remove stale entries from L0 files when UDT is not persisted (#13035)
Summary: When user-defined timestamps are not persisted, currently we replace the actual timestamp with the min timestamp after an entry is output from the compaction iterator. The compaction iterator won't be able to help with removing stale entries this way. This PR adds a wrapper iterator `TimestampStrippingIterator` for `MemTableIterator` that does the min timestamp replacement at the memtable iteration step. It is used by flush and can prevent stale entries from landing in L0 files. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13035 Test Plan: Added unit test Reviewed By: pdillinger, cbi42 Differential Revision: D63423682 Pulled By: jowlyzhang fbshipit-source-id: 087dcc9cee97b9ea51b8d2b88dc91c2984d54e55 |
|
Yu Zhang | a571cbed17 |
Use same logic to assign level for non-overlapping files in universal compaction (#13059)
Summary: When the input files are not overlapping, a.k.a. `files_overlap_=false`, it's best to assign them to non-L0 levels so that they are not one sorted run each. This can be done regardless of compaction style being leveled or universal without any side effects. Just a guess, but this special handling may be there because universal compaction used to have an invariant that sequence numbers on higher levels should not be smaller than sequence numbers on lower levels. File ingestion used to try to keep up that promise by doing "sequence number stealing" from the to-be-assigned level. However, that invariant is no longer true after deletion triggered compaction was added for universal compaction, and we also removed the sequence stealing logic from file ingestion. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13059 Test Plan: Updated existing tests Reviewed By: cbi42 Differential Revision: D64220100 Pulled By: jowlyzhang fbshipit-source-id: 70a83afba7f4c52d502c393844e6b3273d5cf628 |
|
Levi Tamasi | 2c9aa69a93 |
Refactor the BlobDB-related parts of DBIter (#13061)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13061 As groundwork for further changes, the patch refactors the BlobDB-related parts of `DBIter` by 1) introducing a new internal helper class `DBIter::BlobReader` that encapsulates all members needed to retrieve a blob value (namely, `Version` and the `ReadOptions` fields) and 2) factoring out and cleaning up some duplicate logic related to resolving blob references in the non-Merge (see `SetValueAndColumnsFromBlob`) and Merge (see `MergeWithBlobBaseValue`) cases. Reviewed By: jowlyzhang Differential Revision: D64078099 fbshipit-source-id: 22d5bd93e6e5be5cc9ecf6c4ee6954f2eb016aff |
|
Jay Huh | fe6c8cb1d6 |
Print unknown writebatch tag (#13062)
Summary:
Add additional info for debugging purposes by doing the same as what WBWI does
|
|
Hui Xiao | 632746bb5b |
Improve DBTest.DynamicLevelCompressionPerLevel (#13044)
Summary: **Context/Summary:** A part of this test is to verify compression conditionally happens depending on the shape of the LSM when `options.level_compaction_dynamic_level_bytes = true;`. It uses the total file size to determine whether compression has happened or not. This involves some hard-coded math that is hard to understand. This PR replaces those with statistics that directly show whether compression has happened or not. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13044 Test Plan: Existing test Reviewed By: jaykorean Differential Revision: D63666361 Pulled By: hx235 fbshipit-source-id: 8c9b1bea9b06ff1e3ed95c576aec6705159af137 |
|
Yu Zhang | 8181dfb1c4 |
Fix a bug for surfacing write unix time (#13057)
Summary: The write unix time from non-L0 files is not surfaced properly because the level's wrapper iterator doesn't have a `write_unix_time` implementation that delegates to the corresponding file. The unit test didn't catch this because it incorrectly destroyed the old db and reopened to check write time, instead of just reopening and checking. This fix also includes a change to support ldb's scan command getting write time, for easier debugging. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13057 Test Plan: Updated unit tests Reviewed By: pdillinger Differential Revision: D64015107 Pulled By: jowlyzhang fbshipit-source-id: 244474f78a034f80c9235eea2aa8a0f4e54dff59 |
|
Yu Zhang | 263fa15b44 |
Handle a possible overflow (#13046)
Summary: Stress test detects this variable could potentially overflow, so added some runtime handling to avoid it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13046 Test Plan: Existing tests Reviewed By: hx235 Differential Revision: D63911396 Pulled By: jowlyzhang fbshipit-source-id: 7c9abcd74ac9937b211c0ea4bb683677390837c5 |
|
Changyu Bi | bceb2dfe6a |
Introduce minimum compaction debt requirement for parallel compaction (#13054)
Summary: A small CF can trigger parallel compaction that applies to the entire DB. This is because the bottommost file size of a small CF can be too small compared to l0 files when an l0->lbase compaction happens. We prevent this by requiring some minimum on the compaction debt. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13054 Test Plan: updated unit test. Reviewed By: hx235 Differential Revision: D63861042 Pulled By: cbi42 fbshipit-source-id: 43bbf327988ef0ef912cd2fc700e3d096a8d2c18 |
|
Yu Zhang | 32dd657bad |
Add some per key optimization for UDT in memtable only feature (#13031)
Summary: This PR added some optimizations for the per-key handling for SST files for the user-defined timestamps in Memtable only feature. CPU profiling shows this part is a big culprit for regression. This optimization saves some string construction/destruction/appending/copying and vector operations like reserve/emplace_back. When iterating keys in a block, we need to copy some shared bytes from the previous key, put it together with the non-shared bytes, and find a right location to pad the min timestamp. Previously, we created a tmp local string buffer to first construct the key from its pieces, and then copied this local string's content into `IterKey`'s buffer. To avoid having this local string and this extra copy, instead of piecing together the key in a local string first, we just track all the pieces that make this key in a reused Slice array, and then copy the pieces in order into `IterKey`'s buffer. Since the previous key should be kept intact while we are copying some shared bytes from it, we added a secondary buffer in `IterKey` and alternate between primary buffer and secondary buffer. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13031 Test Plan: Existing tests. Reviewed By: ltamasi Differential Revision: D63416531 Pulled By: jowlyzhang fbshipit-source-id: 9819b0e02301a2dbc90621b2fe4f651bc912113c |
|
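The pieces-then-single-copy idea reads roughly like this sketch (names and buffer management are simplified; the real logic lives in `IterKey`):

```cpp
#include <cstring>
#include "rocksdb/slice.h"

// Sketch: instead of building the key in a temporary std::string and then
// copying it again, record the pieces (shared prefix from the previous key,
// non-shared bytes, padded min timestamp) and copy each piece exactly once
// into the destination buffer.
size_t AssembleKey(const rocksdb::Slice* pieces, size_t num_pieces,
                   char* dest /* assumed large enough */) {
  size_t offset = 0;
  for (size_t i = 0; i < num_pieces; ++i) {
    std::memcpy(dest + offset, pieces[i].data(), pieces[i].size());
    offset += pieces[i].size();
  }
  return offset;  // total key length
}
```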
Levi Tamasi | 917e98ff9e |
Templatize MultiCfIteratorImpl to avoid std::function's overhead (#13052)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13052 Currently, `MultiCfIteratorImpl` uses `std::function`s for `reset_func_` and `populate_func_`, which uses type erasure and has a performance overhead. The patch turns `MultiCfIteratorImpl` into a template that takes the two function object types as template parameters, and changes `AttributeGroupIteratorImpl` and `CoalescingIterator` so they pass in function objects of named types (as opposed to lambdas). Reviewed By: jaykorean Differential Revision: D63802598 fbshipit-source-id: e202f6d80c9054335e5b2571051a67a9e012c2d0 |
|
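The general technique — replacing `std::function` members with template parameters so calls can be inlined — looks like this sketch (names mirror the PR's description, not the exact RocksDB code):

```cpp
#include <utility>

// Before: type-erased callables, one indirect call each.
//   std::function<void()> reset_func_;
//   std::function<void()> populate_func_;

// After: the callable types are template parameters (named function-object
// types rather than lambdas), so the compiler can inline the calls.
template <typename ResetFunc, typename PopulateFunc>
class MultiCfIteratorSketch {
 public:
  MultiCfIteratorSketch(ResetFunc reset, PopulateFunc populate)
      : reset_func_(std::move(reset)), populate_func_(std::move(populate)) {}
  void Reset() { reset_func_(); }
  void Populate() { populate_func_(); }

 private:
  ResetFunc reset_func_;
  PopulateFunc populate_func_;
};
```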
Yu Zhang | 9375c3b635 |
Fix `needs_flush` assertion in file ingestion (#13045)
Summary: This PR makes file ingestion job's flush wait a bit further until the SuperVersion is also updated. This is necessary since follow up operations will use the current SuperVersion to do range overlapping check and level assignment. In debug mode, file ingestion job's second `NeedsFlush` call could have been invoked when the memtables are flushed but the SuperVersion hasn't been updated yet, triggering the assertion. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13045 Test Plan: Existing tests Manually stress tested Reviewed By: cbi42 Differential Revision: D63671151 Pulled By: jowlyzhang fbshipit-source-id: 95a169e58a7e59f6dd4125e7296e9060fe4c63a7 |
|
Peter Dillinger | dd23e84cad |
Re-implement GetApproximateMemTableStats for skip lists (#13047)
Summary: GetApproximateMemTableStats() could return some bad results with the standard skip list memtable. See this new db_bench test showing the dismal distribution of results when the actual number of entries in range is 1000: ``` $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=1000 ... filluniquerandom : 1.391 micros/op 718915 ops/sec 1.391 seconds 1000000 operations; 11.7 MB/s approximatememtablestats : 3.711 micros/op 269492 ops/sec 3.711 seconds 1000000 operations; Reported entry count stats (expected 1000): Count: 1000000 Average: 2344.1611 StdDev: 26587.27 Min: 0 Median: 965.8555 Max: 835273 Percentiles: P50: 965.86 P75: 1610.77 P99: 12618.01 P99.9: 74991.58 P99.99: 830970.97 ------------------------------------------------------ [ 0, 1 ] 131344 13.134% 13.134% ### ( 1, 2 ] 115 0.011% 13.146% ( 2, 3 ] 106 0.011% 13.157% ( 3, 4 ] 190 0.019% 13.176% ( 4, 6 ] 214 0.021% 13.197% ( 6, 10 ] 522 0.052% 13.249% ( 10, 15 ] 748 0.075% 13.324% ( 15, 22 ] 1002 0.100% 13.424% ( 22, 34 ] 1948 0.195% 13.619% ( 34, 51 ] 3067 0.307% 13.926% ( 51, 76 ] 4213 0.421% 14.347% ( 76, 110 ] 5721 0.572% 14.919% ( 110, 170 ] 11375 1.137% 16.056% ( 170, 250 ] 17928 1.793% 17.849% ( 250, 380 ] 36597 3.660% 21.509% # ( 380, 580 ] 77882 7.788% 29.297% ## ( 580, 870 ] 160193 16.019% 45.317% ### ( 870, 1300 ] 210098 21.010% 66.326% #### ( 1300, 1900 ] 167461 16.746% 83.072% ### ( 1900, 2900 ] 78678 7.868% 90.940% ## ( 2900, 4400 ] 47743 4.774% 95.715% # ( 4400, 6600 ] 17650 1.765% 97.480% ( 6600, 9900 ] 11895 1.190% 98.669% ( 9900, 14000 ] 4993 0.499% 99.168% ( 14000, 22000 ] 2384 0.238% 99.407% ( 22000, 33000 ] 1966 0.197% 99.603% ( 50000, 75000 ] 2968 0.297% 99.900% ( 570000, 860000 ] 999 0.100% 100.000% readrandom : 1.967 micros/op 508487 ops/sec 1.967 seconds 1000000 operations; 8.2 MB/s (1000000 of 1000000 found) ``` Perhaps the only good thing to say about the old implementation was that it was fast, though apparently not that fast. I've implemented a much more robust and reasonably fast new version of the function. It's still logarithmic but with some larger constant factors. The standard deviation from true count is around 20% or less, and roughly the CPU cost of two memtable point look-ups. See code comments for detail. ``` $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=1000 ... filluniquerandom : 1.478 micros/op 676434 ops/sec 1.478 seconds 1000000 operations; 11.0 MB/s approximatememtablestats : 2.694 micros/op 371157 ops/sec 2.694 seconds 1000000 operations; Reported entry count stats (expected 1000): Count: 1000000 Average: 1073.5158 StdDev: 197.80 Min: 608 Median: 1079.9506 Max: 2176 Percentiles: P50: 1079.95 P75: 1223.69 P99: 1852.36 P99.9: 1898.70 P99.99: 2176.00 ------------------------------------------------------ ( 580, 870 ] 134848 13.485% 13.485% ### ( 870, 1300 ] 747868 74.787% 88.272% ############### ( 1300, 1900 ] 116536 11.654% 99.925% ## ( 1900, 2900 ] 748 0.075% 100.000% readrandom : 1.997 micros/op 500654 ops/sec 1.997 seconds 1000000 operations; 8.1 MB/s (1000000 of 1000000 found) ``` We can already see that the distribution of results is dramatically better and wonderfully normal-looking, with relative standard deviation around 20%. The function is also FASTER, at least with these parameters. 
Let's look how this behavior generalizes, first *much* larger range: ``` $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=30000 filluniquerandom : 1.390 micros/op 719654 ops/sec 1.376 seconds 990000 operations; 11.7 MB/s approximatememtablestats : 1.129 micros/op 885649 ops/sec 1.129 seconds 1000000 operations; Reported entry count stats (expected 30000): Count: 1000000 Average: 31098.8795 StdDev: 3601.47 Min: 21504 Median: 29333.9303 Max: 43008 Percentiles: P50: 29333.93 P75: 33018.00 P99: 43008.00 P99.9: 43008.00 P99.99: 43008.00 ------------------------------------------------------ ( 14000, 22000 ] 408 0.041% 0.041% ( 22000, 33000 ] 749327 74.933% 74.974% ############### ( 33000, 50000 ] 250265 25.027% 100.000% ##### readrandom : 1.894 micros/op 528083 ops/sec 1.894 seconds 1000000 operations; 8.5 MB/s (989989 of 1000000 found) ``` This is *even faster* and relatively *more accurate*, with relative standard deviation closer to 10%. Code comments explain why. Now let's look at smaller ranges. Implementation quirks or conveniences: * When actual number in range is >= 40, the minimum return value is 40. * When the actual is <= 10, it is guaranteed to return that actual number. ``` $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=75 ... filluniquerandom : 1.417 micros/op 705668 ops/sec 1.417 seconds 999975 operations; 11.4 MB/s approximatememtablestats : 3.342 micros/op 299197 ops/sec 3.342 seconds 1000000 operations; Reported entry count stats (expected 75): Count: 1000000 Average: 75.1210 StdDev: 15.02 Min: 40 Median: 71.9395 Max: 256 Percentiles: P50: 71.94 P75: 89.69 P99: 119.12 P99.9: 166.68 P99.99: 229.78 ------------------------------------------------------ ( 34, 51 ] 38867 3.887% 3.887% # ( 51, 76 ] 550554 55.055% 58.942% ########### ( 76, 110 ] 398854 39.885% 98.828% ######## ( 110, 170 ] 11353 1.135% 99.963% ( 170, 250 ] 364 0.036% 99.999% ( 250, 380 ] 8 0.001% 100.000% readrandom : 1.861 micros/op 537224 ops/sec 1.861 seconds 1000000 operations; 8.7 MB/s (999974 of 1000000 found) $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=25 ... filluniquerandom : 1.501 micros/op 666283 ops/sec 1.501 seconds 1000000 operations; 10.8 MB/s approximatememtablestats : 5.118 micros/op 195401 ops/sec 5.118 seconds 1000000 operations; Reported entry count stats (expected 25): Count: 1000000 Average: 26.2392 StdDev: 4.58 Min: 25 Median: 28.4590 Max: 72 Percentiles: P50: 28.46 P75: 31.69 P99: 49.27 P99.9: 67.95 P99.99: 72.00 ------------------------------------------------------ ( 22, 34 ] 928936 92.894% 92.894% ################### ( 34, 51 ] 67960 6.796% 99.690% # ( 51, 76 ] 3104 0.310% 100.000% readrandom : 1.892 micros/op 528595 ops/sec 1.892 seconds 1000000 operations; 8.6 MB/s (1000000 of 1000000 found) $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=10 ... 
filluniquerandom : 1.642 micros/op 608916 ops/sec 1.642 seconds 1000000 operations; 9.9 MB/s approximatememtablestats : 3.042 micros/op 328721 ops/sec 3.042 seconds 1000000 operations; Reported entry count stats (expected 10): Count: 1000000 Average: 10.0000 StdDev: 0.00 Min: 10 Median: 10.0000 Max: 10 Percentiles: P50: 10.00 P75: 10.00 P99: 10.00 P99.9: 10.00 P99.99: 10.00 ------------------------------------------------------ ( 6, 10 ] 1000000 100.000% 100.000% #################### readrandom : 1.805 micros/op 554126 ops/sec 1.805 seconds 1000000 operations; 9.0 MB/s (1000000 of 1000000 found) ``` Remarkably consistent. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13047 Test Plan: new db_bench test for both performance and accuracy (see above); added to crash test; unit test updated. Reviewed By: cbi42 Differential Revision: D63722003 Pulled By: pdillinger fbshipit-source-id: cfc8613c085e87c17ecec22d82601aac2a5a1b26 |
|
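For reference, the function being re-implemented is queried like this (existing public `DB` API):

```cpp
#include <cstdint>
#include "rocksdb/db.h"

// Sketch: estimate how many memtable entries (and bytes) fall in [start, end).
void EstimateRange(rocksdb::DB* db, const rocksdb::Slice& start,
                   const rocksdb::Slice& end) {
  rocksdb::Range r(start, end);
  uint64_t count = 0;
  uint64_t size = 0;
  db->GetApproximateMemTableStats(r, &count, &size);
  // count/size are estimates; per the PR, the relative standard deviation of
  // count is now roughly 10-20% instead of the old long-tailed distribution.
}
```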
Jay Huh | 2a5ff78c12 |
More info in CompactionServiceJobInfo and CompactionJobStats (#13029)
Summary: Add the following to the `CompactionServiceJobInfo` - compaction_reason - is_full_compaction - is_manual_compaction - bottommost_level Added `is_remote_compaction` to the `CompactionJobStats` and set initial values to avoid UB for uninitialized values. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13029 Test Plan: ``` ./compaction_service_test --gtest_filter="*CompactionInfo*" ``` Reviewed By: anand1976 Differential Revision: D63322878 Pulled By: jaykorean fbshipit-source-id: f02a66ca45e660b9d354a43837d8ec6beb7621fb |
|
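One way to consume the new fields is from a compaction event listener; a sketch (`is_remote_compaction` is the `CompactionJobStats` field added by this PR, the rest is the existing listener API):

```cpp
#include <iostream>
#include "rocksdb/listener.h"

// Sketch: observe whether a completed compaction ran remotely.
class RemoteCompactionLogger : public rocksdb::EventListener {
 public:
  void OnCompactionCompleted(rocksdb::DB* /*db*/,
                             const rocksdb::CompactionJobInfo& info) override {
    if (info.stats.is_remote_compaction) {  // field added in #13029
      std::cout << "job " << info.job_id << " was compacted remotely\n";
    }
  }
};
```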
Peter Dillinger | a1a102ffce |
Steps toward deprecating implicit prefix seek, related fixes (#13026)
Summary: With some new use cases onboarding to prefix extractors/seek/filters, one of the risks is existing iterator code, e.g. for maintenance tasks, being unintentionally subject to prefix seek semantics. This is a longstanding known design flaw with prefix seek, and `prefix_same_as_start` and `auto_prefix_mode` were steps in the direction of making that obsolete. However, we can't just immediately set `total_order_seek` to true by default, because that would impact so much code instantly. Here we add a new DB option, `prefix_seek_opt_in_only` that basically allows users to transition to the future behavior when they are ready. When set to true, all iterators will be treated as if `total_order_seek=true` and then the only ways to get prefix seek semantics are with `prefix_same_as_start` or `auto_prefix_mode`. Related fixes / changes: * Make sure that `prefix_same_as_start` and `auto_prefix_mode` are compatible with (or override) `total_order_seek` (depending on your interpretation). * Fix a bug in which a new iterator after dynamically changing the prefix extractor might mix different prefix semantics between memtable and SSTs. Both should use the latest extractor semantics, which means iterators ignoring memtable prefix filters with an old extractor. And that means passing the latest prefix extractor to new memtable iterators that might use prefix seek. (Without the fix, the test added for this fails in many ways.) Suggested follow-up: * Investigate a FIXME where a MergeIteratorBuilder is created in db_impl.cc. No unit test detects a change in value that should impact correctness. * Make memtable prefix bloom compatible with `auto_prefix_mode`, which might require involving the memtablereps because we don't know at iterator creation time (only seek time) whether an auto_prefix_mode seek will be a prefix seek. * Add `prefix_same_as_start` testing to db_stress Pull Request resolved: https://github.com/facebook/rocksdb/pull/13026 Test Plan: tests updated, added. Add combination of `total_order_seek=true` and `auto_prefix_mode=true` to stress test. Ran `make blackbox_crash_test` for a long while. Manually ran tests with `prefix_seek_opt_in_only=true` as default, looking for unexpected issues. I inspected most of the results and migrated many tests to be ready for such a change (but not all). Reviewed By: ltamasi Differential Revision: D63147378 Pulled By: pdillinger fbshipit-source-id: 1f4477b730683d43b4be7e933338583702d3c25e |
|
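After opting in, prefix seek must be requested explicitly per iterator; a configuration sketch (`prefix_seek_opt_in_only` is the new option named in the PR; the rest are existing APIs):

```cpp
#include <memory>
#include "rocksdb/db.h"
#include "rocksdb/slice_transform.h"

// Sketch: opt out of implicit prefix seek globally, then opt in per read.
void ConfigurePrefixSeek(rocksdb::Options* options) {
  options->prefix_seek_opt_in_only = true;  // new DB option from #13026
  options->prefix_extractor.reset(rocksdb::NewFixedPrefixTransform(4));
}

std::unique_ptr<rocksdb::Iterator> PrefixScan(rocksdb::DB* db) {
  rocksdb::ReadOptions ro;
  ro.prefix_same_as_start = true;  // explicit prefix-seek semantics
  return std::unique_ptr<rocksdb::Iterator>(db->NewIterator(ro));
}
```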
Jay Huh | 5f4a8c3da4 |
Load latest options from OPTIONS file in Remote host (#13025)
Summary: We've been serializing and deserializing DBOptions and CFOptions (and other CF info) as part of `CompactionServiceInput`. These are all readily available in the OPTIONS file and the remote worker can read the OPTIONS file to obtain the same information. This helps reduce the size of the payload significantly. In a very rare scenario, if the OPTIONS file is purged due to an options change by the primary host at the same time the remote host is loading the latest options, the load may fail. In this case, we just retry once. This also solves the problem where we had to open the default CF with the CFOption from another CF if the remote compaction is for a non-default column family. (TODO comment in /db_impl_secondary.cc) Pull Request resolved: https://github.com/facebook/rocksdb/pull/13025 Test Plan: Unit Tests ``` ./compaction_service_test ``` ``` ./compaction_job_test ``` Also tested with Meta's internal Offload Infra Reviewed By: anand1976, cbi42 Differential Revision: D63100109 Pulled By: jaykorean fbshipit-source-id: b7162695e31e2c5a920daa7f432842163a5b156d |
|
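The remote worker can recover everything it needs with the existing options-file utility; a sketch (`LoadLatestOptions` is the public API in `rocksdb/utilities/options_util.h`):

```cpp
#include <string>
#include <vector>
#include "rocksdb/utilities/options_util.h"

// Sketch: reconstruct DBOptions and all CF descriptors from the OPTIONS file,
// instead of shipping serialized options inside CompactionServiceInput.
rocksdb::Status LoadWorkerOptions(
    const std::string& dbname, rocksdb::DBOptions* db_options,
    std::vector<rocksdb::ColumnFamilyDescriptor>* cf_descs) {
  rocksdb::ConfigOptions config_options;
  return rocksdb::LoadLatestOptions(config_options, dbname, db_options,
                                    cf_descs);
}
```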
Changyu Bi | 71e38dbe25 |
Compact one file at a time for FIFO temperature change compactions (#13018)
Summary: Per customer request, we should not merge multiple SST files together during temperature change compaction, since this can cause FIFO TTL compactions to be delayed. This PR changes the compaction picking logic to pick one file at a time. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13018 Test Plan: * updated some existing unit tests to test this new behavior. Reviewed By: jowlyzhang Differential Revision: D62883292 Pulled By: cbi42 fbshipit-source-id: 6a9fc8c296b5d9b17168ef6645f25153241c8b93 |
|
Levi Tamasi | 54ace7f340 |
Change the semantics of blob_garbage_collection_force_threshold to provide better control over space amp (#13022)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13022 Currently, `blob_garbage_collection_force_threshold` applies to the oldest batch of blob files, which is typically only a small subset of the blob files currently eligible for garbage collection. This can result in a form of head-of-line blocking: no GC-triggered compactions will be scheduled if the oldest batch does not currently exceed the threshold, even if a lot of higher-numbered blob files do. This can in turn lead to high space amplification that exceeds the soft bound implicit in the force threshold (e.g. 50% would suggest a space amp of <2 and 75% would imply a space amp of <4). The patch changes the semantics of this configuration threshold to apply to the entire set of blob files that are eligible for garbage collection based on `blob_garbage_collection_age_cutoff`. This provides more intuitive semantics for the option and can provide a better write amp/space amp trade-off. (Note that GC-triggered compactions still pick the same SST files as before, so triggered GC still targets the oldest blob files.) Reviewed By: jowlyzhang Differential Revision: D62977860 fbshipit-source-id: a999f31fe9cdda313de513f0e7a6fc707424d4a3 |
|
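For context, these are the knobs in play (all existing CF options; the PR changes how the force threshold is interpreted, not its name):

```cpp
#include "rocksdb/options.h"

// Sketch: enable blob GC; with the new semantics, force_threshold applies to
// the whole age-cutoff-eligible set of blob files, not just the oldest batch.
void ConfigureBlobGc(rocksdb::ColumnFamilyOptions* cf_opts) {
  cf_opts->enable_blob_files = true;
  cf_opts->enable_blob_garbage_collection = true;
  cf_opts->blob_garbage_collection_age_cutoff = 0.25;       // oldest 25% eligible
  cf_opts->blob_garbage_collection_force_threshold = 0.75;  // ~space amp < 4
}
```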
Peter Dillinger | 98c33cb8e3 |
Steps toward making IDENTITY file obsolete (#13019)
Summary: * Set write_dbid_to_manifest=true by default * Add new option write_identity_file (default true) that allows us to opt-in to future behavior without identity file * Refactor related DB open code to minimize code duplication _Recommend hiding whitespace changes for review_ Intended follow-up: add support to ldb for reading and even replacing the DB identity in the manifest. Could be a variant of `update_manifest` command or based on it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13019 Test Plan: unit tests and stress test updated for new functionality Reviewed By: anand1976 Differential Revision: D62898229 Pulled By: pdillinger fbshipit-source-id: c08b25cf790610b034e51a9de0dc78b921abbcf0 |
|
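The new default and the opt-in flag combine like this (option names per the PR):

```cpp
#include "rocksdb/options.h"

// Sketch: rely on the manifest for the DB ID and skip the IDENTITY file.
void ConfigureDbId(rocksdb::Options* options) {
  options->write_dbid_to_manifest = true;  // now the default per #13019
  options->write_identity_file = false;    // new option: opt in to no IDENTITY
}
```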
Peter Dillinger | 10984e8c26 |
Fix and generalize framework for filtering range queries, etc. (#13005)
Summary: There was a subtle design/contract bug in the previous version of range filtering in experimental.h If someone implemented a key segments extractor with "all or nothing" fixed-size segments, that could result in unsafe range filtering. For example, with two segments of width 3: ``` x = 0x|12 34 56|78 9A 00| y = 0x|12 34 56||78 9B z = 0x|12 34 56|78 9C 00| ``` Segment 1 of y (empty) is out of order with segment 1 of x and z. I have re-worked the contract to make it clear what does work, and implemented a standard extractor for fixed-size segments, CappedKeySegmentsExtractor. The safe approach for filtering is to consume as much as is available for a segment in the case of a short key. I have also added support for min-max filtering with the reverse byte-wise comparator, which is probably the 2nd most common comparator for RocksDB users (because of MySQL). It might seem that a min-max filter doesn't care about forward or reverse ordering, but it does when trying to determine whether an input range from segment values v1 to v2, where it so happens that v2 is byte-wise less than v1, is an empty forward interval or a non-empty reverse interval. At least in the current setup, we don't have that context. A new unit test (with some refactoring) tests CappedKeySegmentsExtractor, the reverse byte-wise comparator, and the corresponding min-max filter. I have also (contractually / mathematically) generalized the framework to comparators other than the byte-wise comparator, and made other generalizations to make the extractor limitations more explicitly connected to the particular filters and filtering used--at least in description. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13005 Test Plan: added unit tests as described Reviewed By: jowlyzhang Differential Revision: D62769784 Pulled By: pdillinger fbshipit-source-id: 0d41f0d0273586bdad55e4aa30381ebc861f7044 |
|
Nick Brekhus | 0611eb5b9d |
Fix orphaned files in SstFileManager (#13015)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13015 `Close()`ing a database now releases tracked files in `SstFileManager`. Previously this space would be leaked until the database was later reopened. Reviewed By: jowlyzhang Differential Revision: D62590773 fbshipit-source-id: 5461bd253d974ac4967ad52fee92e2650f8a9a28 |
|
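Typical setup for the affected component, for context (existing public APIs; the path handling is simplified):

```cpp
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/sst_file_manager.h"

// Sketch: track SST sizes with an SstFileManager; after this fix, Close()
// releases the tracked files instead of leaking them until reopen.
void OpenAndClose(const std::string& dbname) {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.sst_file_manager.reset(
      rocksdb::NewSstFileManager(rocksdb::Env::Default()));
  rocksdb::DB* db = nullptr;
  if (rocksdb::DB::Open(options, dbname, &db).ok()) {
    db->Close();  // now untracks the DB's files in SstFileManager
    delete db;
  }
}
```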
Changyu Bi | f97e33454f |
Fix a bug with auto recovery on WAL write error (#12995)
Summary:
A recent crash test failure shows that auto recovery from WAL write failure can cause CFs to be inconsistent. A unit test repro in P1569398553. The following is an example sequence of events:
```
0. manual_wal_flush is true. There are multiple CFs in a DB.
1. Submit a write batch with updates to multiple CF
2. A FlushWAL or a memtable switch that will try to write the buffered WAL data. Fail this write so that buffered WAL data is dropped:
|
|
Nicholas Ormrod | 0e04ef1a96 |
Deshim coro in fbcode/internal_repo_rocksdb
Summary: The following rules were deshimmed:
```
//folly/experimental/coro:accumulate -> //folly/coro:accumulate
//folly/experimental/coro:async_generator -> //folly/coro:async_generator
//folly/experimental/coro:async_pipe -> //folly/coro:async_pipe
//folly/experimental/coro:async_scope -> //folly/coro:async_scope
//folly/experimental/coro:async_stack -> //folly/coro:async_stack
//folly/experimental/coro:baton -> //folly/coro:baton
//folly/experimental/coro:blocking_wait -> //folly/coro:blocking_wait
//folly/experimental/coro:collect -> //folly/coro:collect
//folly/experimental/coro:concat -> //folly/coro:concat
//folly/experimental/coro:coroutine -> //folly/coro:coroutine
//folly/experimental/coro:current_executor -> //folly/coro:current_executor
//folly/experimental/coro:detach_on_cancel -> //folly/coro:detach_on_cancel
//folly/experimental/coro:detail_barrier -> //folly/coro:detail_barrier
//folly/experimental/coro:detail_barrier_task -> //folly/coro:detail_barrier_task
//folly/experimental/coro:detail_current_async_frame -> //folly/coro:detail_current_async_frame
//folly/experimental/coro:detail_helpers -> //folly/coro:detail_helpers
//folly/experimental/coro:detail_malloc -> //folly/coro:detail_malloc
//folly/experimental/coro:detail_manual_lifetime -> //folly/coro:detail_manual_lifetime
//folly/experimental/coro:detail_traits -> //folly/coro:detail_traits
//folly/experimental/coro:filter -> //folly/coro:filter
//folly/experimental/coro:future_util -> //folly/coro:future_util
//folly/experimental/coro:generator -> //folly/coro:generator
//folly/experimental/coro:gmock_helpers -> //folly/coro:gmock_helpers
//folly/experimental/coro:gtest_helpers -> //folly/coro:gtest_helpers
//folly/experimental/coro:inline_task -> //folly/coro:inline_task
//folly/experimental/coro:invoke -> //folly/coro:invoke
//folly/experimental/coro:merge -> //folly/coro:merge
//folly/experimental/coro:mutex -> //folly/coro:mutex
//folly/experimental/coro:promise -> //folly/coro:promise
//folly/experimental/coro:result -> //folly/coro:result
//folly/experimental/coro:retry -> //folly/coro:retry
//folly/experimental/coro:rust_adaptors -> //folly/coro:rust_adaptors
//folly/experimental/coro:scope_exit -> //folly/coro:scope_exit
//folly/experimental/coro:shared_lock -> //folly/coro:shared_lock
//folly/experimental/coro:shared_mutex -> //folly/coro:shared_mutex
//folly/experimental/coro:sleep -> //folly/coro:sleep
//folly/experimental/coro:small_unbounded_queue -> //folly/coro:small_unbounded_queue
//folly/experimental/coro:task -> //folly/coro:task
//folly/experimental/coro:timed_wait -> //folly/coro:timed_wait
//folly/experimental/coro:timeout -> //folly/coro:timeout
//folly/experimental/coro:traits -> //folly/coro:traits
//folly/experimental/coro:transform -> //folly/coro:transform
//folly/experimental/coro:unbounded_queue -> //folly/coro:unbounded_queue
//folly/experimental/coro:via_if_async -> //folly/coro:via_if_async
//folly/experimental/coro:with_async_stack -> //folly/coro:with_async_stack
//folly/experimental/coro:with_cancellation -> //folly/coro:with_cancellation
//folly/experimental/coro:bounded_queue -> //folly/coro:bounded_queue
//folly/experimental/coro:shared_promise -> //folly/coro:shared_promise
//folly/experimental/coro:cleanup -> //folly/coro:cleanup
//folly/experimental/coro:auto_cleanup_fwd -> //folly/coro:auto_cleanup_fwd
//folly/experimental/coro:auto_cleanup -> //folly/coro:auto_cleanup
```
The following headers were deshimmed:
```
folly/experimental/coro/Accumulate.h -> folly/coro/Accumulate.h
folly/experimental/coro/Accumulate-inl.h -> folly/coro/Accumulate-inl.h
folly/experimental/coro/AsyncGenerator.h -> folly/coro/AsyncGenerator.h
folly/experimental/coro/AsyncPipe.h -> folly/coro/AsyncPipe.h
folly/experimental/coro/AsyncScope.h -> folly/coro/AsyncScope.h
folly/experimental/coro/AsyncStack.h -> folly/coro/AsyncStack.h
folly/experimental/coro/Baton.h -> folly/coro/Baton.h
folly/experimental/coro/BlockingWait.h -> folly/coro/BlockingWait.h
folly/experimental/coro/Collect.h -> folly/coro/Collect.h
folly/experimental/coro/Collect-inl.h -> folly/coro/Collect-inl.h
folly/experimental/coro/Concat.h -> folly/coro/Concat.h
folly/experimental/coro/Concat-inl.h -> folly/coro/Concat-inl.h
folly/experimental/coro/Coroutine.h -> folly/coro/Coroutine.h
folly/experimental/coro/CurrentExecutor.h -> folly/coro/CurrentExecutor.h
folly/experimental/coro/DetachOnCancel.h -> folly/coro/DetachOnCancel.h
folly/experimental/coro/detail/Barrier.h -> folly/coro/detail/Barrier.h
folly/experimental/coro/detail/BarrierTask.h -> folly/coro/detail/BarrierTask.h
folly/experimental/coro/detail/CurrentAsyncFrame.h -> folly/coro/detail/CurrentAsyncFrame.h
folly/experimental/coro/detail/Helpers.h -> folly/coro/detail/Helpers.h
folly/experimental/coro/detail/Malloc.h -> folly/coro/detail/Malloc.h
folly/experimental/coro/detail/ManualLifetime.h -> folly/coro/detail/ManualLifetime.h
folly/experimental/coro/detail/Traits.h -> folly/coro/detail/Traits.h
folly/experimental/coro/Filter.h -> folly/coro/Filter.h
folly/experimental/coro/Filter-inl.h -> folly/coro/Filter-inl.h
folly/experimental/coro/FutureUtil.h -> folly/coro/FutureUtil.h
folly/experimental/coro/Generator.h -> folly/coro/Generator.h
folly/experimental/coro/GmockHelpers.h -> folly/coro/GmockHelpers.h
folly/experimental/coro/GtestHelpers.h -> folly/coro/GtestHelpers.h
folly/experimental/coro/detail/InlineTask.h -> folly/coro/detail/InlineTask.h
folly/experimental/coro/Invoke.h -> folly/coro/Invoke.h
folly/experimental/coro/Merge.h -> folly/coro/Merge.h
folly/experimental/coro/Merge-inl.h -> folly/coro/Merge-inl.h
folly/experimental/coro/Mutex.h -> folly/coro/Mutex.h
folly/experimental/coro/Promise.h -> folly/coro/Promise.h
folly/experimental/coro/Result.h -> folly/coro/Result.h
folly/experimental/coro/Retry.h -> folly/coro/Retry.h
folly/experimental/coro/RustAdaptors.h -> folly/coro/RustAdaptors.h
folly/experimental/coro/ScopeExit.h -> folly/coro/ScopeExit.h
folly/experimental/coro/SharedLock.h -> folly/coro/SharedLock.h
folly/experimental/coro/SharedMutex.h -> folly/coro/SharedMutex.h
folly/experimental/coro/Sleep.h -> folly/coro/Sleep.h
folly/experimental/coro/Sleep-inl.h -> folly/coro/Sleep-inl.h
folly/experimental/coro/SmallUnboundedQueue.h -> folly/coro/SmallUnboundedQueue.h
folly/experimental/coro/Task.h -> folly/coro/Task.h
folly/experimental/coro/TimedWait.h -> folly/coro/TimedWait.h
folly/experimental/coro/Timeout.h -> folly/coro/Timeout.h
folly/experimental/coro/Timeout-inl.h -> folly/coro/Timeout-inl.h
folly/experimental/coro/Traits.h -> folly/coro/Traits.h
folly/experimental/coro/Transform.h -> folly/coro/Transform.h
folly/experimental/coro/Transform-inl.h -> folly/coro/Transform-inl.h
folly/experimental/coro/UnboundedQueue.h -> folly/coro/UnboundedQueue.h
folly/experimental/coro/ViaIfAsync.h -> folly/coro/ViaIfAsync.h
folly/experimental/coro/WithAsyncStack.h -> folly/coro/WithAsyncStack.h
folly/experimental/coro/WithCancellation.h -> folly/coro/WithCancellation.h
folly/experimental/coro/BoundedQueue.h -> folly/coro/BoundedQueue.h
folly/experimental/coro/SharedPromise.h -> folly/coro/SharedPromise.h
folly/experimental/coro/Cleanup.h -> folly/coro/Cleanup.h
folly/experimental/coro/AutoCleanup-fwd.h -> folly/coro/AutoCleanup-fwd.h
folly/experimental/coro/AutoCleanup.h -> folly/coro/AutoCleanup.h
```
This is a codemod. It was automatically generated and will be landed once it is approved and tests are passing in sandcastle. You have been added as a reviewer by Sentinel or Butterfly. Autodiff project: dcoro Autodiff partition: fbcode.internal_repo_rocksdb Autodiff bookmark: ad.dcoro.fbcode.internal_repo_rocksdb Reviewed By: dtolnay Differential Revision: D62684411 fbshipit-source-id: 8dbd31ab64fcdd99435d322035b9668e3200e0a3 |
|
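The codemod above is purely mechanical: each build target and header under `folly/experimental/coro` maps to the same name under `folly/coro`. As a hedged illustration (the sample program is ours, with header names taken from the mapping above), a consuming file changes like this:
```
// Before the codemod, includes went through the folly/experimental shim:
// #include <folly/experimental/coro/BlockingWait.h>
// #include <folly/experimental/coro/Task.h>

// After the codemod: the canonical folly/coro paths, same symbols.
#include <folly/coro/BlockingWait.h>
#include <folly/coro/Task.h>

folly::coro::Task<int> answer() { co_return 42; }

int main() {
  // blockingWait drives the coroutine to completion on the current thread.
  return folly::coro::blockingWait(answer()) == 42 ? 0 : 1;
}
```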
anand76 | cabd2d8718 |
Fix a couple of missing cases of retry on corruption (#13007)
Summary: For SST checksum mismatch corruptions in the read path, RocksDB retries the read if the underlying file system supports verification and reconstruction of data (`FSSupportedOps::kVerifyAndReconstructRead`). There were a couple of places where the retry was missing - reading the SST footer and the properties block. This PR fixes the retry in those cases. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13007 Test Plan: Add new unit tests Reviewed By: jaykorean Differential Revision: D62519186 Pulled By: anand1976 fbshipit-source-id: 50aa38f18f2a53531a9fc8d4ccdf34fbf034ed59 |
|
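A hedged sketch of the retry shape from #13007: `read_once` is a stand-in for the footer or properties-block read, while the `FSSupportedOps` query and the `IOOptions::verify_and_reconstruct_read` flag are the actual RocksDB file-system APIs involved. This is an illustration of the pattern, not the code in the PR.
```
#include <functional>

#include "rocksdb/file_system.h"

using namespace ROCKSDB_NAMESPACE;

// Retry a read once with verify-and-reconstruct if the first attempt hit
// a checksum mismatch and the file system can reconstruct the data.
IOStatus ReadWithCorruptionRetry(
    FileSystem* fs,
    const std::function<IOStatus(const IOOptions&)>& read_once) {
  IOOptions opts;
  IOStatus s = read_once(opts);
  if (s.IsCorruption()) {
    int64_t supported = 0;
    fs->SupportedOps(supported);
    if (supported & (1LL << FSSupportedOps::kVerifyAndReconstructRead)) {
      opts.verify_and_reconstruct_read = true;  // ask the FS to reconstruct
      s = read_once(opts);
    }
  }
  return s;
}
```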
Changyu Bi | e490f2b051 |
Fix a bug in ReFitLevel() where `FileMetaData::being_compacted` is not cleared (#13009)
Summary: In ReFitLevel(), we were not setting `being_compacted` back to false once the refit was done. This is not an issue if ReFitLevel() succeeds, since new FileMetaData is created for the files at the target level. However, if there's an error during ReFitLevel(), e.g., a Manifest write failure, we should clear the `being_compacted` field for these files. Otherwise, these files will not be picked for compaction until the DB is reopened. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13009 Test Plan: existing test. - stress test failure in T200339331 should not happen anymore. Reviewed By: hx235 Differential Revision: D62597169 Pulled By: cbi42 fbshipit-source-id: 0ba659806da6d6d4b42384fc95268b2d7bad720e |
|
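A minimal sketch of the fix shape in #13009, with stand-in types; `WriteManifestEdit` and the structs below are hypothetical simplifications, not the actual ReFitLevel() code:
```
#include <vector>

// Stand-ins for RocksDB internals, for illustration only.
struct FileMetaData { bool being_compacted = false; };
struct Status {
  bool ok_ = true;
  bool ok() const { return ok_; }
};
Status WriteManifestEdit();  // hypothetical step that can fail

Status ReFitLevelSketch(std::vector<FileMetaData*>& input_files) {
  for (FileMetaData* f : input_files) {
    f->being_compacted = true;  // reserve the files for the refit
  }
  Status s = WriteManifestEdit();  // e.g., Manifest write failure
  if (!s.ok()) {
    // The fix: on failure, clear the flag; otherwise these files would
    // never be picked for compaction again until the DB is reopened.
    for (FileMetaData* f : input_files) {
      f->being_compacted = false;
    }
  }
  return s;
}
```
On success no clearing is needed, because new FileMetaData objects are created for the files at the target level.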
Yu Zhang | 43bc71fef6 |
Add an internal API MemTableList::GetEditForDroppingCurrentVersion (#13001)
Summary: Prepare this internal API to be used by atomic data replacement. The main purpose of this API is to get a `VersionEdit` that marks the entire current `MemTableListVersion` as dropped. Flush needs similar functionality when installing its results, so that logic is refactored into a util function `GetDBRecoveryEditForObsoletingMemTables` shared by flush and this internal API. To test this internal API, flush's result installation is redirected to use it in debug mode whenever all the immutable MemTables are being flushed. This should achieve exactly the same results, just with a duplicated `VersionEdit::log_number` field that doesn't upset the recovery logic. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13001 Test Plan: Existing tests Reviewed By: pdillinger Differential Revision: D62309591 Pulled By: jowlyzhang fbshipit-source-id: e25914d9a2e281c25ab7ee31a66eaf6adfae4b88 |
|
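A hedged sketch of what such an edit amounts to, using the setters on RocksDB's internal `VersionEdit` (`db/version_edit.h`); the real API also handles the memtable bookkeeping, so this shows only the recovery-relevant core under our assumptions:
```
#include "db/version_edit.h"  // rocksdb-internal header

using ROCKSDB_NAMESPACE::VersionEdit;

// Dropping the current MemTableListVersion is recorded, for recovery
// purposes, as an advance of the column family's log number: WALs older
// than next_log_number no longer need to be replayed on reopen.
VersionEdit EditForDroppingMemTablesSketch(uint32_t cf_id,
                                           uint64_t next_log_number) {
  VersionEdit edit;
  edit.SetColumnFamily(cf_id);
  edit.SetLogNumber(next_log_number);
  return edit;
}
```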
Yu Zhang | 0c6e9c036a |
Make compaction always use the input version with extra ref protection (#12992)
Summary: `Compaction` is already creating its own ref for the input Version: |
|
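The protection in #12992 is plain reference counting on the input `Version`. A hedged sketch of the pattern (`Version::Ref()`/`Unref()` are rocksdb-internal APIs, and `Unref()` must be called with the DB mutex held; the surrounding function is illustrative):
```
#include "db/version_set.h"  // rocksdb-internal header declaring Version

using ROCKSDB_NAMESPACE::Version;

// Illustrative only: hold an extra ref on the input Version for the
// whole compaction so it cannot be freed underneath the job, even if
// other references go away while the job runs.
void RunCompactionWithExtraRefSketch(Version* input_version) {
  input_version->Ref();  // extra protection taken up front
  // ... pick files, run the compaction, install the results ...
  input_version->Unref();  // release when the job is done (DB mutex held)
}
```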
Yu Zhang | a24574e80a |
Add documentation for background job's state transition (#12994)
Summary: The `SchedulePending*` APIs are a bit confusing since they don't immediately schedule the work and can be confused with the actual scheduling. So I have renamed them to `EnqueuePending*` and added some documentation for the corresponding state transitions of these background jobs. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12994 Test Plan: existing tests Reviewed By: cbi42 Differential Revision: D62252746 Pulled By: jowlyzhang fbshipit-source-id: ee68be6ed33070cad9a5004b7b3e16f5bcb041bf |
|
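A hedged sketch of the distinction the rename in #12994 captures: enqueueing only records that work is pending, while a separate step hands it to the thread pool. The queue and `BGWorkFlushSketch` are illustrative stand-ins, not DBImpl's actual members; `Env::Schedule` is the real thread-pool API:
```
#include <deque>

#include "rocksdb/env.h"

using ROCKSDB_NAMESPACE::Env;

static std::deque<void*> pending_flush_queue;  // stand-in for DBImpl state

static void BGWorkFlushSketch(void* /*arg*/) {
  // state: scheduled -> running; pop from the queue and do the flush
}

// "Enqueue": only a state transition to "pending"; nothing runs yet.
void EnqueuePendingFlushSketch(void* cfd) {
  pending_flush_queue.push_back(cfd);
}

// The actual scheduling step: pending -> scheduled on the thread pool.
void MaybeScheduleFlushSketch(Env* env) {
  if (!pending_flush_queue.empty()) {
    env->Schedule(&BGWorkFlushSketch, nullptr, Env::Priority::HIGH);
  }
}
```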
Changyu Bi | cd6f802ccb |
Add a new file ingestion option `link_files` (#12980)
Summary: Add an option `IngestExternalFileOptions::link_files` that hard links input files and preserves the original file links after ingestion, unlike `move_files`, which unlinks input files after ingestion. This can be useful together with `allow_db_generated_files` to ingest files from another DB. Also reverted the change to `move_files` from https://github.com/facebook/rocksdb/issues/12959 to simplify the contract, so that it always unlinks input files without exception. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12980 Test Plan: updated unit test `ExternSSTFileLinkFailFallbackTest.LinkFailFallBackExternalSst` to verify that input files are not unlinked. Reviewed By: pdillinger Differential Revision: D61925111 Pulled By: cbi42 fbshipit-source-id: eadaca72e1ae5288bdd195d57158466e5656fa62 |
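A short usage sketch of the new option from #12980; the file path is a placeholder, and `allow_db_generated_files` is the companion option mentioned above:
```
#include "rocksdb/db.h"
#include "rocksdb/options.h"

using namespace ROCKSDB_NAMESPACE;

// Hard-link SST files from another DB into this one, keeping the source
// files' original links intact (move_files would unlink them instead).
Status IngestLinkedFiles(DB* db) {
  IngestExternalFileOptions ifo;
  ifo.link_files = true;                // hard link, preserve source links
  ifo.allow_db_generated_files = true;  // accept files produced by a DB
  return db->IngestExternalFile({"/path/to/other_db/000123.sst"}, ifo);
}
```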