mirror of https://github.com/facebook/rocksdb.git
275 Commits
Author | SHA1 | Message | Date |
---|---|---|---|
Peter Dillinger | 351d2fd2b6 |
Make simple BlockBasedTableOptions mutable (#10021)
Summary: In theory, there should be no danger in mutability, as table builders and readers work from copies of BlockBasedTableOptions. However, there is currently an unresolved read-write race affecting SetOptions on BBTO fields. This should be generally acceptable for non-pointer options of 64 bits or less, but a fix is needed to make mutability fully general here. See https://github.com/facebook/rocksdb/issues/10079 This change systematically sets all of those "simple" options (and future such options) as mutable. (Resurrecting this PR is perhaps preferable to the proposed https://github.com/facebook/rocksdb/issues/13063.) Pull Request resolved: https://github.com/facebook/rocksdb/pull/10021 Test Plan: Some unit test updates. XXX comment added to stress test code Reviewed By: cbi42 Differential Revision: D64360967 Pulled By: pdillinger fbshipit-source-id: ff220fa778331852fe331b42b76ac4adfcd2d760 |
|
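For illustration, a minimal sketch (not from the commit) of changing one of these now-mutable options on a live DB via the string form of SetOptions(); the 16 KiB value and the helper name are just examples:

```cpp
#include <cassert>
#include "rocksdb/db.h"

// Sketch: assumes `db` is an open rocksdb::DB* whose column family uses a
// BlockBasedTableFactory. With this change, "simple" BlockBasedTableOptions
// fields such as block_size can be changed without reopening the DB.
void BumpBlockSize(rocksdb::DB* db) {
  rocksdb::Status s = db->SetOptions(
      {{"block_based_table_factory", "{block_size=16384;}"}});
  assert(s.ok());
}
```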
Yu Zhang | a571cbed17 |
Use same logic to assign level for non-overlapping files in universal compaction (#13059)
Summary: When the input files are not overlapping, a.k.a. `files_overlap_=false`, it's best to assign them to non-L0 levels so that they are not one sorted run each. This can be done regardless of compaction style being leveled or universal, without any side effects. My guess is that this special handling existed because universal compaction used to have an invariant that sequence numbers on higher levels should not be smaller than sequence numbers on lower levels. File ingestion used to try to keep that promise by doing "sequence number stealing" from the to-be-assigned level. However, that invariant is no longer true after deletion-triggered compaction was added for universal compaction, and we also removed the sequence stealing logic from file ingestion. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13059 Test Plan: Updated existing tests Reviewed By: cbi42 Differential Revision: D64220100 Pulled By: jowlyzhang fbshipit-source-id: 70a83afba7f4c52d502c393844e6b3273d5cf628 |
|
Changyu Bi | bceb2dfe6a |
Introduce minimum compaction debt requirement for parallel compaction (#13054)
Summary: A small CF can trigger parallel compaction that applies to the entire DB. This is because the bottommost file size of a small CF can be too small compared to L0 files when an L0->Lbase compaction happens. We prevent this by requiring some minimum on the compaction debt. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13054 Test Plan: updated unit test. Reviewed By: hx235 Differential Revision: D63861042 Pulled By: cbi42 fbshipit-source-id: 43bbf327988ef0ef912cd2fc700e3d096a8d2c18 |
|
Peter Dillinger | dd23e84cad |
Re-implement GetApproximateMemTableStats for skip lists (#13047)
Summary: GetApproximateMemTableStats() could return some bad results with the standard skip list memtable. See this new db_bench test showing the dismal distribution of results when the actual number of entries in range is 1000: ``` $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=1000 ... filluniquerandom : 1.391 micros/op 718915 ops/sec 1.391 seconds 1000000 operations; 11.7 MB/s approximatememtablestats : 3.711 micros/op 269492 ops/sec 3.711 seconds 1000000 operations; Reported entry count stats (expected 1000): Count: 1000000 Average: 2344.1611 StdDev: 26587.27 Min: 0 Median: 965.8555 Max: 835273 Percentiles: P50: 965.86 P75: 1610.77 P99: 12618.01 P99.9: 74991.58 P99.99: 830970.97 ------------------------------------------------------ [ 0, 1 ] 131344 13.134% 13.134% ### ( 1, 2 ] 115 0.011% 13.146% ( 2, 3 ] 106 0.011% 13.157% ( 3, 4 ] 190 0.019% 13.176% ( 4, 6 ] 214 0.021% 13.197% ( 6, 10 ] 522 0.052% 13.249% ( 10, 15 ] 748 0.075% 13.324% ( 15, 22 ] 1002 0.100% 13.424% ( 22, 34 ] 1948 0.195% 13.619% ( 34, 51 ] 3067 0.307% 13.926% ( 51, 76 ] 4213 0.421% 14.347% ( 76, 110 ] 5721 0.572% 14.919% ( 110, 170 ] 11375 1.137% 16.056% ( 170, 250 ] 17928 1.793% 17.849% ( 250, 380 ] 36597 3.660% 21.509% # ( 380, 580 ] 77882 7.788% 29.297% ## ( 580, 870 ] 160193 16.019% 45.317% ### ( 870, 1300 ] 210098 21.010% 66.326% #### ( 1300, 1900 ] 167461 16.746% 83.072% ### ( 1900, 2900 ] 78678 7.868% 90.940% ## ( 2900, 4400 ] 47743 4.774% 95.715% # ( 4400, 6600 ] 17650 1.765% 97.480% ( 6600, 9900 ] 11895 1.190% 98.669% ( 9900, 14000 ] 4993 0.499% 99.168% ( 14000, 22000 ] 2384 0.238% 99.407% ( 22000, 33000 ] 1966 0.197% 99.603% ( 50000, 75000 ] 2968 0.297% 99.900% ( 570000, 860000 ] 999 0.100% 100.000% readrandom : 1.967 micros/op 508487 ops/sec 1.967 seconds 1000000 operations; 8.2 MB/s (1000000 of 1000000 found) ``` Perhaps the only good thing to say about the old implementation was that it was fast, though apparently not that fast. I've implemented a much more robust and reasonably fast new version of the function. It's still logarithmic but with some larger constant factors. The standard deviation from true count is around 20% or less, and roughly the CPU cost of two memtable point look-ups. See code comments for detail. ``` $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=1000 ... filluniquerandom : 1.478 micros/op 676434 ops/sec 1.478 seconds 1000000 operations; 11.0 MB/s approximatememtablestats : 2.694 micros/op 371157 ops/sec 2.694 seconds 1000000 operations; Reported entry count stats (expected 1000): Count: 1000000 Average: 1073.5158 StdDev: 197.80 Min: 608 Median: 1079.9506 Max: 2176 Percentiles: P50: 1079.95 P75: 1223.69 P99: 1852.36 P99.9: 1898.70 P99.99: 2176.00 ------------------------------------------------------ ( 580, 870 ] 134848 13.485% 13.485% ### ( 870, 1300 ] 747868 74.787% 88.272% ############### ( 1300, 1900 ] 116536 11.654% 99.925% ## ( 1900, 2900 ] 748 0.075% 100.000% readrandom : 1.997 micros/op 500654 ops/sec 1.997 seconds 1000000 operations; 8.1 MB/s (1000000 of 1000000 found) ``` We can already see that the distribution of results is dramatically better and wonderfully normal-looking, with relative standard deviation around 20%. The function is also FASTER, at least with these parameters. 
Let's look how this behavior generalizes, first *much* larger range: ``` $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=30000 filluniquerandom : 1.390 micros/op 719654 ops/sec 1.376 seconds 990000 operations; 11.7 MB/s approximatememtablestats : 1.129 micros/op 885649 ops/sec 1.129 seconds 1000000 operations; Reported entry count stats (expected 30000): Count: 1000000 Average: 31098.8795 StdDev: 3601.47 Min: 21504 Median: 29333.9303 Max: 43008 Percentiles: P50: 29333.93 P75: 33018.00 P99: 43008.00 P99.9: 43008.00 P99.99: 43008.00 ------------------------------------------------------ ( 14000, 22000 ] 408 0.041% 0.041% ( 22000, 33000 ] 749327 74.933% 74.974% ############### ( 33000, 50000 ] 250265 25.027% 100.000% ##### readrandom : 1.894 micros/op 528083 ops/sec 1.894 seconds 1000000 operations; 8.5 MB/s (989989 of 1000000 found) ``` This is *even faster* and relatively *more accurate*, with relative standard deviation closer to 10%. Code comments explain why. Now let's look at smaller ranges. Implementation quirks or conveniences: * When actual number in range is >= 40, the minimum return value is 40. * When the actual is <= 10, it is guaranteed to return that actual number. ``` $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=75 ... filluniquerandom : 1.417 micros/op 705668 ops/sec 1.417 seconds 999975 operations; 11.4 MB/s approximatememtablestats : 3.342 micros/op 299197 ops/sec 3.342 seconds 1000000 operations; Reported entry count stats (expected 75): Count: 1000000 Average: 75.1210 StdDev: 15.02 Min: 40 Median: 71.9395 Max: 256 Percentiles: P50: 71.94 P75: 89.69 P99: 119.12 P99.9: 166.68 P99.99: 229.78 ------------------------------------------------------ ( 34, 51 ] 38867 3.887% 3.887% # ( 51, 76 ] 550554 55.055% 58.942% ########### ( 76, 110 ] 398854 39.885% 98.828% ######## ( 110, 170 ] 11353 1.135% 99.963% ( 170, 250 ] 364 0.036% 99.999% ( 250, 380 ] 8 0.001% 100.000% readrandom : 1.861 micros/op 537224 ops/sec 1.861 seconds 1000000 operations; 8.7 MB/s (999974 of 1000000 found) $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=25 ... filluniquerandom : 1.501 micros/op 666283 ops/sec 1.501 seconds 1000000 operations; 10.8 MB/s approximatememtablestats : 5.118 micros/op 195401 ops/sec 5.118 seconds 1000000 operations; Reported entry count stats (expected 25): Count: 1000000 Average: 26.2392 StdDev: 4.58 Min: 25 Median: 28.4590 Max: 72 Percentiles: P50: 28.46 P75: 31.69 P99: 49.27 P99.9: 67.95 P99.99: 72.00 ------------------------------------------------------ ( 22, 34 ] 928936 92.894% 92.894% ################### ( 34, 51 ] 67960 6.796% 99.690% # ( 51, 76 ] 3104 0.310% 100.000% readrandom : 1.892 micros/op 528595 ops/sec 1.892 seconds 1000000 operations; 8.6 MB/s (1000000 of 1000000 found) $ ./db_bench --benchmarks=filluniquerandom,approximatememtablestats,readrandom --value_size=1 --num=1000000 --batch_size=10 ... 
filluniquerandom : 1.642 micros/op 608916 ops/sec 1.642 seconds 1000000 operations; 9.9 MB/s approximatememtablestats : 3.042 micros/op 328721 ops/sec 3.042 seconds 1000000 operations; Reported entry count stats (expected 10): Count: 1000000 Average: 10.0000 StdDev: 0.00 Min: 10 Median: 10.0000 Max: 10 Percentiles: P50: 10.00 P75: 10.00 P99: 10.00 P99.9: 10.00 P99.99: 10.00 ------------------------------------------------------ ( 6, 10 ] 1000000 100.000% 100.000% #################### readrandom : 1.805 micros/op 554126 ops/sec 1.805 seconds 1000000 operations; 9.0 MB/s (1000000 of 1000000 found) ``` Remarkably consistent. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13047 Test Plan: new db_bench test for both performance and accuracy (see above); added to crash test; unit test updated. Reviewed By: cbi42 Differential Revision: D63722003 Pulled By: pdillinger fbshipit-source-id: cfc8613c085e87c17ecec22d82601aac2a5a1b26 |
|
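For context, a minimal sketch (not from the commit) of the API being re-implemented above; the key bounds are made up:

```cpp
#include <cstdint>
#include "rocksdb/db.h"

// Sketch: assumes `db` is an open rocksdb::DB*. Returns the approximate
// number of memtable entries and bytes in [start, limit); with the new
// implementation the count should be within roughly 20% of the true value.
void ApproxMemTableRange(rocksdb::DB* db) {
  rocksdb::Range range("key000", "key999");  // hypothetical bounds
  uint64_t count = 0;
  uint64_t size = 0;
  db->GetApproximateMemTableStats(range, &count, &size);
}
```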
Peter Dillinger | 79790cf2a8 |
Bug fix and test BuildDBOptions (#13038)
Summary: The following DBOptions were not being propagated through BuildDBOptions, which could at least lead to settings being lost through `GetOptionsFromString()`, possibly elsewhere as well: * background_close_inactive_wals * write_dbid_to_manifest * write_identity_file * prefix_seek_opt_in_only Pull Request resolved: https://github.com/facebook/rocksdb/pull/13038 Test Plan: This problem was not being caught by OptionsSettableTest.DBOptionsAllFieldsSettable when the option was omitted from both options_helper.cc and options_settable_test.cc. I have added to the test to catch future instances (and the updated test was how I found three of the four missing options). The same kind of bug seems to be caught by ColumnFamilyOptionsAllFieldsSettable, and AFAIK analogous code does not exist for BlockBasedTableOptions. Reviewed By: ltamasi Differential Revision: D63483779 Pulled By: pdillinger fbshipit-source-id: a5d5f6e434174bacb8e5d251b767e81e62b7225a |
|
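A sketch (not from the commit) of the round trip that was losing these settings, using the public string-based options API; the specific option picked is one of the four listed above:

```cpp
#include <string>
#include "rocksdb/convenience.h"
#include "rocksdb/options.h"

// Sketch: serialize DBOptions to a string and parse it back. Before this
// fix, fields like write_identity_file were silently dropped in the
// serialize step because BuildDBOptions did not propagate them.
void RoundTripDBOptions() {
  rocksdb::ConfigOptions config;
  rocksdb::DBOptions before;
  before.write_identity_file = false;  // non-default value to round-trip
  std::string opts_str;
  rocksdb::Status s =
      rocksdb::GetStringFromDBOptions(config, before, &opts_str);
  rocksdb::DBOptions after;
  s = rocksdb::GetDBOptionsFromString(config, rocksdb::DBOptions(), opts_str,
                                      &after);
  // With the fix, after.write_identity_file == before.write_identity_file.
}
```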
anand76 | 22105366d9 |
More accurate accounting of compressed cache memory (#13032)
Summary: When an item is inserted into the compressed secondary cache, this PR calculates the charge using the malloc_usable_size of the allocated memory, as well as the unique pointer allocation. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13032 Test Plan: New unit test Reviewed By: pdillinger Differential Revision: D63418493 Pulled By: anand1976 fbshipit-source-id: 1db2835af6867442bb8cf6d9bf412e120ddd3824 |
|
anand76 | d02f63cc54 |
Respect lowest_used_cache_tier option for compressed blocks (#13030)
Summary: If the lowest_used_cache_tier DB option is set to kVolatileTier, skip insertion of compressed blocks into the secondary cache. Previously, these were always inserted into the secondary cache via the InsertSaved() method, leading to pollution of the secondary cache with blocks that would never be read. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13030 Test Plan: Add a new unit test Reviewed By: pdillinger Differential Revision: D63329841 Pulled By: anand1976 fbshipit-source-id: 14d2fce2ed309401d9ad4d2e7c356218b6673f7b |
|
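A sketch (not from the commit) of the configuration this fix applies to, assuming a compressed secondary cache has been attached to the block cache:

```cpp
#include "rocksdb/options.h"

// Sketch: with kVolatileTier, the DB should use only the volatile (primary)
// block cache tier; after this fix, compressed blocks are no longer
// inserted into an attached secondary cache via InsertSaved().
void RestrictToVolatileTier(rocksdb::Options& options) {
  options.lowest_used_cache_tier = rocksdb::CacheTier::kVolatileTier;
}
```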
Levi Tamasi | fbbb08770f |
Update HISTORY.md, version.h, and the format compatibility check script for the 9.7 release (#13027)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13027 Reviewed By: jowlyzhang Differential Revision: D63158344 Pulled By: ltamasi fbshipit-source-id: e650a0024155d52c7aa2afd0f242b8363071279a |
|
Peter Dillinger | a1a102ffce |
Steps toward deprecating implicit prefix seek, related fixes (#13026)
Summary: With some new use cases onboarding to prefix extractors/seek/filters, one of the risks is existing iterator code, e.g. for maintenance tasks, being unintentionally subject to prefix seek semantics. This is a longstanding known design flaw with prefix seek, and `prefix_same_as_start` and `auto_prefix_mode` were steps in the direction of making that obsolete. However, we can't just immediately set `total_order_seek` to true by default, because that would impact so much code instantly. Here we add a new DB option, `prefix_seek_opt_in_only` that basically allows users to transition to the future behavior when they are ready. When set to true, all iterators will be treated as if `total_order_seek=true` and then the only ways to get prefix seek semantics are with `prefix_same_as_start` or `auto_prefix_mode`. Related fixes / changes: * Make sure that `prefix_same_as_start` and `auto_prefix_mode` are compatible with (or override) `total_order_seek` (depending on your interpretation). * Fix a bug in which a new iterator after dynamically changing the prefix extractor might mix different prefix semantics between memtable and SSTs. Both should use the latest extractor semantics, which means iterators ignoring memtable prefix filters with an old extractor. And that means passing the latest prefix extractor to new memtable iterators that might use prefix seek. (Without the fix, the test added for this fails in many ways.) Suggested follow-up: * Investigate a FIXME where a MergeIteratorBuilder is created in db_impl.cc. No unit test detects a change in value that should impact correctness. * Make memtable prefix bloom compatible with `auto_prefix_mode`, which might require involving the memtablereps because we don't know at iterator creation time (only seek time) whether an auto_prefix_mode seek will be a prefix seek. * Add `prefix_same_as_start` testing to db_stress Pull Request resolved: https://github.com/facebook/rocksdb/pull/13026 Test Plan: tests updated, added. Add combination of `total_order_seek=true` and `auto_prefix_mode=true` to stress test. Ran `make blackbox_crash_test` for a long while. Manually ran tests with `prefix_seek_opt_in_only=true` as default, looking for unexpected issues. I inspected most of the results and migrated many tests to be ready for such a change (but not all). Reviewed By: ltamasi Differential Revision: D63147378 Pulled By: pdillinger fbshipit-source-id: 1f4477b730683d43b4be7e933338583702d3c25e |
|
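A sketch (not from the commit) of opting in to the future behavior described above; the 4-byte prefix extractor and function names are illustrative:

```cpp
#include <memory>
#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/slice_transform.h"

// Sketch: set before DB::Open(). With prefix_seek_opt_in_only=true, plain
// iterators behave as if total_order_seek=true despite the prefix extractor.
void ConfigurePrefixOptIn(rocksdb::Options& options) {
  options.prefix_seek_opt_in_only = true;  // new DB option in this change
  options.prefix_extractor.reset(rocksdb::NewFixedPrefixTransform(4));
}

// Prefix seek semantics then require an explicit per-iterator opt-in.
void PrefixScan(rocksdb::DB* db) {
  rocksdb::ReadOptions ropts;
  ropts.prefix_same_as_start = true;  // or: ropts.auto_prefix_mode = true;
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ropts));
  for (it->Seek("abcd"); it->Valid(); it->Next()) {
    // visits only keys sharing the 4-byte prefix "abcd"
  }
}
```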
anand76 | 6549b11714 |
Make Cache a customizable class (#13024)
Summary: This PR allows a Cache object to be created using the object registry. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13024 Reviewed By: pdillinger Differential Revision: D63043233 Pulled By: anand1976 fbshipit-source-id: 5bc3f7c29b35ad62638ff8205451303e2cecea9d |
|
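A sketch of what object-registry creation of a Cache looks like; the exact id/option-string format here is an assumption based on the usual name/value conventions of the customizable framework:

```cpp
#include <memory>
#include "rocksdb/cache.h"
#include "rocksdb/convenience.h"

// Sketch: create a Cache from a textual description. Built-in caches are
// registered by name; with this PR, user-defined Cache classes registered
// with the ObjectRegistry can be created the same way.
void CreateCacheFromString() {
  rocksdb::ConfigOptions config;
  std::shared_ptr<rocksdb::Cache> cache;
  rocksdb::Status s = rocksdb::Cache::CreateFromString(
      config, "id=LRUCache;capacity=33554432", &cache);  // assumed id/format
}
```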
Changyu Bi | 71e38dbe25 |
Compact one file at a time for FIFO temperature change compactions (#13018)
Summary: Per customer request, we should not merge multiple SST files together during temperature change compaction, since this can cause FIFO TTL compactions to be delayed. This PR changes the compaction picking logic to pick one file at a time. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13018 Test Plan: * updated some existing unit tests to test this new behavior. Reviewed By: jowlyzhang Differential Revision: D62883292 Pulled By: cbi42 fbshipit-source-id: 6a9fc8c296b5d9b17168ef6645f25153241c8b93 |
|
Levi Tamasi | 54ace7f340 |
Change the semantics of blob_garbage_collection_force_threshold to provide better control over space amp (#13022)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13022 Currently, `blob_garbage_collection_force_threshold` applies to the oldest batch of blob files, which is typically only a small subset of the blob files currently eligible for garbage collection. This can result in a form of head-of-line blocking: no GC-triggered compactions will be scheduled if the oldest batch does not currently exceed the threshold, even if a lot of higher-numbered blob files do. This can in turn lead to high space amplification that exceeds the soft bound implicit in the force threshold (e.g. 50% would suggest a space amp of <2 and 75% would imply a space amp of <4). The patch changes the semantics of this configuration threshold to apply to the entire set of blob files that are eligible for garbage collection based on `blob_garbage_collection_age_cutoff`. This provides more intuitive semantics for the option and can provide a better write amp/space amp trade-off. (Note that GC-triggered compactions still pick the same SST files as before, so triggered GC still targets the oldest blob files.) Reviewed By: jowlyzhang Differential Revision: D62977860 fbshipit-source-id: a999f31fe9cdda313de513f0e7a6fc707424d4a3 |
|
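A configuration sketch (not from the commit) showing how the threshold now interacts with the age cutoff; the particular values are illustrative:

```cpp
#include "rocksdb/options.h"

// Sketch: with the new semantics, the force threshold applies to all blob
// files selected by the age cutoff, not just the oldest batch. E.g. a 50%
// threshold implies a soft space-amp bound of <2 across that whole set.
void ConfigureBlobGC(rocksdb::ColumnFamilyOptions& cf_opts) {
  cf_opts.enable_blob_files = true;
  cf_opts.enable_blob_garbage_collection = true;
  cf_opts.blob_garbage_collection_age_cutoff = 0.25;      // oldest 25% eligible
  cf_opts.blob_garbage_collection_force_threshold = 0.5;  // garbage ratio
}
```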
Peter Dillinger | 98c33cb8e3 |
Steps toward making IDENTITY file obsolete (#13019)
Summary: * Set write_dbid_to_manifest=true by default * Add new option write_identity_file (default true) that allows us to opt-in to future behavior without identity file * Refactor related DB open code to minimize code duplication _Recommend hiding whitespace changes for review_ Intended follow-up: add support to ldb for reading and even replacing the DB identity in the manifest. Could be a variant of `update_manifest` command or based on it. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13019 Test Plan: unit tests and stress test updated for new functionality Reviewed By: anand1976 Differential Revision: D62898229 Pulled By: pdillinger fbshipit-source-id: c08b25cf790610b034e51a9de0dc78b921abbcf0 |
|
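A sketch (not from the commit) of the two options involved; write_dbid_to_manifest is now on by default, and write_identity_file provides the opt-in to the future no-IDENTITY-file behavior:

```cpp
#include "rocksdb/options.h"

// Sketch: opt in to the future behavior where the DB id lives only in the
// MANIFEST and no IDENTITY file is written.
void ConfigureDbIdHandling(rocksdb::Options& options) {
  options.write_dbid_to_manifest = true;  // now the default
  options.write_identity_file = false;    // skip writing the IDENTITY file
}
```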
Nick Brekhus | 0611eb5b9d |
Fix orphaned files in SstFileManager (#13015)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/13015 `Close()`ing a database now releases tracked files in `SstFileManager`. Previously this space would be leaked until the database was later reopened. Reviewed By: jowlyzhang Differential Revision: D62590773 fbshipit-source-id: 5461bd253d974ac4967ad52fee92e2650f8a9a28 |
|
Changyu Bi | f97e33454f |
Fix a bug with auto recovery on WAL write error (#12995)
Summary:
A recent crash test failure shows that auto recovery from WAL write failure can cause CFs to be inconsistent. A unit test repro in P1569398553. The following is an example sequence of events:
```
0. manual_wal_flush is true. There are multiple CFs in a DB.
1. Submit a write batch with updates to multiple CFs
2. A FlushWAL or a memtable switch that will try to write the buffered WAL data. Fail this write so that buffered WAL data is dropped:
|
|
Changyu Bi | e490f2b051 |
Fix a bug in ReFitLevel() where `FileMetaData::being_compacted` is not cleared (#13009)
Summary: In ReFitLevel(), we were not setting being_compacted to false after ReFitLevel() is done. This is not an issue if ReFitLevel() is successful, since a new FileMetaData is created for files at the target level. However, if there's an error during ReFitLevel(), e.g., a Manifest write failure, we should clear the being_compacted field for these files. Otherwise, these files will not be picked for compaction until DB reopen. Pull Request resolved: https://github.com/facebook/rocksdb/pull/13009 Test Plan: existing test. - stress test failure in T200339331 should not happen anymore. Reviewed By: hx235 Differential Revision: D62597169 Pulled By: cbi42 fbshipit-source-id: 0ba659806da6d6d4b42384fc95268b2d7bad720e |
|
Changyu Bi | cd6f802ccb |
Add a new file ingestion option `link_files` (#12980)
Summary: Add option `IngestExternalFileOptions::link_files` that hard links input files and preserves original file links after ingestion, unlike `move_files` which will unlink input files after ingestion. This can be useful when being used together with `allow_db_generated_files` to ingest files from another DB. Also reverted the change to `move_files` in https://github.com/facebook/rocksdb/issues/12959 to simplify the contract so that it will always unlink input files without exception. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12980 Test Plan: updated unit test `ExternSSTFileLinkFailFallbackTest.LinkFailFallBackExternalSst` to test that input files will not be unlinked. Reviewed By: pdillinger Differential Revision: D61925111 Pulled By: cbi42 fbshipit-source-id: eadaca72e1ae5288bdd195d57158466e5656fa62 |
|
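A sketch (not from the commit) of the new option in use; the file path and helper name are hypothetical:

```cpp
#include <vector>
#include "rocksdb/db.h"

// Sketch: link_files hard-links the input files into the DB and, unlike
// move_files, leaves the original links in place after ingestion.
rocksdb::Status IngestByLink(rocksdb::DB* db) {
  rocksdb::IngestExternalFileOptions opts;
  opts.link_files = true;
  return db->IngestExternalFile({"/path/to/external.sst"},  // hypothetical
                                opts);
}
```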
Changyu Bi | 4eb5878ab2 |
Support ingesting db generated files using hard link (#12959)
Summary: `IngestExternalFileOptions::move_files` and `IngestExternalFileOptions::allow_db_generated_files` are now compatible. The original file links won't be removed if `allow_db_generated_files` is true. This is to prevent deleting files from another DB. There was a [comment](https://github.com/facebook/rocksdb/pull/12750#discussion_r1684509620) in https://github.com/facebook/rocksdb/issues/12750 about how exactly-once ingestion would work with `move_files`. I've discussed this with the customer and we decided that it can be done by reading the target DB to see if it contains any ingested key. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12959 Test Plan: updated unit tests `IngestDBGeneratedFileTest*` to enable `move_files`. Reviewed By: jowlyzhang Differential Revision: D61703480 Pulled By: cbi42 fbshipit-source-id: 6b4294369767f989a2f36bbace4ca3c0257aeaf7 |
|
Changyu Bi | c62de54c7c |
Record largest seqno in table properties and verify in file ingestion (#12951)
Summary:
this helps to avoid scanning input files when ingesting db generated files:
|
|
Jay Huh | 5b8f5cbcf4 |
Update main branch for 9.6 release (#12945)
Summary:
Main branch cut at
|
|
Changyu Bi | defd97bc9d |
Add an option to verify memtable key order during reads (#12889)
Summary: Add a new CF option `paranoid_memory_checks` that allows additional data integrity validations during read/scan. Currently, the skiplist-based memtable will validate the order of keys visited. Further data validation can be added in different layers. The option will be opt-in due to performance overhead. The motivation for this feature is services where data correctness is critical and that want to detect in-memory corruption earlier. For a corrupted memtable key, this feature can help to detect it during reads instead of during flush with existing protections (OutputValidator that verifies key order or per-kv checksum). See internally linked task for more context. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12889 Test Plan: * new unit test added for paranoid_memory_checks=true. * existing unit test for paranoid_memory_checks=false. * enable in stress test. Performance Benchmark: we check for performance regression in the read path where data is in memtable only. For each benchmark, the script was run at the same time for main and this PR: * Memtable-only randomread ops/sec: ``` (for I in $(seq 1 50); do ./db_bench --benchmarks=fillseq,readrandom --write_buffer_size=268435456 --writes=250000 --num=250000 --reads=500000 --seed=1723056275 2>&1 | grep "readrandom"; done;) | awk '{ t += $5; c++; print } END { print 1.0 * t / c }'; Main: 608146 PR with paranoid_memory_checks=false: 607727 (-0.07%) PR with paranoid_memory_checks=true: 521889 (-14.2%) ``` * Memtable-only sequential scan ops/sec: ``` (for I in $(seq 1 50); do ./db_bench --benchmarks=fillseq,readseq[-X10] --write_buffer_size=268435456 --num=1000000 --seed=1723056275 2>&1 | grep "\[AVG 10 runs\]"; done;) | awk '{ t += $6; c++; print; } END { printf "%.0f\n", 1.0 * t / c }'; Main: 9180077 PR with paranoid_memory_checks=false: 9536241 (+3.8%) PR with paranoid_memory_checks=true: 7653934 (-16.6%) ``` * Memtable-only reverse scan ops/sec: ``` (for I in $(seq 1 20); do ./db_bench --benchmarks=fillseq,readreverse[-X10] --write_buffer_size=268435456 --num=1000000 --seed=1723056275 2>&1 | grep "\[AVG 10 runs\]"; done;) | awk '{ t += $6; c++; print; } END { printf "%.0f\n", 1.0 * t / c }'; Main: 1285719 PR with paranoid_memory_checks=false: 1431626 (+11.3%) PR with paranoid_memory_checks=true: 811031 (-36.9%) ``` The `readrandom` benchmark shows no regression. The scanning benchmarks show improvement that I can't explain. Reviewed By: pdillinger Differential Revision: D60414267 Pulled By: cbi42 fbshipit-source-id: a70b0cbeea131f1a249a5f78f9dc3a62dacfaa91 |
|
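Enabling the option is a one-line CF setting; a sketch (not from the commit), with the ~14% point-read overhead from the benchmarks above as the cost:

```cpp
#include "rocksdb/options.h"

// Sketch: opt in to extra in-memory integrity validation during reads and
// scans. Currently this validates key order in the skiplist memtable.
void EnableParanoidMemoryChecks(rocksdb::ColumnFamilyOptions& cf_opts) {
  cf_opts.paranoid_memory_checks = true;
}
```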
Jay Huh | 273b3eadf0 |
Add Remote Compaction Installation Callback Function (#12940)
Summary: Add an optional callback function upon remote compaction temp output installation. This will be internally used for setting the final status in the Offload Infra. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12940 Test Plan: Unit Test added ``` ./compaction_service_test ``` _Also internally tested by manually merging into internal code base_ Reviewed By: anand1976 Differential Revision: D61419157 Pulled By: jaykorean fbshipit-source-id: 66831685bc403949c26bfc65840dd1900d2a5a67 |
|
Yu Zhang | 295326b6ee |
Best efforts recovery recover seqno prefix (#12938)
Summary: This PR makes best-efforts recovery more permissive by allowing it to recover an incomplete Version that presents a valid point-in-time view from the user's perspective. Currently, a Version is only valid and saved if all files constituting that Version can be found. With this change, if only a suffix of L0 files (and their associated blob files) is missing, a valid Version is also available to be saved and recovered to. Note that we don't do this if the column family was atomically flushed. Because atomic flush also needs a consistent view across the column families, we cannot guarantee that if we are recovering to an incomplete Version. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12938 Test Plan: Existing tests and added unit tests. Reviewed By: anand1976 Differential Revision: D61414381 Pulled By: jowlyzhang fbshipit-source-id: f9b73deb34d35ad696ab42315928b656d586262a |
|
Peter Dillinger | 4d3518951a |
Option to decouple index and filter partitions (#12939)
Summary: Partitioned metadata blocks were introduced back in 2017 to deal more gracefully with large DBs where RAM is relatively scarce and some data might be much colder than other data. The feature allows metadata blocks to compete for memory in the block cache against data blocks while alleviating tail latencies and thrash conditions that can arise with the large metadata blocks (sometimes megabytes each) that come with large SST files. In general, the cost of partitioned metadata is more CPU in accesses (especially for filters where more binary search is needed before hashing can be used) and a bit more memory fragmentation and related overheads. However, the feature has always had a subtle limitation with a subtle effect on performance: index partitions and filter partitions must be cut at the same time, regardless of which wins the space race (hahaha) to metadata_block_size. Commonly filters will be a few times larger than indexes, so index partitions will be under-sized compared to filter (and data) blocks. While this does affect fragmentation and related overheads a bit, I suspect the bigger impact on performance is in the block cache. The coupling of the partition cuts would be defensible if the binary search done to find the filter block was used (on filter hit) to short-circuit binary search to an index partition, but that optimization has not been developed. Consider two metadata blocks, an under-sized one and a normal-sized one, covering proportional sections of the key space with the same density of read queries. The under-sized one will be more prone to eviction from block cache because it is used less often. This is unfair despite its proportionally smaller cost of keeping in block cache, and most of the cost of a miss to re-load it (random IO) is not proportional to the size (similar latency etc. up to ~32KB). ## This change Adds a new table option, decouple_partitioned_filters, that allows filter blocks and index blocks to be cut independently. To make this work, the partitioned filter block builder needs to know about the previous key, to generate an appropriate separator for the partition index. In most cases, BlockBasedTableBuilder already has easy access to the previous key to provide to the filter block builder. This change includes refactoring to pass that previous key to the filter builder when available, with the filter builder caching the previous key itself when unavailable, such as during compression dictionary training and some unit tests. Access to the previous key eliminates the need to track the previous prefix, which results in a small SST construction CPU win in prefix filtering cases, regardless of coupling, and possibly a small regression for some non-prefix cases, regardless of coupling, but still overall improvement especially with https://github.com/facebook/rocksdb/issues/12931. Suggested follow-up: * Update confusing use of "last key" to refer to "previous key" * Expand unit test coverage with parallel compression and dictionary training * Consider an option or enhancement to alleviate under-sized metadata blocks "at the end" of an SST file due to no coordination or awareness of when files are cut. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12939 Test Plan: unit tests updated. Also did some unit test runs with "hard wired" usage of parallel compression and dictionary training code paths to ensure they were working. Also ran blackbox_crash_test for a while with the new feature. 
## SST write performance (CPU) Using the same testing setup as in https://github.com/facebook/rocksdb/issues/12931 but with -decouple_partitioned_filters=1 in the "after" configuration, which benchmarking shows makes almost no difference in terms of SST write CPU. "After" vs. "before" this PR ``` -partition_index_and_filters=0 -prefix_size=0 -whole_key_filtering=1 923691 vs. 924851 (-0.13%) -partition_index_and_filters=0 -prefix_size=8 -whole_key_filtering=0 921398 vs. 922973 (-0.17%) -partition_index_and_filters=0 -prefix_size=8 -whole_key_filtering=1 902259 vs. 908756 (-0.71%) -partition_index_and_filters=1 -prefix_size=8 -whole_key_filtering=0 917932 vs. 916901 (+0.60%) -partition_index_and_filters=1 -prefix_size=8 -whole_key_filtering=0 912755 vs. 907298 (+0.60%) -partition_index_and_filters=1 -prefix_size=8 -whole_key_filtering=1 899754 vs. 892433 (+0.82%) ``` I think this is a pretty good trade, especially in attracting more movement toward partitioned configurations. ## Read performance Let's see how decoupling affects read performance across various degrees of memory constraint. To simplify LSM structure, we're using FIFO compaction. Since decoupling will overall increase metadata block size, we control for this somewhat with an extra "before" configuration with larger metadata block size setting (8k instead of 4k). Basic setup: ``` (for CS in 0300 1200; do TEST_TMPDIR=/dev/shm/rocksdb1 ./db_bench -benchmarks=fillrandom,flush,readrandom,block_cache_entry_stats -num=5000000 -duration=30 -disable_wal=1 -write_buffer_size=30000000 -bloom_bits=10 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=0 -partition_index_and_filters=1 -statistics=1 -cache_size=${CS}000000 -metadata_block_size=4096 -decouple_partitioned_filters=1 2>&1 | tee results-$CS; done) ``` And read ops/s results: ```CSV Cache size MB,After/decoupled/4k,Before/4k,Before/8k 3,15593,15158,12826 6,16295,16693,14134 10,20427,20813,18459 20,27035,26836,27384 30,33250,31810,33846 60,35518,32585,35329 100,36612,31805,35292 300,35780,31492,35481 1000,34145,31551,35411 1100,35219,31380,34302 1200,35060,31037,34322 ``` If you graph this with log scale on the X axis (internal link: https://pxl.cl/5qKRc), you see that the decoupled/4k configuration is essentially the best of both the before/4k and before/8k configurations: handles really tight memory closer to the old 4k configuration and handles generous memory closer to the old 8k configuration. Reviewed By: jowlyzhang Differential Revision: D61376772 Pulled By: pdillinger fbshipit-source-id: fc2af2aee44290e2d9620f79651a30640799e01f |
|
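A configuration sketch (not from the commit) matching the benchmarked setup above (partitioned index and filters, 4 KiB metadata blocks, decoupling on); the helper name is made up:

```cpp
#include <memory>
#include "rocksdb/filter_policy.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

// Sketch: let index and filter partitions be cut independently so index
// partitions are no longer under-sized relative to filter partitions.
void ConfigureDecoupledPartitions(rocksdb::Options& options) {
  rocksdb::BlockBasedTableOptions bbto;
  bbto.index_type = rocksdb::BlockBasedTableOptions::kTwoLevelIndexSearch;
  bbto.partition_filters = true;
  bbto.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
  bbto.metadata_block_size = 4096;
  bbto.decouple_partitioned_filters = true;  // new option in this change
  options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(bbto));
}
```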
Jay Huh | b99baa5ae7 |
Do not add unprep_seqs when WriteImpl() fails in unprepared txn (#12927)
Summary: With the following scenario, we see assertion failure in write_unprepared_txn 1. The write is the first one in the transaction (`log_number_` is not set yet - https://github.com/facebook/rocksdb/blob/main/utilities/transactions/write_unprepared_txn.cc#L376-L379) 2. `WriteToWAL()` fails inside `db_impl_->WriteImpl()` due to fault injection. `last_log_number_`is 0. https://github.com/facebook/rocksdb/blob/main/utilities/transactions/write_unprepared_txn.cc#L386 3. `prepare_batch_cnt_` is still added to `unprep_seqs_` https://github.com/facebook/rocksdb/blob/main/utilities/transactions/write_unprepared_txn.cc#L395-L398 4. When the transaction gets destructed after failed, it expects `log_number_ > 0` if `unprep_seqs_` is not empty - https://github.com/facebook/rocksdb/blob/main/utilities/transactions/write_unprepared_txn.cc#L54-L55. Step 3 is wrong. `unprep_seqs_` is for the ordered list of unprep sequence numbers that we have already written to the DB. If `db_impl_->WriteImpl()` failed, `unprep_seqs_` shouldn't be set. `prepare_seq` value would be wrong anyway. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12927 Test Plan: Following command steadily repros the issue. This no longer fails after the fix ``` ./db_stress \ --WAL_size_limit_MB=0 \ --WAL_ttl_seconds=60 \ --acquire_snapshot_one_in=10000 \ --adaptive_readahead=1 \ --adm_policy=2 \ --advise_random_on_open=0 \ --allow_data_in_errors=True \ --allow_fallocate=1 \ --async_io=1 \ --auto_readahead_size=0 \ --avoid_flush_during_recovery=0 \ --avoid_flush_during_shutdown=1 \ --avoid_unnecessary_blocking_io=0 \ --backup_max_size=104857600 \ --backup_one_in=100000 \ --batch_protection_bytes_per_key=8 \ --bgerror_resume_retry_interval=100 \ --block_align=1 \ --block_protection_bytes_per_key=8 \ --block_size=16384 \ --bloom_before_level=7 \ --bloom_bits=4.1280979878467345 \ --bottommost_compression_type=none \ --bottommost_file_compaction_delay=600 \ --bytes_per_sync=262144 \ --cache_index_and_filter_blocks=1 \ --cache_index_and_filter_blocks_with_high_priority=0 \ --cache_size=33554432 \ --cache_type=lru_cache \ --charge_compression_dictionary_building_buffer=1 \ --charge_file_metadata=1 \ --charge_filter_construction=1 \ --charge_table_reader=1 \ --check_multiget_consistency=0 \ --check_multiget_entity_consistency=1 \ --checkpoint_one_in=0 \ --checksum_type=kXXH3 \ --clear_column_family_one_in=0 \ --compact_files_one_in=1000000 \ --compact_range_one_in=1000000 \ --compaction_pri=4 \ --compaction_readahead_size=1048576 \ --compaction_ttl=0 \ --compress_format_version=1 \ --compressed_secondary_cache_ratio=0.0 \ --compressed_secondary_cache_size=0 \ --compression_checksum=1 \ --compression_max_dict_buffer_bytes=4294967295 \ --compression_max_dict_bytes=16384 \ --compression_parallel_threads=8 \ --compression_type=none \ --compression_use_zstd_dict_trainer=1 \ --compression_zstd_max_train_bytes=0 \ --continuous_verification_interval=0 \ --create_timestamped_snapshot_one_in=0 \ --daily_offpeak_time_utc=04:00-08:00 \ --data_block_index_type=0 \ --db=$db \ --db_write_buffer_size=8388608 \ --default_temperature=kHot \ --default_write_temperature=kUnknown \ --delete_obsolete_files_period_micros=21600000000 \ --delpercent=5 \ --delrangepercent=0 \ --destroy_db_initially=0 \ --detect_filter_construct_corruption=0 \ --disable_file_deletions_one_in=10000 \ --disable_manual_compaction_one_in=10000 \ --disable_wal=0 \ --dump_malloc_stats=1 \ --enable_checksum_handoff=1 \ --enable_compaction_filter=0 \ --enable_custom_split_merge=1 
\ --enable_do_not_compress_roles=0 \ --enable_index_compression=1 \ --enable_memtable_insert_with_hint_prefix_extractor=0 \ --enable_pipelined_write=0 \ --enable_sst_partitioner_factory=0 \ --enable_thread_tracking=1 \ --enable_write_thread_adaptive_yield=1 \ --error_recovery_with_no_fault_injection=1 \ --exclude_wal_from_write_fault_injection=0 \ --expected_values_dir=$exp \ --fail_if_options_file_error=0 \ --fifo_allow_compaction=1 \ --file_checksum_impl=crc32c \ --fill_cache=1 \ --flush_one_in=1000 \ --format_version=2 \ --get_all_column_family_metadata_one_in=1000000 \ --get_current_wal_file_one_in=0 \ --get_live_files_apis_one_in=1000000 \ --get_properties_of_all_tables_one_in=100000 \ --get_property_one_in=1000000 \ --get_sorted_wal_files_one_in=0 \ --hard_pending_compaction_bytes_limit=274877906944 \ --high_pri_pool_ratio=0 \ --index_block_restart_interval=15 \ --index_shortening=0 \ --index_type=2 \ --ingest_external_file_one_in=0 \ --initial_auto_readahead_size=524288 \ --inplace_update_support=0 \ --iterpercent=10 \ --key_len_percent_dist=1,30,69 \ --key_may_exist_one_in=100000 \ --kill_random_test=888887 \ --last_level_temperature=kCold \ --level_compaction_dynamic_level_bytes=1 \ --lock_wal_one_in=1000000 \ --log2_keys_per_lock=10 \ --log_file_time_to_roll=0 \ --log_readahead_size=0 \ --long_running_snapshots=1 \ --low_pri_pool_ratio=0 \ --lowest_used_cache_tier=1 \ --manifest_preallocation_size=5120 \ --manual_wal_flush_one_in=0 \ --mark_for_compaction_one_file_in=10 \ --max_auto_readahead_size=0 \ --max_background_compactions=20 \ --max_bytes_for_level_base=10485760 \ --max_key=100000 \ --max_key_len=3 \ --max_log_file_size=1048576 \ --max_manifest_file_size=1073741824 \ --max_sequential_skip_in_iterations=2 \ --max_total_wal_size=0 \ --max_write_batch_group_size_bytes=16 \ --max_write_buffer_number=10 \ --max_write_buffer_size_to_maintain=0 \ --memtable_insert_hint_per_batch=0 \ --memtable_max_range_deletions=1000 \ --memtable_prefix_bloom_size_ratio=0.1 \ --memtable_protection_bytes_per_key=8 \ --memtable_whole_key_filtering=1 \ --memtablerep=skip_list \ --metadata_charge_policy=0 \ --metadata_read_fault_one_in=32 \ --metadata_write_fault_one_in=0 \ --min_write_buffer_number_to_merge=2 \ --mmap_read=1 \ --mock_direct_io=False \ --nooverwritepercent=1 \ --num_file_reads_for_auto_readahead=2 \ --open_files=-1 \ --open_metadata_read_fault_one_in=0 \ --open_metadata_write_fault_one_in=0 \ --open_read_fault_one_in=0 \ --open_write_fault_one_in=0 \ --ops_per_thread=20000000 \ --optimize_filters_for_hits=1 \ --optimize_filters_for_memory=0 \ --optimize_multiget_for_io=0 \ --paranoid_file_checks=1 \ --partition_filters=0 \ --partition_pinning=0 \ --pause_background_one_in=1000000 \ --periodic_compaction_seconds=1 \ --prefix_size=5 \ --prefixpercent=5 \ --prepopulate_block_cache=0 \ --preserve_internal_time_seconds=60 \ --progress_reports=0 \ --promote_l0_one_in=0 \ --read_amp_bytes_per_bit=32 \ --read_fault_one_in=1000 \ --readahead_size=0 \ --readpercent=45 \ --recycle_log_file_num=1 \ --reopen=20 \ --report_bg_io_stats=1 \ --reset_stats_one_in=10000 \ --sample_for_compression=0 \ --secondary_cache_fault_one_in=0 \ --sync=0 \ --sync_fault_injection=0 \ --table_cache_numshardbits=6 \ --target_file_size_base=524288 \ --target_file_size_multiplier=2 \ --test_batches_snapshots=0 \ --top_level_index_pinning=0 \ --txn_write_policy=2 \ --uncache_aggressiveness=126 \ --universal_max_read_amp=4 \ --unordered_write=0 \ --unpartitioned_pinning=1 \ --use_adaptive_mutex=0 \ 
--use_adaptive_mutex_lru=1 \ --use_attribute_group=1 \ --use_delta_encoding=1 \ --use_direct_io_for_flush_and_compaction=0 \ --use_direct_reads=0 \ --use_full_merge_v1=0 \ --use_get_entity=0 \ --use_merge=1 \ --use_multi_cf_iterator=0 \ --use_multi_get_entity=0 \ --use_multiget=1 \ --use_optimistic_txn=0 \ --use_put_entity_one_in=0 \ --use_timed_put_one_in=0 \ --use_txn=1 \ --use_write_buffer_manager=0 \ --user_timestamp_size=0 \ --value_size_mult=32 \ --verification_only=0 \ --verify_checksum=1 \ --verify_checksum_one_in=1000000 \ --verify_compression=0 \ --verify_db_one_in=10000 \ --verify_file_checksums_one_in=1000000 \ --verify_iterator_with_expected_state_one_in=5 \ --verify_sst_unique_id_in_manifest=1 \ --wal_bytes_per_sync=0 \ --wal_compression=none \ --write_buffer_size=4194304 \ --write_dbid_to_manifest=1 \ --write_fault_one_in=128 \ --writepercent=35 ``` Reviewed By: cbi42 Differential Revision: D61048774 Pulled By: jaykorean fbshipit-source-id: 22200d55fd0b22b68732b12516e681a6c6e2c601 |
|
anand76 | c21fe1a47f |
Add ticker stats for read corruption retries (#12923)
Summary: Add a couple of ticker stats for corruption retry count and successful retries. This PR also eliminates an extra read attempt when there's a checksum mismatch in a block read from the prefetch buffer. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12923 Test Plan: Update existing tests Reviewed By: jowlyzhang Differential Revision: D61024687 Pulled By: anand1976 fbshipit-source-id: 3a08403580ab244000e0d480b7ee0f5a03d76b06 |
|
Changyu Bi | b32d899482 |
Fix MultiGet dropping memtable kv checksum corruption (#12842)
Summary: Corruption status returned by `GetFromTable()` could be overwritten here: |
|
Yu Zhang | d12aaf23ca |
Fix file deletions in DestroyDB not rate limited (#12891)
Summary: Make `DestroyDB` slowly delete files if it's configured and enabled via `SstFileManager`. This is currently not available mainly because of the DeleteScheduler's logic related to the tracked total_size_ and total_trash_size_. This accounting and logic should not be applied to `DestroyDB`. This PR adds a `DeleteUnaccountedDBFile` util for this purpose, which deletes files without accounting for them. This util also supports assigning a file to a specified trash bucket so that the user can later wait for a specific trash bucket to be empty. For `DestroyDB`, files with more than one hard link will be deleted immediately. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12891 Test Plan: Added unit tests, existing tests. Reviewed By: anand1976 Differential Revision: D60300220 Pulled By: jowlyzhang fbshipit-source-id: 8b18109a177a3a9532f6dc2e40e08310c08ca3c7 |
|
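A sketch (not from the commit) of the configuration this fix makes effective; the 16 MB/s rate and helper name are illustrative:

```cpp
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/env.h"
#include "rocksdb/options.h"
#include "rocksdb/sst_file_manager.h"

// Sketch: attach an SstFileManager with a deletion rate limit; with this
// fix, DestroyDB() deletes files through it instead of bypassing the limit.
rocksdb::Status RateLimitedDestroy(const std::string& dbname) {
  rocksdb::Options options;
  options.sst_file_manager.reset(rocksdb::NewSstFileManager(
      rocksdb::Env::Default(), nullptr /* info_log */, "" /* trash_dir */,
      16 << 20 /* rate_bytes_per_sec */));
  return rocksdb::DestroyDB(dbname, options);
}
```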
Levi Tamasi | 2e8a1a14ef |
Fix a data race affecting the background error status (#12910)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/12910 There is currently a call to `GetBGError()` in `DBImpl::WriteImplWALOnly()` where the DB mutex is (incorrectly) not held, leading to a data race. Technically, we could acquire the mutex here but instead, the patch removes the affected check altogether, since the same check is already performed (in a thread-safe manner) in the subsequent call to `PreprocessWrite()`. Reviewed By: cbi42 Differential Revision: D60682008 fbshipit-source-id: 54b67975dcf57d67c068cac71e8ada09a1793ec5 |
|
Changyu Bi | 8be824e316 |
Use compensated file size for intra-L0 compaction (#12878)
Summary: In leveled compaction, we pick intra-L0 compaction instead of L0->Lbase whenever L0 size is small. When L0 files contain many deletions, it makes more sense to compact them down instead of accumulating tombstones in L0. This PR uses compensated_file_size when computing L0 size for determining intra-L0 compaction. It also scales down the limit on total L0 size further to be more cautious about accumulating data in L0. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12878 Test Plan: updated unit test. Reviewed By: hx235 Differential Revision: D59932421 Pulled By: cbi42 fbshipit-source-id: 9de973ac51eb7df81b38b8c68110072b1aa06321 |
|
anand76 | 55877d8893 |
Make transaction name conflict check more robust (#12895)
Summary: The `PessimisticTransaction::SetName()` code checks for an existing txn of the given name before registering the new txn. However, this is not atomic, which could result in a race condition if two txns try to register with the same name. Both might succeed and lead to unpredictable behavior. This PR makes the test and set atomic. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12895 Reviewed By: pdillinger Differential Revision: D60460482 Pulled By: anand1976 fbshipit-source-id: e8afeb2356e1b8f4e8df785cb73532739f82579d |
|
Yu Zhang | d94c2adc28 |
Add entry for bug fix in #12882 (#12892)
Summary: As titled. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12892 Reviewed By: hx235 Differential Revision: D60400651 Pulled By: jowlyzhang fbshipit-source-id: 2dd60c2287143f464ecab0de859715af6ab3a825 |
|
Hui Xiao | 15d9988ab2 |
Update history and version for 9.5.fb release (#12880)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/12880 Reviewed By: jaykorean, jowlyzhang Differential Revision: D60057955 Pulled By: hx235 fbshipit-source-id: 1c599a5334aff1f424bb473275efe4349b17d41d |
|
Hui Xiao | 349b1ec08f |
Fix duplicate WAL entries caused by write after error recovery (#12873)
Summary: **Context/Summary:** We recently discovered a case where write of the same key right after error recovery of a previous failed write of the same key finishes causes two same WAL entries, violating our assertion. This is because we don't advance seqno on failed write and reuse the same WAL containing the failed write for the new write if the memtable at the time is empty. This PR reuses the flush path for an empty memtable to switch WAL and update min WAL to keep in error recovery flush as well as updates the INFO log message for clarity. ``` 2024/07/17-15:01:32.271789 327757 (Original Log Time 2024/07/17-15:01:25.942234) [/flush_job.cc:1017] [default] [JOB 2] Level-0 flush table https://github.com/facebook/rocksdb/issues/9: 0 bytes OK It's an empty SST file from a successful flush so won't be kept in the DB 2024/07/17-15:01:32.271798 327757 (Original Log Time 2024/07/17-15:01:32.269954) [/memtable_list.cc:560] [default] Level-0 commit flush result of table https://github.com/facebook/rocksdb/issues/9 started 2024/07/17-15:01:32.271802 327757 (Original Log Time 2024/07/17-15:01:32.271217) [/memtable_list.cc:760] [default] Level-0 commit flush result of table https://github.com/facebook/rocksdb/issues/9: memtable https://github.com/facebook/rocksdb/issues/1 done ``` Pull Request resolved: https://github.com/facebook/rocksdb/pull/12873 Test Plan: New UT that failed before this PR with following assertion failure (i.e, duplicate WAL entries) and passes after ``` db_wal_test: db/write_batch.cc:2254: rocksdb::Status rocksdb::{anonymous}::MemTableInserter::PutCFImpl(uint32_t, const rocksdb::Slice&, const rocksdb::Slice&, rocksdb::ValueType, RebuildTxnOp, const ProtectionInfoKVOS64*) [with RebuildTxnOp = rocksdb::{anonymous}::MemTableInserter::PutCF(uint32_t, const rocksdb::Slice&, const rocksdb::Slice&)::<lambda(rocksdb::WriteBatch*, uint32_t, const rocksdb::Slice&, const rocksdb::Slice&)>; uint32_t = unsigned int; rocksdb::ProtectionInfoKVOS64 = rocksdb::ProtectionInfoKVOS<long unsigned int>]: Assertion `seq_per_batch_' failed. ``` Reviewed By: anand1976 Differential Revision: D59884468 Pulled By: hx235 fbshipit-source-id: 5d854b719092552c69727a979f269fb7f6c39756 |
|
Changyu Bi | c064ac3bc5 |
Avoid opening table files and reading table properties under mutex (#12879)
Summary: InitInputTableProperties() can open and do IOs and is called under mutex_. This PR removes it from FinalizeInputInfo(). It is now called in CompactionJob::Run() and BuildCompactionJobInfo() (called in NotifyOnCompactionBegin()) without holding mutex_. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12879 Test Plan: existing unit tests. Added assert in GetInputTableProperties() to ensure that input_table_properties_ is initialized whenever it's called. Reviewed By: hx235 Differential Revision: D59933195 Pulled By: cbi42 fbshipit-source-id: c8089e13af8567fa3ab4b94d9ec384ae98ab2ec8 |
|
Changyu Bi | 4384dd5eee |
Support ingesting SST files generated by a live DB (#12750)
Summary: ... to enable use cases like using RocksDB to merge sort data for ingestion. A new file ingestion option `IngestExternalFileOptions::allow_db_generated_files` is introduced to allow users to ingest SST files generated by live DBs instead of SstFileWriter. For now this only works if the SST files being ingested have zero as their largest sequence number AND do not overlap with any data in the DB (so we can assign seqno 0, which matches the seqno of all ingested keys). The option is marked as experimental for now. Main changes needed to enable this: - ignore CF id mismatch during ingestion - ignore the missing external file version table property Rest of the change is mostly in new unit tests. A previous attempt is in https://github.com/facebook/rocksdb/issues/5602. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12750 Test Plan: - new unit tests Reviewed By: ajkr, jowlyzhang Differential Revision: D58396673 Pulled By: cbi42 fbshipit-source-id: aae513afad7b1ff5d4faa48104df5f384926bf03 |
|
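A sketch (not from the commit) of how this experimental option is meant to be used; the source path and helper name are hypothetical:

```cpp
#include <vector>
#include "rocksdb/db.h"

// Sketch: ingest an SST file produced by a live DB rather than
// SstFileWriter. Only valid when the file's largest seqno is zero and its
// key range does not overlap existing data (seqno 0 is assigned).
rocksdb::Status IngestFromOtherDb(rocksdb::DB* db) {
  rocksdb::IngestExternalFileOptions opts;
  opts.allow_db_generated_files = true;  // experimental
  return db->IngestExternalFile({"/other_db/000012.sst"},  // hypothetical
                                opts);
}
```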
anand76 | 0fca5e31b4 |
Fix race between manifest error recovery and file ingestion (#12871)
Summary: This PR fixes an assertion failure in `DBImpl::ResumeImpl` - `assert(!versions_->descriptor_log_)`. In `VersionSet`, `descriptor_log_` has a pointer to the current MANIFEST writer. When there's an error updating the manifest, `descriptor_log_` is reset, and the error recovery thread checks `io_status()` in `VersionSet` and attempts to write a new MANIFEST. If another DB manipulation happens at the same time (like external file ingestion, column family manipulation etc), it calls `LogAndApply`, which also attempts to write a new MANIFEST. The assertion in `ResumeImpl` might fail in this case since the other MANIFEST writer may have updated `descriptor_log_`. To prevent the assertion, this fix updates both `io_status_` and `descriptor_log_` while holding the DB mutex. The other option would have been to simply remove the assert. But I think its important to have it to ensure the invariant that `io_status_` is cleared if the MANIFEST is written successfully, and this fix makes things easier to reason about. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12871 Test Plan: Existing tests and crash test Reviewed By: hx235 Differential Revision: D59926947 Pulled By: anand1976 fbshipit-source-id: af9ad18da3e29fc62c7ec2e30e0738aa33d4e5f1 |
|
Peter Dillinger | 93b163d1a2 |
Fix major bug with prefixes, SeekForPrev, and partitioned filters (#12872)
Summary: Basically, the fix in https://github.com/facebook/rocksdb/issues/8137 was incomplete (and I missed it in the review), because if `whole_key_filtering` is false, then `last_prefix_str_` will never be set to non-empty and the fix doesn't work. Also related to https://github.com/facebook/rocksdb/issues/5835. This is intended as a safe, simple fix that will regress CPU efficiency slightly (for `whole_key_filtering=false` cases, because of extra prefix string copies during flush & compaction). An efficient fix is not possible without some substantial refactoring. Also in this PR: new test DBBloomFilterTest.FilterNumEntriesCoalesce tests an adjacent code path that was previously untested for its effect of ensuring the number of unique prefixes and keys is tracked properly when both prefixes and whole keys are going into a filter. (Test fails when either of the two code segments checking for duplicates is disabled.) In addition, the same test would fail before the main bug fix here because the code would inappropriately add the empty string to the filter (because of unmodified `last_prefix_str_`). Pull Request resolved: https://github.com/facebook/rocksdb/pull/12872 Test Plan: In addition to DBBloomFilterTest.FilterNumEntriesCoalesce, extended DBBloomFilterTest.SeekForPrevWithPartitionedFilters to cover the broken case. (Mostly whitespace change.) Reviewed By: jowlyzhang Differential Revision: D59873793 Pulled By: pdillinger fbshipit-source-id: 2a7b7f09ca73dc188fb4dab833826ad6da7ebb11 |
|
anand76 | 5aa675457e |
Fix unhandled MANIFEST write errors (#12865)
Summary: The failure of `WriteCurrentStateToManifest()` in `VersionSet::ProcessManifestWrites()` was not handled properly. If it failed, `manifest_io_status` was not updated, leading to `manifest_file_number_` being updated to the newly created manifest even though it's bad. This would lead to the bad manifest immediately getting deleted, and also the good manifest (referenced by `CURRENT`) getting deleted by obsolete file deletion, because `manifest_file_number_` does not reference its number. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12865 Reviewed By: hx235 Differential Revision: D59782940 Pulled By: anand1976 fbshipit-source-id: f752fb9a1c23fd3d734616e273613cbac204301b |
|
Hui Xiao | 4ff35afb42 |
Fix a bug where `OnErrorRecoveryBegin()` is not called before auto-recovery (#12860)
Summary:
**Context/Summary:**
`*auto_recovery` needs to be set true in order for `OnErrorRecoveryBegin()` to be called before auto-recovery
|
|
Konstantin Ilin | 5ecb92760a |
Create C API function to iterate over WriteBatch for custom Column Families (#12718)
Summary: Create C API function for iterating over WriteBatch for custom Column Families. Adding a function to the C API that exposes column family specific methods to iterate over WriteBatch: put_cf, delete_cf and merge_cf. This is required when one needs to read changes for any non-default column family. Without that functionality it is impossible to iterate over changes in the WAL that are relevant to custom column families. Fixes https://github.com/facebook/rocksdb/issues/12790 Testing: Added WriteBatch iteration test to the "columnfamilies" section of the C API unit tests Pull Request resolved: https://github.com/facebook/rocksdb/pull/12718 Reviewed By: cbi42 Differential Revision: D59483601 Pulled By: ajkr fbshipit-source-id: b68b900636304528a38620a8c3ad82fdce4b60cb |
|
Peter Dillinger | 0bb939611d |
Avoid unnecessary work in internal calls to GetSortedWalFiles (#12831)
Summary: We are seeing a number of crash test failures coming from checkpoint and backup code, likely from WalManager::GetSortedWalFiles -> ... -> WalManager::ReadFirstLine, and this code path is not needed, because we don't need to know the sequence numbers of WAL files going into a checkpoint or backup. We can minimize the impact of whatever inconsistency is causing that problem by not relying on it where it's not needed. Similarly, when we only need a roughly accurate set of current WAL files, we don't need to query all the archived WAL files (and redundantly the live ones again). So this reduces filesystem queries and DB mutex acquires in creating backups and checkpoints. Needed follow-up: Figure out what is causing various failures with an apparent inconsistency where GetSortedWalFiles fails on reading a WAL file. If it's an injected failure, perhaps it's not propagating that injected failure appropriately. It might also be an inconsistency between what the DB knows is flushed and what WalManager reads from the filesystem (which we know is dubious and should be phased out, which this is arguably another step toward). Or completing that phase-out might solve the problem without a full diagnosis. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12831 Test Plan: existing tests (easily caught when I went too far in initially developing this change) Update to BackupUsingDirectIO test so that there's a WAL file in what is backed up. (Was relying on some oddity.) Reviewed By: cbi42 Differential Revision: D59252649 Pulled By: pdillinger fbshipit-source-id: 7ad4187a1c70caa59a6d6c1c643ef95232b929f5 |
|
HypenZou | 5bb7f95ed6 |
Don't take archived log size into account when calculating log size for flush (#12680)
Summary: **Context/Summary:** It seems unreasonable to take the archived log size into account when calculating log size **for flush** in the method CreateCheckpoint. If the user sets WAL_ttl_seconds or WAL_size_limit_MB, the argument _log_size_for_flush_ can easily be reached due to the size of the archived dir. As a result, the flush may always be triggered. **Test:** covered by ./checkpoint_test Pull Request resolved: https://github.com/facebook/rocksdb/pull/12680 Reviewed By: jaykorean Differential Revision: D59097904 Pulled By: ajkr fbshipit-source-id: 0ed29c1b078d8f40b85288541b008e00dbc517d3 |
|
Yu Zhang | ff204d8ecd |
Add entry for #12803: fix race between event listener and error handler (#12809)
Summary: As titled. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12809 Reviewed By: hx235 Differential Revision: D58974154 Pulled By: jowlyzhang fbshipit-source-id: 7e44b54d9fa3bfbd58a4154a2c7e91aec905c34b |
|
Changyu Bi | 748f74aca3 |
Update main branch for 9.4 release (#12802)
Summary: Main branch cut at
|
|
Yu Zhang | c73cf7a878 |
Add CompactForTieringCollector to support automatically trigger compaction for tiering use case (#12760)
Summary: This PR adds the user property collector factory `CompactForTieringCollectorFactory` to support observing SST files and marking them as needing compaction, to fast-track data to the proper tier. A triggering ratio `compaction_trigger_ratio_` can be configured to achieve the following: 1) Setting the ratio to be equal to or smaller than 0 disables this collector. 2) Setting the ratio to be within (0, 1] will write the number of observed eligible entries into a user property and mark a file as needing compaction when that threshold is met. 3) Setting the ratio to be higher than 1 can be used to just write the user table property without marking any file as needing compaction. For a column family that does not enable the tiering feature, this collector is disabled even if an effective configuration is provided. For a file that is already on the last level, this collector is also disabled. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12760 Test Plan: Added unit tests Reviewed By: pdillinger Differential Revision: D58734976 Pulled By: jowlyzhang fbshipit-source-id: 6daab2c4f62b5c6689c3c03e3b3907bbbe6b7a81 |
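A hedged sketch of wiring the collector into options; the helper name `NewCompactForTieringCollectorFactory` is an assumption based on this PR's description, so check rocksdb/utilities/table_properties_collectors.h for the exact factory:
```
#include "rocksdb/options.h"
#include "rocksdb/utilities/table_properties_collectors.h"

using namespace ROCKSDB_NAMESPACE;

Options MakeTieringOptions() {
  Options options;
  // A ratio in (0, 1]: record the count of tiering-eligible entries as a
  // table property and mark the file as need-compaction once 10% of its
  // entries are eligible. <= 0 disables the collector; > 1 records the
  // property without marking files. (Helper name assumed; see lead-in.)
  options.table_properties_collector_factories.emplace_back(
      NewCompactForTieringCollectorFactory(/*compaction_trigger_ratio=*/0.1));
  return options;
}
```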
|
Peter Dillinger | 3abcba8470 |
Propagate more ReadOptions to ApproximateOffsetOf/Sizes (#12764)
Summary: It is unclear why these functions would ignore options like deadline and read_tier. Setting total_order_seek=true is unnecessary because of the disable_prefix_seek (= true) parameter to NewIndexIterator. That parameter is only used by the hash index, which uses total order seek if either the ReadOption or the parameter is true. The parameter is arguably redundant with the total_order_seek option, meaning it could be eliminated, but I think this case is exceptional (compared to e.g. no_io): * Prefix seek is particular to user iterators, though it might be usable, carefully, for other read operations. * The historical default of total_order_seek=false is, in a sense, "wrong result by default," so it cannot be interpreted as an intent to force prefix seek in an operation for which it might be unusual or give bad results. Also added a generic release note to cover this and related PRs. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12764 Test Plan: existing tests Reviewed By: hx235 Differential Revision: D58474240 Pulled By: pdillinger fbshipit-source-id: 79014d9822ba8f09d57ce4524363aa0973017b68 |
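As a reference point, a minimal sketch of the public entry point behind these internal calls; note that the ReadOptions propagation itself happens inside the table readers and is not visible in this signature:
```
#include <cstdint>

#include "rocksdb/db.h"

using namespace ROCKSDB_NAMESPACE;

// Estimate the on-disk size of a key range via the user-facing API whose
// internal ApproximateOffsetOf/Sizes calls this PR touches.
uint64_t EstimateRangeSize(DB* db, const Slice& start, const Slice& limit) {
  const Range range(start, limit);
  uint64_t size = 0;
  SizeApproximationOptions size_options;
  size_options.include_files = true;      // count SST file bytes
  size_options.include_memtables = false; // skip unflushed data
  Status s = db->GetApproximateSizes(size_options, db->DefaultColumnFamily(),
                                     &range, 1, &size);
  return s.ok() ? size : 0;
}
```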
|
Peter Dillinger | 0646ec6e2d |
Ensure Close() before LinkFile() for WALs in Checkpoint (#12734)
Summary: POSIX semantics for LinkFile (hard links) allow linking a file that is still being written to, with both the source and destination showing any subsequent writes to the source. This may not be practical semantics for some FileSystem implementations such as remote storage. They might only link the flushed or sync-ed file contents at the time of LinkFile, or might even have undefined behavior if LinkFile is called on a file still open for write (not yet "sealed"). This change builds on https://github.com/facebook/rocksdb/issues/12731 to bring more hygiene to our handling of WAL files in Checkpoint. Specifically, we now Close WAL files as soon as they are either (a) inactive and fully synced, or (b) inactive and obsolete (so maybe never fully synced), rather than letting Close() happen in handling obsolete files (maybe a background thread). This should not be a performance issue, as Close() should be trivial in cost relative to other IO ops, but just in case: * We don't Close() while holding a mutex, to avoid blocking, and * The old behavior is available with a new kill switch option `background_close_inactive_wals`. Stacked on https://github.com/facebook/rocksdb/issues/12731 Pull Request resolved: https://github.com/facebook/rocksdb/pull/12734 Test Plan: Extended an existing unit test, especially adding a hygiene check to FaultInjectionTestFS to detect LinkFile() on a file still open for writes. FaultInjectionTestFS already has relevant tracking data, and tests can opt out of the new check, as in a smoke test I have left for the old, deprecated functionality `background_close_inactive_wals=true`. Also ran lengthy blackbox_crash_test to ensure the hygiene check is OK with the crash test. (The only place I can find we use LinkFile in production is Checkpoint.) Reviewed By: cbi42 Differential Revision: D58295284 Pulled By: pdillinger fbshipit-source-id: 64d90ed8477e2366c19eaf9c4c5ad60b82cac5c6 |
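A hedged sketch of the kill switch mentioned above; that `background_close_inactive_wals` is a DBOptions field defaulting to the new eager-close behavior (false) is an assumption from this summary:
```
#include "rocksdb/options.h"

using namespace ROCKSDB_NAMESPACE;

Options MakeLegacyWalCloseOptions() {
  Options options;
  // Assumed DBOptions field per the summary above: true restores the old
  // behavior of closing inactive WALs during obsolete-file handling
  // (possibly on a background thread) instead of eagerly on sync/obsolete.
  options.background_close_inactive_wals = true;
  return options;
}
```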
|
Hui Xiao | 390fc55ba1 |
Revert PR 12684 and 12556 (#12738)
Summary: **Context/Summary:** A better API design was decided on recently, so we are reverting these two changes. Pull Request resolved: https://github.com/facebook/rocksdb/pull/12738 Test Plan: - CI Reviewed By: ajkr Differential Revision: D58162165 Pulled By: hx235 fbshipit-source-id: 9bbe4d2fe9fbe39213f4cf137a2d419e6ffb8e16 |
|
Peter Dillinger | 98393f0139 |
Fix Checkpoint hard link of inactive but unsynced WAL (#12731)
Summary: Background: there is one active WAL file, but there can be several more WAL files in various states. Those other WALs are always in a "flushed" state but could be on the `logs_` list, not yet fully synced. We currently allow any WAL that is not the active WAL to be hard-linked when creating a Checkpoint, as although it might still be open for write, we are not appending any more data to it. The problem is that a created Checkpoint is supposed to be fully synced on return of that function, and a hard-linked WAL in the state described above might not be fully synced. (Through some prudence in https://github.com/facebook/rocksdb/issues/10083, it would be synced if using track_and_verify_wals_in_manifest=true.) The fix is a step toward a long term goal of removing the need to query the filesystem to determine WAL files and their state. (I consider it dubious any time we independently read from or query metadata from a file we have open for writing, as this makes us more susceptible to FileSystem deficiencies or races.) More specifically: * Detect which WALs might not be fully synced, according to our DBImpl metadata, and prevent hard linking those (with `trim_to_size=true`) from `GetLiveFilesStorageInfo()`. And while we're at it, use our known flushed sizes for those WALs. * To avoid a race between that and GetSortedWalFiles(), track a maximum needed WAL number for the Checkpoint/GetLiveFilesStorageInfo. * Because of the level of consistency provided by those two, we no longer need to consider syncing as part of the FlushWAL in GetLiveFilesStorageInfo. (We determine the max WAL number consistent with the manifest file size, while holding the DB mutex. This should make track_and_verify_wals_in_manifest happy.) This makes the premise of the test PutRaceWithCheckpointTrackedWalSync obsolete (the sync point callback is no longer hit), so the test is removed, with the crash test as a backstop for related issues. See https://github.com/facebook/rocksdb/issues/10185 Stacked on https://github.com/facebook/rocksdb/issues/12729 Pull Request resolved: https://github.com/facebook/rocksdb/pull/12731 Test Plan: Expanded an existing test, which now fails before the fix. Also long runs of blackbox_crash_test with amplified checkpoint frequency. Reviewed By: cbi42 Differential Revision: D58199629 Pulled By: pdillinger fbshipit-source-id: 376e55f4a2b082cd2adb6408a41209de14422382 |
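Relatedly, a minimal sketch of enabling the option called out above, which tracks WAL metadata in the MANIFEST and, per the summary, already kept hard-linked WALs synced before this fix:
```
#include "rocksdb/options.h"

using namespace ROCKSDB_NAMESPACE;

Options MakeWalTrackingOptions() {
  Options options;
  // Record each WAL's number and size in the MANIFEST and verify them on
  // recovery; this is the setting the summary above notes would have kept
  // a hard-linked WAL fully synced even without this fix.
  options.track_and_verify_wals_in_manifest = true;
  return options;
}
```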