Summary:
**Context/Summary:**
In-place memtable updates (`inplace_update_support`) are not compatible with concurrent memtable writes (`allow_concurrent_memtable_write`), so we disallow this combination in the crash test.
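A minimal sketch of the kind of sanitization this adds (illustrative only; the function and parameter-dict shape are hypothetical, not the actual db_crashtest.py code):
```
def sanitize_inplace_update(params):
    # In-place memtable updates cannot be combined with concurrent
    # memtable writes, so force the latter off when the former is on.
    if params.get("inplace_update_support") == 1:
        params["allow_concurrent_memtable_write"] = 0
    return params

print(sanitize_inplace_update(
    {"inplace_update_support": 1, "allow_concurrent_memtable_write": 1}))
```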
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12550
Test Plan: CI
Reviewed By: jaykorean
Differential Revision: D56204269
Pulled By: hx235
fbshipit-source-id: 06608f2591db5e37470a1da6afcdfd2701781c2d
Summary:
**Context/Summary:**
This PR adds crash/stress coverage for some public DB APIs that were not yet tested there but can be added in a straightforward way.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12541
Test Plan:
- Locally run crash test heavily stressing on these new APIs
- CI
Reviewed By: jowlyzhang
Differential Revision: D56164892
Pulled By: hx235
fbshipit-source-id: 8bb568c3e65aec39d642987033f1d76c52f69bd8
Summary:
**Context/Summary:**
`inplace_update_support=true` is not tested in the crash/stress test. Since it's not compatible with snapshots, like compaction_filter, we need to sanitize its value in the presence of snapshot-related options. A minor refactoring is added to centralize such sanitization in db_crashtest.py - see `check_multiget_consistency` and `check_multiget_entity_consistency`.
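A rough sketch of what such centralized sanitization can look like (hypothetical helper and option list; see db_crashtest.py for the real version):
```
SNAPSHOT_RELATED = (
    "check_multiget_consistency",
    "check_multiget_entity_consistency",
    "acquire_snapshot_one_in",
)

def sanitize_for_snapshots(params):
    # inplace_update_support is not snapshot-consistent, so drop it
    # whenever a snapshot-related option/verification is in play.
    if any(params.get(k) for k in SNAPSHOT_RELATED):
        params["inplace_update_support"] = 0
    return params

print(sanitize_for_snapshots(
    {"inplace_update_support": 1, "acquire_snapshot_one_in": 100}))
```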
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12535
Test Plan: CI
Reviewed By: ajkr
Differential Revision: D56102978
Pulled By: hx235
fbshipit-source-id: 2e2ab6685a65123b14a321b99f45f60bc6509c6b
Summary:
Context/Summary:
We need an `nvm_sec_cache` when `kAdmPolicyThreeQueue` is used; otherwise a nullptr cache will be accessed, causing the segfault reported in https://github.com/facebook/rocksdb/pull/12521
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12524
Test Plan: Re-enabled `kAdmPolicyThreeQueue` and rehearsed the stress test that failed before this fix and passes after it
Reviewed By: jowlyzhang
Differential Revision: D55997093
Pulled By: hx235
fbshipit-source-id: e1c6f1015091b4cff0ce6a3fff981d5dece52a62
Summary:
**Context/Summary**
This policy leads to a segfault in `CompressedCacheSetCapacityThread` with some builds/compilations. Until we figure out why, disable it for now.
**Test**
Rehearsed the stress test that failed before the fix but passes after it
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12521
Reviewed By: jowlyzhang
Differential Revision: D55942399
Pulled By: hx235
fbshipit-source-id: 85f28e50d596dcfd4a316481570b78fdce58ed0b
Summary:
**Context/Summary:**
This is to improve our crash test coverage.
Bonus change:
- Added the missing Options string mapping for `CacheTier::kVolatileCompressedTier`
- Deprecated the crash test option `enable_tiered_storage`, which was mainly for setting `last_level_temperature`; that option is now covered in the crash test on its own
- Intensified `verify_checksum_one_in`/`verify_file_checksums_one_in`, as I found that these, together with the new coverage, surface more issues
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12508
Test Plan: CI to look out for trivial failures
Reviewed By: jowlyzhang
Differential Revision: D55768594
Pulled By: hx235
fbshipit-source-id: 9b829da0309a7db3fcdb17992a524dd64498325c
Summary:
Re-enabling this option since some internal users might be interested in using this feature.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12506
Test Plan:
The option was disabled in the stress test because it caused failures.
I ran a round of crash tests internally and there was no failure due to parallel compression. Will monitor whether more runs cause failures, so at least we will know how it's broken and can decide whether to fix it or revert the change.
Reviewed By: jowlyzhang
Differential Revision: D55747552
Pulled By: cbi42
fbshipit-source-id: ae5cda78c338b8b58f651c557d9b70790362444d
Summary:
**Context/Summary:**
`wal_bytes_per_sync > 0` can, by its nature, sync a newer WAL while leaving an older WAL unsynced. This creates a hole in the synced WAL data. Through our crash test, we recently discovered that our DB can recover past that hole, which resulted in a crash-recovery-verification error. Until we fix that recovery behavior, we will temporarily disable `wal_bytes_per_sync` in the crash test.
Bonus: updated the API documentation to describe this behavior of the option more explicitly
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12489
Test Plan: A more stable crash test
Reviewed By: ajkr
Differential Revision: D55531589
Pulled By: hx235
fbshipit-source-id: 6dea6486420dc0f50550d488c15652f93972a0ea
Summary:
Previously it was uninitialized. Setting `checksum_handoff_file_types` will cause `kCRC32c` checksums to be passed down in the `DataVerificationInfo`, so it makes sense for `kCRC32c` to be the default.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12485
Test Plan:
ran `db_stress` in a way that failed before. Building with ASAN was needed to ensure the uninitialized bytes are nonzero according to `malloc_fill_byte` (default 0xbe)
```
$ COMPILE_WITH_ASAN=1 make -j28 db_stress
...
$ ./db_stress -sync_fault_injection=1 -enable_checksum_handoff=true
```
Reviewed By: jaykorean
Differential Revision: D55450587
Pulled By: ajkr
fbshipit-source-id: 53dc829b86e49b3fa80570032e83af0bb12adaad
Summary:
This option was previously disabled due to a bug in the recovery logic. The recovery code in `DBImpl::RecoverLogFiles` couldn't tell if an EoF reported by the log reader was really an EoF or a possible corruption that made a record look like an old log record. To fix this, the log reader now explicitly reports when it encounters what looks like an old record. The recovery code treats it as a possible corruption, and uses the next sequence number in the WAL to determine if it should continue replaying the WAL.
This PR also fixes a couple of bugs that log file recycling exposed in the backup and checkpoint path.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12403
Test Plan:
1. Add new unit tests to verify behavior upon corruption
2. Re-enable disabled tests for verifying recycling behavior
Reviewed By: ajkr
Differential Revision: D54544824
Pulled By: anand1976
fbshipit-source-id: 12f5ce39bd6bc0d63b0bc6432dc4db510e0e802a
Summary:
With https://github.com/facebook/rocksdb/issues/12414 enabling `ReadOptions::pin_data`, this bug surfaced as a corrupted per key-value checksum during the crash test. `saved_key_.GetUserKey()` could be a pinned user key, so DBIter should not overwrite it.
In one case, the bug only surfaces when the iterator skips many keys of the same user key. To stress that code path, this PR also adds `max_sequential_skip_in_iterations` to the crash test.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12451
Test Plan:
- Set ReadOptions::pin_data to true, the bug can be reproed quickly with `./db_stress --persist_user_defined_timestamps=1 --user_timestamp_size=8 --writepercent=35 --delpercent=4 --delrangepercent=1 --iterpercent=20 --nooverwritepercent=1 --prefix_size=8 --prefixpercent=10 --readpercent=30 --memtable_protection_bytes_per_key=8 --block_protection_bytes_per_key=2 --clear_column_family_one_in=0`.
- Set max_sequential_skip_in_iterations to 1 for the other occurrence of the bug.
Reviewed By: jowlyzhang
Differential Revision: D55003766
Pulled By: cbi42
fbshipit-source-id: 23e1049129456684dafb028b6132b70e0afc07fb
Summary:
Since it has been causing a few crash test failures, I suspect it'll be easy to repro locally. Also fixed how its corruption message is printed so that the script does not crash when the output cannot be UTF-8 decoded.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12431
Reviewed By: hx235
Differential Revision: D54881023
Pulled By: cbi42
fbshipit-source-id: 47208a637cd69b30d2545154849405e37db62ed3
Summary:
**Context/Summary:**
We are doing a sweep of all public options, including but not limited to `Options`, `Read/WriteOptions`, `IngestExternalFileOptions`, cache options, etc., to find the uncovered ones and add them to the db crash test. The options included in this PR require minimal changes to the crash test other than adding the options themselves.
A bonus change: to surface new issues through improved coverage of stderr, we decided to fail/terminate the crash test when manual compactions (CompactFiles(), CompactRange()) hit meaningful errors. See https://github.com/facebook/rocksdb/pull/12414/files#diff-5c4ced6afb6a90e27fec18ab03b2cd89e8f99db87791b4ecc6fa2694284d50c0R2528-R2532, https://github.com/facebook/rocksdb/pull/12414/files#diff-5c4ced6afb6a90e27fec18ab03b2cd89e8f99db87791b4ecc6fa2694284d50c0R2330-R2336 for more.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12414
Test Plan:
- Run `python3 ./tools/db_crashtest.py --simple blackbox` for 10 minutes to ensure no trivial failure
- Run `python3 tools/db_crashtest.py --simple blackbox --compact_files_one_in=1 --compact_range_one_in=1 --read_fault_one_in=1 --write_fault_one_in=1 --interval=50` for a while to ensure the bonus change does not result in trivial crash/termination of stress test
Reviewed By: ajkr, jowlyzhang, cbi42
Differential Revision: D54691774
Pulled By: hx235
fbshipit-source-id: 50443dfb6aaabd8e24c79a2e42b68c6de877be88
Summary:
This PR updates `VersionEditHandlerPointInTime` to recover all or none of the updates in an AtomicGroup. This makes best-effort recovery properly handle atomic flushes during recovery, so the two features can now be enabled together.
The new logic requires that AtomicGroups do not contain column family additions or removals. AtomicGroups are currently written for atomic flush, which does not include such edits.
Column family additions or removals are recovered independently of AtomicGroups. The new logic needs to be aware of removal, though, so that a dropped CF does not prevent completion of an AtomicGroup recovery.
The new logic treats each AtomicGroup as if it contains updates for all existing column families, even though it is possible to create AtomicGroups that only affect a subset of column families. This simplifies the logic at the expense of recovering less data in certain edge case scenarios.
The usage of `MaybeCreateVersion()` is pretty tricky. The goal is to create a barrier at the start of an AtomicGroup such that all valid states up to that point will be applied to `versions_`. Here is a summary.
- `MaybeCreateVersion(..., false)` creates a `Version` on a negative edge trigger (transition from valid to invalid). It was previously called when applying each update. Now, it is only called when applying non-AtomicGroup updates.
- `MaybeCreateVersion(..., true)` creates a `Version` on a positive level trigger (valid state). It was previously called only at the end of iteration. Now, it is additionally called before processing an AtomicGroup.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12406
Reviewed By: jaykorean, cbi42
Differential Revision: D54494904
Pulled By: ajkr
fbshipit-source-id: 0114a9fe1d04b471d086dcab5978ea8a3a56ad52
Summary:
Some errors, like data races and heap-use-after-free, are only caught because the crash test reports them as errors by relying on stderr. So reverting to the original form until we come up with a more reliable way to error out.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12335
Reviewed By: cbi42
Differential Revision: D53534781
Pulled By: akankshamahajan15
fbshipit-source-id: b19aa560d1560ac2281f7bc04e13961ed751f178
Summary:
For the user defined timestamps in memtable only feature, some special handling for range deletion blocks are needed since both the key (start_key) and the value (end_key) of a range tombstone can contain user-defined timestamps. Handling for the key is taken care of in the same way as the other data blocks in the block based table. This PR adds the special handling needed for the value (end_key) part. This includes:
1) On the write path, when L0 SST files are first created from flush, user-defined timestamps are removed from the end key of a range tombstone. There are places where it's logically removed (replaced with a min timestamp) because there is still logic with the running comparator that expects a user key that contains a timestamp. In the block based builder, it is eventually physically removed before being persisted in a block.
2) On the read path, when range deletion block is being read, we artificially pad a min timestamp to the end key of a range tombstone in `BlockBasedTableReader`.
3) For the file boundary `FileMetaData.largest`, we artificially pad a max timestamp to it if it contains a range deletion sentinel. Any time a range deletion end_key is used to update file boundaries, it uses a max timestamp instead of the range tombstone's actual timestamp to mark it as an exclusive end. d69628e6ce/db/dbformat.h (L923-L935)
This max timestamp is removed when the in-memory `FileMetaData.largest` is persisted into the Manifest; we pad it back when it's read from the Manifest while handling the related `VersionEdit` in `VersionEditHandler`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12254
Test Plan: Added unit test and enabled this feature combination's stress test.
Reviewed By: cbi42
Differential Revision: D52965527
Pulled By: jowlyzhang
fbshipit-source-id: e8315f8a2c5268e2ae0f7aec8012c266b86df985
Summary:
While working on Meta's internal test triaging process, I found that `db_crashtest.py` was printing out `stdout` and `stderr` altogether. Adding an option to print `stderr` separately so that it's easy to extract only `stderr` from the test run.
`print_stderr_separately` is introduced as an optional parameter with default value `False` to keep the existing behavior as is (except for a few minor changes).
Minor changes to the existing behavior:
- We no longer print `stderr has error message:` and the `***` prefix on each line. We simply print `stderr:` before the `stderr` content if it is printed within stdout, and print the content as is.
- We no longer print `times error occurred in output is ...`, which doesn't appear to have any value
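A simplified sketch of the resulting behavior (hypothetical function; the real change is in db_crashtest.py's output handling):
```
import sys

def report_output(stdout_text, stderr_text, print_stderr_separately=False):
    # Always show the run's stdout.
    print(stdout_text)
    if print_stderr_separately:
        # Route stderr to our own stderr so callers can capture it with 2>.
        print(stderr_text, file=sys.stderr, end="")
    else:
        # Combined output, without the old "***" per-line prefix.
        print("stderr:")
        print(stderr_text, end="")
```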
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12301
Test Plan:
**Default Behavior (blackbox)**
Run printed everything as is
```
$> python3 tools/db_crashtest.py blackbox --simple --max_key=25000000 --write_buffer_size=4194304 2> /tmp/error.log
Running blackbox-crash-test with
interval_between_crash=120
total-duration=6000
...
Integrated BlobDB: blob files enabled 0, min blob size 0, blob file size 268435456, blob compression type NoCompression, blob GC enabled 0, cutoff 0.250000, force threshold 1.000000, blob compaction readahead size 0, blob file starting level 0
Integrated BlobDB: blob cache disabled
DB path: [/tmp/jewoongh/rocksdb_crashtest_blackboxwh7yxpec]
(Re-)verified 0 unique IDs
2024/01/29-09:16:30 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:16:35 Starting database operations
2024/01/29-09:16:35 Starting verification
Stress Test : 543.600 micros/op 8802 ops/sec
: Wrote 0.00 MB (0.27 MB/sec) (50% of 10 ops)
: Wrote 5 times
: Deleted 1 times
: Single deleted 0 times
: 4 read and 0 found the key
: Prefix scanned 0 times
: Iterator size sum is 0
: Iterated 0 times
: Deleted 0 key-ranges
: Range deletions covered 0 keys
: Got errors 0 times
: 0 CompactFiles() succeed
: 0 CompactFiles() did not succeed
stderr:
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
Verification failed :(
```
Nothing in stderr
```
$> cat /tmp/error.log
```
**Default Behavior (whitebox)**
Run printed everything as is
```
$> python3 tools/db_crashtest.py whitebox --simple --max_key=25000000 --write_buffer_size=4194304 2> /tmp/error.log
Running whitebox-crash-test with
total-duration=10000
...
(Re-)verified 571 unique IDs
2024/01/29-09:33:53 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:35:16 Starting database operations
2024/01/29-09:35:16 Starting verification
Stress Test : 97248.125 micros/op 10 ops/sec
: Wrote 0.00 MB (0.00 MB/sec) (12% of 8 ops)
: Wrote 1 times
: Deleted 0 times
: Single deleted 0 times
: 4 read and 1 found the key
: Prefix scanned 1 times
: Iterator size sum is 120868
: Iterated 4 times
: Deleted 0 key-ranges
: Range deletions covered 0 keys
: Got errors 0 times
: 0 CompactFiles() succeed
: 0 CompactFiles() did not succeed
stderr:
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
New cache capacity = 4865393
Verification failed :(
TEST FAILED. See kill option and exit code above!!!
```
Nothing in stderr
```
$> cat /tmp/error.log
```
**New option added (blackbox)**
```
$> python3 tools/db_crashtest.py blackbox --simple --max_key=25000000 --write_buffer_size=4194304 --print_stderr_separately 2> /tmp/error.log
Running blackbox-crash-test with
interval_between_crash=120
total-duration=6000
...
Integrated BlobDB: blob files enabled 0, min blob size 0, blob file size 268435456, blob compression type NoCompression, blob GC enabled 0, cutoff 0.250000, force threshold 1.000000, blob compaction readahead size 0, blob file starting level 0
Integrated BlobDB: blob cache disabled
DB path: [/tmp/jewoongh/rocksdb_crashtest_blackbox7ybna32z]
(Re-)verified 0 unique IDs
Compaction filter factory: DbStressCompactionFilterFactory
2024/01/29-09:05:39 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:05:46 Starting database operations
2024/01/29-09:05:46 Starting verification
Stress Test : 235.917 micros/op 16000 ops/sec
: Wrote 0.00 MB (0.16 MB/sec) (16% of 12 ops)
: Wrote 2 times
: Deleted 1 times
: Single deleted 0 times
: 9 read and 0 found the key
: Prefix scanned 0 times
: Iterator size sum is 0
: Iterated 0 times
: Deleted 0 key-ranges
: Range deletions covered 0 keys
: Got errors 0 times
: 0 CompactFiles() succeed
: 0 CompactFiles() did not succeed
```
stderr printed separately
```
$> cat /tmp/error.log
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
New cache capacity = 19461571
Verification failed :(
```
**New option added (whitebox)**
```
$> python3 tools/db_crashtest.py whitebox --simple --max_key=25000000 --write_buffer_size=4194304 --print_stderr_separately 2> /tmp/error.log
Running whitebox-crash-test with
total-duration=10000
...
Integrated BlobDB: blob files enabled 0, min blob size 0, blob file size 268435456, blob compression type NoCompression, blob GC enabled 0, cutoff 0.250000, force threshold 1.000000, blob compaction readahead size 0, blob file starting level 0
Integrated BlobDB: blob cache disabled
DB path: [/tmp/jewoongh/rocksdb_crashtest_whiteboxtwj0ihn6]
(Re-)verified 157 unique IDs
2024/01/29-09:39:59 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:40:16 Starting database operations
2024/01/29-09:40:16 Starting verification
Stress Test : 742.474 micros/op 11801 ops/sec
: Wrote 0.00 MB (0.27 MB/sec) (36% of 19 ops)
: Wrote 7 times
: Deleted 1 times
: Single deleted 0 times
: 8 read and 0 found the key
: Prefix scanned 0 times
: Iterator size sum is 0
: Iterated 4 times
: Deleted 0 key-ranges
: Range deletions covered 0 keys
: Got errors 0 times
: 0 CompactFiles() succeed
: 0 CompactFiles() did not succeed
TEST FAILED. See kill option and exit code above!!!
```
stderr printed separately
```
$> cat /tmp/error.log
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
Error : jewoongh injected test error This is not a real failure.
Error : jewoongh injected test error This is not a real failure.
New cache capacity = 4865393
Verification failed :(
```
Reviewed By: akankshamahajan15
Differential Revision: D53187491
Pulled By: jaykorean
fbshipit-source-id: 76f9100d08b96d014e41b7b88b206d69f0ae932b
Summary:
Right now crash_test also relies on std::errors to check for errors/failures, along with verification. However, that's not a reliable solution, and many internal services log benign errors/warnings, in which case our test script fails.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12265
Test Plan: Keep std::errors but print them out instead of failing, and monitor crash tests internally to see if there is any scenario that solely relies on std::error, in which case the stress tests can be improved.
Reviewed By: ajkr, cbi42
Differential Revision: D52967000
Pulled By: akankshamahajan15
fbshipit-source-id: 5328c8b69480c7946fe6a9c72f9ffeede70ac2ad
Summary:
Currently, we treat the long-running whitebox_crash_test as passing. However, we were not cleaning up after ourselves when we killed the running test for running too long, which often caused out-of-space errors in subsequent tests (e.g., blackbox_crash_test after whitebox_crash_test).
Unless we want to start treating these timeouts as failures and need the DB output for investigation now, we should properly clean up the tmp dir.
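The cleanup amounts to something like the following sketch (hypothetical names; the actual change is in db_crashtest.py's whitebox loop):
```
import shutil

def finish_whitebox_run(dbname, hit_timeout):
    # A timeout of the long-running whitebox test still counts as a pass,
    # but remove the DB directory so later tests don't run out of space.
    if hit_timeout:
        print("whitebox test timed out; treating as pass and cleaning up")
        shutil.rmtree(dbname, ignore_errors=True)
```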
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12248
Test Plan:
```
$> make crash_test -j
```
Reviewed By: ajkr
Differential Revision: D52885342
Pulled By: jaykorean
fbshipit-source-id: 7c1f2ca7cf03d0705bb14155ee44d5d7a411c132
Summary:
We test LockWAL() and UnlockWAL() by checking that the latest sequence number is not changed: 1a1f9f1660/db_stress_tool/db_stress_test_base.cc (L920-L937). With writeprepared transactions, the sequence number can be advanced in SwitchMemtable::WriteRecoverableState() when writing recoverable state: 1a1f9f1660/db/db_impl/db_impl_write.cc (L1560)
This PR disables LockWAL() tests for writeprepared transactions for now. We probably need to change how we test LockWAL() for writeprepared before re-enabling this test.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12221
Reviewed By: ajkr
Differential Revision: D52677076
Pulled By: cbi42
fbshipit-source-id: 27ee694878edf63e8f4ad52f769d4db401f511bc
Summary:
This feature combination is not fully working yet. Disable it so the stress tests have less noise.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12218
Reviewed By: cbi42
Differential Revision: D52643957
Pulled By: jowlyzhang
fbshipit-source-id: 8815a18a3b5814cad4f7ec41f3fb94869302081e
Summary:
**Context/Summary:**
Continued from https://github.com/facebook/rocksdb/pull/12127, we can randomly reduce the # of max keys to coerce more operations onto the same keys. My experimental run shows this surfaced more issues than https://github.com/facebook/rocksdb/pull/12127 alone.
I also randomly reduce the related parameters, write buffer size and target file base, to adapt to a randomly lower # of max keys. This creates 4 testing situations, 3 of which are new (see the sketch after the list below):
1. **high** # max key with **high** write buffer size and target file base (existing)
2. **high** # max key with **low** write buffer size and target file base (new, will go through some rehearsal testing to ensure we don't run out of space with many files)
3. **low** # max key with **high** write buffer size and target file base (new, keys will stay in memory longer)
4. **low** # max key with **low** write buffer size and target file base (new, experimental runs show it surfaced even more issues)
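A sketch of the kind of randomization described above (illustrative values and names only, not the actual crash test defaults):
```
import random

def pick_key_and_file_params():
    # The two choices are independent, giving the 4 situations above.
    max_key = random.choice([100_000, 25_000_000])
    small_files = random.choice([True, False])
    scale = 1 if small_files else 32
    return {
        "max_key": max_key,
        "write_buffer_size": scale * 1024 * 1024,
        "target_file_size_base": scale * 1024 * 1024,
    }

print(pick_key_and_file_params())
```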
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12148
Test Plan:
- [Ongoing] Rehearsal stress test
- Monitor production stress test
Reviewed By: jaykorean
Differential Revision: D52174980
Pulled By: hx235
fbshipit-source-id: bd5e11280826819ca9314c69bbbf05d481c6d105
Summary:
This PR adds initial stress testing for the user-defined timestamps in memtable only feature. Each flavor of the `*_ts` crash test gets a 1 in 3 chance to run with timestamps not persisted; this setting is initialized once and kept consistent across the following re-runs (see the sketch after the list below).
Besides disabling incompatible feature combinations to make the test run more stably, this initial stress test includes the following:
1) It currently only runs test methods that validate db state against expected state, not the ones that validate db state by comparing results from one API to another, such as `TestMultiGet` (compared with `Get`), similarly `TestMultiGetEntity`, and `TestIterate` (comparing the source iterator to a control iterator). With timestamps being removed, results from one API are not directly comparable to those from another as things stand. More test logic is needed to handle that; will do that in a follow-up.
2) Even when comparing db state to expected state, the db can sometimes receive `InvalidArgument` due to timestamps getting flushed and removed. Added some logic to handle that.
3) When timestamps are not persisted, we don't try to read with an older timestamp, since that makes it easier to get `InvalidArgument`, and this capability is not yet needed by our customer, so it's disabled for now.
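A sketch of how a 1-in-3 choice can be made once and then reused across re-runs (hypothetical names; the real logic lives in db_crashtest.py):
```
import random

_persist_udt = None

def persist_user_defined_timestamps():
    # Decide once per crash test invocation and keep the value stable
    # across the repeated db_stress runs: 2-in-3 persist, 1-in-3 do not.
    global _persist_udt
    if _persist_udt is None:
        _persist_udt = random.randint(0, 2) != 0
    return _persist_udt

print(persist_user_defined_timestamps())
```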
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12124
Test Plan: Running multiple flavors of this test in continuous runs for some time before check-in.
Reviewed By: ltamasi
Differential Revision: D51916267
Pulled By: jowlyzhang
fbshipit-source-id: 3f3eb5f9618d05d296062820e0ef5cb8edc7c2b2
Summary:
**Context/Summary:**
My experimental stress runs with more frequent "xxx_one_in" settings surfaced a couple of interesting bugs/issues with RocksDB or the crash test framework in the past. We now consider changing the default values so these are exercised more frequently in the production testing environment.
Increase the frequency by 2 orders of magnitude for most parameters, except for error-prone features, e.g., manual compaction and file ingestion (increased by 3 orders), and expensive features, e.g., checksum verification (increased by 1 order).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12127
Test Plan: Monitor CI to see if it surfaces more interesting bugs/issues. If not, we may consider intensifying even more.
Reviewed By: pdillinger
Differential Revision: D51954235
Pulled By: hx235
fbshipit-source-id: 92046cb7c52a37212f19ab7965b40f77b90b08b1
Summary:
This is a simple refactor for the crash test script to put shared logic for parsing stderr into a function. There is no functional change.
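Conceptually, the refactor moves the per-run stderr scan into one shared helper, roughly like this sketch (hypothetical name and patterns, not the actual db_crashtest.py code):
```
def get_stderr_errors(stderr_text):
    # Collect lines that look like real failures, skipping known-benign
    # warnings; both blackbox and whitebox runs can call this one helper.
    errors = []
    for line in stderr_text.splitlines():
        if line.startswith("WARNING"):
            continue
        if "error" in line.lower() or "fail" in line.lower():
            errors.append(line)
    return errors

print(get_stderr_errors(
    "WARNING: prefix_size is non-zero\nError: injected test error\n"))
```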
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12109
Test Plan: manually tested the script
Reviewed By: ajkr
Differential Revision: D51692172
Pulled By: jowlyzhang
fbshipit-source-id: d346d64e981d9c489c380ff6ce33296a224b5877
Summary:
I have finally tracked down and fixed a bug affecting AutoHCC that was causing CI crash test assertion failures in AutoHCC when using secondary cache, but I was only able to reproduce locally a couple of times, after very long runs/repetitions.
It turns out that the essential feature used by secondary cache to trigger the bug is Insert without keeping a handle, which is otherwise rarely used in RocksDB and not incorporated into cache_bench (also used for targeted correctness stress testing) until this change (new option `-blind_insert_percent`).
The problem was in copying some logic from FixedHCC that makes the entry "sharable" but unreferenced once populated, if no reference is to be saved. The problem in AutoHCC is that we can only add the entry to a chain after it is in the sharable state, and must be removed from the chain while in the "under (de)construction" state and before it is back in the "empty" state. Also, it is possible for Lookup to find entries that are not connected to any chain, by design for efficiency, and for Release to erase_if_last_ref. Therefore, we could have
* Thread 1 starts to Insert a cache entry without keeping ref, and pauses before adding to the chain.
* Thread 2 finds it with Lookup optimizations, and then does Release with `erase_if_last_ref=true`, causing it to trigger erasure of the entry. It successfully locks the home chain for the entry and purges any entries pending erasure. It is OK that this entry is not found on the chain, as another thread is allowed to remove it from the chain before we are able to (but after it is marked for (de)construction). And after the purge of the chain, the entry is marked empty.
* Thread 1 resumes in adding the slot (presumed entry) to the home chain for what was being inserted, but that now violates invariants and sets up a race or double-chain-reference as another thread could insert a new entry in the slot and try to insert into a different chain.
This is easily fixed by holding on to a reference until inserted onto the chain.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12046
Test Plan:
As I don't have a reliable local reproducer, I triggered 20 runs of internal CI on fbcode_blackbox_crash_test that were previously failing in AutoHCC with about 1/3 probability, and they all passed.
Also re-enabling AutoHCC in the crash test with this change. (Revert https://github.com/facebook/rocksdb/issues/12000)
Reviewed By: jowlyzhang
Differential Revision: D51016979
Pulled By: pdillinger
fbshipit-source-id: 3840fb829d65b97c779d8aed62a4a4a433aeff2b
Summary:
db_stress flag `verify_iterator_with_expected_state_one_in` is only enabled in the crash test if the --simple flag is set. This PR enables it for all supported crash tests by making it enabled by default. This adds coverage for the --txn and --enable_ts crash tests.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12040
Test Plan:
ran crash tests that disabled this flag before for a few hours
```
python3 ./tools/db_crashtest.py blackbox --verify_iterator_with_expected_state_one_in=1 --txn --txn_write_policy=[0,1,2]
python3 ./tools/db_crashtest.py blackbox --verify_iterator_with_expected_state_one_in=1 --enable_ts
```
Reviewed By: ajkr, hx235
Differential Revision: D50980001
Pulled By: cbi42
fbshipit-source-id: 3daf6b4c32bdddc5df057240068162aa1a907587
Summary:
- Right now in the blackbox test we don't exit on std::errors as we do in whitebox crash tests. As a result, those errors are swallowed.
It only errors out if the state is unexpected.
One example that was noticed in blackbox crash test -
```
stderr has error message:
***Error restoring historical expected values: Corruption: DB is older than any restorable expected state***
Running db_stress with pid=30454: /packages/rocksdb_db_stress_internal_repo/rocks_db_stress ....
```
- This diff also adds support to export files - the db_crashtest.py file - to be used by a different repo.
Reviewed By: ajkr
Differential Revision: D50564889
fbshipit-source-id: 7bafbbc6179dc79467ca2b680fe83afc7850616a
Summary:
... until I can reproduce and resolve assertion failures (mostly in PurgeImplLocked) seen in crash test.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12000
Test Plan: make blackbox_crash_test
Reviewed By: hx235
Differential Revision: D50565984
Pulled By: pdillinger
fbshipit-source-id: 5eea1638ff2683c41b4f65ee1ffc2398071911e7
Summary:
When `secondary_cache_uri` is non-empty and the `cache_type` is not a tiered cache, sanitize `compressed_secondary_cache_size` to 0.
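A sketch of the sanitization rule (hypothetical parameter handling and example values, not the actual implementation):
```
def sanitize_secondary_cache(params):
    # With an explicit secondary cache URI and a non-tiered primary cache,
    # there should be no compressed secondary cache carved out.
    cache_type = params.get("cache_type", "")
    if params.get("secondary_cache_uri") and "tiered" not in cache_type:
        params["compressed_secondary_cache_size"] = 0
    return params

print(sanitize_secondary_cache(
    {"secondary_cache_uri": "cache_uri", "cache_type": "lru_cache",
     "compressed_secondary_cache_size": 8 * 1024 * 1024}))
```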
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11967
Test Plan: Run crash test
Reviewed By: akankshamahajan15
Differential Revision: D50346157
Pulled By: anand1976
fbshipit-source-id: 57bcbad2ec81fa736f1539a0a41ed6854ded2077
Summary:
Thanks ltamasi and ajkr for the initial investigations into the test failure. Per those investigations, the following scenario is likely causing the test to fail.
1. Recovery is needed (could be any reason during crash test)
2. Trying to recover from the latest manifest fails (likely due to read error injection)
3. DB opens with recovery from the next manifest which is different from step 2.
4. The expected state is based on the manifest we tried and failed to recover from in step 2.
5. The two manifests used in steps 2 and 3 are confirmed to have differences in their LSM trees (thanks ltamasi again for the finding).
```
2023/10/05-11:24:18.942189 56341 [db/version_set.cc:6079] Trying to recover from manifest: /dev/shm/rocksdb_test/rocksdb_crashtest_blackbox/MANIFEST-007184
...
2023/10/05-11:24:18.978007 56341 [db/version_set.cc:6079] Trying to recover from manifest: /dev/shm/rocksdb_test/rocksdb_crashtest_blackbox/MANIFEST-007180
```
```
[ltamasi@devbig1024.prn1 /tmp/x]$ ldb manifest_dump --hex --path=MANIFEST-007184_renamed_ > 2
[ltamasi@devbig1024.prn1 /tmp/x]$ ldb manifest_dump --hex --path=MANIFEST-007180_renamed_ > 1
[ltamasi@devbig1024.prn1 /tmp/x]$ diff 1 2
--- 1 2023-10-09 10:29:16.966215207 -0700
+++ 2 2023-10-09 10:29:11.984241645 -0700
@@ -13,7 +13,7 @@
7174:3950254[1875617 .. 2203952]['000000000003415B000000000000012B000000000000007D' seq:1906214, type:1 .. '000000000003CA59000000000000012B000000000000005C' seq:2039838, type:1]
7175:88060[2074748 .. 2203892]['000000000003CA6300000000000000CF78787878787878' seq:2167539, type:2 .. '000000000003D08F000000000000012B0000000000000130' seq:2112478, type:0]
--- level 6 --- version# 1 ---
- 7057:3132633[0 .. 2046144]['0000000000000009000000000000000978' seq:0, type:1 .. '0000000000005F8B000000000000012B00000000000002AC' seq:0, type:1]
+ 7219:2135565[0 .. 2046144]['0000000000000009000000000000000978' seq:0, type:1 .. '0000000000005F8B000000000000012B00000000000002AC' seq:0, type:1]
7061:827724[0 .. 2046131]['0000000000005F95000000000000000778787878787878' seq:0, type:1 .. '000000000000784F000000000000012B0000000000000113' seq:0, type:1]
6763:1352[0 .. 0]['000000000000784F000000000000012B0000000000000129' seq:0, type:1 .. '000000000000784F000000000000012B0000000000000129' seq:0, type:1]
7173:4812291[0 .. 2203957]['000000000000784F000000000000012B0000000000000138' seq:0, type:1 .. '0000000000020FAE787878787878' seq:0, type:1]
@@ -77,4 +77,4 @@
--- level 61 --- version# 1 ---
--- level 62 --- version# 1 ---
--- level 63 --- version# 1 ---
-next_file_number 7182 last_sequence 2203963 prev_log_number 0 max_column_family 0 min_log_number_to_keep 7015
+next_file_number 7221 last_sequence 2203963 prev_log_number 0 max_column_family 0 min_log_number_to_keep 7015
```
We have two options to fix this. Either skip verification against expected state or disable read injection when BE recovery is enabled. I chose to skip verification against expected state per discussion. (See comments in this PR)
Please note that some linter changes were included in this PR.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11938
Test Plan:
```
TEST_TMPDIR=/dev/shm/rocksdb make crash_test_with_best_efforts_recovery
```
Reviewed By: ltamasi
Differential Revision: D50136341
Pulled By: jaykorean
fbshipit-source-id: ac7434d592aebc148bfc3a4fcaa34936f136b95c
Summary:
This PR depends on https://github.com/facebook/rocksdb/issues/11879 . Enable write fault injection for the basic whitebox, blackbox, and cf_consistency modes. For other test modes like multiops_txn, best_efforts_recovery etc., leave it disabled for now until we can do more testing.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11924
Reviewed By: ajkr
Differential Revision: D50178252
Pulled By: anand1976
fbshipit-source-id: 5794f81c14cded1eb28762b2de818dfff1c1a34c
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11906
The patch adds stress test coverage for the wide-column aware `FullMergeV3` API by implementing a new `DBStressWideMergeOperator`. This operator is similar to `PutOperator` / `PutOperatorV2` in the sense that its result is based on the last merge operand; however, the merge result can be either a plain value or a wide-column entity, depending on the value base encoded into the operand and the value of the `use_put_entity_one_in` stress test parameter. Following the same rule for merge results that we do for writes ensures that the queries issued by the validation logic receive the expected results. The new operator is used instead of `PutOperatorV2` whenever `use_put_entity_one_in` is positive. Note that the patch also makes it possible to set `use_put_entity_one_in` and `use_merge` (but not `use_full_merge_v1`) at the same time, giving `use_put_entity_one_in` precedence, so the stress test will use `PutEntity` for writes passing the `use_put_entity_one_in` check described above and `Merge` for any other writes.
Reviewed By: jaykorean
Differential Revision: D49760024
fbshipit-source-id: 3893602c3e7935381b484f4f5026f1983e3a04a9
Summary:
Crash tests are failing with the recent change to auto_readahead_size. Disable it in stress tests for now and re-enable it with a fix to clear the crash test failures.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11883
Reviewed By: pdillinger
Differential Revision: D49597854
Pulled By: akankshamahajan15
fbshipit-source-id: 0af8ca7414ee9b92f244ee0fb811579c3c052b41
Summary:
This PR contains two fixes:
1. Disable write fault injection since it caused several other kinds of internal stress test failures. I'll try to fix those separately before enabling it again.
2. Fix segfault like
```
https://github.com/facebook/rocksdb/issues/5 0x000000000083dc43 in rocksdb::port::Mutex::Lock (this=0x30) at internal_repo_rocksdb/repo/port/port_posix.cc:80
80 internal_repo_rocksdb/repo/port/port_posix.cc: No such file or directory.
https://github.com/facebook/rocksdb/issues/6 0x0000000000465142 in rocksdb::MutexLock::MutexLock (mu=0x30, this=<optimized out>) at internal_repo_rocksdb/repo/util/mutexlock.h:37
37 internal_repo_rocksdb/repo/util/mutexlock.h: No such file or directory.
https://github.com/facebook/rocksdb/issues/7 rocksdb::FaultInjectionTestFS::DisableWriteErrorInjection (this=0x0) at internal_repo_rocksdb/repo/utilities/fault_injection_fs.h:505
505 internal_repo_rocksdb/repo/utilities/fault_injection_fs.h: No such file or directory.
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11859
Test Plan: db_stress with no fault injection: `./db_stress --write_fault_one_in=0 --read_fault_one_in=0 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 --sync_fault_injection=0`
Reviewed By: jaykorean
Differential Revision: D49408247
Pulled By: cbi42
fbshipit-source-id: 0ca01f20e6e81bf52af77818b50d562ef7462165
Summary:
* db_crashtest.py may now set `write_fault_one_in` to 500 for the blackbox and whitebox simple tests.
* Error injection only applies to writing to SST files. A flush error will cause the DB to pause background operations and auto-resume. A compaction error will just cause it to be re-scheduled later.
* File ingestion and backup tests are updated to check whether the result status is due to an injected error.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11829
Test Plan:
a full round of whitebox simple and blackbox simple crash test
* `python3 ./tools/db_crashtest.py whitebox/blackbox --simple --write_fault_one_in=500`
Reviewed By: ajkr
Differential Revision: D49256962
Pulled By: cbi42
fbshipit-source-id: 68e0c9648d8e03bad39c7672b25d5500fc286d97
Summary:
An internal user wants to be able to dynamically switch between Bloom and Ribbon filters, without a custom FilterPolicy. Making `filter_policy` mutable would actually make issue https://github.com/facebook/rocksdb/issues/10079 worse, because it would be a race on a pointer field, not just on scalars.
As a reasonable compromise until that is fixed, I am enabling dynamic control over Bloom vs. Ribbon choice by making
RibbonFilterPolicy::bloom_before_level mutable, and doing that safely by using an atomic.
I've also slightly tweaked the interpretation of that field so that setting it to INT_MAX really means "always Bloom."
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11838
Test Plan: unit tests added/extended. crash test updated for SetOptions call and tested under TSAN with amplified probability (lower set_options_one_in).
Reviewed By: ajkr
Differential Revision: D49296284
Pulled By: pdillinger
fbshipit-source-id: e4251c077510df9a9c719876f482448c0d15402a
Summary:
- Fixed misspellings of "inject"
- Made user read errors retryable when `FLAGS_inject_error_severity == 1`
- Added compaction read errors when `FLAGS_read_fault_one_in > 0`. These are always retryable so that the DB will keep accepting writes
- Reenabled setting `compaction_readahead_size` in crash test. The reason for disabling it was to "keep the test clean", which is not a good enough reason to skip testing it
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11789
Test Plan:
With https://github.com/facebook/rocksdb/issues/11782 reverted, reproduced the bug:
- Build: `make -j56 db_stress`
- Command: `TEST_TMPDIR=/dev/shm python3 tools/db_crashtest.py blackbox --simple --write_buffer_size=524288 --target_file_size_base=524288 --max_bytes_for_level_base=2097152 --interval=10 --max_key=1000000`
- Output:
```
stderr has error message:
***put or merge error: Corruption: Compaction number of input keys does not match number of keys processed.***
```
Reviewed By: cbi42
Differential Revision: D48939994
Pulled By: ajkr
fbshipit-source-id: a1efb799efecdfd5d9cfd185e4a6321db8fccfbb
Summary:
Same as title
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11729
Test Plan: make crash_test -j32
Reviewed By: anand1976
Differential Revision: D48534820
Pulled By: akankshamahajan15
fbshipit-source-id: 3a2a28af98dfad164b82ddaaf9fddb94c53a652e
Summary:
In blackbox tests, the db_stress command always runs with a timeout. The timeout can fire during validation, leaving some of the keys unchecked. Since key validation is done in order, it is quite likely that keys toward the end of the set are never validated. This PR adds a final execution, without a timeout, to ensure validation is executed for all keys at least once.
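The idea is roughly the following (a hedged sketch of the blackbox loop's tail, with hypothetical names; the real loop is in db_crashtest.py):
```
import subprocess
import time

def run_blackbox(cmd, total_duration, interval):
    deadline = time.time() + total_duration
    while time.time() < deadline:
        # Each regular round is killed after `interval` seconds to
        # simulate a crash, even if validation is still in progress.
        child = subprocess.Popen(cmd)
        time.sleep(interval)
        child.kill()
        child.wait()
    # One final round with no kill timer, so per-key validation gets to
    # finish over the whole key space at least once.
    subprocess.run(cmd, check=True)
```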
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11592
Reviewed By: cbi42
Differential Revision: D48003998
Pulled By: hx235
fbshipit-source-id: 72543475a932f12cf0f57534b7e3b6e07e87080f
Summary:
For leveled compaction, RocksDB has a special kind of compaction with reason "kBottommostFiles" that compacts bottommost level files to clear data held by snapshots (more detail in https://github.com/facebook/rocksdb/issues/3009). Such compactions can happen soon after a relevant snapshot is released. For some use cases, a bottommost file may contain only a small number of keys that can be cleared, so compacting such a file has a high write amp. In addition, these bottommost files may be compacted in compactions with a reason other than "kBottommostFiles" if we wait for some time (so that enough data is ingested to trigger such a compaction). This PR introduces an option `bottommost_file_compaction_delay` to specify the delay of these bottommost level single file compactions.
* The main change is in `VersionStorageInfo::ComputeBottommostFilesMarkedForCompaction()`, where we only add a file to `bottommost_files_marked_for_compaction_` if the oldest_snapshot is larger than its non-zero largest_seqno **and** the file is old enough. Note that if a file is not old enough but its largest_seqno is less than oldest_snapshot, we exclude it from the calculation of `bottommost_files_mark_threshold_`. This makes the change simpler, but such a file's eligibility for compaction will only be checked the next time `ComputeBottommostFilesMarkedForCompaction()` is called. This happens when a new Version is created (compaction, flush, SetOptions()...), a new enough snapshot is released (`VersionStorageInfo::UpdateOldestSnapshot()`), or when a compaction is picked and the compaction score has to be re-calculated.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11701
Test Plan:
* Add two unit tests to test when bottommost_file_compaction_delay > 0.
* Ran crash test with the new option.
Reviewed By: jaykorean, ajkr
Differential Revision: D48331564
Pulled By: cbi42
fbshipit-source-id: c584f3dc5f6354fce3ed65f4c6366dc450b15ba8