Summary:
The following are risks associated with pointer-to-pointer reinterpret_cast:
* Can produce the "wrong result" (crash or memory corruption). IIRC, in theory this can happen for any up-cast or down-cast of a non-standard-layout type, though in practice it would only happen in multiple inheritance cases (where the base class pointer might point "inside" the derived object). We don't use multiple inheritance a lot, but we do use it.
* Can mask useful compiler errors upon code change, including converting between unrelated pointer types that you are expecting to be related, and converting between pointer and scalar types unintentionally.
I can only think of some obscure cases where static_cast could be troublesome when it compiles as a replacement:
* Going through `void*` could plausibly cause unnecessary or broken pointer arithmetic. Suppose we have `struct Derived : public Base1, public Base2`. If we have `Derived*` -> `void*` -> `Base2*` -> `Derived*` through reinterpret casts, this could plausibly work (though technically UB) as long as the `Base2*` is never dereferenced. Changing to static casts could introduce breaking pointer arithmetic (see the sketch after this list).
* Unnecessary (but safe) pointer arithmetic could arise in a case like `Derived*` -> `Base2*` -> `Derived*` where previously the intermediate `Base2*` might never have been dereferenced. This could potentially affect performance.
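A minimal sketch of the pointer arithmetic concern, with hypothetical types (not from the codebase): under multiple inheritance, `Base2` typically lives at a non-zero offset inside `Derived`, so `static_cast` adjusts the address while `reinterpret_cast` does not.
```
struct Base1 { int a; };
struct Base2 { int b; };
struct Derived : public Base1, public Base2 { int c; };

void Example(Derived* d) {
  Base2* adjusted = static_cast<Base2*>(d);         // d plus offset of Base2 subobject
  Base2* unadjusted = reinterpret_cast<Base2*>(d);  // same address as d, no arithmetic
  // On typical ABIs, adjusted != unadjusted, which is why blindly swapping
  // one cast for the other can change behavior.
  (void)adjusted;
  (void)unadjusted;
}
```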
With some light scripting, I tried replacing pointer-to-pointer reinterpret_casts with static_cast and kept the cases that still compile. Most occurrences of reinterpret_cast have successfully been changed (except for java/ and third-party/). 294 changed, 257 remain.
A couple of related interventions included here:
* Previously, Cache::Handle was not actually derived from by the implementations and was just used as a `void*` stand-in with reinterpret_cast. Now there is an inheritance relationship to allow static_cast. In theory, this could introduce pointer arithmetic (as described above) but is unlikely without multiple inheritance AND a non-empty Cache::Handle.
* Remove some unnecessary casts to void* as this is allowed to be implicit (for better or worse).
Most of the remaining reinterpret_casts are for converting to/from raw bytes of objects. We could consider better idioms for these patterns in follow-up work.
I wish there were a way to implement a template variant of static_cast that would only compile if no pointer arithmetic is generated, but as best I can tell, this is not possible. AFAIK the best you could do is a dynamic check that the void* conversion after the static cast is unchanged, as sketched below.
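For illustration, here is a sketch of that dynamic check; `checked_static_cast` is a hypothetical name, not something in RocksDB:
```
#include <cassert>

// Like static_cast for pointers, but asserts at runtime that the cast did
// not adjust the address, i.e. no pointer arithmetic was generated.
template <typename To, typename From>
To* checked_static_cast(From* from) {
  To* to = static_cast<To*>(from);
  assert(static_cast<void*>(to) == static_cast<void*>(from));
  return to;
}
```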
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12308
Test Plan: existing tests, CI
Reviewed By: ltamasi
Differential Revision: D53204947
Pulled By: pdillinger
fbshipit-source-id: 9de23e618263b0d5b9820f4e15966876888a16e2
Summary:
Introduce some different range classes `UserKeyRange` and `UserKeyRangePtr` to be used by internal implementation. The `Range` class is used in both public APIs like `DB::GetApproximateSizes`, `DB::GetApproximateMemTableStats`, `DB::GetPropertiesOfTablesInRange` etc and internal implementations like `ColumnFamilyData::RangesOverlapWithMemtables`, `VersionSet::GetPropertiesOfTablesInRange`.
These APIs have different expectations of what keys this range class contains. Public API users are supposed to populate the range with user keys without timestamps, in the same way that the point lookup and range scan APIs' key inputs only expect user keys without timestamps. The internal API implementations expect a user key whose format is compatible with the user comparator, i.e., a user key with the timestamp.
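A sketch of the two key formats, assuming a bytewise comparator with an 8-byte user-defined timestamp:
```
#include <string>

int main() {
  // Public APIs (Range/RangePtr) take the bare user key...
  std::string public_range_key = "foo";
  // ...while internal range checks (UserKeyRange/UserKeyRangePtr) need a
  // comparator-compatible key with the timestamp appended.
  std::string internal_range_key = "foo" + std::string(8, '\0');
  (void)public_range_key;
  (void)internal_range_key;
  return 0;
}
```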
This PR contains:
1) introducing counterpart range classes `UserKeyRange` and `UserKeyRangePtr` for internal implementations, while leaving the existing `Range` and `RangePtr` classes only for public APIs. Internal implementations are updated to use these new classes instead.
2) adding user-defined timestamp support for the `DB::GetPropertiesOfTablesInRange` and `DeleteFilesInRanges` APIs.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12071
Test Plan:
existing tests
Added tests for the `DB::GetPropertiesOfTablesInRange` and `DeleteFilesInRanges` APIs for when user-defined timestamps are enabled.
The change in external_file_ingestion_job doesn't have test coverage with user-defined timestamps enabled; one will be added in a follow-up PR that adds file ingestion support for UDT.
Reviewed By: ltamasi
Differential Revision: D53292608
Pulled By: jowlyzhang
fbshipit-source-id: 9a9279e23c640a6d8f8232636501a95aef7638b8
Summary:
# Summary
Following up on jowlyzhang's comment in https://github.com/facebook/rocksdb/issues/12283.
- Remove `ARG_TTL` from the help message, since it is not relevant to the `multi_get` command
- Treat NotFound status as a non-error case for both `Get` and `MultiGet` and update the unit test, `ldb_test.py`
- Print the key along with the value in the `multi_get` command
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12332
Test Plan:
**Unit Test**
```
$>python3 tools/ldb_test.py
...
Ran 25 tests in 17.447s
OK
```
**Manual Run**
```
$> ./ldb --db=/data/users/jewoongh/rocksdb_test/T173992396/rocksdb_crashtest_blackbox --hex multi_get 0x0000000000000009000000000000012B00000000000000D8 0x0000000000000009000000000000002678787878BEEF
0x0000000000000009000000000000012B00000000000000D8 ==> 0x47000000434241404F4E4D4C4B4A494857565554535251505F5E5D5C5B5A595867666564636261606F6E6D6C6B6A696877767574737271707F7E7D7C7B7A797807060504030201000F0E0D0C0B0A090817161514131211101F1E1D1C1B1A1918
Key not found: 0x0000000000000009000000000000002678787878BEEF
```
```
$> ./ldb --db=/data/users/jewoongh/rocksdb_test/T173992396/rocksdb_crashtest_blackbox --hex get 0x00000000000000090000000000
Key not found
```
Reviewed By: jowlyzhang
Differential Revision: D53450164
Pulled By: jaykorean
fbshipit-source-id: 9ccec78ad3695e65b1ed0c147c7cbac502a1bd48
Summary:
When using the RocksDB Java API, calling `db.compactRange(columnFamilyHandle, start, null)` from Java code means we hope to perform a range compaction on keys bigger than **start**.
We expect this to call the corresponding C++ code: `db->compactRange(columnFamilyHandle, &start, nullptr)`.
But in reality, what is called is
`db->compactRange(columnFamilyHandle, start, "")`.
The problem here is that `null` in Java is not converted to `nullptr`, but rather to `""`, which may produce unexpected results. A sketch of the difference follows.
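A minimal sketch of the semantic difference on the C++ side (not the actual JNI code; the default column family overload is used for brevity): a `nullptr` end means "through the last key", while an empty Slice `""` is an ordinary key that sorts before every non-empty key under the bytewise comparator.
```
#include "rocksdb/db.h"

void Sketch(rocksdb::DB* db) {
  rocksdb::Slice start("start");
  // Intended: compact everything from `start` to the end of the DB.
  rocksdb::Status intended =
      db->CompactRange(rocksdb::CompactRangeOptions(), &start, nullptr);
  // The bug: Java's `null` arrives as "", so this compacts [start, ""],
  // an empty range for any start greater than "".
  rocksdb::Slice empty("");
  rocksdb::Status actual =
      db->CompactRange(rocksdb::CompactRangeOptions(), &start, &empty);
}
```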
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12328
Reviewed By: jowlyzhang
Differential Revision: D53432749
Pulled By: cbi42
fbshipit-source-id: eeadd19d05667230568668946d2ef1d5b2568268
Summary:
info_log gets an error logged when wal_dir or a db_path/cf_path is missing. Under this condition, the directory is created later (in DBImpl::Recover -> Directories::SetDirectories) with no error status returned.
To avoid error spam in logs, change these to a descriptive "header" log entry.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12326
Test Plan: manual with DBBasicTest.DBCloseAllDirectoryFDs which exercises this code
Reviewed By: jowlyzhang
Differential Revision: D53374743
Pulled By: pdillinger
fbshipit-source-id: 32d1ce18809da13a25bdd6183d661f66a3b6a111
Summary:
The option was introduced in https://github.com/facebook/rocksdb/issues/10835 to allow disabling the new compaction behavior if it's not safe. The option is enabled by default, and there has not been a need to disable it, so it should be safe to remove now.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12323
Reviewed By: ajkr
Differential Revision: D53330336
Pulled By: cbi42
fbshipit-source-id: 36eef4664ac96b3a7ed627c48bd6610b0a7eafc5
Summary:
The option was introduced in https://github.com/facebook/rocksdb/issues/10655 to allow reverting to the old behavior. The option is enabled by default, and there has not been a need to disable it, so remove it for the 9.0 release. Also fixed and improved a few unit tests that depended on setting this option to false.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12325
Test Plan: existing tests.
Reviewed By: hx235
Differential Revision: D53369430
Pulled By: cbi42
fbshipit-source-id: 0ec2440ca8d88db7f7211c581542c7581bd4d3de
Summary:
I've always found this name difficult to read, because it sounds like it's for collecting int(eger)
table properties.
I'm fixing this now to set up for a change that I have stubbed out in the public API (table_properties.h):
a new adapter function `TablePropertiesCollector::AsInternal()` that allows RocksDB-provided
TablePropertiesCollectors (such as CompactOnDeletionCollector) to implement the easier-to-upgrade
internal interface while still (superficially) implementing the public interface. In addition to added flexibility,
this should be a performance improvement, as the adapter class UserKeyTablePropertiesCollector can be
avoided in cases where a RocksDB-provided collector is used (AsInternal() returns non-nullptr).
table_properties.h is the only file with changes that aren't simple find-replace renaming.
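A hedged sketch of the adapter idea, with `InternalCollector` standing in for the internal interface (real names and signatures differ):
```
class InternalCollector {
 public:
  virtual ~InternalCollector() = default;
  // ... internal-key collection interface ...
};

class PublicCollectorSketch {
 public:
  virtual ~PublicCollectorSketch() = default;
  // Default: plain user collectors return nullptr and get wrapped in the
  // user-key adapter.
  virtual InternalCollector* AsInternal() { return nullptr; }
};

class BuiltInCollectorSketch : public PublicCollectorSketch,
                               public InternalCollector {
 public:
  // Non-nullptr means "use me directly via the internal interface", so the
  // adapter can be skipped.
  InternalCollector* AsInternal() override { return this; }
};
```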
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12320
Test Plan: existing tests, CI
Reviewed By: ajkr
Differential Revision: D53336945
Pulled By: pdillinger
fbshipit-source-id: 02535bcb30bbfb00e29e8478af62e5dad50a63b8
Summary:
The RocksDB correctness testing has recently discovered a possible, but very unlikely, correctness issue with MultiGet. The issue happens when all of the below conditions are met -
1. Duplicate keys in a MultiGet batch
2. Key matches the last key in a non-zero, non-bottommost level file
3. Final value is not in the file (merge operand, not snapshot-visible, etc.)
4. Multiple entries exist for the key in the file spanning more than 1 data block. This can happen due to snapshots, which would force multiple versions of the key in the file, and they may spill over to another data block
5. Lookup attempt in the SST for the first of the duplicates fails with IO error on a data block (NOT the first data block, but the second or subsequent uncached block), but no errors for the other duplicates
6. Value or merge operand for the key is present in the very next level
The problem is, in FilePickerMultiGet, when looking up keys in a level we use FileIndexer and the overlapping file in the current level to determine the search bounds for that key in the file list in the next level. If the next level is empty, the search bounds are reset and we do a full binary search in the next non-empty level's LevelFilesBrief. However, under conditions 1 and 2 listed above, only the first of the duplicates has its next-level search bounds updated, and the remaining duplicates are skipped.
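A sketch of the batch shape in condition 1 (keys and DB state are illustrative; the full repro also needs conditions 2-6):
```
#include "rocksdb/db.h"

void Sketch(rocksdb::DB* db) {
  rocksdb::Slice keys[2] = {"k1", "k1"};  // duplicates of the same user key
  rocksdb::PinnableSlice values[2];
  rocksdb::Status statuses[2];
  db->MultiGet(rocksdb::ReadOptions(), db->DefaultColumnFamily(),
               /*num_keys=*/2, keys, values, statuses);
}
```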
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12295
Test Plan: Add unit tests that fail an assertion or return wrong result without the fix
Reviewed By: hx235
Differential Revision: D53187634
Pulled By: anand1976
fbshipit-source-id: a5eadf4fede9bbdec784cd993b15e3341436d1ea
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12324
We are trying to use RocksDB inside Hedwig. This was causing some builds to fail (D53033764), hence fixing the -Wsuggest-destructor-override warning.
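A minimal sketch of the fix pattern for this warning, with hypothetical types:
```
struct Base {
  virtual ~Base() = default;
};
struct Derived : public Base {
  ~Derived() override = default;  // silences -Wsuggest-destructor-override
};
```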
Reviewed By: jowlyzhang
Differential Revision: D53328538
fbshipit-source-id: d5b9865442de049b18f9ed086df5fa58fb8880d5
Summary:
sst_dump --command=check can now compare the number of keys in a file with num_entries in the table properties, and report corruption if there is a mismatch.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12322
Test Plan:
- new unit test for API `SstFileDumper::ReadSequential`
- ran sst_dump on a good and a bad file:
```
sst_dump --file=./32316112.sst
options.env is 0x7f68bfcb5000
Process ./32316112.sst
Sst file format: block-based
from [] to []
sst_dump --file=./32316115.sst
options.env is 0x7f6d0d2b5000
Process ./32316115.sst
Sst file format: block-based
from [] to []
./32316115.sst: Corruption: Table property has num_entries = 6050408 but scanning the table returns 6050406 records.
```
Reviewed By: jowlyzhang
Differential Revision: D53320481
Pulled By: cbi42
fbshipit-source-id: d84c996346a9575a5a2ea5f5fb09a9d3ee672cd6
Summary:
`check_flush_compaction_key_order` option was introduced for the key order checking online validation. It gave users the ability to disable the validation without a downgrade, in case the validation caused inefficiencies or false positives. Over time this validation has shown to be cheap and correct, so the option to disable it can now be removed.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12311
Reviewed By: cbi42
Differential Revision: D53233379
Pulled By: ajkr
fbshipit-source-id: 1384361104021d6e3e580dce2ec123f9f99ce637
Summary:
We should be consistent in how we check key range overlap in memtables and in SST files. All the SST file key range overlap checks compare the user key without timestamp, for example:
377eee77f8/db/version_set.cc (L129-L130)
But this key range overlap check for memtables compares the whole user key. Currently it happens to achieve the same effect because this function is only called by `ExternalSstFileIngestionJob` and `DBImpl::CompactRange`, which take a user key without timestamp as the range end and pad a max or min timestamp to it depending on whether the end is exclusive. So using `Comparator::Compare` here works too, but we should update it to `Comparator::CompareWithoutTimestamp` to be consistent with all the other file key range overlap check functions.
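A hedged sketch of the consistency point (the helper name is illustrative, not the RocksDB function):
```
#include "rocksdb/comparator.h"
#include "rocksdb/slice.h"

// With user-defined timestamps enabled, overlap checks should compare user
// keys with the timestamp stripped, matching the SST file overlap checks.
bool EndsBeforeFile(const rocksdb::Comparator* ucmp,
                    const rocksdb::Slice& range_end,
                    const rocksdb::Slice& file_smallest_user_key) {
  return ucmp->CompareWithoutTimestamp(range_end, file_smallest_user_key) < 0;
}
```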
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12315
Test Plan: existing tests
Reviewed By: ltamasi
Differential Revision: D53273456
Pulled By: jowlyzhang
fbshipit-source-id: c094ae1f0c195d52542124c4fb03fdca14241e85
Summary:
`replayer` could be `nullptr` if `!s.ok()` from an earlier failure. Also consider status returned from `record->Accept()`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12314
Test Plan: blackbox_crash_test run
Reviewed By: hx235
Differential Revision: D53241506
Pulled By: pdillinger
fbshipit-source-id: fd330417c23391ca819c3ee0f69e4156d81934dc
Summary:
To stop spamming our warning logs with normal behavior.
Also fix comment on `DisableFileDeletions()`.
In response to https://github.com/facebook/rocksdb/issues/12001 I've indicated my objection to granting legitimacy to force=true, but I'm not addressing that here and now. In short, the user shouldn't be asked to think about whether they want to use the *wrong* behavior. ;)
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12310
Test Plan: existing tests
Reviewed By: jowlyzhang
Differential Revision: D53233117
Pulled By: pdillinger
fbshipit-source-id: 5d2aedb76b02b30f8a5fa5b436fc57fde5d40d6e
Summary:
As titled. This changes public API behavior, and subclasses of `WritableFile` and `FSWritableFile` need to explicitly provide an implementation for the `GetFileSize` method after this change.
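A hedged sketch of what a subclass must now provide (class name hypothetical; the other required members are omitted, so this fragment is illustrative only):
```
#include "rocksdb/file_system.h"

class TrackedWritableFile : public rocksdb::FSWritableFile {
 public:
  // Previously this had a default implementation; it must now be overridden.
  uint64_t GetFileSize(const rocksdb::IOOptions& /*options*/,
                       rocksdb::IODebugContext* /*dbg*/) override {
    return size_;
  }
  // ... Append/Close/Flush/Sync etc. omitted ...
 private:
  uint64_t size_ = 0;
};
```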
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12303
Reviewed By: ajkr
Differential Revision: D53205769
Pulled By: jowlyzhang
fbshipit-source-id: 2e613ca3650302913821b33159b742bdf1d24bc7
Summary:
RocksDB self throttles per-DB compaction parallelism until it detects compaction pressure. This PR adds pressure detection based on the number of files marked for compaction.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12306
Reviewed By: cbi42
Differential Revision: D53200559
Pulled By: ajkr
fbshipit-source-id: 63402ee336881a4539204d255960f04338ab7a0e
Summary:
and also fix comments/labels on some MacOS CI jobs. Motivated by a crash test failure that was missing a definitive indicator of the origin of the status:
```
file ingestion error: Operation failed. Try again.:
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12307
Test Plan: just cosmetic changes. These statuses should not arise frequently enough to be a performance issue (copying messages).
Reviewed By: jaykorean
Differential Revision: D53199529
Pulled By: pdillinger
fbshipit-source-id: ad83daaa5d80f75c9f81158e90fb6d9ecca33fe3
Summary:
As titled, the replacement tickers have been introduced in https://github.com/facebook/rocksdb/issues/11509 and in use since release 8.4. This PR completely removes the misspelled ones.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12302
Test Plan: CI tests
Reviewed By: jaykorean
Differential Revision: D53196935
Pulled By: jowlyzhang
fbshipit-source-id: 9c9d0d321247690db5edfdc52b4fecb2f1218979
Summary:
Provide support for FSBuffer for point lookups.
It also adds support for compaction and scan reads that go through BlockFetcher when readahead/prefetching is not enabled.
Some of the compaction/scan reads go through FilePrefetchBuffer and some through BlockFetcher. This PR adds support to use the underlying file system's scratch buffer for reads that go through BlockFetcher; for FilePrefetchBuffer reads, the design is too complicated to support this feature.
Design - In order to use the underlying FileSystem-provided scratch for reads, it uses MultiRead with 1 request instead of the Read API, which would otherwise have required an API change.
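A hedged sketch of that design point (error handling omitted; whether `fs_scratch` is populated depends on the FileSystem):
```
#include "rocksdb/file_system.h"

// Issue a MultiRead with a single request so the file system can supply its
// own scratch buffer, which the plain Read API cannot do without a
// signature change.
rocksdb::IOStatus ReadViaMultiRead(rocksdb::FSRandomAccessFile* file,
                                   uint64_t offset, size_t len) {
  rocksdb::FSReadRequest req;
  req.offset = offset;
  req.len = len;
  req.scratch = nullptr;  // let the FS allocate and return req.fs_scratch
  return file->MultiRead(&req, /*num_reqs=*/1, rocksdb::IOOptions(),
                         /*dbg=*/nullptr);
}
```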
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12266
Test Plan: Stress test using underlying file system scratch buffer internally.
Reviewed By: anand1976
Differential Revision: D53019089
Pulled By: akankshamahajan15
fbshipit-source-id: 4fe3d090d77363320e4b67186fd4d51c005c0961
Summary:
Moving some of the messages that we print to `stderr` over to `stdout`, to make `stderr` more strictly related to errors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12304
Test Plan:
```
$> python3 tools/db_crashtest.py blackbox --simple --max_key=25000000 --write_buffer_size=4194304
```
**Before**
```
...
stderr:
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
Verification failed :(
```
**After**
No longer seeing the `WARNING: prefix_size is non-zero but memtablerep != prefix_hash` message in stderr, but still appears in stdout.
```
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Integrated BlobDB: blob files enabled 0, min blob size 0, blob file size 268435456, blob compression type NoCompression, blob GC enabled 0, cutoff 0.250000, force threshold 1.000000, blob compaction readahead size 0, blob file starting level 0
Integrated BlobDB: blob cache disabled
DB path: [/tmp/jewoongh/rocksdb_crashtest_blackboxzztp281q]
(Re-)verified 0 unique IDs
2024/01/29-11:57:51 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-11:57:58 Starting database operations
2024/01/29-11:57:58 Starting verification
Stress Test : 245.167 micros/op 10221 ops/sec
: Wrote 0.00 MB (0.10 MB/sec) (16% of 6 ops)
: Wrote 1 times
: Deleted 0 times
: Single deleted 0 times
: 4 read and 0 found the key
: Prefix scanned 0 times
: Iterator size sum is 0
: Iterated 3 times
: Deleted 0 key-ranges
: Range deletions covered 0 keys
: Got errors 0 times
: 0 CompactFiles() succeed
: 0 CompactFiles() did not succeed
stderr:
Error : jewoongh injected test error This is not a real failure.
Error : jewoongh injected test error This is not a real failure.
Error : jewoongh injected test error This is not a real failure.
Verification failed :(
```
Reviewed By: akankshamahajan15
Differential Revision: D53193587
Pulled By: jaykorean
fbshipit-source-id: 40d59f4c993c5ce043c571a207ccc9b74a0180c6
Summary:
For the user-defined timestamps in memtable only feature, some special handling for range deletion blocks is needed since both the key (start_key) and the value (end_key) of a range tombstone can contain user-defined timestamps. Handling for the key is taken care of in the same way as for the other data blocks in the block based table. This PR adds the special handling needed for the value (end_key) part. This includes:
1) On the write path, when L0 SST files are first created from flush, user-defined timestamps are removed from the end key of a range tombstone. There are places where the timestamp is logically removed (replaced with a min timestamp) because there is still logic involving the running comparator that expects a user key containing a timestamp. In the block based builder, it is eventually physically removed before being persisted in a block.
2) On the read path, when a range deletion block is being read, we artificially pad a min timestamp to the end key of a range tombstone in `BlockBasedTableReader`.
3) For the file boundary `FileMetaData.largest`, we artificially pad a max timestamp to it if it contains a range deletion sentinel. Any time a range deletion end_key is used to update file boundaries, a max timestamp is used instead of the range tombstone's actual timestamp to mark it as an exclusive end. d69628e6ce/db/dbformat.h (L923-L935)
This max timestamp is removed when the in-memory `FileMetaData.largest` is persisted into the Manifest, and padded back when it's read from the Manifest while handling the related `VersionEdit` in `VersionEditHandler`. A sketch of the timestamp padding idea follows.
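A hedged sketch of the padding idea (the helper name is illustrative; assumes a bytewise-ordered, fixed-size timestamp where all 0x00 bytes is the min timestamp and all 0xff bytes is the max):
```
#include <string>
#include "rocksdb/slice.h"

std::string PadTimestamp(const rocksdb::Slice& user_key_without_ts,
                         size_t ts_sz, bool use_max) {
  std::string out(user_key_without_ts.data(), user_key_without_ts.size());
  out.append(ts_sz, use_max ? '\xff' : '\x00');  // max for exclusive ends
  return out;
}
```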
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12254
Test Plan: Added unit test and enabled this feature combination's stress test.
Reviewed By: cbi42
Differential Revision: D52965527
Pulled By: jowlyzhang
fbshipit-source-id: e8315f8a2c5268e2ae0f7aec8012c266b86df985
Summary:
While working on Meta's internal test triaging process, I found that `db_crashtest.py` was printing out `stdout` and `stderr` together. Adding an option to print `stderr` separately so that it's easy to extract only `stderr` from the test run.
`print_stderr_separately` is introduced as an optional parameter with a default value of `False` to keep the existing behavior as is (except for a few minor changes).
Minor changes to the existing behavior:
- We no longer print the `stderr has error message:` and `***` prefixes on each line. If stderr is printed within stdout, we simply print `stderr:` before it and then print `stderr` as is.
- We no longer print `times error occurred in output is ...`, which doesn't appear to have any value.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12301
Test Plan:
**Default Behavior (blackbox)**
Run printed everything as is
```
$> python3 tools/db_crashtest.py blackbox --simple --max_key=25000000 --write_buffer_size=4194304 2> /tmp/error.log
Running blackbox-crash-test with
interval_between_crash=120
total-duration=6000
...
Integrated BlobDB: blob files enabled 0, min blob size 0, blob file size 268435456, blob compression type NoCompression, blob GC enabled 0, cutoff 0.250000, force threshold 1.000000, blob compaction readahead size 0, blob file starting level 0
Integrated BlobDB: blob cache disabled
DB path: [/tmp/jewoongh/rocksdb_crashtest_blackboxwh7yxpec]
(Re-)verified 0 unique IDs
2024/01/29-09:16:30 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:16:35 Starting database operations
2024/01/29-09:16:35 Starting verification
Stress Test : 543.600 micros/op 8802 ops/sec
: Wrote 0.00 MB (0.27 MB/sec) (50% of 10 ops)
: Wrote 5 times
: Deleted 1 times
: Single deleted 0 times
: 4 read and 0 found the key
: Prefix scanned 0 times
: Iterator size sum is 0
: Iterated 0 times
: Deleted 0 key-ranges
: Range deletions covered 0 keys
: Got errors 0 times
: 0 CompactFiles() succeed
: 0 CompactFiles() did not succeed
stderr:
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
Verification failed :(
```
Nothing in stderr
```
$> cat /tmp/error.log
```
**Default Behavior (whitebox)**
Run printed everything as is
```
$> python3 tools/db_crashtest.py whitebox --simple --max_key=25000000 --write_buffer_size=4194304 2> /tmp/error.log
Running whitebox-crash-test with
total-duration=10000
...
(Re-)verified 571 unique IDs
2024/01/29-09:33:53 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:35:16 Starting database operations
2024/01/29-09:35:16 Starting verification
Stress Test : 97248.125 micros/op 10 ops/sec
: Wrote 0.00 MB (0.00 MB/sec) (12% of 8 ops)
: Wrote 1 times
: Deleted 0 times
: Single deleted 0 times
: 4 read and 1 found the key
: Prefix scanned 1 times
: Iterator size sum is 120868
: Iterated 4 times
: Deleted 0 key-ranges
: Range deletions covered 0 keys
: Got errors 0 times
: 0 CompactFiles() succeed
: 0 CompactFiles() did not succeed
stderr:
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
New cache capacity = 4865393
Verification failed :(
TEST FAILED. See kill option and exit code above!!!
```
Nothing in stderr
```
$> cat /tmp/error.log
```
**New option added (blackbox)**
```
$> python3 tools/db_crashtest.py blackbox --simple --max_key=25000000 --write_buffer_size=4194304 --print_stderr_separately 2> /tmp/error.log
Running blackbox-crash-test with
interval_between_crash=120
total-duration=6000
...
Integrated BlobDB: blob files enabled 0, min blob size 0, blob file size 268435456, blob compression type NoCompression, blob GC enabled 0, cutoff 0.250000, force threshold 1.000000, blob compaction readahead size 0, blob file starting level 0
Integrated BlobDB: blob cache disabled
DB path: [/tmp/jewoongh/rocksdb_crashtest_blackbox7ybna32z]
(Re-)verified 0 unique IDs
Compaction filter factory: DbStressCompactionFilterFactory
2024/01/29-09:05:39 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:05:46 Starting database operations
2024/01/29-09:05:46 Starting verification
Stress Test : 235.917 micros/op 16000 ops/sec
: Wrote 0.00 MB (0.16 MB/sec) (16% of 12 ops)
: Wrote 2 times
: Deleted 1 times
: Single deleted 0 times
: 9 read and 0 found the key
: Prefix scanned 0 times
: Iterator size sum is 0
: Iterated 0 times
: Deleted 0 key-ranges
: Range deletions covered 0 keys
: Got errors 0 times
: 0 CompactFiles() succeed
: 0 CompactFiles() did not succeed
```
stderr printed separately
```
$> cat /tmp/error.log
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
New cache capacity = 19461571
Verification failed :(
```
**New option added (whitebox)**
```
$> python3 tools/db_crashtest.py whitebox --simple --max_key=25000000 --write_buffer_size=4194304 --print_stderr_separately 2> /tmp/error.log
Running whitebox-crash-test with
total-duration=10000
...
Integrated BlobDB: blob files enabled 0, min blob size 0, blob file size 268435456, blob compression type NoCompression, blob GC enabled 0, cutoff 0.250000, force threshold 1.000000, blob compaction readahead size 0, blob file starting level 0
Integrated BlobDB: blob cache disabled
DB path: [/tmp/jewoongh/rocksdb_crashtest_whiteboxtwj0ihn6]
(Re-)verified 157 unique IDs
2024/01/29-09:39:59 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:40:16 Starting database operations
2024/01/29-09:40:16 Starting verification
Stress Test : 742.474 micros/op 11801 ops/sec
: Wrote 0.00 MB (0.27 MB/sec) (36% of 19 ops)
: Wrote 7 times
: Deleted 1 times
: Single deleted 0 times
: 8 read and 0 found the key
: Prefix scanned 0 times
: Iterator size sum is 0
: Iterated 4 times
: Deleted 0 key-ranges
: Range deletions covered 0 keys
: Got errors 0 times
: 0 CompactFiles() succeed
: 0 CompactFiles() did not succeed
TEST FAILED. See kill option and exit code above!!!
```
stderr printed separately
```
$> cat /tmp/error.log
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
Error : jewoongh injected test error This is not a real failure.
Error : jewoongh injected test error This is not a real failure.
New cache capacity = 4865393
Verification failed :(
```
Reviewed By: akankshamahajan15
Differential Revision: D53187491
Pulled By: jaykorean
fbshipit-source-id: 76f9100d08b96d014e41b7b88b206d69f0ae932b
Summary:
In C++, `extern` is redundant in a number of cases:
* "Global" function declarations and definitions
* "Global" variable definitions when already declared `extern`
For consistency and simplicity, I've removed these in code that *we own*. In a couple of cases, I removed obsolete declarations, and for MagicNumber constants, I have consolidated the declarations into a header file (format.h)
as standard best practice would prescribe.
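A sketch of the redundancy in question (names hypothetical):
```
// Functions at namespace scope have external linkage by default:
extern void Foo();  // `extern` is redundant here
void Foo();         // equivalent declaration

// For const variables, `extern` matters on the declaration (e.g. in a
// header), but repeating it on the definition is redundant once declared:
extern const unsigned long long kExampleMagicNumber;       // declaration
const unsigned long long kExampleMagicNumber = 0x1234ull;  // definition
```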
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12300
Test Plan: no functional changes, CI
Reviewed By: ajkr
Differential Revision: D53148629
Pulled By: pdillinger
fbshipit-source-id: fb8d927959892e03af09b0c0d542b0a3b38fd886
Summary:
Right now crash_test relies on std::errors too, to check for errors/failures along with verification. However, that's not a reliable solution, and many internal services log benign errors/warnings, in which case our test script fails.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12265
Test Plan: Keep std::errors but print them out instead of failing, and monitor crash tests internally to see if there is any scenario which relies solely on std::error, in which case the stress tests can be improved.
Reviewed By: ajkr, cbi42
Differential Revision: D52967000
Pulled By: akankshamahajan15
fbshipit-source-id: 5328c8b69480c7946fe6a9c72f9ffeede70ac2ad
Summary:
When RocksDB is opened with column family descriptors, the default column family must be set properly. If it is not, the flush operation will fail.
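A hedged sketch of opening with explicit column family descriptors (path and options illustrative): the default column family must be included and set up properly, otherwise later operations such as flush can fail.
```
#include <vector>
#include "rocksdb/db.h"

rocksdb::Status OpenWithCfs(
    rocksdb::DB** db, std::vector<rocksdb::ColumnFamilyHandle*>* handles) {
  std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
  // The default column family must always be part of the descriptor list.
  cfs.emplace_back(rocksdb::kDefaultColumnFamilyName,
                   rocksdb::ColumnFamilyOptions());
  cfs.emplace_back("cf1", rocksdb::ColumnFamilyOptions());
  rocksdb::DBOptions db_opts;
  db_opts.create_if_missing = true;
  db_opts.create_missing_column_families = true;
  return rocksdb::DB::Open(db_opts, "/tmp/example_db", cfs, handles, db);
}
```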
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12167
Reviewed By: ajkr
Differential Revision: D53104007
Pulled By: cbi42
fbshipit-source-id: dffa8e34a4b2a438553ee4ea308f3fa2e22e46f7
Summary:
... to include the actual numbers of processed and expected records, and the file numbers of the input files. The purpose is to be able to find the offending files even when the relevant LOG file is gone.
Another change is to check the record count even when `compaction_verify_record_count` is false, and log a warning message without setting corruption status if there is a mismatch. This is consistent with how we check the record count for flush.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12297
Test Plan:
print the status message in `DBCompactionTest.VerifyRecordCount`
```
before
Corruption: Compaction number of input keys does not match number of keys processed.
after
Compaction number of input keys does not match number of keys processed. Expected 20 but processed 10. Compaction summary: Base version 4 Base level 0, inputs: [11(2156B) 9(2156B)]
```
Reviewed By: ajkr
Differential Revision: D53110130
Pulled By: cbi42
fbshipit-source-id: 6325cbfb8f71f25ce37f23f8277ebe9264863c3b
Summary:
**Context/Summary:**
The rate_limiter_priority passed to SequentialFileReader is now passed down to underlying file system. This allows the priority associated with backup/restore SST reads to be exposed to FS.
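A hedged sketch of the pass-through (the priority value is illustrative): the reader sets `rate_limiter_priority` on `IOOptions` so the underlying file system can see the priority of backup/restore SST reads.
```
#include "rocksdb/env.h"
#include "rocksdb/file_system.h"

rocksdb::IOOptions BackupReadOptions() {
  rocksdb::IOOptions io_opts;
  io_opts.rate_limiter_priority = rocksdb::Env::IO_LOW;  // e.g. backup reads
  return io_opts;
}
```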
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12296
Test Plan: - Modified existing UT
Reviewed By: pdillinger
Differential Revision: D53100368
Pulled By: hx235
fbshipit-source-id: b4a28917efbb1b0d16f9d1c2b38769bffcff0f34
Summary:
https://github.com/facebook/rocksdb/issues/12267 apparently introduced a data race in test code where a background read of estimated_compaction_needed_bytes while holding the DB mutex could race with a foreground write done for testing purposes. This change adds the DB mutex to those writes.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12294
Test Plan: 1000 TSAN runs of test (massively fails before change, passes after)
Reviewed By: ajkr
Differential Revision: D53095483
Pulled By: pdillinger
fbshipit-source-id: 13fcb383ebad313dabe39eb8f9085c34d370b54a
Summary:
**Context/Summary:**
We recently found out that some code paths in flush and compaction aren't rate-limited when they should be.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12290
Test Plan: existing UT
Reviewed By: anand1976
Differential Revision: D53066103
Pulled By: hx235
fbshipit-source-id: 9dc4cab5f841230d18e5504dc480ac523e9d3950
Summary:
After https://github.com/facebook/rocksdb/issues/12253 this function has crashed in the crash test, in its call to `std::copy`. I haven't reproduced the crash directly, but `std::copy` probably has undefined behavior if the starting iterator is after the ending iterator, which was possible. I've fixed the logic to deal with that case and added an assertion to check that precondition of `std::copy` (which apparently can go unchecked by `std::copy` itself even with UBSAN+ASAN).
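A minimal illustration of the `std::copy` precondition (names hypothetical): the range [first, last) must be valid, i.e. first <= last, otherwise the behavior is undefined and may not be caught by sanitizers.
```
#include <algorithm>
#include <cassert>
#include <vector>

void CopyRange(const std::vector<int>& src, size_t begin, size_t end,
               std::vector<int>* dst) {
  assert(begin <= end && end <= src.size());  // check before std::copy
  dst->resize(end - begin);
  std::copy(src.begin() + begin, src.begin() + end, dst->begin());
}
```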
Also added some unit tests etc. that were unfinished for https://github.com/facebook/rocksdb/issues/12253, and slightly tweaked SeqnoToTimeMapping::EnforceMaxTimeSpan's handling of the zero time span case.
This is intended for patching 8.11.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12293
Test Plan: tests added. Will trigger ~20 runs of the crash test job that saw the crash. https://fburl.com/ci/5iiizvfa
Reviewed By: jowlyzhang
Differential Revision: D53090422
Pulled By: pdillinger
fbshipit-source-id: 69d60b1847d9c7e4ae62b153011c2040405db461
Summary:
The test has been failing with
```
[ RUN ] DBCompactionTest.BottomPriCompactionCountsTowardConcurrencyLimit
db/db_compaction_test.cc:9661: Failure
Expected equality of these values:
0u
Which is: 0
env_->GetThreadPoolQueueLen(Env::Priority::LOW)
Which is: 1
```
This can happen when the thread pool queue length is checked before `test::SleepingBackgroundTask::DoSleepTask` is scheduled.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12289
Reviewed By: ajkr
Differential Revision: D53064300
Pulled By: cbi42
fbshipit-source-id: 9ed1b714243880f82bd1cc1584b402ac9cf57507
Summary:
While ingesting multiple external files with key range overlap, the current flow goes through the LSM tree to search for a target level and later discards that result by defaulting back to L0. This PR improves this by just skipping the search altogether.
The other change is to remove defaulting to L0 for the combination of universal compaction + force global sequence number, which was initially added to meet a pre-https://github.com/facebook/rocksdb/issues/7421 invariant.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12284
Test Plan:
Added unit test:
./external_sst_file_test --gtest_filter="*IngestFileWithGlobalSeqnoAssignedUniversal*"
Reviewed By: ajkr
Differential Revision: D53072238
Pulled By: jowlyzhang
fbshipit-source-id: 30943e2e284a7f23b495c0ea4c80cb166a34a8ac
Summary:
The current implementation of the ldb_cmd tool involves commenting out the user-passed column_family_descriptors, resulting in the tool consistently constructing its column_family_descriptors from the pre-existing OPTIONS file.
The proposed fix prioritizes user-passed column family descriptors, ensuring they take precedence over those specified in the OPTIONS file. This modification enhances the tool's adaptability and responsiveness to user configurations.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12261
Reviewed By: cbi42
Differential Revision: D52965877
Pulled By: ajkr
fbshipit-source-id: 334a83a8e1004c271b19e7ca09381a0e7cf87b03
Summary:
While investigating test failures due to the inconsistency between `Get()` and `MultiGet()`, I realized that LDB currently doesn't support `MultiGet()`. This PR introduces the `MultiGet()` support in LDB. Tested the command manually. Unit test will follow in a separate PR.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12283
Test Plan:
When key not found
```
$> ./ldb --db=/data/users/jewoongh/rocksdb_test/T173992396/rocksdb_crashtest_blackbox --hex multi_get 0x0000000000000009000000000000012B00000000000002AB
Status for key 0x0000000000000009000000000000012B00000000000002AB: NotFound:
```
Compare the same key with get
```
$> ./ldb --db=/data/users/jewoongh/rocksdb_test/T173992396/rocksdb_crashtest_blackbox --hex get 0x0000000000000009000000000000012B00000000000002AB
Failed: Get failed: NotFound:
```
Multiple keys not found
```
$> ./ldb --db=/data/users/jewoongh/rocksdb_test/T173992396/rocksdb_crashtest_blackbox --hex multi_get 0x0000000000000009000000000000012B00000000000002AB 0x0000000000000009000000000000012B00000000000002AC
Status for key 0x0000000000000009000000000000012B00000000000002AB: NotFound:
Status for key 0x0000000000000009000000000000012B00000000000002AC: NotFound:
```
One of the keys found
```
$> ./ldb --db=/data/users/jewoongh/rocksdb_test/T173992396/rocksdb_crashtest_blackbox --hex multi_get 0x0000000000000009000000000000012B00000000000002AB 0x00000000000000090000000000000026787878787878
Status for key 0x0000000000000009000000000000012B00000000000002AB: NotFound:
0x22000000262724252A2B28292E2F2C2D32333031363734353A3B38393E3F3C3D02030001060704050A0B08090E0F0C0D12131011161714151A1B18191E1F1C1D
```
All of the keys found
```
$> ./ldb --db=/data/users/jewoongh/rocksdb_test/T173992396/rocksdb_crashtest_blackbox --hex multi_get 0x0000000000000009000000000000012B00000000000000D8 0x00000000000000090000000000000026787878787878
0x47000000434241404F4E4D4C4B4A494857565554535251505F5E5D5C5B5A595867666564636261606F6E6D6C6B6A696877767574737271707F7E7D7C7B7A797807060504030201000F0E0D0C0B0A090817161514131211101F1E1D1C1B1A1918
0x22000000262724252A2B28292E2F2C2D32333031363734353A3B38393E3F3C3D02030001060704050A0B08090E0F0C0D12131011161714151A1B18191E1F1C1D
```
Reviewed By: hx235
Differential Revision: D53048519
Pulled By: jaykorean
fbshipit-source-id: a6217905464c5f460a222e2b883bdff47b9dd9c7
Summary:
This PR adds automatic checks in the `PendingExpectedValue` class to make sure it's either committed or rolled back before being destructed.
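A hedged sketch of the destructor-check idea (simplified; the real class tracks expected values for the stress test):
```
#include <cassert>

class PendingExpectedValueSketch {
 public:
  // Catch misuse: the pending state must be explicitly resolved one way or
  // the other before destruction.
  ~PendingExpectedValueSketch() { assert(resolved_); }
  void Commit() { /* apply the pending value */ resolved_ = true; }
  void Rollback() { /* restore the previous value */ resolved_ = true; }
 private:
  bool resolved_ = false;
};
```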
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12244
Reviewed By: hx235
Differential Revision: D52853794
Pulled By: jowlyzhang
fbshipit-source-id: 1dcd7695f2c52b79695be0abe11e861047637dc4