Summary:
This PR adds support for programmatically iterating over a raw table file with an iterator returned by `SstFileReader::NewTableIterator`, so that third-party tools can observe SST files created by RocksDB.
The original feature request was from this merge request: https://github.com/facebook/rocksdb/pull/12370
Since keys returned by raw table iterators are internal keys, this PR also adds a struct `ParsedEntryInfo` and a util method `ParseEntry` to help users parse internal keys, as well as `GetInternalKeyForSeek` and `GetInternalKeyForSeekForPrev` to help users create internal keys for seek operations with this raw table iterator.
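A minimal usage sketch based on this PR's description (the return type of `NewTableIterator` and the file path below are assumptions, not taken from the PR):
```
#include <iostream>

#include "rocksdb/options.h"
#include "rocksdb/sst_file_reader.h"

int main(int argc, char** argv) {
  rocksdb::Options options;
  rocksdb::SstFileReader reader(options);
  // Path is a placeholder; pass the SST file you want to inspect.
  rocksdb::Status s = reader.Open(argc > 1 ? argv[1] : "example.sst");
  if (!s.ok()) {
    std::cerr << s.ToString() << std::endl;
    return 1;
  }
  // The raw table iterator yields internal keys (user key + seqno + type),
  // which ParsedEntryInfo/ParseEntry from this PR can help decode.
  // (Assumed to return an owning iterator; manage its lifetime accordingly.)
  auto it = reader.NewTableIterator();
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    std::cout << it->key().ToString(/*hex=*/true) << " -> "
              << it->value().ToString(/*hex=*/true) << std::endl;
  }
  return 0;
}
```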
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12385
Test Plan: Added unit tests
Reviewed By: cbi42
Differential Revision: D55662855
Pulled By: jowlyzhang
fbshipit-source-id: 0716a173ee95924fbd4e1f9b6cccf06525c40049
Summary:
`nullptr` is typesafe. `0` and `NULL` are not. In the future, only `nullptr` will be allowed.
This diff helps us embrace the future _now_ in service of enabling `-Wzero-as-null-pointer-constant`.
Reviewed By: dmm-fb
Differential Revision: D55559752
fbshipit-source-id: 9f1edc836ded919022c4b53722f6f86208fecf8d
Summary:
`-Wextra-semi` or `-Wextra-semi-stmt`
If the code compiles, this is safe to land.
Reviewed By: palmje
Differential Revision: D55534619
fbshipit-source-id: 26f3c35a51b38a3cbfa12a6f76a2bb783a7b4d8e
Summary:
`-Wextra-semi` or `-Wextra-semi-stmt`
If the code compiles, this is safe to land.
Reviewed By: palmje
Differential Revision: D55534622
fbshipit-source-id: dfff34924da6f2cdad34ed21f8f08a9bab9189a7
Summary:
**Context/Summary:**
`wal_bytes_per_sync > 0` can, by its nature, sync a newer WAL without syncing an older WAL. This creates a hole in synced WAL data. Through our crash test, we recently discovered that our DB can recover past that hole, which resulted in a crash-recovery-verification error. Before we fix that recovery behavior, we will temporarily disable `wal_bytes_per_sync` in the crash test.
Bonus: updated the API comment to document the nature of this option more explicitly.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12489
Test Plan: More stabilized crash test
Reviewed By: ajkr
Differential Revision: D55531589
Pulled By: hx235
fbshipit-source-id: 6dea6486420dc0f50550d488c15652f93972a0ea
Summary:
**Context/Summary:**
We recently discovered that `CompactRange(change_level=true, target_level=0)` can refit more than one file to L0. This refitting can cause a read performance regression (since every file in L0 must be checked), corruption in some edge cases, and false-positive corruption reported by the force consistency check. We decided to explicitly disallow such behavior.
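For reference, a minimal sketch of the operation being restricted; the helper function and its error handling are illustrative, not part of this PR:
```
#include "rocksdb/db.h"
#include "rocksdb/options.h"

// `db` is an already-open rocksdb::DB*.
rocksdb::Status RefitToL0(rocksdb::DB* db) {
  rocksdb::CompactRangeOptions cro;
  cro.change_level = true;  // refit the compaction output to target_level
  cro.target_level = 0;     // refitting to L0 is the case restricted by this PR
  // After this change, this is rejected if the refit would move more than one
  // file into L0.
  return db->CompactRange(cro, /*begin=*/nullptr, /*end=*/nullptr);
}
```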
A related change to OptionChangeMigration():
- When migrating to FIFO with `compaction_options_fifo.max_table_files_size > 0`, RocksDB will [CompactRange() all the to-be-migrated data into a couple of L0 files](https://github.com/facebook/rocksdb/blob/main/utilities/option_change_migration/option_change_migration.cc#L164-L169) to avoid dropping all the data when migration finishes and the migrated data is larger than max_table_files_size. This is achieved by first compacting all the data into a couple of non-L0 files and refitting those files from non-L0 to L0 if needed. That way, only some of the data instead of all of it will be dropped immediately after migration to FIFO with a max_table_files_size.
- Since this type of refitting behavior is disallowed from now on, we won't do this trick anymore and instead explicitly state the risk in the API comment.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12481
Test Plan:
- New UT
- Modified UT
Reviewed By: cbi42
Differential Revision: D55351178
Pulled By: hx235
fbshipit-source-id: 9d8854f2f81d7e8aff859c3a4e53b7d688048e80
Summary:
Errors were being swallowed in `BlockBasedTable::MultiGet` under some circumstances, such as an error when parsing the internal key from the block, or an IO error when reading the blob value. We need to set the status for the key to the observed error.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12486
Test Plan: Run db_stress and verify the expected error failure before, and no failures after the change.
Reviewed By: jaykorean, ajkr
Differential Revision: D55483940
Pulled By: anand1976
fbshipit-source-id: 493e44db507d5db45e8d1ef2e67808d2c4046318
Summary:
Previously it was uninitialized. Setting `checksum_handoff_file_types` will cause `kCRC32c` checksums to be passed down in the `DataVerificationInfo`, so it makes sense for `kCRC32c` to be the default.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12485
Test Plan:
ran `db_stress` in a way that failed before. Building with ASAN was needed to ensure the uninitialized bytes are nonzero according to `malloc_fill_byte` (default 0xbe)
```
$ COMPILE_WITH_ASAN=1 make -j28 db_stress
...
$ ./db_stress -sync_fault_injection=1 -enable_checksum_handoff=true
```
Reviewed By: jaykorean
Differential Revision: D55450587
Pulled By: ajkr
fbshipit-source-id: 53dc829b86e49b3fa80570032e83af0bb12adaad
Summary:
As a follow up for https://github.com/facebook/rocksdb/issues/12422 , this PR includes the following two changes.
- Removal of `direction_` in the MultiCfIterator
- Use of Member Func Template instead of `std::function`
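Not the actual MultiCfIterator code, but a generic sketch of the second item: dispatching a per-child-iterator operation through a member-function-pointer template parameter instead of a stored `std::function` (all names below are illustrative):
```
#include <utility>
#include <vector>

struct ChildIter {
  // Minimal stand-in for a per-column-family child iterator.
  void SeekToFirst() {}
  void SeekToLast() {}
};

class MultiIter {
 public:
  explicit MultiIter(std::vector<ChildIter*> children)
      : children_(std::move(children)) {}

  void SeekToFirst() { ApplyToAll<&ChildIter::SeekToFirst>(); }
  void SeekToLast() { ApplyToAll<&ChildIter::SeekToLast>(); }

 private:
  // The member function pointer is a compile-time template argument, so the
  // dispatch can be inlined; a std::function member would add type-erasure
  // and allocation overhead at every call site.
  template <void (ChildIter::*Op)()>
  void ApplyToAll() {
    for (ChildIter* child : children_) {
      (child->*Op)();
    }
  }

  std::vector<ChildIter*> children_;
};
```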
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12465
Test Plan:
```
./multi_cf_iterator_test
```
Reviewed By: pdillinger, ltamasi
Differential Revision: D55208448
Pulled By: jaykorean
fbshipit-source-id: 8b3167c1d59839d076afc29097b5ad21a453460a
Summary:
ScopedArenaIterator is not an iterator; it is a pointer wrapper. And we don't need a custom-implemented pointer wrapper when std::unique_ptr can be instantiated with what we want.
So this adds ScopedArenaPtr<T> to replace those uses.
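A rough sketch of the idea, with names that differ from the actual RocksDB code: when an object is allocated from an arena, only its destructor should run on scope exit (the arena reclaims the memory itself), so a `std::unique_ptr` with a destruct-only deleter can replace a hand-written wrapper:
```
#include <memory>

struct DestructOnlyDeleter {
  template <typename T>
  void operator()(T* ptr) const {
    if (ptr != nullptr) {
      ptr->~T();  // destroy the object, but do not free: the arena owns the memory
    }
  }
};

template <typename T>
using ScopedArenaPtr = std::unique_ptr<T, DestructOnlyDeleter>;
```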
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12470
Test Plan: CI (including ASAN/UBSAN)
Reviewed By: jowlyzhang
Differential Revision: D55254362
Pulled By: pdillinger
fbshipit-source-id: cc96a0b9840df99aa807f417725e120802c0ae18
Summary:
Fix the heap use after free bug caused by freeing the file system IO buffer in `BlockFetcher::ReadBlock()` instead of the caller.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12464
Test Plan: Update the `DBIOCorruptionTest` tests
Reviewed By: akankshamahajan15
Differential Revision: D55206920
Pulled By: anand1976
fbshipit-source-id: fd6b608a61cd229b20c1e5f348ff3cc92328de0f
Summary:
This option was previously disabled due to a bug in the recovery logic. The recovery code in `DBImpl::RecoverLogFiles` couldn't tell if an EoF reported by the log reader was really an EoF or a possible corruption that made a record look like an old log record. To fix this, the log reader now explicitly reports when it encounters what looks like an old record. The recovery code treats it as a possible corruption, and uses the next sequence number in the WAL to determine if it should continue replaying the WAL.
This PR also fixes a couple of bugs that log file recycling exposed in the backup and checkpoint path.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12403
Test Plan:
1. Add new unit tests to verify behavior upon corruption
2. Re-enable disabled tests for verifying recycling behavior
Reviewed By: ajkr
Differential Revision: D54544824
Pulled By: anand1976
fbshipit-source-id: 12f5ce39bd6bc0d63b0bc6432dc4db510e0e802a
Summary:
This PR contains a few follow ups from https://github.com/facebook/rocksdb/issues/12419 and https://github.com/facebook/rocksdb/issues/12428 including:
1) Handle a special case for `WriteBatch::TimedPut`. When the user specified write time is `std::numeric_limits<uint64_t>::max()`, it's not treated as an error, but it instead creates and writes a regular `Put` entry.
2) Update the `InternalIterator::write_unix_time` APIs to handle `kTypeValuePreferredSeqno` entries.
3) FlushJob is updated to use the seqno-to-time mapping copy in `SuperVersion`. Previously, FlushJob copied the DB's seqno-to-time mapping while holding the db mutex and only copied the part of interest, i.e., the part going back to the earliest sequence number of the to-be-flushed memtables. With FlushJob now using the mapping copy in `SuperVersion`, it is given access to the full mapping, which helps cover the need to convert `kTypeValuePreferredSeqno` entries' write times to preferred seqnos as much as possible.
Test plans:
Added unit tests
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12455
Reviewed By: pdillinger
Differential Revision: D55165422
Pulled By: jowlyzhang
fbshipit-source-id: dc022653077f678c24661de5743146a74cce4b47
Summary:
fixes https://github.com/facebook/rocksdb/issues/12409
### Issue
ZSTD_TrainDictionary [[link](a53ed91691/table/block_based/block_based_table_builder.cc (L1894))] runs during `SstFileWriter::Finish` even when the bottommost_compression option is set to kNoCompression. This reduces throughput for `SstFileWriter::Finish`.
We construct RocksDB options using ZSTD compression for levels 2 and above. For levels 0 and 1, we set it to kNoCompression. We also set zstd_max_train_bytes to a non-zero positive value (which is applicable to levels with ZSTD compression enabled). These options are used for the database and are also passed to SstFileWriter for creating sst files to be later added to that database. Since BlockBasedTableBuilder::Finish [[link](a53ed91691/table/block_based/block_based_table_builder.cc (L1892))] only checks that zstd_max_train_bytes is a non-zero positive value, it runs ZSTD_TrainDictionary even when it shouldn't, since SstFileWriter operates at the bottommost level.
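An illustrative approximation of the option setup described above (the values and level count are placeholders, not the exact configuration from the report):
```
#include "rocksdb/options.h"

rocksdb::Options MakeOptions() {
  rocksdb::Options options;
  // No compression for L0/L1, ZSTD for L2 and above.
  options.compression_per_level = {
      rocksdb::kNoCompression, rocksdb::kNoCompression, rocksdb::kZSTD,
      rocksdb::kZSTD,          rocksdb::kZSTD,          rocksdb::kZSTD,
      rocksdb::kZSTD};
  // Dictionary training settings only matter for levels that actually use
  // ZSTD, yet before this fix SstFileWriter::Finish() would still train a
  // dictionary with these options even when writing uncompressed output.
  options.compression_opts.max_dict_bytes = 16 * 1024;
  options.compression_opts.zstd_max_train_bytes = 100 * 16 * 1024;
  return options;
}
```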
### Fix
If compression_type is set to kNoCompression, don't run ZSTD_TrainDictionary or dictionary building at all.
### Testing
I see we have tests for the sst file writer with the compression type set/unset. Let me know if this case isn't covered and I can extend them.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12453
Reviewed By: cbi42
Differential Revision: D55030484
Pulled By: ajkr
fbshipit-source-id: 834de2174c2b087d61bf045ca1ae29f337b821a7
Summary:
Fixing the not-checked status failure as in https://github.com/facebook/rocksdb/actions/runs/8334988399/job/22809612148.
When the status is not ok() for any reason, we do not check the `wal_read_status` because it's not necessary. This was causing the test failure when running with `ASSERT_STATUS_CHECKED=1`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12460
Test Plan: Existing tests
Reviewed By: ajkr
Differential Revision: D55104844
Pulled By: jaykorean
fbshipit-source-id: 919b1fddca835494f9087c51c4da6eabc9e8df2b
Summary:
`-Wextra-semi` or `-Wextra-semi-stmt`
If the code compiles, this is safe to land.
Reviewed By: palmje
Differential Revision: D55087322
fbshipit-source-id: ca4db7285444306d6c91545cd2c33483dfe05385
Summary:
`-Wextra-semi` or `-Wextra-semi-stmt`
If the code compiles, this is safe to land.
Reviewed By: palmje
Differential Revision: D54362227
fbshipit-source-id: ac634ba34f9351ba559c4ed96448f51d6ef33175
Summary:
On file systems that support storage level data checksum and reconstruction, retry SST block reads for point lookups, scans, and flush and compaction if there's a checksum mismatch on the initial read. A file system can indicate its support by setting the `FSSupportedOps::kVerifyAndReconstructRead` bit in `SupportedOps`.
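A hedged sketch of how a file system might advertise this capability; the wrapper class below is illustrative, and only `FSSupportedOps::kVerifyAndReconstructRead` and `SupportedOps` come from this PR's description:
```
#include <memory>

#include "rocksdb/file_system.h"

class ReconstructingFileSystem : public rocksdb::FileSystemWrapper {
 public:
  explicit ReconstructingFileSystem(
      const std::shared_ptr<rocksdb::FileSystem>& base)
      : rocksdb::FileSystemWrapper(base) {}

  const char* Name() const override { return "ReconstructingFileSystem"; }

  void SupportedOps(int64_t& supported_ops) override {
    // Keep whatever the wrapped file system already supports ...
    target()->SupportedOps(supported_ops);
    // ... and additionally advertise verify-and-reconstruct reads, so RocksDB
    // will retry block reads that fail with a checksum mismatch.
    supported_ops |=
        (1 << rocksdb::FSSupportedOps::kVerifyAndReconstructRead);
  }
};
```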
Tests:
Add new unit tests
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12427
Reviewed By: ajkr
Differential Revision: D55025941
Pulled By: anand1976
fbshipit-source-id: dbd990cb75e03f756c8a66d42956f645c0b6d55e
Summary:
Update `compaction_service_test` to make sure remote compaction works with multiple column family set up. Minor refactor to get rid of duplicate code
Also fixed one quick bug in the existing test util: the test util's `FilesPerLevel` didn't honor `cf_id` properly.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12430
Test Plan:
```
./compaction_service_test
```
Reviewed By: ajkr
Differential Revision: D54883035
Pulled By: jaykorean
fbshipit-source-id: 83b4f6f566fed5c4824bfef7de01074354a72b44
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12442
The patch deduplicates and unifies the logic of `WriteBatchWithIndex::{Get,GetEntity}FromBatch` using templates and makes some small code hygiene improvements, including consistently clearing the output value in the various non-success cases.
Reviewed By: jaykorean
Differential Revision: D54922935
fbshipit-source-id: c92e89f905a3c80cef57c2c840f49f806629238f
Summary:
This PR continues https://github.com/facebook/rocksdb/issues/12153 by implementing the missing `Iterator` APIs - `Seek()`, `SeekForPrev()`, `SeekToLast()`, and `Prev()`. A max-heap implementation has been added to handle the reverse direction.
The current implementation does not include upper/lower bounds yet. These will be added in subsequent PRs. The API is still marked as under construction and will be lifted after being added to the stress test.
Please note that changing the iterator direction in the middle of iteration is expensive, as it requires seeking the element in each iterator again in the opposite direction and rebuilding the heap along the way. The first `Next()` after `SeekForPrev()` requires changing the direction under the current implementation. We may optimize this in later PRs.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12422
Test Plan: The `multi_cf_iterator_test` has been extended to cover the API implementations.
Reviewed By: pdillinger
Differential Revision: D54820754
Pulled By: jaykorean
fbshipit-source-id: 9eb741508df0f7bad598fb8e6bd5cdffc39e81d1
Summary:
With https://github.com/facebook/rocksdb/issues/12414 enabling `ReadOptions::pin_data`, this bug surfaced as a corrupted per key-value checksum during the crash test. `saved_key_.GetUserKey()` could be a pinned user key, so DBIter should not overwrite it.
In one case, the bug only surfaces when the iterator skips many keys of the same user key. To stress that code path, this PR also added `max_sequential_skip_in_iterations` to the crash test.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12451
Test Plan:
- Set ReadOptions::pin_data to true, the bug can be reproed quickly with `./db_stress --persist_user_defined_timestamps=1 --user_timestamp_size=8 --writepercent=35 --delpercent=4 --delrangepercent=1 --iterpercent=20 --nooverwritepercent=1 --prefix_size=8 --prefixpercent=10 --readpercent=30 --memtable_protection_bytes_per_key=8 --block_protection_bytes_per_key=2 --clear_column_family_one_in=0`.
- Set max_sequential_skip_in_iterations to 1 for the other occurrence of the bug.
Reviewed By: jowlyzhang
Differential Revision: D55003766
Pulled By: cbi42
fbshipit-source-id: 23e1049129456684dafb028b6132b70e0afc07fb
Summary:
This PR adds support to return data's approximate unix write time in the iterator property API. The general implementation is:
1) If the entry comes from an SST file, the sequence number to time mapping recorded in that file's table properties will be used to deduce the entry's write time from its sequence number (a conceptual sketch follows this list). If no such recording is available, `std::numeric_limits<uint64_t>::max()` is returned to indicate the write time is unknown, except when the entry's sequence number is zero, in which case 0 is returned. This also means that even if `preclude_last_level_data_seconds` and `preserve_internal_time_seconds` can be toggled off between DB reopens, as long as the SST file's table property has the mapping available, the entry's write time can be deduced and returned.
2) If the entry comes from memtable, we will use the DB's sequence number to write time mapping to do similar things. A copy of the DB's seqno to write time mapping is kept in SuperVersion to allow iterators to have lock free access. This also means a new `SuperVersion` is installed each time DB's seqno to time mapping updates, which is originally proposed by Peter in https://github.com/facebook/rocksdb/issues/11928 . Similarly, if the feature is not enabled, `std::numeric_limits<uint64_t>::max()` is returned to indicate the write time is unknown.
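A conceptual sketch (not RocksDB's actual seqno-to-time mapping class) of the deduction in item 1: given a sorted seqno-to-time mapping like the one stored in table properties, look up an approximate write time for a sequence number.
```
#include <cstdint>
#include <iterator>
#include <limits>
#include <map>

uint64_t DeduceWriteTime(const std::map<uint64_t, uint64_t>& seqno_to_time,
                         uint64_t seqno) {
  if (seqno == 0) {
    return 0;  // zeroed-out seqno: treated as "written long ago"
  }
  // Find the largest recorded seqno <= the entry's seqno; its recorded time
  // serves as the approximate write time of the entry.
  auto it = seqno_to_time.upper_bound(seqno);
  if (it == seqno_to_time.begin()) {
    return std::numeric_limits<uint64_t>::max();  // unknown
  }
  return std::prev(it)->second;
}
```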
Needed follow up:
1) The write time for `kTypeValuePreferredSeqno` should be special cased, where it's already specified by the user, so we can directly return it.
2) Flush job can be updated to use DB's seqno to time mapping copy in the SuperVersion.
3) Handle the case when `TimedPut` is called with a write time that is `std::numeric_limits<uint64_t>::max()`. We can make it a regular `Put`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12428
Test Plan: Added unit test
Reviewed By: pdillinger
Differential Revision: D54967067
Pulled By: jowlyzhang
fbshipit-source-id: c795b1b7ec142e09e53f2ed3461cf719833cb37a
Summary:
Thanks ltamasi for pointing out this bug.
We were incorrectly overwriting `Status::Incomplete` with `Status::OK` after a table cache miss failed to open the file due to the read being memory-only (`kBlockCacheTier`). The fix is to simply stop overwriting the status.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12443
Reviewed By: cbi42
Differential Revision: D54930128
Pulled By: ajkr
fbshipit-source-id: 52f912a2e93b46e71d79fc5968f8ca35b299213d
Summary:
The use case is similar to `MergeOperator::ShouldMerge()` for `Get()`: preventing reads into LSM components for merge operands that are of no interest to the user. `MergeOperator::ShouldMerge()` cannot be reused here because:
- Its name does not make sense in the context of `GetMergeOperands()` since `GetMergeOperands()` never invokes merge
- The callback is part of the `MergeOperator`, but an option specific to the read operation makes more sense to me
If there are any ideas for an API design that covers both `MergeOperator::ShouldMerge()`'s use cases and `GetMergeOperandsOptions::continue_cb`'s use cases, that would be ideal, but for now this is what I came up with.
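A hedged usage sketch of the new option; `GetMergeOperandsOptions::expected_max_number_of_operands` and `DB::GetMergeOperands` are existing APIs, while the exact signature of `continue_cb` below (return false to stop fetching more operands) is an assumption based on this description:
```
#include "rocksdb/db.h"

rocksdb::Status ReadNewestOperands(rocksdb::DB* db, const rocksdb::Slice& key,
                                   int wanted) {
  constexpr int kMaxOperands = 32;
  rocksdb::PinnableSlice operands[kMaxOperands];
  rocksdb::GetMergeOperandsOptions opts;
  opts.expected_max_number_of_operands = kMaxOperands;
  int seen = 0;
  // Stop descending into older LSM components once enough operands were seen.
  opts.continue_cb = [&seen, wanted](rocksdb::Slice /*operand*/) {
    return ++seen < wanted;
  };
  int num_operands = 0;
  return db->GetMergeOperands(rocksdb::ReadOptions(),
                              db->DefaultColumnFamily(), key, operands, &opts,
                              &num_operands);
}
```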
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12438
Reviewed By: hx235
Differential Revision: D54914669
Pulled By: ajkr
fbshipit-source-id: 5f3ff78d3890adc0b1b74bedf3921221930ce63a
Summary:
... that is more hygienic as an "optional reference" than a raw pointer, and likely more efficient than
std::optional<std::reference_wrapper<T>>.
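The class itself isn't quoted in this summary, so the following is only a generic sketch of the pattern it describes, under a hypothetical name: an "optional reference" that is just a pointer underneath, with no ownership and no extra storage.
```
#include <cassert>

template <typename T>
class OptionalRef {  // hypothetical name, not the class added by this PR
 public:
  OptionalRef() = default;
  OptionalRef(T& ref) : ptr_(&ref) {}  // implicit by design, like a reference
  explicit operator bool() const { return ptr_ != nullptr; }
  T& operator*() const {
    assert(ptr_ != nullptr);
    return *ptr_;
  }
  T* operator->() const {
    assert(ptr_ != nullptr);
    return ptr_;
  }

 private:
  T* ptr_ = nullptr;  // no ownership, no control block, no extra bool
};
```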
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12447
Test Plan: unit test included (with manual verification that "must not compile" sections currently do not)
Reviewed By: jowlyzhang
Differential Revision: D54957917
Pulled By: pdillinger
fbshipit-source-id: bbd89218df803617b1a170ebddc9e56c9b52bf93
Summary:
Crash tests were failing due to data race in accessing `purge_wal_files_last_run_`. This PR changes it to atomic.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12439
Test Plan:
- existing UT
- not able to repro with `python3 tools/db_crashtest.py whitebox --simple --max_key=25000000 --WAL_ttl_seconds=1` and TSAN yet, will monitor internal crash tests
Reviewed By: anand1976
Differential Revision: D54920817
Pulled By: cbi42
fbshipit-source-id: 80ee026b1785ad5dba11295ed35c88889df5f5a6
Summary:
This PR adds support for `TimedPut` API. We introduced a new type `kTypeValuePreferredSeqno` for entries added to the DB via the `TimedPut` API.
The life cycle of such an entry on the write/flush/compaction paths is:
1) It is initially added to memtable as:
`<user_key, seq, kTypeValuePreferredSeqno>: {value, write_unix_time}`
2) When it's flushed to L0 sst files, it's converted to:
`<user_key, seq, kTypeValuePreferredSeqno>: {value, preferred_seqno}`
when we have easy access to the seqno to time mapping.
3) During compaction, if certain conditions are met, we swap in the `preferred_seqno` and the entry will become:
`<user_key, preferred_seqno, kTypeValue>: value`. This step helps fast track these entries to the cold tier if they are eligible after the sequence number swap.
On the read path:
A `kTypeValuePreferredSeqno` entry acts the same as a `kTypeValue` entry; the unix_write_time/preferred seqno part packed in the value is completely ignored.
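A hedged usage sketch of the new API; the exact `TimedPut` signature is an assumption based on this description:
```
#include "rocksdb/db.h"
#include "rocksdb/write_batch.h"

rocksdb::Status WriteWithTime(rocksdb::DB* db, uint64_t now_unix_seconds) {
  rocksdb::WriteBatch batch;
  // Stored in the memtable as kTypeValuePreferredSeqno with the write time
  // packed next to the value, as described above.
  rocksdb::Status s = batch.TimedPut(db->DefaultColumnFamily(), "key", "value",
                                     now_unix_seconds);
  if (!s.ok()) {
    return s;
  }
  return db->Write(rocksdb::WriteOptions(), &batch);
}
```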
Needed follow ups:
1) The seqno to time mapping accessible in flush needs to be extended to cover the `write_unix_time` for possible `kTypeValuePreferredSeqno` entries. This also means we need to track these `write_unix_time` in memtable.
2) Compaction filter support for the new `kTypeValuePreferredSeqno` type for feature parity with other `kTypeValue` and equivalent types.
3) Stress test coverage for the feature
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12419
Test Plan: Added unit tests
Reviewed By: pdillinger
Differential Revision: D54920296
Pulled By: jowlyzhang
fbshipit-source-id: c8b43f7a7c465e569141770e93c748371ff1da9e
Summary:
we saw crash test fail with
```
lru_cache.cc:249: void rocksdb::lru_cache::LRUCacheShard::LRU_Remove(rocksdb::lru_cache::LRUHandle *): Assertion `high_pri_pool_usage_ >= e->total_charge' failed.
```
One cause for this is that the `lru_low_pri_` pointer is not updated in `LRU_Insert()` before we try to balance the high pri and low pri pools in `MaintainPoolSize()`. A repro unit test is provided.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12429
Test Plan:
Not able to reproduce the failure with db_stress yet.
`./lru_cache_test --gtest_filter="*InsertAfterReducingCapacity*"`. It fails the assertion before this PR.
Reviewed By: pdillinger
Differential Revision: D54908919
Pulled By: cbi42
fbshipit-source-id: f485fdbc0ea61c8092a0be5fe561a59c15c78fd3
Summary:
**Context/Summary:**
Previously, manual compaction stress test failures wouldn't terminate the stress test. https://github.com/facebook/rocksdb/pull/12414 made more manual compaction failures terminate the stress test for signal boosting. A downside of that PR: some tolerable manual compaction stress test failures now also unnecessarily terminate the stress test.
Ideally we should exclude exactly those tolerable errors (left as a TODO) from being able to terminate. For now, we approximate those errors by Aborted(), InvalidArgument(), NotSupported(), etc. It's still an improvement over the pre-https://github.com/facebook/rocksdb/pull/12414 situation.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12437
Test Plan: - No more tolerable manual compaction stress test failures terminating stress test.
Reviewed By: cbi42
Differential Revision: D54913010
Pulled By: hx235
fbshipit-source-id: c43fa79d8f9c1c8b4f8786f8f46508b0ad619a9e
Summary:
`logger_` can be destructed in `ResetLogger()`, so we should access it under `mutex_`. Similarly, `status_` can be updated only under `mutex_` or in the constructor.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12436
Test Plan: I tried running tsan crash test with log_file_time_to_roll = 2, but not able to repro yet. Will monitor internal crash tests.
Reviewed By: hx235
Differential Revision: D54916371
Pulled By: cbi42
fbshipit-source-id: 4a3e3b40fbc2ae242598afdbd4bed5fb8ccf8d65
Summary:
Issue https://github.com/facebook/rocksdb/issues/12421 describes a regression in the migration from CircleCI to GitHub Actions in which failing build steps no longer fail Windows CI jobs. In GHA with pwsh (new preferred powershell command), only the last non-builtin command (or something like that) affects the overall success/failure result, and failures in external commands do not exit the script, even with `$ErrorActionPreference = 'Stop'` and `$PSNativeCommandErrorActionPreference = $true`. Switching to `powershell` causes some obscure failure (not seen in CircleCI) about the `-Lo` option to `curl`.
Here we work around this using the only reasonable-but-ugly way known: explicitly check the result after every non-trivial build step. This leaves us highly susceptible to future regressions with unchecked build steps in the future, but a clean solution is not known.
This change also fixes the build errors that were allowed to creep in because of the CI regression. Also decreased the unnecessarily long running time of DBWriteTest.WriteThreadWaitNanosCounter.
For background, this problem explicitly contradicts GitHub's documentation, and GitHub has known about the problem for more than a year, with no evidence of caring or intending to fix. https://github.com/actions/runner-images/issues/6668 Somehow CircleCI doesn't have this problem. And even though cmd.exe and powershell have been perpetuating DOS-isms for decades, they still seem to be a somewhat active "hot mess" when it comes to sensible, consistent, and documented behavior.
Fixes https://github.com/facebook/rocksdb/issues/12421
A history of some things I tried in development is here: https://github.com/facebook/rocksdb/compare/main...pdillinger:rocksdb:debug_windows_ci_orig
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12426
Test Plan: CI, including https://github.com/facebook/rocksdb/issues/12434 where I have temporarily enabled other Windows builds on PR with this change
Reviewed By: cbi42
Differential Revision: D54903698
Pulled By: pdillinger
fbshipit-source-id: 116bcbebbbf98f347c7cf7dfdeebeaaed7f76827
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12424
The PR adds a wide-column point lookup API `GetEntityFromBatch` to `WriteBatchWithIndex`. Similarly to APIs like `DB::GetEntity`, this new API returns wide-column entities as-is, and wraps plain values in an entity with a single column (the anonymous default column). Also, similarly to `WriteBatchWithIndex::GetFromBatch`, it only reads data from the batch itself.
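A hedged usage sketch; `WriteBatchWithIndex::PutEntity` and `PinnableWideColumns` are existing APIs, and the exact `GetEntityFromBatch` signature below is an assumption based on this description:
```
#include "rocksdb/utilities/write_batch_with_index.h"
#include "rocksdb/wide_columns.h"

rocksdb::Status LookupInBatch(rocksdb::WriteBatchWithIndex* wbwi,
                              rocksdb::ColumnFamilyHandle* cfh) {
  // Write a wide-column entity into the batch.
  rocksdb::WideColumns cols{{"col1", "val1"}, {"col2", "val2"}};
  rocksdb::Status s = wbwi->PutEntity(cfh, "key", cols);
  if (!s.ok()) {
    return s;
  }
  // Read it back from the batch only; the underlying DB is never consulted.
  // A plain value would come back wrapped as the anonymous default column.
  rocksdb::PinnableWideColumns result;
  return wbwi->GetEntityFromBatch(cfh, "key", &result);
}
```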
Reviewed By: jaykorean
Differential Revision: D54826535
fbshipit-source-id: 92604f3ebd90fe1afbd36f2d2194b7dee0011efa
Summary:
Since it has been causing a few crash test failures, I suspect it'll be easy to repro locally. Also fixed how its corruption message is printed so the test does not crash with an "output cannot be utf-8 decoded" error.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12431
Reviewed By: hx235
Differential Revision: D54881023
Pulled By: cbi42
fbshipit-source-id: 47208a637cd69b30d2545154849405e37db62ed3
Summary:
**Context/Summary:**
We are doing a sweep in all public options, including but not limited to the `Options`, `Read/WriteOptions`, `IngestExternalFileOptions`, cache options.., to find and add the uncovered ones into db crash. The options included in this PR require minimum changes to db crash other than adding the options themselves.
A bonus change: to surface new issues by improved coverage in stderror, we decided to fail/terminate crash test for manual compactions (CompactFiles, CompactRange()) on meaningful errors. See https://github.com/facebook/rocksdb/pull/12414/files#diff-5c4ced6afb6a90e27fec18ab03b2cd89e8f99db87791b4ecc6fa2694284d50c0R2528-R2532, https://github.com/facebook/rocksdb/pull/12414/files#diff-5c4ced6afb6a90e27fec18ab03b2cd89e8f99db87791b4ecc6fa2694284d50c0R2330-R2336 for more.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12414
Test Plan:
- Run `python3 ./tools/db_crashtest.py --simple blackbox` for 10 minutes to ensure no trivial failure
- Run `python3 tools/db_crashtest.py --simple blackbox --compact_files_one_in=1 --compact_range_one_in=1 --read_fault_one_in=1 --write_fault_one_in=1 --interval=50` for a while to ensure the bonus change does not result in trivial crash/termination of stress test
Reviewed By: ajkr, jowlyzhang, cbi42
Differential Revision: D54691774
Pulled By: hx235
fbshipit-source-id: 50443dfb6aaabd8e24c79a2e42b68c6de877be88
Summary:
This is a temporary workaround for a CMake bug that only sets the correct CMAKE_SYSTEM_PROCESSOR for cross-compilation when the target CMAKE_SYSTEM_NAME differs from CMAKE_HOST_SYSTEM_NAME:
https://gitlab.kitware.com/cmake/cmake/-/issues/25640
Fix cross-compilation on macOS x86_64 CPUs with target CPU arm64 by manually setting CMAKE_SYSTEM_PROCESSOR to the cross-compilation target whenever CMAKE_SYSTEM_PROCESSOR doesn't match CMAKE_OSX_ARCHITECTURES after the project() call. This is probably a CMake bug that happens on macOS.
Closes https://github.com/facebook/rocksdb/issues/12239
The issue happens when RocksDB is built using the following command:
cmake -G "Unix Makefiles" -DCMAKE_SYSTEM_PROCESSOR=arm64 ..
The build itself succeeds, but CMake wrongly sets CMAKE_SYSTEM_PROCESSOR to x86_64 instead of arm64, which causes crc32c_arm64.cc not to be compiled.
This in turn makes any linking against RocksDB fail:
```
Undefined symbols for architecture arm64:
"crc32c_arm64(unsigned int, unsigned char const*, unsigned long)", referenced from:
rocksdb::crc32c::ExtendARMImpl(unsigned int, char const*, unsigned long) in librocksdb.a(crc32c.cc.o)
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12240
Reviewed By: cbi42
Differential Revision: D54811365
Pulled By: pdillinger
fbshipit-source-id: 0958e3092806dadd2f61d582b7251af13a5f3f06
Summary:
https://github.com/facebook/rocksdb/issues/12397 attempted to make the test more honest about its failures, and they're really showing up in CI now (but not locally). Disable pending investigation
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12423
Test Plan: watch CI
Reviewed By: ltamasi
Differential Revision: D54817705
Pulled By: pdillinger
fbshipit-source-id: 4721834c49b225ac52d1a28ecb06b9d05de977b3
Summary:
Add `SstFileReader::VerifyNumEntries()` for this purpose. I added the same functionality to `sst_dump` in https://github.com/facebook/rocksdb/issues/12322. Since sst_file_reader.h is exposed to users while sst_dump.h is not, it seems more appropriate to add SST files related APIs here.
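A hedged usage sketch; the exact `VerifyNumEntries` signature is an assumption based on this description:
```
#include <string>

#include "rocksdb/options.h"
#include "rocksdb/sst_file_reader.h"

rocksdb::Status CheckSstEntryCount(const std::string& path) {
  rocksdb::Options options;
  rocksdb::SstFileReader reader(options);
  rocksdb::Status s = reader.Open(path);
  if (!s.ok()) {
    return s;
  }
  // Scans the file and compares the number of entries found against the
  // count recorded in the table properties.
  return reader.VerifyNumEntries(rocksdb::ReadOptions());
}
```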
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12418
Test Plan: `./sst_file_reader_test --gtest_filter="*VerifyNumEntries*"`
Reviewed By: jowlyzhang
Differential Revision: D54764271
Pulled By: cbi42
fbshipit-source-id: 22ebfe04bbb0b152762cee13d4210b147b36d3e9
Summary:
The most general `open()` method for each of RocksDB, TtlDB, OptimisticTransactionDB and TransactionDB should
- ensure the default CF is supplied in the list of descriptors
- cache the default CF handle
- store open CF handles for automatic close on DB close
The `close()` method in each of these DB subclasses should `close()` all the owned CF handles.
I can’t find a cleaner way to build some generalised open/close that does this for all DB subclasses, so it exists as cut and paste with variations in the 4 different DB subclasses.
Added some slightly paranoid testing that a CF handle explicitly referred to as default in a list of CF handles passed to the general open methods, and the simple open that doesn't supply a CF, end up reading and writing to the same CF. This was prompted by the fact that this code is a bit opaque; the first returned handle is the DB.
As part of this, fix the bug where the Java side of `OptimisticTransactionDB` was not setting up the default column family; this was visible when setting up an iterator. Add a test to validate that the iterator is OK. A single Java reference to the default column family was not being created in the OptimisticTransactionDB RocksDB subclass; it should be created in all subclasses. The same problem had previously been fixed for TtlDB.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12417
Reviewed By: ajkr
Differential Revision: D54807643
Pulled By: pdillinger
fbshipit-source-id: 66f34e56a822a009a8f2018d401cf8940d91aa35
Summary:
Seen in https://github.com/facebook/rocksdb/actions/runs/8086592802/job/22096691572?pr=12388
```
[ RUN ] DBTestXactLogIterator.TransactionLogIteratorCheckWhenArchive
db/db_log_iter_test.cc:173:23: runtime error: member call on address 0x0000023956f0 which does not point to an object of type 'rocksdb::DBTestXactLogIterator'
0x0000023956f0: note: object is of type 'rocksdb::DBTestBase'
00 00 00 00 98 ae f7 da 75 7f 00 00 a0 5d 39 02 00 00 00 00 80 ff 39 02 00 00 00 00 95 00 00 00
^~~~~~~~~~~~~~~~~~~~~~~
vptr for 'rocksdb::DBTestBase'
UndefinedBehaviorSanitizer: undefined-behavior db/db_log_iter_test.cc:173:23 in
```
This is almost certainly caused by the sync point callback happening on asynchronous file deletion in the DB while the end of the test is reached and the destruction of the `DBTestXactLogIterator` has reached `DBTestBase::~DBTestBase()`. Either closing the DB or disabling sync points before the end of the test should suffice to fix, and we'll do both. And assert that the sync point callback is actually hit each time.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12397
Test Plan: unable to reproduce, but ran 1000 iterations of the test with UBSAN
Reviewed By: ltamasi
Differential Revision: D54326687
Pulled By: pdillinger
fbshipit-source-id: cc09a4dcd2f237d5b45d910364d6aa56bbd46d50
Summary:
Add a flag in `IOOptions` to request the file system to make best efforts to detect data corruption and reconstruct the data if possible. This will be used by RocksDB to retry a read if the previous read returns corrupt data (checksum mismatch). Add a new op to `FSSupportedOps` that, if supported, will trigger this behavior in RocksDB.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12408
Reviewed By: akankshamahajan15
Differential Revision: D54564218
Pulled By: anand1976
fbshipit-source-id: bc401dcd22a7d320bf63b5183c41714acdce39f5