Summary:
**Context/Summary:**
After injecting write errors into the WAL, we started to see crash recovery verification failures in prefix recovery. That's because the current tracing implementation traces every write before it is written to the WAL, even when the WAL write can fail due to write error injection. One consequence is that the traced writes in the trace files no longer correspond to the write sequence, e.g., there are more traced writes than sequence numbers actually assigned to successful writes. Therefore b4a84efb4e/db_stress_tool/expected_state.cc (L674) won't restore the ExpectedState to the correct sequence number we want.
Ideally, we should have a prepare-commit mechanism for tracing, just like our ExpectedState, so we can ignore a traced write if the write fails later. But for now, to simplify, we simply don't inject WAL write errors (nor metadata write errors, because syncing the WAL directory could fail the write as well).
To do so, we need to be able to exclude the WAL from write injection while still allowing sync fault injection in it to maintain its original sync fault testing coverage. This prompts us to decouple sync fault and write injection in `FaultInjectionTestFS`, which is what this PR is mainly about.
So now `FaultInjectionTestFS` works as the following:
- If direct_writable is true, then `FaultInjectionTestFS` is bypassed for writable files
- Otherwise, `FaultInjectionTestFS` can buffer data for sync fault injection (if `inject_unsynced_data_loss_ == true`, a global setting) and/or inject write errors (if `MaybeInjectThreadLocalError()`, a thread-local setting). The WAL file can optionally be excluded from write injection
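A self-contained model of this decision flow is sketched below; it does not use the actual `FaultInjectionTestFS` interfaces (whose signatures differ), and the field names are placeholders echoing the settings described above.
```
// Self-contained model of the write-path decision flow described above; names
// are placeholders echoing the PR description, not the actual RocksDB code.
struct FaultPolicy {
  bool direct_writable = false;              // bypass injection entirely
  bool inject_unsynced_data_loss = false;    // global: buffer data for sync faults
  bool exclude_wal_from_write_fault = true;  // skip write injection for WAL files
  bool thread_local_write_error = false;     // stands in for MaybeInjectThreadLocalError()
};

enum class WriteOutcome { kBypassed, kInjectedError, kWritten };

WriteOutcome HandleWrite(const FaultPolicy& policy, bool is_wal) {
  if (policy.direct_writable) {
    return WriteOutcome::kBypassed;  // FaultInjectionTestFS is bypassed
  }
  if (policy.inject_unsynced_data_loss) {
    // Buffer the data so it can later be dropped to simulate unsynced data
    // loss; the write itself still proceeds below.
  }
  // Write error injection is thread-local and can exclude WAL files.
  if (policy.thread_local_write_error &&
      !(is_wal && policy.exclude_wal_from_write_fault)) {
    return WriteOutcome::kInjectedError;
  }
  return WriteOutcome::kWritten;
}
```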
Bonus: better naming of relevant variables
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12797
Test Plan:
- The following command failed before this fix but passes after it
```
python3 tools/db_crashtest.py --simple blackbox \
--interval=5 \
--preserve_unverified_changes=1 \
--threads=32 \
--disable_auto_compactions=1 \
--WAL_size_limit_MB=0 --WAL_ttl_seconds=0 --acquire_snapshot_one_in=0 --adaptive_readahead=0 --adm_policy=0 --advise_random_on_open=1 --allow_concurrent_memtable_write=0 --allow_data_in_errors=True --allow_fallocate=1 --async_io=0 --auto_readahead_size=0 --avoid_flush_during_recovery=1 --avoid_flush_during_shutdown=0 --avoid_unnecessary_blocking_io=0 --backup_max_size=104857600 --backup_one_in=0 --batch_protection_bytes_per_key=0 --bgerror_resume_retry_interval=1000000 --block_align=0 --block_protection_bytes_per_key=4 --block_size=16384 --bloom_before_level=2147483646 --bloom_bits=3.2003682301518492 --bottommost_compression_type=zlib --bottommost_file_compaction_delay=600 --bytes_per_sync=0 --cache_index_and_filter_blocks=1 --cache_index_and_filter_blocks_with_high_priority=1 --cache_size=33554432 --cache_type=fixed_hyper_clock_cache --charge_compression_dictionary_building_buffer=0 --charge_file_metadata=0 --charge_filter_construction=0 --charge_table_reader=1 --check_multiget_consistency=0 --check_multiget_entity_consistency=0 --checkpoint_one_in=0 --checksum_type=kxxHash64 --clear_column_family_one_in=0 --column_families=1 --compact_files_one_in=0 --compact_range_one_in=0 --compaction_pri=2 --compaction_readahead_size=0 --compaction_ttl=0 --compress_format_version=1 --compressed_secondary_cache_size=16777216 --compression_checksum=1 --compression_max_dict_buffer_bytes=549755813887 --compression_max_dict_bytes=16384 --compression_parallel_threads=1 --compression_type=none --compression_use_zstd_dict_trainer=1 --compression_zstd_max_train_bytes=0 --continuous_verification_interval=0 --daily_offpeak_time_utc=00:00-23:59 --data_block_index_type=0 \
--db_write_buffer_size=0 --delete_obsolete_files_period_micros=0 --delpercent=0 --delrangepercent=0 --destroy_db_initially=0 --detect_filter_construct_corruption=0 --disable_file_deletions_one_in=0 --disable_manual_compaction_one_in=0 --disable_wal=0 --dump_malloc_stats=0 --enable_checksum_handoff=0 --enable_compaction_filter=0 --enable_custom_split_merge=0 --enable_do_not_compress_roles=1 --enable_index_compression=0 --enable_memtable_insert_with_hint_prefix_extractor=0 --enable_pipelined_write=0 --enable_sst_partitioner_factory=0 --enable_thread_tracking=0 --enable_write_thread_adaptive_yield=0 --error_recovery_with_no_fault_injection=0 --fail_if_options_file_error=0 --fifo_allow_compaction=1 --file_checksum_impl=xxh64 --fill_cache=0 --flush_one_in=100 --format_version=4 --get_all_column_family_metadata_one_in=0 --get_current_wal_file_one_in=0 --get_live_files_apis_one_in=0 --get_properties_of_all_tables_one_in=0 --get_property_one_in=0 --get_sorted_wal_files_one_in=0 --hard_pending_compaction_bytes_limit=274877906944 --high_pri_pool_ratio=0.5 --index_block_restart_interval=9 --index_shortening=1 --index_type=0 --ingest_external_file_one_in=0 --initial_auto_readahead_size=0 --inplace_update_support=0 --iterpercent=0 --key_len_percent_dist=1,30,69 --key_may_exist_one_in=0 --last_level_temperature=kUnknown --level_compaction_dynamic_level_bytes=1 --lock_wal_one_in=0 --log2_keys_per_lock=10 --log_file_time_to_roll=0 --log_readahead_size=16777216 --long_running_snapshots=0 --low_pri_pool_ratio=0 --lowest_used_cache_tier=2 --manifest_preallocation_size=0 --manual_wal_flush_one_in=0 --mark_for_compaction_one_file_in=0 --max_auto_readahead_size=524288 --max_background_compactions=1 --max_bytes_for_level_base=67108864 --max_key=1000 --max_key_len=3 --memtable_insert_hint_per_batch=0 --memtable_max_range_deletions=0 --memtable_prefix_bloom_size_ratio=0.5 --memtable_protection_bytes_per_key=8 --memtable_whole_key_filtering=0 --memtablerep=skip_list --metadata_charge_policy=0 --metadata_read_fault_one_in=0 --metadata_write_fault_one_in=0 --min_write_buffer_number_to_merge=1 --mmap_read=0 --mock_direct_io=False --nooverwritepercent=1 --num_file_reads_for_auto_readahead=0 --open_files=-1 --open_metadata_read_fault_one_in=0 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 --ops_per_thread=20000000 \
--optimize_filters_for_hits=1 --optimize_filters_for_memory=1 --optimize_multiget_for_io=0 --paranoid_file_checks=1 --partition_filters=0 --partition_pinning=3 --pause_background_one_in=0 --periodic_compaction_seconds=0 --prefix_size=1 --prefixpercent=0 --prepopulate_block_cache=0 --preserve_internal_time_seconds=0 --progress_reports=0 --promote_l0_one_in=0 --read_amp_bytes_per_bit=0 --read_fault_one_in=0 --readahead_size=0 --readpercent=0 --recycle_log_file_num=0 --reopen=0 --report_bg_io_stats=0 --reset_stats_one_in=1000000 --sample_for_compression=5 --secondary_cache_fault_one_in=0 --secondary_cache_uri= --skip_stats_update_on_db_open=0 --snapshot_hold_ops=100000 --soft_pending_compaction_bytes_limit=68719476736 --sqfc_name=bar --sqfc_version=1 --sst_file_manager_bytes_per_sec=0 --sst_file_manager_bytes_per_truncate=0 --stats_dump_period_sec=10 --stats_history_buffer_size=0 --strict_bytes_per_sync=0 --subcompactions=1 --sync=0 --sync_fault_injection=1 --table_cache_numshardbits=0 --target_file_size_base=16777216 --target_file_size_multiplier=1 --test_batches_snapshots=0 --top_level_index_pinning=3 --uncache_aggressiveness=9890 --universal_max_read_amp=-1 --unpartitioned_pinning=3 --use_adaptive_mutex=0 --use_adaptive_mutex_lru=1 --use_attribute_group=0 --use_delta_encoding=0 --use_direct_io_for_flush_and_compaction=0 --use_direct_reads=0 --use_full_merge_v1=0 --use_get_entity=0 --use_merge=0 --use_multi_cf_iterator=0 --use_multi_get_entity=0 --use_multiget=0 --use_put_entity_one_in=0 --use_sqfc_for_range_queries=0 --use_timed_put_one_in=0 --use_write_buffer_manager=0 --user_timestamp_size=0 --value_size_mult=32 --verification_only=0 --verify_checksum=0 --verify_checksum_one_in=0 --verify_compression=1 --verify_db_one_in=0 --verify_file_checksums_one_in=0 --verify_iterator_with_expected_state_one_in=5 --verify_sst_unique_id_in_manifest=1 --wal_bytes_per_sync=0 --wal_compression=zstd --write_buffer_size=335544320 --write_dbid_to_manifest=1 --write_fault_one_in=100 --writepercent=100
```
- CI
Reviewed By: cbi42
Differential Revision: D58917145
Pulled By: hx235
fbshipit-source-id: b6397036bea035a92341c2b05fb01872db2153d7
Summary:
**Context/Summary:**
https://github.com/facebook/rocksdb/pull/12713 accidentally turned on fault injection in BatchedOpsStressTest and MultiOpsTxnsStressTest. Though this was meant to increase testing coverage, it also made our CI noisy. For now, we have decided to disable it until we manage to stabilize the CI and fix the bugs surfaced in NonBatchedOpsStressTest, which impacts more users.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12794
Test Plan: CI
Reviewed By: cbi42
Differential Revision: D58897598
Pulled By: hx235
fbshipit-source-id: 8094072ef1bff27d7825efed0876f365a31fef9c
Summary:
**Context:**
We currently have partial error injection:
- DB operation: all read, SST write
- DB open: all read, SST write, all metadata write.
This PR completes the error injection (with some limitations below):
- DB operation & open: all read, all write, all metadata write, all metadata read
**Summary:**
- Inject retryable metadata read and metadata write errors concerning directories (e.g., dir sync) or file metadata (e.g., name, size, file creation/deletion...)
- Inject retryable errors to all major file types: random access file, sequential file, writable file
- Allow db stress test operations to handle above injected errors gracefully without crashing
- Change all error injection to a thread-local implementation for easier disabling and enabling within the same thread. For example, we can ensure the error handling thread has no error injection. It's also cleaner in code (see the sketch after this list).
- Limitation: compared to before, we now don't have write fault injection for backup/restore `CopyOrCreateFiles` worker threads (since they use anonymous background threads), nor read injection for the DB open background thread
- Add a new flag to test error recovery without error injection so we can test the path where error recovery actually succeeds
- Some Refactory & fix to db stress test framework (see PR review comments)
- Fix some minor bugs surfaced (see PR review comments)
- Limitation: had to disable backup restore with metadata read/write injection since it surfaces too many testing issues. Will add it back later to focus on surfacing actual code/internal bugs first.
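As a rough illustration of the thread-local enable/disable pattern mentioned above (the class and method names below are hypothetical, not the actual FaultInjectionTestFS API):
```
// Hypothetical sketch of thread-local error injection that can be turned off
// for a specific thread (e.g., the error recovery thread) without affecting
// injection on other threads. Not the actual FaultInjectionTestFS API.
#include <random>

class ThreadLocalFaultInjector {
 public:
  // One-in-N chance of injecting an error on this thread; 0 disables it.
  static void SetThreadLocalErrorOneIn(int one_in) { one_in_ = one_in; }

  static bool MaybeInjectError() {
    if (one_in_ <= 0) {
      return false;  // injection disabled on this thread
    }
    thread_local std::mt19937 rng{std::random_device{}()};
    return std::uniform_int_distribution<int>(0, one_in_ - 1)(rng) == 0;
  }

 private:
  static thread_local int one_in_;  // per-thread injection setting
};

thread_local int ThreadLocalFaultInjector::one_in_ = 0;
```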
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12713
Test Plan:
- Existing UT
- CI with no trivial error failure
Reviewed By: pdillinger
Differential Revision: D58326608
Pulled By: hx235
fbshipit-source-id: 011b5195aaeb6011641ae0a9194f7f2a0e325ad7
Summary:
Implemented two key segment extractors that satisfy the "segment prefix property," one with variable segment widths and one with fixed. Used these to create a couple of named configs and versions that are randomly selected by the crash test. On the read side, the required table_filter is set up everywhere I found the stress test uses iterator_upper_bound.
Writing filters on new SST files and applying filters on SST files to range queries are configured independently, to potentially help with isolating different sides of the functionality.
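For intuition only, a fixed-width segment extractor satisfying the segment prefix property might look like the sketch below; this is a conceptual illustration, not the experimental RocksDB extractor interface.
```
// Conceptual sketch: cut a key into fixed-width segments so that any prefix of
// the key is covered by a prefix of its segment sequence (the "segment prefix
// property"). Not the actual RocksDB experimental extractor API.
#include <string>
#include <vector>

std::vector<std::string> FixedWidthSegments(const std::string& key,
                                            size_t width) {
  std::vector<std::string> segments;
  for (size_t pos = 0; pos < key.size(); pos += width) {
    segments.push_back(key.substr(pos, width));
  }
  return segments;
}
```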
Not yet implemented / possible follow-up:
* Consider manipulating/skewing the query bounds to better exercise filters
* Not yet using categories in the extractors
* Not yet dynamically changing the filtering version
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12769
Test Plan: Some stress test trial runs, including with ASAN. Inserted some temporary probes to ensure code was being exercised (more or less) as intended.
Reviewed By: hx235
Differential Revision: D58547462
Pulled By: pdillinger
fbshipit-source-id: f7b1596dd668426268c5293ac17615f749703f52
Summary:
AttributeGroup is not yet supported in MultiOpsTxn Test. Disabling it for now.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12781
Test Plan: Disabling in the test
Reviewed By: hx235
Differential Revision: D58757042
Pulled By: jaykorean
fbshipit-source-id: 8c3c85376e6ec0d1c7027b83abeb91eddc64236f
Summary:
Instead of completely disallowing `MultiCfIterator` when one or more child iterators will do manual prefix iteration (as suggested in https://github.com/facebook/rocksdb/issues/12770 ), just let `MultiCfIterator` operate as is even when there's a possibility of undefined result from child iterators. If one or more child iterators cause the heap to be empty, just return early and `Valid()` will return false.
It is still possible that the heap is not empty when one or more child iterators are returning wrong keys. Basically, MultiCfIterator behaves the same as what we described in https://github.com/facebook/rocksdb/wiki/Prefix-Seek#manual-prefix-iterating - "RocksDB will not return error when it is misused and the iterating result will be undefined."
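A minimal usage sketch, assuming the `DB::NewCoalescingIterator` API used by the stress test and simplified handling:
```
// Minimal sketch (assumes DB::NewCoalescingIterator): when a child iterator
// causes the internal heap to become empty, the iterator simply becomes
// invalid rather than returning an error.
#include <memory>
#include <vector>
#include "rocksdb/db.h"

void ScanAcrossColumnFamilies(
    rocksdb::DB* db, const std::vector<rocksdb::ColumnFamilyHandle*>& cfs) {
  std::unique_ptr<rocksdb::Iterator> iter =
      db->NewCoalescingIterator(rocksdb::ReadOptions(), cfs);
  for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
    // iter->key() / iter->value() present the coalesced view across CFs.
  }
  // With misused manual prefix iteration in a child iterator the results are
  // undefined, but termination is still signaled only via Valid() == false.
}
```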
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12773
Test Plan:
MultiCfIterator added back to the stress test
```
python3 tools/db_crashtest.py blackbox --simple --max_key=25000000 --write_buffer_size=4194304 --use_attribute_group=0 --use_put_entity_one_in=1 --use_multi_get=1 --use_multi_cf_iterator=1 --verify_iterator_with_expected_state_one_in=2
```
Reviewed By: cbi42
Differential Revision: D58612055
Pulled By: jaykorean
fbshipit-source-id: e0dd942bed98382c59d463412dd8f163e6790b93
Summary:
**Context/Summary:**
https://github.com/facebook/rocksdb/pull/12567 disabled reopen with un-synced data loss in the crash test since we discovered un-synced WAL loss and we currently don't support prefix recovery in reopen. This PR explicitly syncs WAL data before close to avoid such a data loss case and adds back the testing coverage.
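A sketch of the idea, assuming the standard `DB::SyncWAL()` / `DB::Close()` calls and simplified error handling:
```
// Sketch: sync WAL data before closing so a later reopen does not depend on
// prefix recovery of unsynced WAL data. Error handling simplified.
#include "rocksdb/db.h"

rocksdb::Status CloseWithSyncedWal(rocksdb::DB* db) {
  // With manual WAL flush enabled, FlushWAL(/*sync=*/true) may be needed
  // instead so buffered WAL data is written out before syncing.
  rocksdb::Status s = db->SyncWAL();
  if (!s.ok()) {
    return s;
  }
  return db->Close();
}
```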
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12746
Test Plan: CI
Reviewed By: ajkr
Differential Revision: D58326890
Pulled By: hx235
fbshipit-source-id: 0865f715e97c5948d7cb3aea62fe2a626cb6522a
Summary:
Currently, when files become obsolete, the block cache entries associated with them just age out naturally. With pure LRU, this is not too bad, as once you "use" enough cache entries to (re-)fill the cache, you are guaranteed to have purged the obsolete entries. However, HyperClockCache is a counting clock cache with a somewhat longer memory, so could be more negatively impacted by previously-hot cache entries becoming obsolete, and taking longer to age out than newer single-hit entries.
Part of the reason we still have this natural aging-out is that there's almost no connection between block cache entries and the file they are associated with. Everything is hashed into the same pool(s) of entries with nothing like a secondary index based on file. Keeping track of such an index could be expensive.
This change adds a new, mutable CF option `uncache_aggressiveness` for erasing obsolete block cache entries. The process can be speculative, lossy, or unproductive because not all potential block cache entries associated with files will be resident in memory, and attempting to remove them all could be wasted CPU time. Rather than a simple on/off switch, `uncache_aggressiveness` basically tells RocksDB how much CPU you're willing to burn trying to purge obsolete block cache entries. When such efforts are not sufficiently productive for a file, we stop and move on.
The option is in ColumnFamilyOptions so that it is dynamically changeable for already-open files, and customizable by CF.
Note that this block cache removal happens as part of the process of purging obsolete files, which is often in a background thread (depending on `background_purge_on_iterator_cleanup` and `avoid_unnecessary_blocking_io` options) rather than along CPU critical paths.
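Because the option is a mutable column family option, it can be set at open time or adjusted on a live DB; a minimal sketch:
```
// Minimal sketch of configuring uncache_aggressiveness, a mutable CF option.
#include "rocksdb/db.h"

rocksdb::Status RaiseUncacheAggressiveness(rocksdb::DB* db,
                                           rocksdb::ColumnFamilyHandle* cf) {
  // At open time this could instead be set directly on the Options, e.g.
  //   options.uncache_aggressiveness = 300;
  // Here we adjust it dynamically on an already-open column family.
  return db->SetOptions(cf, {{"uncache_aggressiveness", "300"}});
}
```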
Notable auxiliary code details:
* Possibly fixing some issues with trivial moves with `only_delete_metadata`: unnecessary TableCache::Evict in that case and missing from the ObsoleteFileInfo move operator. (Not able to reproduce a current failure.)
* Remove suspicious TableCache::Erase() from VersionSet::AddObsoleteBlobFile() (TODO follow-up item)
Marked EXPERIMENTAL until more thorough validation is complete.
Direct stats of this functionality are omitted because they could be misleading. Block cache hit rate is a better indicator of benefit, and CPU profiling a better indicator of cost.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12694
Test Plan:
* Unit tests added, including refactoring an existing test to make better use of parameterized tests.
* Added to crash test.
* Performance, sample command:
```
for I in `seq 1 10`; do for UA in 300; do for CT in lru_cache fixed_hyper_clock_cache auto_hyper_clock_cache; do rm -rf /dev/shm/test3; TEST_TMPDIR=/dev/shm/test3 /usr/bin/time ./db_bench -benchmarks=readwhilewriting -num=13000000 -read_random_exp_range=6 -write_buffer_size=10000000 -bloom_bits=10 -cache_type=$CT -cache_size=390000000 -cache_index_and_filter_blocks=1 -disable_wal=1 -duration=60 -statistics -uncache_aggressiveness=$UA 2>&1 | grep -E 'micros/op|rocksdb.block.cache.data.(hit|miss)|rocksdb.number.keys.(read|written)|maxresident' | awk '/rocksdb.block.cache.data.miss/ { miss = $4 } /rocksdb.block.cache.data.hit/ { hit = $4 } { print } END { print "hit rate = " ((hit * 1.0) / (miss + hit)) }' | tee -a results-$CT-$UA; done; done; done
```
Averaging 10 runs each case, block cache data block hit rates
```
lru_cache
UA=0 -> hit rate = 0.327, ops/s = 87668, user CPU sec = 139.0
UA=300 -> hit rate = 0.336, ops/s = 87960, user CPU sec = 139.0
fixed_hyper_clock_cache
UA=0 -> hit rate = 0.336, ops/s = 100069, user CPU sec = 139.9
UA=300 -> hit rate = 0.343, ops/s = 100104, user CPU sec = 140.2
auto_hyper_clock_cache
UA=0 -> hit rate = 0.336, ops/s = 97580, user CPU sec = 140.5
UA=300 -> hit rate = 0.345, ops/s = 97972, user CPU sec = 139.8
```
Conclusion: up to roughly 1 percentage point of improved block cache hit rate, likely leading to overall improved efficiency (because the foreground CPU cost of cache misses likely outweighs the background CPU cost of erasure, let alone I/O savings).
Reviewed By: ajkr
Differential Revision: D57932442
Pulled By: pdillinger
fbshipit-source-id: 84a243ca5f965f731f346a4853009780a904af6c
Summary:
**Context/Summary:** a better API design was settled on recently, so we decided to revert these two changes.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12738
Test Plan: - CI
Reviewed By: ajkr
Differential Revision: D58162165
Pulled By: hx235
fbshipit-source-id: 9bbe4d2fe9fbe39213f4cf137a2d419e6ffb8e16
Summary:
We plan to re-enable the test after fixing it.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12728
Test Plan: N/A. Disabling the test
Reviewed By: hx235
Differential Revision: D58071284
Pulled By: jaykorean
fbshipit-source-id: af6b45ec7654f9c7b40c36d3b59c7087e27a7af9
Summary:
Introduce `use_multi_cf_iterator`, and when it's set, use `CoalescingIterator` in `TestIterate()`. Because all the column families contain the same data in today's Stress Test, we can compare `CoalescingIterator` against any `DBIter` from any of the column families. Currently, coalescing logic verification is done by unit tests, but we can extend the stress test to support different data in different column families in the future.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12706
Test Plan:
```
python3 tools/db_crashtest.py blackbox --simple --max_key=25000000 --write_buffer_size=4194304 --use_attribute_group=0 --use_put_entity_one_in=1 --use_multi_get=1 --use_multi_cf_iterator=1
```
**More PRs to come**
- Use `AttributeGroupIterator` when both `use_multi_cf_iterator` and `use_attribute_group` are true
- Support `Refresh()` in `CoalescingIterator`
- Extend Stress Test to support different data in different CFs (Long-term)
Reviewed By: ltamasi
Differential Revision: D58020247
Pulled By: jaykorean
fbshipit-source-id: 8e2483b85cf2bb0f5a9bb44851601bbf063484ec
Summary:
As titled. This PR also makes the interactive query tool more permissive by allowing the user to continue to try out a different command after the previous command received some allowed errors, such as `Status::NotFound`, `Status::InvalidArgument`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12716
Test Plan:
Manually tested:
```
yuzhangyu@yuzhangyu-mbp rocksdb % ./ldb --db=$TEST_DB --key_hex --value_hex query
get 0x0000000000000000 --read_timestamp=1115559245398440
0x0000000000000000|timestamp:1115559245398440 ==> 0x07000000000102030C0D0E0F08090A0B14151617101112131C1D1E1F18191A1B24252627202122232C2D2E2F28292A2B34353637303132333C3D3E3F38393A3B
put 0x0000000000000000 0x0000
put 0x0000000000000000 => 0x0000 failed: Invalid argument: cannot call this method on column family default that enables timestamp
put 0x0000000000000000 aha 0x0000
put gets invalid argument: Invalid argument: user provided timestamp is not a valid uint64 value.
put 0x0000000000000000 1115559245398441 0x08000000000102030C0D0E0F08090A0B14151617101112131C1D1E1F18191A1B24252627202122232C2D2E2F28292A2B34353637303132333C3D3E3F38393A3B
put 0x0000000000000000 write_ts: 1115559245398441 => 0x08000000000102030C0D0E0F08090A0B14151617101112131C1D1E1F18191A1B24252627202122232C2D2E2F28292A2B34353637303132333C3D3E3F38393A3B succeeded
delete 0x0000000000000000
delete 0x0000000000000000 failed: Invalid argument: cannot call this method on column family default that enables timestamp
delete 0x0000000000000000 1115559245398442
delete 0x0000000000000000 write_ts: 1115559245398442 succeeded
get 0x0000000000000000 --read_timestamp=1115559245398442
get 0x0000000000000000 read_timestamp: 1115559245398442 status: NotFound:
get 0x0000000000000000 --read_timestamp=1115559245398441
0x0000000000000000|timestamp:1115559245398441 ==> 0x08000000000102030C0D0E0F08090A0B14151617101112131C1D1E1F18191A1B24252627202122232C2D2E2F28292A2B34353637303132333C3D3E3F38393A3B
count --from=0x0000000000000000 --to=0x0000000000000001
scan from 0x0000000000000000 to 0x0000000000000001failed: Invalid argument: cannot call this method on column family default that enables timestamp
count --from=0x0000000000000000 --to=0x0000000000000001 --read_timestamp=1115559245398442
0
count --from=0x0000000000000000 --to=0x0000000000000001 --read_timestamp=1115559245398441
1
```
Reviewed By: ltamasi
Differential Revision: D57992183
Pulled By: jowlyzhang
fbshipit-source-id: 720525de22412d16aa952870e088f2c371459ece
Summary:
Add the `--leader_path` option to specify the directory path of the leader for a follower RocksDB instance. This PR also adds a `count` command to the repl shell. While not specific to followers, it is useful for testing purposes.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12682
Reviewed By: jowlyzhang
Differential Revision: D57642296
Pulled By: anand1976
fbshipit-source-id: 53767d496ecadc363ff92cd958b8e15a7bf3b151
Summary:
These names are confusing with `Logger` etc. so moving to `WalFile` etc.
Other small, related name refactorings.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12695
Test Plan: Left most unit tests using old names as an API compatibility test. Non-test code compiles with deprecated names removed. No functional changes.
Reviewed By: ajkr
Differential Revision: D57747458
Pulled By: pdillinger
fbshipit-source-id: 7b77596b9c20d865d43b9dc66c30c8bd2b3b424f
Summary:
It should be no less than `level0_file_num_compaction_trigger` (which defaults to 4) when set to a positive value. Otherwise DB open will fail.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12701
Test Plan: crash test not failing DB open due to this option value.
Reviewed By: ajkr
Differential Revision: D57825062
Pulled By: cbi42
fbshipit-source-id: 22d8e12aeceb5cef815157845995a8448552e2d2
Summary:
As titled. For dumping WAL files, since a mapping from column family id to the user comparator object is needed to print the timestamp in a human readable format, the option `[--db=<db_path>]` is added to the `dump_wal` command to allow the user to optionally open the DB as a read-only instance and dump the WAL file with better timestamp formatting.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12690
Test Plan:
Manually tested
dump_wal:
[dump a wal file specified with --walfile]
```
>> ./ldb --walfile=$TEST_DB/000004.log dump_wal --print_value
>>1,1,28,13,PUT(0) : 0x666F6F0100000000000000 : 0x7631
(Column family id: [0] contained in WAL are not opened in DB. Applied default hex formatting for user key. Specify --db=<db_path> to open DB for better user key formatting if it contains timestamp.)
```
[dump with --db specified for better timestamp formatting]
```
>> ./ldb --walfile=$TEST_DB/000004.log dump_wal --db=$TEST_DB --print_value
>> 1,1,28,13,PUT(0) : 0x666F6F|timestamp:1 : 0x7631
```
dump:
[dump a file specified with --path]
```
>>./ldb --path=/tmp/rocksdbtest-501/column_family_test_75359_17910784957761284041/000004.log dump
Sequence,Count,ByteSize,Physical Offset,Key(s) : value
1,1,28,13,PUT(0) : 0x666F6F0100000000000000 : 0x7631
(Column family id: [0] contained in WAL are not opened in DB. Applied default hex formatting for user key. Specify --db=<db_path> to open DB for better user key formatting if it contains timestamp.)
```
[dump db specified with --db]
```
>> ./ldb --db=/tmp/rocksdbtest-501/column_family_test_75359_17910784957761284041 dump
>> foo|timestamp:1 ==> v1
Keys in range: 1
```
idump
```
./ldb --db=$TEST_DB idump
'foo|timestamp:1' seq:1, type:1 => v1
Internal keys in range: 1
```
Reviewed By: ltamasi
Differential Revision: D57755382
Pulled By: jowlyzhang
fbshipit-source-id: a0a2ef80c92801cbf7bfccc64769c1191824362e
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12688
As a first step of covering the wide-column transaction APIs, the patch adds `PutEntity` to the optimistic and pessimistic transaction stress tests (for the latter, only when the WriteCommitted policy is utilized). Other APIs and the multi-operation transaction test will be covered by subsequent PRs.
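A rough sketch of the coverage added, assuming `Transaction::PutEntity` takes a column family handle, key, and `WideColumns` like the non-transactional API (simplified error handling):
```
// Sketch (assumes Transaction::PutEntity(cf, key, WideColumns)): write a
// wide-column entity inside a pessimistic WriteCommitted transaction.
#include <memory>
#include "rocksdb/utilities/transaction_db.h"
#include "rocksdb/wide_columns.h"

rocksdb::Status PutEntityInTransaction(rocksdb::TransactionDB* txn_db,
                                       rocksdb::ColumnFamilyHandle* cf) {
  std::unique_ptr<rocksdb::Transaction> txn(
      txn_db->BeginTransaction(rocksdb::WriteOptions()));
  rocksdb::WideColumns columns{{"attr1", "v1"}, {"attr2", "v2"}};
  rocksdb::Status s = txn->PutEntity(cf, "key1", columns);
  if (s.ok()) {
    s = txn->Commit();
  }
  return s;
}
```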
Reviewed By: jaykorean
Differential Revision: D57675781
fbshipit-source-id: bfe062ec5f6ab48641cd99a70f239ce4aa39299c
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12683
With optimistic transactions, the stress test parameter `txn_write_policy` is not applicable and is thus not set. When the parameter is subsequently checked, Python's dictionary `get` method returns `None`, which is not equal to zero. The net result of this is that currently, `sync_fault_injection` and `manual_wal_flush_one_in` are always disabled in optimistic transaction mode (most likely unintentionally).
Reviewed By: cbi42
Differential Revision: D57655339
fbshipit-source-id: 8b93a788f9b02307b6ea7b2129dc012271130334
Summary:
We recently noticed that some memtable flushes and file
ingestions could proceed during LockWAL, in violation of its stated
contract. (Note: we aren't 100% sure it's actually needed by MySQL, but
we want it to be in a clean state nonetheless.)
Despite earlier skepticism that this could be done safely (https://github.com/facebook/rocksdb/issues/12666), I
found a place to wait for LockWAL to be cleared before allowing
these operations to proceed: WaitForPendingWrites()
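For reference, a minimal sketch of the `LockWAL()` / `UnlockWAL()` contract being enforced (simplified error handling):
```
// Minimal sketch of the contract being tightened: while the WAL is locked,
// state-changing operations such as memtable flushes and file ingestion
// should not proceed.
#include "rocksdb/db.h"

rocksdb::Status WithFrozenWal(rocksdb::DB* db) {
  rocksdb::Status s = db->LockWAL();  // freezes the logical state of the DB
  if (!s.ok()) {
    return s;
  }
  // ... observe/copy a consistent state of the WAL and DB here ...
  return db->UnlockWAL();
}
```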
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12652
Test Plan:
Added to unit tests. Extended how db_stress validates LockWAL
and re-enabled combination of ingestion and LockWAL in crash test, in
follow-up to https://github.com/facebook/rocksdb/issues/12642
Ran blackbox_crash_test for a long while with relevant features
amplified.
Suggested follow-up: fix FaultInjectionTestFS to report file sizes
consistent with what the user has requested to be flushed.
Reviewed By: jowlyzhang
Differential Revision: D57622142
Pulled By: pdillinger
fbshipit-source-id: aef265fce69465618974b4ec47f4636257c676ce
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12677
The patch contains two fixes related to printing `PutEntity` records with `ldb dump_wal`:
1) It adds the key to the printout (it was missing earlier).
2) It restores the formatting flags of the output stream after dumping the wide-column structure so that any `hex` flag that might have been set does not affect subsequent printing of e.g. sequence numbers.
Reviewed By: jaykorean, jowlyzhang
Differential Revision: D57591295
fbshipit-source-id: af4e3e219f0082ad39bbdfd26f8c5a57ebb898be
Summary:
The value of the `inplace_update_support` option needs to be fixed across runs of db_stress on the same DB (https://github.com/facebook/rocksdb/issues/12577). My recent fix (https://github.com/facebook/rocksdb/issues/12673) regressed this behavior. This PR also fixes some existing places where this does not hold.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12675
Test Plan: monitor crash tests related to `inplace_update_support`.
Reviewed By: hx235
Differential Revision: D57576375
Pulled By: cbi42
fbshipit-source-id: 75b1bd233f03e5657984f5d5234dbbb1ffc35c27
Summary:
With unsynced data loss, we replay traces to recover the expected state to the DB's latest sequence number. With `inplace_update_support`, the largest sequence number of the memtable may not reflect the latest update, because in-place updates in the memtable do not update the sequence number. So we disable `inplace_update_support` where traces need to be replayed.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12673
Reviewed By: ltamasi
Differential Revision: D57512548
Pulled By: cbi42
fbshipit-source-id: 69278fe2e935874faf744d0ac4fd85263773c3ec
Summary:
As titled. A proper fix should probably be failing file ingestion if the DB is in a LockWAL state, since LockWAL promises that it "Freezes the logical state of the DB".
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12642
Reviewed By: pdillinger
Differential Revision: D57235869
Pulled By: jowlyzhang
fbshipit-source-id: c70031463842220f865621eb6f53424df27d29e9
Summary:
Seeing way too many errors likely related to PromoteL0 from https://github.com/facebook/rocksdb/issues/12617, containing
```
Cannot delete table file #N from level 0 since it is on level X
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12651
Test Plan: watch crash test results
Reviewed By: hx235
Differential Revision: D57286208
Pulled By: pdillinger
fbshipit-source-id: f7f0560cc0804ca297373c8d20ebc34986cc19d0
Summary:
Adding AttributeGroup APIs in stress test. This contains the following changes only. More PRs to follow.
- Introduce `use_attribute_group` flag
- AttributeGroup `PutEntity()` and `GetEntity()` are now used per `use_attribute_group` flag in BatchOps, NonBatchOps and CfConsistency tests
In the next PRs I plan to add
- AttributeGroup `MultiGetEntity()` in Stress Test
- AttributeGroupIterator in Stress Test (along with CoalescingIterator)
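A rough sketch of the write path exercised by the new flag, assuming the `AttributeGroups` overload of `DB::PutEntity` (each group carries its own column family handle):
```
// Sketch (assumes the AttributeGroups overload of DB::PutEntity): one key
// whose wide columns are grouped per column family.
#include "rocksdb/db.h"
#include "rocksdb/wide_columns.h"

rocksdb::Status PutAttributeGroups(rocksdb::DB* db,
                                   rocksdb::ColumnFamilyHandle* cf1,
                                   rocksdb::ColumnFamilyHandle* cf2) {
  rocksdb::WideColumns cf1_columns{{"attr1", "v1"}};
  rocksdb::WideColumns cf2_columns{{"attr2", "v2"}};
  rocksdb::AttributeGroups groups{rocksdb::AttributeGroup(cf1, cf1_columns),
                                  rocksdb::AttributeGroup(cf2, cf2_columns)};
  // No separate column family argument: each group names its own CF.
  return db->PutEntity(rocksdb::WriteOptions(), "key1", groups);
}
```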
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12605
Test Plan:
NonBatchOps
```
python3 tools/db_crashtest.py blackbox --simple --max_key=25000000 --write_buffer_size=4194304 --use_attribute_group=1 --use_put_entity_one_in=1
```
BatchOps
```
python3 tools/db_crashtest.py blackbox --test_batches_snapshots=1 --max_key=25000000 --write_buffer_size=4194304 --use_attribute_group=1 --use_put_entity_one_in=1
```
CfConsistency Test
```
python3 tools/db_crashtest.py blackbox --cf_consistency --max_key=25000000 --write_buffer_size=4194304 --use_attribute_group=1 --use_put_entity_one_in=1
```
Reviewed By: ltamasi
Differential Revision: D56916768
Pulled By: jaykorean
fbshipit-source-id: 8555d9e0d05927740a10e4e8301e44beec59a6f5
Summary:
**Context/Summary:**
As titled. Bonus: found that PromoteL0 called concurrently with another PromoteL0 can return a non-okay error, so clarified the API.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12617
Test Plan: CI
Reviewed By: ajkr
Differential Revision: D56954428
Pulled By: hx235
fbshipit-source-id: 0e056153c515003fd241ffec59b0d8a27529db4c
Summary:
After https://github.com/facebook/rocksdb/issues/12624, we are seeing db_stress failures due to db_crashtest.py calling it with --prefixpercent=5 --enable_compaction_filter=1
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12627
Test Plan: watch crash test
Reviewed By: ajkr
Differential Revision: D57121592
Pulled By: pdillinger
fbshipit-source-id: 55727355a7662e67efcd22d7e353153e78e24f59
Summary:
This also updates WriteBatch's protection info to include write time since there are several places in memtable that by default protect the whole value slice.
This PR is stacked on https://github.com/facebook/rocksdb/issues/12543
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12559
Reviewed By: pdillinger
Differential Revision: D56308285
Pulled By: jowlyzhang
fbshipit-source-id: 5524339fe0dd6c918dc940ca2f0657b5f2111c56
Summary:
Combined code from `MultiGetCommand` and `GetEntityCommand` to introduce `MultiGetEntityCommand`. Some minor fixes for the related subcommands are included.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12593
Reviewed By: jaykorean
Differential Revision: D56687147
Pulled By: ajkr
fbshipit-source-id: 2ad7b7ba8e05e990b43f2d1eb4990f746ce5f1ea
Summary:
Made `BlockBasedTableOptions::block_align` incompatible (i.e., APIs will return `Status::InvalidArgument`) with more ways of enabling compression: `CompactionOptions::compression`, `ColumnFamilyOptions::compression_per_level`, and `ColumnFamilyOptions::bottommost_compression`. Previously it was only incompatible with `ColumnFamilyOptions::compression`.
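A sketch of one combination that is now rejected (assuming the check surfaces as `Status::InvalidArgument` at DB open, as described):
```
// Sketch of a newly rejected combination: block_align together with
// bottommost_compression (previously only ColumnFamilyOptions::compression
// was checked).
#include <string>
#include "rocksdb/db.h"
#include "rocksdb/table.h"

rocksdb::Status OpenWithBlockAlignAndBottommostCompression(
    const std::string& path) {
  rocksdb::Options options;
  options.create_if_missing = true;

  rocksdb::BlockBasedTableOptions table_options;
  table_options.block_align = true;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));

  options.bottommost_compression = rocksdb::kZSTD;  // incompatible with block_align

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, path, &db);
  delete db;
  return s;  // expected: s.IsInvalidArgument() with this configuration
}
```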
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12592
Reviewed By: hx235
Differential Revision: D56650862
Pulled By: ajkr
fbshipit-source-id: f5201602c2ce436e6d8d30893caa6a161a61f141
Summary:
`MultiOpsTxnsStressTest` relies on snapshot which is incompatible with `inplace_update_support`. TransactionDB uses snapshot too so we don't expect it to be used with `inplace_update_support` either.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12586
Test Plan:
```
python3 tools/db_crashtest.py whitebox --[test_multiops_txn|txn] --txn_write_policy=1
```
Reviewed By: hx235
Differential Revision: D56602769
Pulled By: cbi42
fbshipit-source-id: 8778541295f0af71e8ce912c8f872ab5cc607fc1
Summary:
**Context/Summary:**
Our crash test recently surfaced incompatibilities between DeleteRange and inplace_update_support. Incorrect read results can be returned after insertion into memtables that already contain delete range data.
This PR clarifies this in the API and re-enables `inplace_update_support` in the crash test with sanitization.
Ideally there should be a way to check the memtable for a delete range entry upon put when inplace_update_support = true
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12577
Test Plan: CI
Reviewed By: ajkr
Differential Revision: D56492556
Pulled By: hx235
fbshipit-source-id: 9e80e5c69dd708716619a266f41580959680c83b
Summary:
**Context/Summary:**
Our recent crash test failures show that inplace_update_support can cause the DB to return values inconsistent with the expected state upon crash recovery if delete range was used in the previous run AND inplace_update_support=true is used in either the previous or the current verification run. Since it's a bit hard to keep track of whether the previous run used delete range or not, I decided to temporarily disable inplace_update_support in the crash test to keep it stabilized before figuring out why these two features are incompatible and how to prevent such a combination in the crash test.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12574
Test Plan: Rehearsed many stress run with `inplace_update_support=0` and they passed
Reviewed By: jaykorean
Differential Revision: D56454951
Pulled By: hx235
fbshipit-source-id: 57f2ae6308bad7ed4077ddb9e658380742afa293
Summary:
A basic implementation of RocksDB follower mode, which opens a remote database (referred to as leader) on a distributed file system by tailing its MANIFEST. It leverages the secondary instance mode, but is different in some key ways -
1. It has its own directory with links to the leader's database
2. Periodically refreshes itself
3. (Future) Snapshot support
4. (Future) Garbage collection of obsolete links
5. (Long term) Memtable replication
There are two main classes implementing this functionality - `DBImplFollower` and `OnDemandFileSystem`. The former is derived from `DBImplSecondary`. Similar to `DBImplSecondary`, it implements recovery and catch up through MANIFEST tailing using the `ReactiveVersionSet`, but does not consider logs. In a future PR, we will implement memtable replication, which will eliminate the need to catch up using logs. In addition, the recovery and catch-up tries to avoid directory listing as repeated metadata operations are expensive.
The second main piece is the `OnDemandFileSystem`, which plugs in as an `Env` for the follower instance and creates the illusion of the follower directory as a clone of the leader directory. It creates links to SSTs on first reference. When the follower tails the MANIFEST and attempts to create a new `Version`, it calls `VerifyFileMetadata` to verify the size of the file, and optionally the unique ID of the file. During this process, links are created which prevent the underlying files from getting deallocated even if the leader deletes the files.
TODOs: Deletion of obsolete links, snapshots, robust checking against misconfigurations, better observability etc.
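The entry point would look roughly like the following; the `DB::OpenAsFollower` signature shown here is an assumption based on this description, not a verified API:
```
// Sketch of opening a follower instance. The OpenAsFollower signature is an
// assumption based on the PR description, not a verified API.
#include <memory>
#include <string>
#include "rocksdb/db.h"

rocksdb::Status OpenFollower(const std::string& follower_dir,
                             const std::string& leader_dir,
                             std::unique_ptr<rocksdb::DB>* follower) {
  rocksdb::Options options;
  // The follower keeps its own directory with links to the leader's files and
  // periodically catches up by tailing the leader's MANIFEST.
  return rocksdb::DB::OpenAsFollower(options, follower_dir, leader_dir,
                                     follower);
}
```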
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12540
Reviewed By: jowlyzhang
Differential Revision: D56315718
Pulled By: anand1976
fbshipit-source-id: d19e1aca43a6af4000cb8622a718031b69ebd97b
Summary:
**Context/Summary:**
In-place memtable updates (inplace_update_support) are not compatible with concurrent writes (allow_concurrent_memtable_write), so we disallow this combination in the crash test.
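For reference, a sketch of the configuration being disallowed (DB open is expected to reject it with `InvalidArgument`):
```
// Sketch of the disallowed combination: in-place memtable updates together
// with concurrent memtable writes.
#include <string>
#include "rocksdb/db.h"

rocksdb::Status TryIncompatibleOptions(const std::string& path) {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.inplace_update_support = true;           // CF option
  options.allow_concurrent_memtable_write = true;  // DB option

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, path, &db);
  delete db;
  return s;  // expected to be InvalidArgument for this combination
}
```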
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12550
Test Plan: CI
Reviewed By: jaykorean
Differential Revision: D56204269
Pulled By: hx235
fbshipit-source-id: 06608f2591db5e37470a1da6afcdfd2701781c2d
Summary:
**Context/Summary:**
This PR includes some public DB APIs that were not yet tested in crash/stress tests and can be added in a straightforward way.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12541
Test Plan:
- Locally run crash test heavily stressing on these new APIs
- CI
Reviewed By: jowlyzhang
Differential Revision: D56164892
Pulled By: hx235
fbshipit-source-id: 8bb568c3e65aec39d642987033f1d76c52f69bd8