8829ba9fe1
Summary:
While working on Meta's internal test triaging process, I found that `db_crashtest.py` was printing `stdout` and `stderr` together, which makes it hard to extract only `stderr` from a test run. This change adds an option to print `stderr` separately. `print_stderr_separately` is introduced as an optional parameter with a default value of `False` to keep the existing behavior as is (except for a few minor changes).

Minor changes to the existing behavior:
- We no longer print `stderr has error message:` and a `***` prefix on each line. When stderr is included in stdout, we simply print `stderr:` followed by the `stderr` contents as is.
- We no longer print `times error occurred in output is ...`, which doesn't appear to add any value.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12301

Test Plan:

**Default Behavior (blackbox)**

The run printed everything as is:

```
$> python3 tools/db_crashtest.py blackbox --simple --max_key=25000000 --write_buffer_size=4194304 2> /tmp/error.log
Running blackbox-crash-test with
interval_between_crash=120
total-duration=6000
...
Integrated BlobDB: blob files enabled 0, min blob size 0, blob file size 268435456, blob compression type NoCompression, blob GC enabled 0, cutoff 0.250000, force threshold 1.000000, blob compaction readahead size 0, blob file starting level 0
Integrated BlobDB: blob cache disabled
DB path: [/tmp/jewoongh/rocksdb_crashtest_blackboxwh7yxpec]
(Re-)verified 0 unique IDs
2024/01/29-09:16:30 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:16:35 Starting database operations
2024/01/29-09:16:35 Starting verification
Stress Test : 543.600 micros/op 8802 ops/sec
            : Wrote 0.00 MB (0.27 MB/sec) (50% of 10 ops)
            : Wrote 5 times
            : Deleted 1 times
            : Single deleted 0 times
            : 4 read and 0 found the key
            : Prefix scanned 0 times
            : Iterator size sum is 0
            : Iterated 0 times
            : Deleted 0 key-ranges
            : Range deletions covered 0 keys
            : Got errors 0 times
            : 0 CompactFiles() succeed
            : 0 CompactFiles() did not succeed
stderr:
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
Verification failed :(
```

Nothing in stderr:

```
$> cat /tmp/error.log
```

**Default Behavior (whitebox)**

The run printed everything as is:

```
$> python3 tools/db_crashtest.py whitebox --simple --max_key=25000000 --write_buffer_size=4194304 2> /tmp/error.log
Running whitebox-crash-test with
total-duration=10000
...
(Re-)verified 571 unique IDs
2024/01/29-09:33:53 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:35:16 Starting database operations
2024/01/29-09:35:16 Starting verification
Stress Test : 97248.125 micros/op 10 ops/sec
            : Wrote 0.00 MB (0.00 MB/sec) (12% of 8 ops)
            : Wrote 1 times
            : Deleted 0 times
            : Single deleted 0 times
            : 4 read and 1 found the key
            : Prefix scanned 1 times
            : Iterator size sum is 120868
            : Iterated 4 times
            : Deleted 0 key-ranges
            : Range deletions covered 0 keys
            : Got errors 0 times
            : 0 CompactFiles() succeed
            : 0 CompactFiles() did not succeed
stderr:
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
New cache capacity = 4865393
Verification failed :(

TEST FAILED. See kill option and exit code above!!!
```

Nothing in stderr:

```
$> cat /tmp/error.log
```

**New option added (blackbox)**

```
$> python3 tools/db_crashtest.py blackbox --simple --max_key=25000000 --write_buffer_size=4194304 --print_stderr_separately 2> /tmp/error.log
Running blackbox-crash-test with
interval_between_crash=120
total-duration=6000
...
Integrated BlobDB: blob files enabled 0, min blob size 0, blob file size 268435456, blob compression type NoCompression, blob GC enabled 0, cutoff 0.250000, force threshold 1.000000, blob compaction readahead size 0, blob file starting level 0
Integrated BlobDB: blob cache disabled
DB path: [/tmp/jewoongh/rocksdb_crashtest_blackbox7ybna32z]
(Re-)verified 0 unique IDs
Compaction filter factory: DbStressCompactionFilterFactory
2024/01/29-09:05:39 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:05:46 Starting database operations
2024/01/29-09:05:46 Starting verification
Stress Test : 235.917 micros/op 16000 ops/sec
            : Wrote 0.00 MB (0.16 MB/sec) (16% of 12 ops)
            : Wrote 2 times
            : Deleted 1 times
            : Single deleted 0 times
            : 9 read and 0 found the key
            : Prefix scanned 0 times
            : Iterator size sum is 0
            : Iterated 0 times
            : Deleted 0 key-ranges
            : Range deletions covered 0 keys
            : Got errors 0 times
            : 0 CompactFiles() succeed
            : 0 CompactFiles() did not succeed
```

stderr printed separately:

```
$> cat /tmp/error.log
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
New cache capacity = 19461571
Verification failed :(
```

**New option added (whitebox)**

```
$> python3 tools/db_crashtest.py whitebox --simple --max_key=25000000 --write_buffer_size=4194304 --print_stderr_separately 2> /tmp/error.log
Running whitebox-crash-test with
total-duration=10000
...
Integrated BlobDB: blob files enabled 0, min blob size 0, blob file size 268435456, blob compression type NoCompression, blob GC enabled 0, cutoff 0.250000, force threshold 1.000000, blob compaction readahead size 0, blob file starting level 0
Integrated BlobDB: blob cache disabled
DB path: [/tmp/jewoongh/rocksdb_crashtest_whiteboxtwj0ihn6]
(Re-)verified 157 unique IDs
2024/01/29-09:39:59 Initializing worker threads
Crash-recovery verification passed :)
2024/01/29-09:40:16 Starting database operations
2024/01/29-09:40:16 Starting verification
Stress Test : 742.474 micros/op 11801 ops/sec
            : Wrote 0.00 MB (0.27 MB/sec) (36% of 19 ops)
            : Wrote 7 times
            : Deleted 1 times
            : Single deleted 0 times
            : 8 read and 0 found the key
            : Prefix scanned 0 times
            : Iterator size sum is 0
            : Iterated 4 times
            : Deleted 0 key-ranges
            : Range deletions covered 0 keys
            : Got errors 0 times
            : 0 CompactFiles() succeed
            : 0 CompactFiles() did not succeed

TEST FAILED. See kill option and exit code above!!!
```

stderr printed separately:

```
$> cat /tmp/error.log
WARNING: prefix_size is non-zero but memtablerep != prefix_hash
Error : jewoongh injected test error This is not a real failure.
Error : jewoongh injected test error This is not a real failure.
Error : jewoongh injected test error This is not a real failure.
New cache capacity = 4865393
Verification failed :(
```

Reviewed By: akankshamahajan15

Differential Revision: D53187491

Pulled By: jaykorean

fbshipit-source-id: 76f9100d08b96d014e41b7b88b206d69f0ae932b
README.md
RocksDB: A Persistent Key-Value Store for Flash and RAM Storage
RocksDB is developed and maintained by the Facebook Database Engineering Team. It is built on earlier work on LevelDB by Sanjay Ghemawat (sanjay@google.com) and Jeff Dean (jeff@google.com).
This code is a library that forms the core building block for a fast key-value server, especially suited for storing data on flash drives. It has a Log-Structured-Merge-Database (LSM) design with flexible tradeoffs between Write-Amplification-Factor (WAF), Read-Amplification-Factor (RAF) and Space-Amplification-Factor (SAF). It has multi-threaded compactions, making it especially suitable for storing multiple terabytes of data in a single database.
Start with example usage here: https://github.com/facebook/rocksdb/tree/main/examples
See the GitHub wiki for more explanation.
The public interface is in `include/`. Callers should not include or rely on the details of any other header files in this package. Those internal APIs may be changed without warning.
Questions and discussions are welcome on the RocksDB Developers Public Facebook group and email list on Google Groups.
License
RocksDB is dual-licensed under both the GPLv2 (found in the COPYING file in the root directory) and Apache 2.0 License (found in the LICENSE.Apache file in the root directory). You may select, at your option, one of the above-listed licenses.