Summary:
Through code inspection while debugging an apparent leak of ColumnFamilyData in the crash test, I found a case where too few calls to UnrefAndTryDelete() could be made on a cfd. This fixes that case, which would fail like this in the new unit test:
```
db_flush_test: db/column_family.cc:1648:
rocksdb::ColumnFamilySet::~ColumnFamilySet(): Assertion `last_ref' failed.
```
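For context, a minimal sketch of the refcounting contract involved (hypothetical, simplified type; the real logic lives on `ColumnFamilyData` in db/column_family.cc): every `Ref()` must be balanced by exactly one `UnrefAndTryDelete()`, and a missed call leaks the cfd and trips the `last_ref` assertion above.
```
#include <atomic>
#include <cassert>

// Hypothetical, simplified sketch of the ColumnFamilyData refcount contract.
class CfdSketch {
 public:
  void Ref() { refs_.fetch_add(1, std::memory_order_relaxed); }

  // Returns true if this call released the last reference and deleted *this.
  bool UnrefAndTryDelete() {
    int old_refs = refs_.fetch_sub(1, std::memory_order_acq_rel);
    assert(old_refs >= 1);
    if (old_refs == 1) {
      delete this;
      return true;
    }
    return false;
  }

 private:
  std::atomic<int> refs_{1};  // one reference held by the owning set
};
```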
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12176
Test Plan: unit test added
Reviewed By: cbi42
Differential Revision: D52417071
Pulled By: pdillinger
fbshipit-source-id: 4ee33c918409cf9c1968f138e273d3347a6cc8e5
Summary:
* Largely based on https://github.com/facebook/rocksdb/issues/12085 but grouped into one large workflow because of bad GHA UI design (see comments).
* Windows job details consolidated into an action file so that those jobs can easily move between per-pr-push and nightly.
* Simplify some handling of the "CIRCLECI" environment variable and add "GITHUB_ACTIONS" handling in the same places
* For jobs that we want to add to pr-jobs or nightly, there are disabled "candidate" workflows with draft versions of those jobs.
* ARM jobs are disabled while waiting on full GHA support.
* build-linux-java-static needed some special attention to work, due to GLIBC compatibility issues (see comments).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12163
Test Plan:
Nightly jobs can be seen passing between these two links:
https://github.com/facebook/rocksdb/actions/runs/7266835435/job/19799390061?pr=12163
https://github.com/facebook/rocksdb/actions/runs/7269697823/job/19807724471?pr=12163
And per-PR jobs of course passing on this PR.
Reviewed By: hx235
Differential Revision: D52335810
Pulled By: pdillinger
fbshipit-source-id: bbb95196f33eabad8cddf3c6b52f4413c80e034d
Summary:
I landed https://github.com/facebook/rocksdb/issues/12159, which had the below compiler error when using `-DROCKSDB_NAMESPACE`; it broke the CircleCI "build-linux-static_lib-alt_namespace-status_checked" job:
```
tools/ldb_cmd_test.cc:1213:21: error: 'rocksdb' does not name a type
1213 | int Compare(const rocksdb::Slice& a, const rocksdb::Slice& b) const override {
| ^~~~~~~
tools/ldb_cmd_test.cc:1213:35: error: expected unqualified-id before '&' token
1213 | int Compare(const rocksdb::Slice& a, const rocksdb::Slice& b) const override {
| ^
tools/ldb_cmd_test.cc:1213:35: error: expected ')' before '&' token
1213 | int Compare(const rocksdb::Slice& a, const rocksdb::Slice& b) const override {
| ~ ^
| )
tools/ldb_cmd_test.cc:1213:35: error: expected ';' at end of member declaration
1213 | int Compare(const rocksdb::Slice& a, const rocksdb::Slice& b) const override {
| ^
| ;
tools/ldb_cmd_test.cc:1213:37: error: 'a' does not name a type
1213 | int Compare(const rocksdb::Slice& a, const rocksdb::Slice& b) const override {
| ^
...
```
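A sketch of the fix's pattern, assuming it follows the usual codebase convention: qualify types with the `ROCKSDB_NAMESPACE` macro instead of hard-coding `rocksdb::`, so the code still compiles when the namespace is overridden.
```
#include <string>

#include "rocksdb/comparator.h"
#include "rocksdb/slice.h"

// Sketch: ROCKSDB_NAMESPACE expands to whatever namespace was configured,
// so this compiles both with and without -DROCKSDB_NAMESPACE=alt_ns.
class ReverseComparatorSketch : public ROCKSDB_NAMESPACE::Comparator {
 public:
  const char* Name() const override { return "ReverseComparatorSketch"; }
  int Compare(const ROCKSDB_NAMESPACE::Slice& a,
              const ROCKSDB_NAMESPACE::Slice& b) const override {
    return -a.compare(b);
  }
  void FindShortestSeparator(
      std::string* /*start*/,
      const ROCKSDB_NAMESPACE::Slice& /*limit*/) const override {}
  void FindShortSuccessor(std::string* /*key*/) const override {}
};
```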
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12173
Test Plan:
```
$ make clean && make OPT="-DROCKSDB_NAMESPACE=alternative_rocksdb_ns" ldb_cmd_test -j56
```
Reviewed By: pdillinger
Differential Revision: D52373797
Pulled By: ajkr
fbshipit-source-id: 8597aaae65a5333831fef66d85072827c5fb1187
Summary:
According to this [Q&A](https://github.com/facebook/rocksdb/wiki/RocksDB-FAQ#:~:text=Q%3A%20If%20I%20use%20non%2Ddefault%20comparators%20or%20merge%20operators%2C%20can%20I%20still%20use%20ldb%20tool%3F), users should be able to use LDB by passing a customized comparator into the options.
In the process of opening the DB in order to perform ldb commands, an exception is thrown saying the comparator does not match, even if options with the customized comparator are provided. After initializing the column family to open the DB, the `LDBCommand::OverrideBaseCFOptions` method does not update the comparator inside the column family descriptor using the passed-in options. This can cause a mismatch while applying version edits, and in the function `ToggleUDT CompareComparator` the check fails and returns an exception saying the comparator does not match.
The proposed fix updates the column family descriptor's options using the user-passed-in options (sketched below). A test case is also provided to illustrate the steps.
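A sketch of the shape of the fix (hypothetical helper and names; the real change is in `LDBCommand::OverrideBaseCFOptions`): propagate the user-supplied options into each column family descriptor before opening the DB.
```
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Hypothetical sketch: copy the user-passed options (including the custom
// comparator) into the column family descriptors, which were previously
// left at defaults and caused the "comparator not match" failure.
void OverrideCFOptionsSketch(
    const ROCKSDB_NAMESPACE::ColumnFamilyOptions& user_cf_opts,
    std::vector<ROCKSDB_NAMESPACE::ColumnFamilyDescriptor>* cf_descs) {
  for (auto& desc : *cf_descs) {
    desc.options = user_cf_opts;
  }
}
```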
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12159
Reviewed By: hx235
Differential Revision: D52267367
Pulled By: ajkr
fbshipit-source-id: c240f93f440e02cb485893de058a46c6dbf9654b
Summary:
**Context/Summary:**
Continued from https://github.com/facebook/rocksdb/pull/12127, we can randomly reduce the # max key to coerce more operations onto the same keys. My experimental run shows it surfaced more issues than https://github.com/facebook/rocksdb/pull/12127 alone.
I also randomly reduce the related parameters, write buffer size and target file base, to adapt to the randomly lowered # max key. This creates 4 testing situations, 3 of which are new:
1. **high** # max key with **high** write buffer size and target file base (existing)
2. **high** # max key with **low** write buffer size and target file base (new, will go through some rehearsal testing to ensure we don't run out of space with many files)
3. **low** # max key with **high** write buffer size and target file base (new, keys will stay in memory longer)
4. **low** # max key with **low** write buffer size and target file base (new, experimental runs show it surfaced even more issues)
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12148
Test Plan:
- [Ongoing] Rehearsal stress test
- Monitor production stress test
Reviewed By: jaykorean
Differential Revision: D52174980
Pulled By: hx235
fbshipit-source-id: bd5e11280826819ca9314c69bbbf05d481c6d105
Summary:
HyperClockCache is intended to mitigate performance problems under stress conditions (as well as optimizing average-case parallel performance). In LRUCache, the biggest such problem is lock contention when one or a small number of cache entries becomes particularly hot. Regardless of cache sharding, accesses to any particular cache entry are linearized against a single mutex, which is held while each access updates the LRU list. All HCC variants are fully lock/wait-free for accessing blocks already in the cache, which fully mitigates this contention problem.
However, HCC (and CLOCK in general) can exhibit extremely degraded performance under a different stress condition: when no (or almost no) entries in a cache shard are evictable (they are pinned). Unlike LRU which can find any evictable entries immediately (at the cost of more coordination / synchronization on each access), CLOCK has to search for evictable entries. Under the right conditions (almost exclusively MB-scale caches not GB-scale), the CPU cost of each cache miss could fall off a cliff and bog down the whole system.
To effectively mitigate this problem (IMHO), I'm introducing a new default behavior and tuning parameter for HCC, `eviction_effort_cap`. See the comments on the new config parameter in the public API.
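A hedged usage sketch (field name as introduced by this PR; see the public API comments for exact semantics): lower caps give up on eviction sooner under heavy pinning, trading some memory overshoot for bounded CPU per miss.
```
#include <memory>

#include "rocksdb/cache.h"

std::shared_ptr<ROCKSDB_NAMESPACE::Cache> MakeCappedHCCSketch() {
  ROCKSDB_NAMESPACE::HyperClockCacheOptions opts(
      /*capacity=*/64 << 20, /*estimated_entry_charge=*/0 /* automatic */);
  opts.eviction_effort_cap = 30;  // the default chosen in this PR
  return opts.MakeSharedCache();
}
```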
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12141
Test Plan:
unit test included
## Performance test
We can use cache_bench to validate no regression (CPU and memory) in normal operation, and to measure change in behavior when cache is almost entirely pinned. (TODO: I'm not sure why I had to get the pinned ratio parameter well over 1.0 to see truly bad performance, but the behavior is there.) Build with `make DEBUG_LEVEL=0 USE_CLANG=1 PORTABLE=0 cache_bench`. We also set MALLOC_CONF="narenas:1" for all these runs to essentially remove jemalloc variances from the results, so that the max RSS given by /usr/bin/time is essentially ideal (assuming the allocator minimizes fragmentation and other memory overheads well). Base command reproducing bad behavior:
```
./cache_bench -cache_type=auto_hyper_clock_cache -threads=12 -histograms=0 -pinned_ratio=1.7
```
```
Before, LRU (alternate baseline not exhibiting bad behavior):
Rough parallel ops/sec = 2290997
1088060 maxresident
Before, AutoHCC (bad behavior):
Rough parallel ops/sec = 141011 <- Yes, more than 10x slower
1083932 maxresident
```
Now let us sample a range of values in the solution space:
```
After, AutoHCC, eviction_effort_cap = 1:
Rough parallel ops/sec = 3212586
2402216 maxresident
After, AutoHCC, eviction_effort_cap = 10:
Rough parallel ops/sec = 2371639
1248884 maxresident
After, AutoHCC, eviction_effort_cap = 30:
Rough parallel ops/sec = 1981092
1131596 maxresident
After, AutoHCC, eviction_effort_cap = 100:
Rough parallel ops/sec = 1446188
1090976 maxresident
After, AutoHCC, eviction_effort_cap = 1000:
Rough parallel ops/sec = 549568
1084064 maxresident
```
It looks like `cap=30` is a sweet spot balancing acceptable CPU and memory overheads, so it is chosen as the default.
```
Change to -pinned_ratio=0.85
Before, LRU:
Rough parallel ops/sec = 2108373
1078232 maxresident
Before, AutoHCC, averaged over ~20 runs:
Rough parallel ops/sec = 2164910
1077312 maxresident
After, AutoHCC, eviction_effort_cap = 30, averaged over ~20 runs:
Rough parallel ops/sec = 2145542
1077216 maxresident
```
The slight CPU improvement above is consistent with the cap, with no measurable memory overhead under moderate stress.
```
Change to -pinned_ratio=0.25 (low stress)
Before, AutoHCC, averaged over ~20 runs:
Rough parallel ops/sec = 2221149
1076540 maxresident
After, AutoHCC, eviction_effort_cap = 30, averaged over ~20 runs:
Rough parallel ops/sec = 2224521
1076664 maxresident
```
No measurable difference under normal circumstances.
Some tests repeated with FixedHCC, with similar results.
Reviewed By: anand1976
Differential Revision: D52174755
Pulled By: pdillinger
fbshipit-source-id: d278108031b1220c1fa4c89c5a9d34b7cf4ef1b8
Summary:
`Delayed` is set true in two cases. One is when `delay` is specified. The other is in the `while` loop - cd21e4e69d/db/db_impl/db_impl_write.cc (L1876)
However, `start_time` is not initialized in the second case, resulting in `time_delayed = immutable_db_options_.clock->NowMicros() - 0 (start_time)`.
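A simplified sketch of the bug shape (not the verbatim `DelayWrite` code; the stand-in callbacks replace the real clock and write controller):
```
#include <cstdint>

uint64_t DelayWriteSketch(uint64_t delay, uint64_t (*NowMicros)(),
                          void (*SleepForDelay)(uint64_t),
                          bool (*IsStopped)()) {
  uint64_t start_time = 0;
  bool delayed = false;
  if (delay > 0) {
    start_time = NowMicros();  // initialized on the explicit-delay path...
    delayed = true;
    SleepForDelay(delay);
  }
  while (IsStopped()) {
    delayed = true;  // ...but, before the fix, not on this stall-loop path
  }
  if (delayed) {
    // Bug: if only the loop path ran, start_time is still 0 here, so the
    // stall time was NowMicros() - 0. Fix: also record start_time before
    // entering the loop.
    return NowMicros() - start_time;
  }
  return 0;
}
```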
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12147
Test Plan: Existing CircleCI
Reviewed By: cbi42
Differential Revision: D52173481
Pulled By: akankshamahajan15
fbshipit-source-id: fb9183b24c191d287a1d715346467bee66190f98
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12143
https://github.com/facebook/rocksdb/pull/11982 changed `WriteBatchWithIndex::MultiGetFromBatchDB` to preallocate space in the `autovector`s `key_contexts` and `merges` in order to prevent any reallocations, both as an optimization and in order to prevent pointers into the container from being invalidated during subsequent insertions. On second thought, this preallocation can actually be a pessimization in cases when only a small subset of keys require querying the underlying database. To prevent any memory regressions, the PR reverts this preallocation. In addition, it makes some small code hygiene improvements like incorporating the `PinnableWideColumns` object into `MergeTuple`.
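A sketch of the reverted tradeoff (`std::vector` standing in for `autovector`; illustrative only): reserving worst-case capacity keeps pointers into the container stable across insertions, but pays full-capacity memory even when only a few keys actually need the underlying DB.
```
#include <cstddef>
#include <vector>

void PreallocationTradeoffSketch(size_t num_keys) {
  std::vector<int> key_contexts;
  // The preallocation this PR removes: stable addresses, but worst-case
  // memory even when few keys miss the batch.
  key_contexts.reserve(num_keys);
  // Without reserve(), code must not hold pointers/references into the
  // container across insertions, since growth may reallocate.
}
```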
Reviewed By: jaykorean
Differential Revision: D52136513
fbshipit-source-id: 21aa835084433feab27b501d9d1fc5434acea609
Summary:
Example:
```
cache/clock_cache.cc:56:7: error: fallthrough annotation in unreachable code [-Werror,-Wimplicit-fallthrough]
FALLTHROUGH_INTENDED;
^
./port/lang.h:10:30: note: expanded from macro 'FALLTHROUGH_INTENDED'
^
```
In clang < 14, this is annoyingly generated from -Wimplicit-fallthrough, but was changed to -Wunreachable-code-fallthrough (implied by -Wunreachable-code) in clang 14. See https://reviews.llvm.org/D107933 for how this nuisance pattern generated false positives similar to ours in the Linux kernel.
Just to underscore the ridiculousness of this warning: here, an error is reported on the annotation, not on the call to do_something(), depending on the constexpr value (https://godbolt.org/z/EvxqdPTdr):
```
#include <atomic>
void do_something();
void test(int v) {
  switch (v) {
    case 1:
      if constexpr (std::atomic<long>::is_always_lock_free) {
        return;
      } else {
        do_something();
        [[fallthrough]];
      }
    case 2:
      return;
  }
}
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12144
Test Plan: Added the warning to our Makefile for USE_CLANG, which reproduced the warning-as-error as shown above, but is now fixed.
Reviewed By: jaykorean
Differential Revision: D52139615
Pulled By: pdillinger
fbshipit-source-id: ba967ae700c0916d1a478bc465cf917633e337d9
Summary:
RocksDB self throttles per-DB compaction parallelism until it detects compaction pressure. The pressure detection based on pending compaction bytes was only comparing against the slowdown trigger (`soft_pending_compaction_bytes_limit`). Online services tend to set that extremely high to avoid stalling at all costs. Perhaps they should have set it to zero, but we never documented that zero disables stalling so I have been telling everyone to increase it for years.
This PR adds pressure detection based on pending compaction bytes relative to the size of bottommost data. The size of bottommost data should be fairly stable and proportional to the logical data size.
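An illustrative sketch of the added heuristic (names and thresholds are hypothetical, not the exact condition merged in this PR):
```
#include <cstdint>

bool CompactionPressureSketch(uint64_t pending_compaction_bytes,
                              uint64_t bottommost_data_bytes,
                              uint64_t soft_limit) {
  // Old signal: only triggered near the (often sky-high) slowdown limit.
  if (soft_limit > 0 && pending_compaction_bytes >= soft_limit / 4) {
    return true;
  }
  // New signal: pending bytes large relative to the stable bottommost size.
  return bottommost_data_bytes > 0 &&
         pending_compaction_bytes >= bottommost_data_bytes / 8;
}
```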
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12130
Reviewed By: hx235
Differential Revision: D52000746
Pulled By: ajkr
fbshipit-source-id: 7e1fd170901a74c2d4a69266285e3edf6e7631c7
Summary:
Sanitize the `secondary_cache` field in the `cache_opts` option of `TieredCacheOptions` to `nullptr` if set by the user. The nvm secondary cache should be directly set in `TieredCacheOptions`.
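A purely illustrative mock of the sanitization rule (the real types are `TieredCacheOptions`/`SecondaryCache` in rocksdb/cache.h; the field layout here is simplified):
```
#include <memory>

struct SecondaryCacheMock {};
struct PrimaryCacheOptsMock {
  std::shared_ptr<SecondaryCacheMock> secondary_cache;
};
struct TieredCacheOptionsMock {
  PrimaryCacheOptsMock cache_opts;
  std::shared_ptr<SecondaryCacheMock> nvm_sec_cache;
};

void SanitizeSketch(TieredCacheOptionsMock* opts) {
  // The nvm secondary cache must be set directly on the tiered options;
  // anything set inside cache_opts is ignored and reset.
  opts->cache_opts.secondary_cache = nullptr;
}
```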
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12137
Reviewed By: akankshamahajan15
Differential Revision: D52063817
Pulled By: anand1976
fbshipit-source-id: 255116c665a9b908c8f44109a2d331d4b73e7591
Summary:
This PR adds initial stress testing for the user-defined timestamps in memtable only feature. Each flavor of the `*_ts` crash test gets a 1 in 3 chance to run with timestamps not persisted; this setting is initialized once and kept consistent across the following re-runs.
Besides disabling incompatible feature combinations to make the test run more stably, this initial stress test includes the following:
1) It currently only runs test methods that validate db state against expected state, not the ones that validate db state by comparing the result of one API to another, such as `TestMultiGet` (compared with `Get`), and similarly `TestMultiGetEntity` and `TestIterate` (which compares a src iterator to a control iterator). With timestamps being removed, results from one API are not directly comparable to another's as is. More test logic to handle that needs to be added; I will do that in a follow-up.
2) Even when comparing db state to expected state, sometimes the db can return `InvalidArgument` due to timestamps getting flushed and removed. Added some logic to handle that.
3) When timestamps are not persisted, we don't try to read with an older timestamp, since that makes it easier to get `InvalidArgument`, and this capability is not yet needed by our customers, so it's disabled for now.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12124
Test Plan: running multiple flavors of this test in continuous runs for some time before check-in
Reviewed By: ltamasi
Differential Revision: D51916267
Pulled By: jowlyzhang
fbshipit-source-id: 3f3eb5f9618d05d296062820e0ef5cb8edc7c2b2
Summary:
There is a bug in the `TieredSecondaryCache` that can result in a false negative. This can happen when a MultiGet does a cache lookup that gets a hit in the `TieredSecondaryCache` local nvm cache tier, and the result is available before MultiGet calls `WaitAll` (i.e., the nvm cache `SecondaryCacheResultHandle`'s `IsReady` returns true).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12134
Test Plan: Add a new unit test in tiered_secondary_cache_test
Reviewed By: akankshamahajan15
Differential Revision: D52023309
Pulled By: anand1976
fbshipit-source-id: e5ae681226a0f12753fecb2f6acc7e5f254ae72b
Summary:
As part of building another feature, I wanted this:
* Custom implementations of `TablePropertiesCollectorFactory` may now return a `nullptr` collector to decline processing a file, reducing callback overheads in such cases (sketched below).
* Polished, clarified some related API comments.
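A sketch of a factory using the new behavior (the column-family filter and collector helper are hypothetical):
```
#include "rocksdb/table_properties.h"

class SelectiveFactorySketch
    : public ROCKSDB_NAMESPACE::TablePropertiesCollectorFactory {
 public:
  ROCKSDB_NAMESPACE::TablePropertiesCollector* CreateTablePropertiesCollector(
      ROCKSDB_NAMESPACE::TablePropertiesCollectorFactory::Context context)
      override {
    if (context.column_family_id != 0 /* hypothetical filter */) {
      // Newly allowed by this PR: decline processing this file, avoiding
      // per-key collector callbacks for it entirely.
      return nullptr;
    }
    return CreateRealCollector();  // stand-in for constructing a collector
  }
  const char* Name() const override { return "SelectiveFactorySketch"; }

 private:
  ROCKSDB_NAMESPACE::TablePropertiesCollector* CreateRealCollector();
};
```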
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12129
Test Plan: unit test added
Reviewed By: ltamasi
Differential Revision: D51966667
Pulled By: pdillinger
fbshipit-source-id: 2991c08fe6ce3a8c9f14c68f1495f5a17bca2770
Summary:
### Implement new Java API get()/put()/merge() methods, and transactional variants.
The Java API methods are very inconsistent in terms of how they pass parameters (byte[], ByteBuffer), and what variants and defaulted parameters they support. We try to bring some consistency to this.
* All APIs should support calls with ByteBuffer parameters.
* Similar methods (RocksDB.get() vs Transaction.get()) should support as similar as possible sets of parameters for predictability.
* get()-like methods should provide variants where the caller supplies the target buffer, for the sake of efficiency. Allocation costs in Java can be significant when large buffers are repeatedly allocated and freed.
### API Additions
1. RocksDB.get implement indirect ByteBuffers. Added indirect ByteBuffers and supporting native methods for get().
2. RocksDB.Iterator implement missing (byte[], offset, length) variants for key() and value() parameters.
3. Transaction.get() implement missing methods, based on RocksDB.get. Added ByteBuffer.get with and without column family. Added byte[]-as-target get.
4. Transaction.iterator() implement a getIterator() which defaults ReadOptions, as per RocksDB.iterator(). Rationalize the support API for this and RocksDB.iterator().
5. RocksDB.merge implement ByteBuffer methods; both direct and indirect buffers. Shadow the methods of RocksDB.put; RocksDB.put only offers a ByteBuffer API with explicit WriteOptions. Duplicated this for RocksDB.merge.
6. Transaction.merge implement methods as per RocksDB.merge methods. Transaction is already constructed with WriteOptions, so no explicit WriteOptions methods required.
7. Transaction.mergeUntracked implement the same API methods as Transaction.merge except the ones that use assumeTracked, because that’s not a feature of merge untracked.
### Support Changes (C++)
The current JNI code in C++ supports multiple variants of methods through a number of helper functions. There are numerous TODO suggestions in the code proposing that the helpers be re-factored/shared.
We have taken a different approach for the new methods; we have created the RAII wrapper classes `JDirectBufferSlice`, `JDirectBufferPinnableSlice`, `JByteArraySlice` and `JByteArrayPinnableSlice`, which construct slices from JNI parameters and can then be passed directly to RocksDB methods. For instance, the `Java_org_rocksdb_Transaction_getDirect` method is implemented like this:
```
try {
ROCKSDB_NAMESPACE::JDirectBufferSlice key(env, jkey_bb, jkey_off,
jkey_part_len);
ROCKSDB_NAMESPACE::JDirectBufferPinnableSlice value(env, jval_bb, jval_off,
jval_part_len);
ROCKSDB_NAMESPACE::KVException::ThrowOnError(
env, txn->Get(*read_options, column_family_handle, key.slice(),
&value.pinnable_slice()));
return value.Fetch();
} catch (const ROCKSDB_NAMESPACE::KVException& e) {
return e.Code();
}
```
Notice the try/catch mechanism with the `KVException` class, which combined with RAII and the wrapper classes means that there is no ad-hoc cleanup necessary in the JNI methods.
We propose to extend this mechanism to existing JNI methods as further work.
### Support Changes (Java)
Where there are multiple parameter-variant versions of the same method, we use fewer or just one supporting native method for all of them. This makes maintenance a bit easier and reduces the opportunity for coding errors mixing up (untyped) object handles.
In order to support this efficiently, some classes need to have default values for column families and read options added and cached so that they are not re-constructed on every method call.
This PR closes https://github.com/facebook/rocksdb/issues/9776
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11019
Reviewed By: ajkr
Differential Revision: D52039446
Pulled By: jowlyzhang
fbshipit-source-id: 45d0140a4887e42134d2e56520e9b8efbd349660
Summary:
`-Wextra-semi` or `-Wextra-semi-stmt`
If the code compiles, this is safe to land.
Reviewed By: palmje
Differential Revision: D51995065
fbshipit-source-id: 9b55a0d8abd0927b76376cb7751bf0fcab10518c
Summary:
Fixes https://github.com/facebook/rocksdb/issues/12061.
We were double counting the `BYTES_WRITTEN` ticker when doing writes with transactions. During transactions, after writing, a client can call `Prepare()`, which writes the values to WAL but not to the Memtable. After that, they can call `Commit()`, which writes a commit marker to the WAL and the values to Memtable.
The cause of this bug is that previously, during writes, we didn't take `writer->ShouldWriteToMemtable()` into account before adding to `total_byte_size`, so bytes were still added during the `Prepare()` phase even though we're not writing to the Memtable, which is why we saw the value be double what's written to the WAL.
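A simplified sketch of the fix's shape (not the verbatim write-path code):
```
#include <cstdint>
#include <vector>

struct WriterSketch {
  uint64_t batch_bytes;
  bool write_to_memtable;  // stands in for writer->ShouldWriteToMemtable()
};

uint64_t TotalByteSizeSketch(const std::vector<WriterSketch>& write_group) {
  uint64_t total_byte_size = 0;
  for (const auto& w : write_group) {
    // Before the fix, every writer's bytes were added unconditionally, so
    // WAL-only Prepare() writes inflated BYTES_WRITTEN to ~2x WAL bytes.
    if (w.write_to_memtable) {
      total_byte_size += w.batch_bytes;
    }
  }
  return total_byte_size;
}
```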
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12111
Test Plan: Added a test in `db/db_statistics_test.cc` that tests writes with and without transactions, by comparing the values of `BYTES_WRITTEN` and `WAL_FILE_BYTES` after doing writes.
Reviewed By: jaykorean
Differential Revision: D51954327
Pulled By: jowlyzhang
fbshipit-source-id: 57a0986a14e5b94eb5188715d819212529110d2c
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12128
The patch turns the `Timer` Meyers singleton in `PeriodicTaskScheduler::Default()` into one of the leaky variety in order to prevent static destruction order issues.
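A generic sketch of the pattern (not the actual `PeriodicTaskScheduler` code): a leaky Meyers singleton heap-allocates the instance once and intentionally never destroys it, so no other static object's destructor can run after, or race with, its destruction at exit.
```
#include <utility>

template <typename T, typename... Args>
T& LeakyMeyersSingleton(Args&&... args) {
  // Constructed on first use, like a Meyers singleton, but never deleted,
  // sidestepping static destruction order issues entirely.
  static T* instance = new T(std::forward<Args>(args)...);
  return *instance;
}
```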
Reviewed By: akankshamahajan15
Differential Revision: D51963950
fbshipit-source-id: 0fc34113ad03c51fdc83bdb8c2cfb6c9f6913948
Summary:
**Context/Summary:**
My experimental stress runs with more frequent "xxx_one_in" values surfaced a couple of interesting bugs/issues with RocksDB or the crash test framework in the past. We now consider changing the default values so these operations run more frequently in the production testing environment.
Increase frequency by 2 orders of magnitude for most parameters, except for error-prone features, e.g., manual compaction and file ingestion (increased by 3 orders), and expensive features, e.g., checksum verification (increased by 1 order).
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12127
Test Plan: Monitor CI to see if it surfaces more interesting bugs/issues. If not, we may consider intensifying frequencies even more.
Reviewed By: pdillinger
Differential Revision: D51954235
Pulled By: hx235
fbshipit-source-id: 92046cb7c52a37212f19ab7965b40f77b90b08b1
Summary:
Add support for tuning of readahead_size by block cache lookup for async_io.
**Design/ Implementation** -
**BlockBasedTableIterator.cc** -
`BlockCacheLookupForReadAheadSize` callback API looks up the block cache and tries to reduce the start
and end offsets passed in. This function looks into the block cache for the blocks between `start_offset`
and `end_offset` and adds all the handles to a queue.
It then iterates from the end of the handles to find the first missing block and updates the end offset to that block.
It also iterates from the start to find the first missing block and updates the start offset to that block. (A simplified sketch of this trimming follows the walkthrough below.)
```
_read_curr_block_ argument : True if this call was due to miss in the cache and caller wants to read that block
synchronously.
False if current call is to prefetch additional data in extra buffers
(due to ReadAsync call in FilePrefetchBuffer)
```
In case there is no data to be read in that callback (because of upper_bound or all blocks being in cache),
it updates the start and end offsets to be equal, which `FilePrefetchBuffer` interprets as 0 length to be read.
**FilePrefetchBuffer.cc** -
FilePrefetchBuffer calls the callback `ReadAheadSizeTuning` and passes the start and end offsets to that
callback to get updated start and end offsets to read, based on cache hits/misses.
1. In case of Read calls (when offset passed to FilePrefetchBuffer is on cache miss and that data needs to be read), _read_curr_block_ is passed true.
2. In case of ReadAsync calls, when buffer is all consumed and can go for additional prefetching, the start offset passed is the initial end offset of prev buffer (without any updated offset based on cache hit/miss).
For example, if the following are the data blocks with cache hit/miss status and start offsets,
and the Read API found a miss on DB1 and, based on readahead_size (50), passed an end offset of 50:
[DB1 - miss- 0 ] [DB2 - hit -10] [DB3 - miss -20] [DB4 - miss-30] [DB5 - hit-40]
[DB6 - hit-50] [DB7 - miss-60] [DB8 - miss - 70] [DB9 - hit - 80] [DB6 - hit 90]
- For the Read call - the updated start offset remains 0 but the end offset updates to DB4, as DB5 is in cache.
- The Read call saves the initial end offset 50, as that was meant to be prefetched.
- Now for the next ReadAsync call - the start offset will be 50 (the previous buffer's initial end offset) and, based on readahead_size, the end offset will be 100.
- On callback, because of cache hits - the callback will update the start offset to 60 and the end offset to 80, to read only 2 data blocks (DB7 and DB8).
- And for that ReadAsync call - the initial end offset will be set to 100, which will again be used by the next ReadAsync call as the start offset.
- `initial_end_offset_` in `BufferInfo` is used to save the initial end offset of that buffer.
- If, say, DB5 and DB6 overlap two buffers (because of alignment), `prev_buf_end_offset` is passed to make sure already-prefetched data is not prefetched again in the second buffer.
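A simplified sketch of the trimming described above (illustrative types; the real code works on block handles and byte offsets): shrink [start, end) so it begins at the first cache miss and ends just past the last one.
```
#include <cstddef>
#include <vector>

void TrimReadaheadByCacheSketch(const std::vector<bool>& is_cached,
                                size_t* start, size_t* end) {
  while (*start < *end && is_cached[*start]) {
    ++*start;  // skip leading cache hits
  }
  while (*end > *start && is_cached[*end - 1]) {
    --*end;  // skip trailing cache hits
  }
  // If every block was cached, start == end, which FilePrefetchBuffer
  // interprets as a 0-length read.
}
```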
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11936
Test Plan:
- Ran crash_test several times.
- New unit tests added.
Reviewed By: anand1976
Differential Revision: D50906217
Pulled By: akankshamahajan15
fbshipit-source-id: 0d75d3c98274e98aa34901b201b8fb05232139cf
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12121
The patch eliminates some code duplication by unifying the two sets of `MergeHelper::TimedFullMerge` overloads using variadic templates. It also brings the order of parameters into sync when it comes to the various `TimedFullMerge*` methods.
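A generic sketch of the deduplication technique (not the actual `MergeHelper` signatures): overloads differing only in their trailing output parameters collapse into one variadic template that forwards to a shared core.
```
#include <ostream>
#include <string>
#include <utility>

namespace sketch {

// One overload set for the actual output forms.
inline void StoreResult(const std::string& merged, std::string* out) {
  *out = merged;
}
inline void StoreResult(const std::string& merged, std::ostream& out) {
  out << merged;
}

// Single entry point replacing several near-identical overloads.
template <typename... ResultArgs>
void TimedFullMergeSketch(const std::string& base, const std::string& operand,
                          ResultArgs&&... result_args) {
  std::string merged = base + "," + operand;  // stand-in for the real merge
  StoreResult(merged, std::forward<ResultArgs>(result_args)...);
}

}  // namespace sketch
```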
Reviewed By: jaykorean
Differential Revision: D51862483
fbshipit-source-id: e3f832a6ff89ba34591451655cf11025d0a0d018
Summary:
`-Wextra-semi` or `-Wextra-semi-stmt`
If the code compiles, this is safe to land.
Reviewed By: dmm-fb
Differential Revision: D51778007
fbshipit-source-id: 5d1b20a3acc4bcc7cd7c204f2f73a14fc8f81883
Summary:
This is a simple refactor for the crash test script to put shared logic for parsing stderr into a function. There is no functional change.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12109
Test Plan: manually tested the script
Reviewed By: ajkr
Differential Revision: D51692172
Pulled By: jowlyzhang
fbshipit-source-id: d346d64e981d9c489c380ff6ce33296a224b5877
Summary:
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11982
The patch constitutes the first phase of adding wide-column support to `WriteBatchWithIndex`. Namely, it implements the `PutEntity` API in `WriteBatchWithIndex` on the write path, and the `Iterator::columns()` API in `BaseDeltaIterator` on the read path. In addition, it updates all existing read APIs (`GetFromBatch`, `GetFromBatchAndDB`, `MultiGetFromBatchAndDB`, and `BaseDeltaIterator`) so that they handle wide-column entities correctly. This includes returning the value of the default column of entities as appropriate and correctly applying merges to wide-column base values. I plan to add the wide-column specific point lookup APIs (`GetEntityFromBatch`, `GetEntityFromBatchAndDB`, and `MultiGetEntityFromBatchAndDB`) in subsequent patches.
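A hedged usage sketch of the new write-path API (see the `WriteBatchWithIndex` header for the full signatures):
```
#include <cassert>

#include "rocksdb/utilities/write_batch_with_index.h"
#include "rocksdb/wide_columns.h"

void PutEntitySketch(ROCKSDB_NAMESPACE::WriteBatchWithIndex* wbwi,
                     ROCKSDB_NAMESPACE::ColumnFamilyHandle* cfh) {
  ROCKSDB_NAMESPACE::WideColumns columns{
      {ROCKSDB_NAMESPACE::kDefaultWideColumnName, "primary_value"},
      {"meta", "secondary_value"}};
  ROCKSDB_NAMESPACE::Status s = wbwi->PutEntity(cfh, "key", columns);
  // On the read path, BaseDeltaIterator::columns() surfaces these columns,
  // and point lookups return the default column's value where appropriate.
  assert(s.ok());
}
```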
Reviewed By: jaykorean
Differential Revision: D50439231
fbshipit-source-id: 59fd0f12c45249fecde8af249c5d3f509ba58bbe
Summary:
The optimization to not find and delete obsolete files when FlushRequest is re-scheduled also inadvertently skipped flushing the `LogBuffer`, resulting in missed logs. This PR fixes the issue.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12105
Test Plan:
manually check this test has the correct info log after the fix
`./column_family_test --gtest_filter=ColumnFamilyRetainUDTTest.NotAllKeysExpiredFlushRescheduled`
Reviewed By: ajkr
Differential Revision: D51671079
Pulled By: jowlyzhang
fbshipit-source-id: da0640e07e35c69c08988772ed611ec9e67f2e92
Summary:
Add the option to have a 3-tier block cache (uncompressed RAM, compressed RAM, and local flash) in db_bench, as well as specifying secondary cache admission policy.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12104
Reviewed By: ajkr
Differential Revision: D51629092
Pulled By: anand1976
fbshipit-source-id: 6a208f853bc85d3d8b437d91cb1b0142d9a99e53
Summary:
We now support re-enabling the compressed portion of the `TieredCache` after dynamically disabling it. Add it to db_stress for testing purposes.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12102
Reviewed By: akankshamahajan15
Differential Revision: D51594259
Pulled By: anand1976
fbshipit-source-id: ea544e30a5ebd6290fc9ed46a241f09634764d2a
Summary:
I must have chosen trimming before frame 8 based on assertion failures, but that trims too many frames for a general segfault. So this changes it to start printing at frame 4, as in this example where I've seeded a null deref:
```
Received signal 11 (Segmentation fault)
Invoking LLDB for stack trace...
Process 873208 stopped
* thread #1, name = 'db_stress', stop reason = signal SIGSTOP
frame #0: 0x00007fb1fe8f1033 libc.so.6`__GI___wait4(pid=873478, stat_loc=0x00007fb1fb114030, options=0, usage=0x0000000000000000) at wait4.c:30:10
thread #2, name = 'rocksdb:low', stop reason = signal SIGSTOP
frame #0: 0x00007fb1fe8972a1 libc.so.6`__GI___futex_abstimed_wait_cancelable64 at futex-internal.c:57:12
Executable module set to "/data/users/peterd/rocksdb/db_stress".
Architecture set to: x86_64-unknown-linux-gnu.
True
frame #4: 0x00007fb1fe844540 libc.so.6`__restore_rt at libc_sigaction.c:13
frame #5: 0x0000000000608514 db_stress`rocksdb::StressTest::InitDb(rocksdb::SharedState*) at db_stress_test_base.cc:345:18
frame #6: 0x0000000000585d62 db_stress`rocksdb::RunStressTestImpl(rocksdb::SharedState*) at db_stress_driver.cc:84:17
frame #7: 0x000000000058dd69 db_stress`rocksdb::RunStressTest(shared=0x00006120000001c0) at db_stress_driver.cc:266:34
frame #8: 0x0000000000453b34 db_stress`rocksdb::db_stress_tool(int, char**) at db_stress_tool.cc:370:20
...
```
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12101
Test Plan: manual (see above)
Reviewed By: ajkr
Differential Revision: D51593217
Pulled By: pdillinger
fbshipit-source-id: 4a71eb8e516edbc32e682f9537bc77d073a7b4ed
Summary:
It's been relatively easy to break our stack trace printer:
* If another thread reaches a signal condition such as a related SEGV or assertion failure while one is trying to print a stack trace from the signal handler, it seems to end the process abruptly without a stack trace.
* If the process exits normally in one thread (such as main finishing) while another is trying to print a stack trace from the signal handler, it seems the process will often end normally without a stack trace.
This change attempts to fix these issues, with
* Keep the custom signal handler installed as long as possible, so that other threads will most likely re-enter our custom handler. (We only switch back to default for triggering core dump or whatever after stack trace.)
* Use atomics and sleeps to implement a crude recursive mutex for ensuring all threads hitting the custom signal handler wait on the first that is trying to print a stack trace, while recursive signals in the same thread can still be handled cleanly (see the sketch after this list).
* Use an atexit handler to hook into normal exit to (a) wait on a pending printing of stack trace when detectable and applicable, and (b) detect and warn when printing a stack trace might be interrupted by a process exit in progress. (I don't know how to pause that *after* our atexit handler has been called; the best I know how to do is warn, "In a race with process already exiting...".)
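A simplified sketch of the "crude recursive mutex" idea (not the code merged in port/stack_trace.cc): the first thread to hit the handler claims it, re-entry from the same thread proceeds, and other threads spin-sleep instead of tearing down the process mid-trace.
```
#include <pthread.h>
#include <unistd.h>

#include <atomic>

static std::atomic<bool> g_claimed{false};
static std::atomic<pthread_t> g_tracing_thread{};

void EnterHandlerSketch() {
  const pthread_t me = pthread_self();
  for (;;) {
    bool expected = false;
    if (g_claimed.compare_exchange_strong(expected, true)) {
      g_tracing_thread.store(me);
      return;  // first claimant: go print the stack trace
    }
    if (pthread_equal(g_tracing_thread.load(), me)) {
      return;  // recursive signal in the same thread: handle cleanly
    }
    usleep(1000);  // another thread is printing: wait rather than exit
  }
}
```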
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12086
Test Plan:
manual, including with TSAN. I added this code to the end of a unit test file:
```
for (size_t i = 0; i < 3; ++i) {
std::thread t([]() { assert(false); });
t.detach();
}
```
Followed by either `sleep(100)` or `usleep(100)` or usual process exit. And for recursive signal testing, inject `abort()` at various places in the handler.
Reviewed By: cbi42
Differential Revision: D51531882
Pulled By: pdillinger
fbshipit-source-id: 3473b863a43e61b722dfb7a2ed12a8120949b09c
Summary:
Fixes https://github.com/facebook/rocksdb/issues/7930.
When there is a timestamp associated with stored records, a get from the row cache will return the timestamp provided in the query instead of the timestamp associated with the stored record.
## Cause of error:
Currently a row_handle is fetched using row_cache_key (which contains a timestamp provided by the user query) and the row_handle itself does not persist the timestamp associated with the object. Hence the [GetContext::SaveValue()](6e3429b8a6/table/get_context.cc (L257)) function will fetch the timestamp in row_cache_key and may return an incorrect timestamp value.
## Proposed Solution
If the current cf enables ts, append the timestamp associated with the stored record after the value in the replay_log (equivalently, the value of the row cache entry).
On read, `replayGetContextLog()` will update parsed_key with the correct timestamp.
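An illustrative sketch of the encoding change (simplified; the real code appends to the replay log in get_context.cc):
```
#include <string>

void AppendRowCacheValueSketch(const std::string& value,
                               const std::string& record_ts, bool ts_enabled,
                               std::string* replay_log) {
  replay_log->append(value);
  if (ts_enabled) {
    // The record's own timestamp rides along with the cached value, so a
    // row cache hit restores it instead of echoing the query's timestamp
    // (consumed on the read side by replayGetContextLog()).
    replay_log->append(record_ts);
  }
}
```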
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11952
Reviewed By: ajkr
Differential Revision: D51501176
Pulled By: jowlyzhang
fbshipit-source-id: 808fc943a8ae95de56ae0e82ec59a2573a031f28