Prefer static_cast in place of most reinterpret_cast (#12308)

Summary:
The following are risks associated with pointer-to-pointer reinterpret_cast:
* Can produce the "wrong result" (crash or memory corruption). IIRC, in theory this can happen for any up-cast or down-cast on a non-standard-layout type, though in practice it would only happen in multiple inheritance cases, where the base class pointer might be "inside" the derived object (see the sketch after this list). We don't use multiple inheritance a lot, but we do use it.
* Can mask useful compiler errors upon code change, including converting between unrelated pointer types that you are expecting to be related, and converting between pointer and scalar types unintentionally.
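
A minimal illustration of that first risk, with hypothetical types rather than RocksDB code: under multiple inheritance the `Base2` subobject does not sit at the start of the `Derived` object, so `reinterpret_cast` silently keeps the original address while `static_cast` applies the required adjustment.

```cpp
#include <cstdio>

struct Base1 { int a = 1; };
struct Base2 { int b = 2; };
struct Derived : public Base1, public Base2 { int c = 3; };

int main() {
  Derived d;
  Base2* adjusted = static_cast<Base2*>(&d);   // points at the Base2 subobject
  Base2* raw = reinterpret_cast<Base2*>(&d);   // same address as &d on typical ABIs
  std::printf("static_cast:      %p\n", static_cast<void*>(adjusted));
  std::printf("reinterpret_cast: %p\n", static_cast<void*>(raw));
  std::printf("adjusted->b = %d\n", adjusted->b);  // well-defined, prints 2
  // Dereferencing `raw` would be UB and would typically read Base1::a's bytes
  // instead of Base2::b -- the "wrong result" described above.
  return 0;
}
```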

I can think of only a few obscure cases where static_cast could be troublesome when it compiles as a replacement:
* Going through `void*` could plausibly cause unnecessary or broken pointer arithmetic. Suppose we have `struct Derived : public Base1, public Base2`. If we have `Derived*` -> `void*` -> `Base2*` -> `Derived*` through reinterpret_casts, this could plausibly work (though technically UB) as long as the `Base2*` is never dereferenced. Changing to static_cast could introduce breaking pointer arithmetic (see the sketch after this list).
* Unnecessary (but safe) pointer arithmetic could arise in a case like `Derived*` -> `Base2*` -> `Derived*` where the `Base2*` was previously never dereferenced. This could potentially affect performance.
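
A rough sketch of that `void*` round-trip concern, again with hypothetical types rather than code from this change: the old reinterpret_cast chain never alters the address, so the final `Derived*` happens to be correct even though the intermediate `Base2*` was never usable, while a mechanical switch to static_cast makes the last step subtract the `Base2` subobject offset from an address that was never adjusted in the first place.

```cpp
struct Base1 { int a; };
struct Base2 { int b; };
struct Derived : public Base1, public Base2 { int c; };

Derived* OldRoundTrip(Derived* d) {
  void* v = reinterpret_cast<void*>(d);    // same address
  Base2* b = reinterpret_cast<Base2*>(v);  // same address; not a usable Base2*
  return reinterpret_cast<Derived*>(b);    // same address; equals d "by accident"
}

Derived* AfterNaiveReplacement(Derived* d) {
  void* v = static_cast<void*>(d);         // same address
  Base2* b = static_cast<Base2*>(v);       // same address; still not a usable Base2*
  return static_cast<Derived*>(b);         // subtracts Base2's offset: no longer d
}
```

The second bullet is the benign version of the same arithmetic: `static_cast<Base2*>(d)` followed by `static_cast<Derived*>(...)` adjusts the address forward and then back, which is correct but not free.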

With some light scripting, I tried replacing pointer-to-pointer reinterpret_casts with static_cast and kept the cases that still compiled. Most occurrences of reinterpret_cast have successfully been changed (except in java/ and third-party/): 294 changed, 257 remain.

A couple of related interventions included here:
* Previously Cache::Handle was not actually derived from in the implementations and was just used as a `void*` stand-in with reinterpret_cast. Now there is an inheritance relationship to allow static_cast (see the sketch after this list). In theory, this could introduce pointer arithmetic (as described above), but that is unlikely without both multiple inheritance and a non-empty Cache::Handle.
* Remove some unnecessary casts to `void*`, since that conversion is allowed to be implicit (for better or worse).
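
The shape of that Cache::Handle change, reduced to a minimal sketch (the real declarations are in the cache/clock_cache.h and cache/lru_cache.h hunks below; the type and function names here are illustrative, not RocksDB identifiers):

```cpp
struct Handle {};  // stand-in for the opaque, empty Cache::Handle

// With the inheritance relationship in place, the implementation handle
// converts to and from the opaque type with static_cast.
struct LRUHandleLike : public Handle {
  void* value = nullptr;
};

Handle* ToOpaque(LRUHandleLike* h) { return static_cast<Handle*>(h); }
LRUHandleLike* FromOpaque(Handle* h) { return static_cast<LRUHandleLike*>(h); }
```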

Most of the remaining reinterpret_casts are for converting to/from raw bytes of objects. We could consider better idioms for these patterns in follow-up work.
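
For reference, the remaining casts are mostly of this shape (a generic byte-view pattern, not a specific call site from this change); static_cast cannot express an object-to-bytes conversion, so these stay as reinterpret_cast pending a better idiom:

```cpp
#include <cstdint>
#include <cstring>

struct FixedKey {
  uint64_t session;
  uint64_t offset;
};

// View the object's storage as raw bytes, e.g. for hashing or serialization.
void AppendRawBytes(const FixedKey& key, char* out) {
  const char* bytes = reinterpret_cast<const char*>(&key);
  std::memcpy(out, bytes, sizeof(key));
}
```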

I wish there were a way to implement a template variant of static_cast that would only compile if no pointer arithmetic is generated, but as best I can tell, this is not possible. AFAIK the best you could do is a dynamic check that the void* conversion after the static_cast is unchanged.
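
A minimal sketch of that dynamic check (illustrative only, not part of this PR): perform the static_cast, then assert that the address is unchanged, i.e. no adjustment was generated for this particular conversion.

```cpp
#include <cassert>

template <typename To, typename From>
To* static_cast_checked(From* from) {
  To* to = static_cast<To*>(from);
  // If the conversion required pointer arithmetic, the two addresses differ.
  assert(static_cast<const void*>(to) == static_cast<const void*>(from));
  return to;
}
```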

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12308

Test Plan: existing tests, CI

Reviewed By: ltamasi

Differential Revision: D53204947

Pulled By: pdillinger

fbshipit-source-id: 9de23e618263b0d5b9820f4e15966876888a16e2
Peter Dillinger 2024-02-07 10:44:11 -08:00 committed by Facebook GitHub Bot
parent e3e8fbb497
commit 54cb9c77d9
108 changed files with 319 additions and 369 deletions

@ -539,7 +539,8 @@ endif
ifdef USE_CLANG
# Used by some teams in Facebook
WARNING_FLAGS += -Wshift-sign-overflow -Wambiguous-reversed-operator -Wimplicit-fallthrough
WARNING_FLAGS += -Wshift-sign-overflow -Wambiguous-reversed-operator \
-Wimplicit-fallthrough -Wreinterpret-base-class -Wundefined-reinterpret-cast
endif
ifeq ($(PLATFORM), OS_OPENBSD)

@ -560,7 +560,7 @@ void BaseClockTable::TrackAndReleaseEvictedEntry(ClockHandle* h) {
took_value_ownership =
eviction_callback_(ClockCacheShard<FixedHyperClockTable>::ReverseHash(
h->GetHash(), &unhashed, hash_seed_),
reinterpret_cast<Cache::Handle*>(h),
static_cast<Cache::Handle*>(h),
h->meta.LoadRelaxed() & ClockHandle::kHitBitMask);
}
if (!took_value_ownership) {
@ -1428,19 +1428,19 @@ BaseHyperClockCache<Table>::BaseHyperClockCache(
template <class Table>
Cache::ObjectPtr BaseHyperClockCache<Table>::Value(Handle* handle) {
return reinterpret_cast<const typename Table::HandleImpl*>(handle)->value;
return static_cast<const typename Table::HandleImpl*>(handle)->value;
}
template <class Table>
size_t BaseHyperClockCache<Table>::GetCharge(Handle* handle) const {
return reinterpret_cast<const typename Table::HandleImpl*>(handle)
return static_cast<const typename Table::HandleImpl*>(handle)
->GetTotalCharge();
}
template <class Table>
const Cache::CacheItemHelper* BaseHyperClockCache<Table>::GetCacheItemHelper(
Handle* handle) const {
auto h = reinterpret_cast<const typename Table::HandleImpl*>(handle);
auto h = static_cast<const typename Table::HandleImpl*>(handle);
return h->helper;
}

cache/clock_cache.h

@ -298,7 +298,7 @@ class ClockCacheTest;
// ----------------------------------------------------------------------- //
struct ClockHandleBasicData {
struct ClockHandleBasicData : public Cache::Handle {
Cache::ObjectPtr value = nullptr;
const Cache::CacheItemHelper* helper = nullptr;
// A lossless, reversible hash of the fixed-size (16 byte) cache key. This

@ -1062,7 +1062,7 @@ bool CacheUsageWithinBounds(size_t val1, size_t val2, size_t error) {
TEST_P(CompressedSecCacheTestWithTiered, CacheReservationManager) {
CompressedSecondaryCache* sec_cache =
reinterpret_cast<CompressedSecondaryCache*>(GetSecondaryCache());
static_cast<CompressedSecondaryCache*>(GetSecondaryCache());
// Use EXPECT_PRED3 instead of EXPECT_NEAR to void too many size_t to
// double explicit casts
@ -1085,7 +1085,7 @@ TEST_P(CompressedSecCacheTestWithTiered, CacheReservationManager) {
TEST_P(CompressedSecCacheTestWithTiered,
CacheReservationManagerMultipleUpdate) {
CompressedSecondaryCache* sec_cache =
reinterpret_cast<CompressedSecondaryCache*>(GetSecondaryCache());
static_cast<CompressedSecondaryCache*>(GetSecondaryCache());
EXPECT_PRED3(CacheUsageWithinBounds, GetCache()->GetUsage(), (30 << 20),
GetPercent(30 << 20, 1));
@ -1171,7 +1171,7 @@ TEST_P(CompressedSecCacheTestWithTiered, AdmissionPolicy) {
TEST_P(CompressedSecCacheTestWithTiered, DynamicUpdate) {
CompressedSecondaryCache* sec_cache =
reinterpret_cast<CompressedSecondaryCache*>(GetSecondaryCache());
static_cast<CompressedSecondaryCache*>(GetSecondaryCache());
std::shared_ptr<Cache> tiered_cache = GetTieredCache();
// Use EXPECT_PRED3 instead of EXPECT_NEAR to void too many size_t to
@ -1235,7 +1235,7 @@ TEST_P(CompressedSecCacheTestWithTiered, DynamicUpdate) {
TEST_P(CompressedSecCacheTestWithTiered, DynamicUpdateWithReservation) {
CompressedSecondaryCache* sec_cache =
reinterpret_cast<CompressedSecondaryCache*>(GetSecondaryCache());
static_cast<CompressedSecondaryCache*>(GetSecondaryCache());
std::shared_ptr<Cache> tiered_cache = GetTieredCache();
ASSERT_OK(cache_res_mgr()->UpdateCacheReservation(10 << 20));
@ -1329,7 +1329,7 @@ TEST_P(CompressedSecCacheTestWithTiered, DynamicUpdateWithReservation) {
TEST_P(CompressedSecCacheTestWithTiered, ReservationOverCapacity) {
CompressedSecondaryCache* sec_cache =
reinterpret_cast<CompressedSecondaryCache*>(GetSecondaryCache());
static_cast<CompressedSecondaryCache*>(GetSecondaryCache());
std::shared_ptr<Cache> tiered_cache = GetTieredCache();
ASSERT_OK(cache_res_mgr()->UpdateCacheReservation(110 << 20));

cache/lru_cache.cc

@ -339,8 +339,7 @@ void LRUCacheShard::NotifyEvicted(
MemoryAllocator* alloc = table_.GetAllocator();
for (LRUHandle* entry : evicted_handles) {
if (eviction_callback_ &&
eviction_callback_(entry->key(),
reinterpret_cast<Cache::Handle*>(entry),
eviction_callback_(entry->key(), static_cast<Cache::Handle*>(entry),
entry->HasHit())) {
// Callback took ownership of obj; just free handle
free(entry);
@ -506,7 +505,7 @@ bool LRUCacheShard::Release(LRUHandle* e, bool /*useful*/,
// Only call eviction callback if we're sure no one requested erasure
// FIXME: disabled because of test churn
if (false && was_in_cache && !erase_if_last_ref && eviction_callback_ &&
eviction_callback_(e->key(), reinterpret_cast<Cache::Handle*>(e),
eviction_callback_(e->key(), static_cast<Cache::Handle*>(e),
e->HasHit())) {
// Callback took ownership of obj; just free handle
free(e);
@ -661,18 +660,18 @@ LRUCache::LRUCache(const LRUCacheOptions& opts) : ShardedCache(opts) {
}
Cache::ObjectPtr LRUCache::Value(Handle* handle) {
auto h = reinterpret_cast<const LRUHandle*>(handle);
auto h = static_cast<const LRUHandle*>(handle);
return h->value;
}
size_t LRUCache::GetCharge(Handle* handle) const {
return reinterpret_cast<const LRUHandle*>(handle)->GetCharge(
return static_cast<const LRUHandle*>(handle)->GetCharge(
GetShard(0).metadata_charge_policy_);
}
const Cache::CacheItemHelper* LRUCache::GetCacheItemHelper(
Handle* handle) const {
auto h = reinterpret_cast<const LRUHandle*>(handle);
auto h = static_cast<const LRUHandle*>(handle);
return h->helper;
}

cache/lru_cache.h

@ -47,7 +47,7 @@ namespace lru_cache {
// LRUCacheShard::Lookup.
// While refs > 0, public properties like value and deleter must not change.
struct LRUHandle {
struct LRUHandle : public Cache::Handle {
Cache::ObjectPtr value;
const Cache::CacheItemHelper* helper;
LRUHandle* next_hash;

@ -47,7 +47,7 @@ class LRUCacheTest : public testing::Test {
double low_pri_pool_ratio = 1.0,
bool use_adaptive_mutex = kDefaultToAdaptiveMutex) {
DeleteCache();
cache_ = reinterpret_cast<LRUCacheShard*>(
cache_ = static_cast<LRUCacheShard*>(
port::cacheline_aligned_alloc(sizeof(LRUCacheShard)));
new (cache_) LRUCacheShard(capacity, /*strict_capacity_limit=*/false,
high_pri_pool_ratio, low_pri_pool_ratio,
@ -392,8 +392,7 @@ class ClockCacheTest : public testing::Test {
void NewShard(size_t capacity, bool strict_capacity_limit = true,
int eviction_effort_cap = 30) {
DeleteShard();
shard_ =
reinterpret_cast<Shard*>(port::cacheline_aligned_alloc(sizeof(Shard)));
shard_ = static_cast<Shard*>(port::cacheline_aligned_alloc(sizeof(Shard)));
TableOpts opts{1 /*value_size*/, eviction_effort_cap};
new (shard_)

cache/sharded_cache.h

@ -139,7 +139,7 @@ class ShardedCache : public ShardedCacheBase {
explicit ShardedCache(const ShardedCacheOptions& opts)
: ShardedCacheBase(opts),
shards_(reinterpret_cast<CacheShard*>(port::cacheline_aligned_alloc(
shards_(static_cast<CacheShard*>(port::cacheline_aligned_alloc(
sizeof(CacheShard) * GetNumShards()))),
destroy_shards_in_dtor_(false) {}
@ -192,7 +192,7 @@ class ShardedCache : public ShardedCacheBase {
HashVal hash = CacheShard::ComputeHash(key, hash_seed_);
HandleImpl* result = GetShard(hash).CreateStandalone(
key, hash, obj, helper, charge, allow_uncharged);
return reinterpret_cast<Handle*>(result);
return static_cast<Handle*>(result);
}
Handle* Lookup(const Slice& key, const CacheItemHelper* helper = nullptr,
@ -202,7 +202,7 @@ class ShardedCache : public ShardedCacheBase {
HashVal hash = CacheShard::ComputeHash(key, hash_seed_);
HandleImpl* result = GetShard(hash).Lookup(key, hash, helper,
create_context, priority, stats);
return reinterpret_cast<Handle*>(result);
return static_cast<Handle*>(result);
}
void Erase(const Slice& key) override {
@ -212,11 +212,11 @@ class ShardedCache : public ShardedCacheBase {
bool Release(Handle* handle, bool useful,
bool erase_if_last_ref = false) override {
auto h = reinterpret_cast<HandleImpl*>(handle);
auto h = static_cast<HandleImpl*>(handle);
return GetShard(h->GetHash()).Release(h, useful, erase_if_last_ref);
}
bool Ref(Handle* handle) override {
auto h = reinterpret_cast<HandleImpl*>(handle);
auto h = static_cast<HandleImpl*>(handle);
return GetShard(h->GetHash()).Ref(h);
}
bool Release(Handle* handle, bool erase_if_last_ref = false) override {

cache/typed_cache.h

@ -155,7 +155,7 @@ class BasicTypedCacheInterface : public BaseCacheInterface<CachePtr>,
using BaseCacheInterface<CachePtr>::BaseCacheInterface;
struct TypedAsyncLookupHandle : public Cache::AsyncLookupHandle {
TypedHandle* Result() {
return reinterpret_cast<TypedHandle*>(Cache::AsyncLookupHandle::Result());
return static_cast<TypedHandle*>(Cache::AsyncLookupHandle::Result());
}
};
@ -169,8 +169,7 @@ class BasicTypedCacheInterface : public BaseCacheInterface<CachePtr>,
}
inline TypedHandle* Lookup(const Slice& key, Statistics* stats = nullptr) {
return reinterpret_cast<TypedHandle*>(
this->cache_->BasicLookup(key, stats));
return static_cast<TypedHandle*>(this->cache_->BasicLookup(key, stats));
}
inline void StartAsyncLookup(TypedAsyncLookupHandle& async_handle) {
@ -347,7 +346,7 @@ class FullTypedCacheInterface
Priority priority = Priority::LOW, Statistics* stats = nullptr,
CacheTier lowest_used_cache_tier = CacheTier::kNonVolatileBlockTier) {
if (lowest_used_cache_tier > CacheTier::kVolatileTier) {
return reinterpret_cast<TypedHandle*>(this->cache_->Lookup(
return static_cast<TypedHandle*>(this->cache_->Lookup(
key, GetFullHelper(), create_context, priority, stats));
} else {
return BasicTypedCacheInterface<TValue, kRole, CachePtr>::Lookup(key,

@ -325,8 +325,7 @@ TEST_F(DBBlobIndexTest, Iterate) {
auto check_is_blob = [&](bool is_blob) {
return [is_blob](Iterator* iterator) {
ASSERT_EQ(is_blob,
reinterpret_cast<ArenaWrappedDBIter*>(iterator)->IsBlob());
ASSERT_EQ(is_blob, static_cast<ArenaWrappedDBIter*>(iterator)->IsBlob());
};
};

@ -1279,10 +1279,9 @@ void CompactionJob::ProcessKeyValueCompaction(SubcompactionState* sub_compact) {
: nullptr);
TEST_SYNC_POINT("CompactionJob::Run():Inprogress");
TEST_SYNC_POINT_CALLBACK(
"CompactionJob::Run():PausingManualCompaction:1",
reinterpret_cast<void*>(
const_cast<std::atomic<bool>*>(&manual_compaction_canceled_)));
TEST_SYNC_POINT_CALLBACK("CompactionJob::Run():PausingManualCompaction:1",
static_cast<void*>(const_cast<std::atomic<bool>*>(
&manual_compaction_canceled_)));
const std::string* const full_history_ts_low =
full_history_ts_low_.empty() ? nullptr : &full_history_ts_low_;
@ -1330,8 +1329,7 @@ void CompactionJob::ProcessKeyValueCompaction(SubcompactionState* sub_compact) {
Status status;
TEST_SYNC_POINT_CALLBACK(
"CompactionJob::ProcessKeyValueCompaction()::Processing",
reinterpret_cast<void*>(
const_cast<Compaction*>(sub_compact->compaction)));
static_cast<void*>(const_cast<Compaction*>(sub_compact->compaction)));
uint64_t last_cpu_micros = prev_cpu_micros;
while (status.ok() && !cfd->IsDropped() && c_iter->Valid()) {
// Invariant: c_iter.status() is guaranteed to be OK if c_iter->Valid()
@ -1362,10 +1360,9 @@ void CompactionJob::ProcessKeyValueCompaction(SubcompactionState* sub_compact) {
break;
}
TEST_SYNC_POINT_CALLBACK(
"CompactionJob::Run():PausingManualCompaction:2",
reinterpret_cast<void*>(
const_cast<std::atomic<bool>*>(&manual_compaction_canceled_)));
TEST_SYNC_POINT_CALLBACK("CompactionJob::Run():PausingManualCompaction:2",
static_cast<void*>(const_cast<std::atomic<bool>*>(
&manual_compaction_canceled_)));
c_iter->Next();
if (c_iter->status().IsManualCompactionPaused()) {
break;

@ -1338,7 +1338,7 @@ class PrecludeLastLevelTest : public DBTestBase {
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::StartPeriodicTaskScheduler:Init", [&](void* arg) {
auto periodic_task_scheduler_ptr =
reinterpret_cast<PeriodicTaskScheduler*>(arg);
static_cast<PeriodicTaskScheduler*>(arg);
periodic_task_scheduler_ptr->TEST_OverrideTimer(mock_clock_.get());
});
mock_clock_->SetCurrentTime(kMockStartTime);

@ -1102,7 +1102,7 @@ TEST_F(CorruptionTest, VerifyWholeTableChecksum) {
int count{0};
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::VerifyFullFileChecksum:mismatch", [&](void* arg) {
auto* s = reinterpret_cast<Status*>(arg);
auto* s = static_cast<Status*>(arg);
ASSERT_NE(s, nullptr);
++count;
ASSERT_NOK(*s);
@ -1247,7 +1247,7 @@ TEST_P(CrashDuringRecoveryWithCorruptionTest, CrashDuringRecovery) {
SyncPoint::GetInstance()->ClearAllCallBacks();
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::GetLogSizeAndMaybeTruncate:0", [&](void* arg) {
auto* tmp_s = reinterpret_cast<Status*>(arg);
auto* tmp_s = static_cast<Status*>(arg);
assert(tmp_s);
*tmp_s = Status::IOError("Injected");
});
@ -1429,7 +1429,7 @@ TEST_P(CrashDuringRecoveryWithCorruptionTest, TxnDbCrashDuringRecovery) {
SyncPoint::GetInstance()->ClearAllCallBacks();
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::Open::BeforeSyncWAL", [&](void* arg) {
auto* tmp_s = reinterpret_cast<Status*>(arg);
auto* tmp_s = static_cast<Status*>(arg);
assert(tmp_s);
*tmp_s = Status::IOError("Injected");
});
@ -1597,7 +1597,7 @@ TEST_P(CrashDuringRecoveryWithCorruptionTest, CrashDuringRecoveryWithFlush) {
SyncPoint::GetInstance()->ClearAllCallBacks();
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::GetLogSizeAndMaybeTruncate:0", [&](void* arg) {
auto* tmp_s = reinterpret_cast<Status*>(arg);
auto* tmp_s = static_cast<Status*>(arg);
assert(tmp_s);
*tmp_s = Status::IOError("Injected");
});

@ -131,7 +131,7 @@ TEST_F(CuckooTableDBTest, Flush) {
ASSERT_OK(dbfull()->TEST_FlushMemTable());
TablePropertiesCollection ptc;
ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
ASSERT_OK(static_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
VerifySstUniqueIds(ptc);
ASSERT_EQ(1U, ptc.size());
ASSERT_EQ(3U, ptc.begin()->second->num_entries);
@ -148,7 +148,7 @@ TEST_F(CuckooTableDBTest, Flush) {
ASSERT_OK(Put("key6", "v6"));
ASSERT_OK(dbfull()->TEST_FlushMemTable());
ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
ASSERT_OK(static_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
VerifySstUniqueIds(ptc);
ASSERT_EQ(2U, ptc.size());
auto row = ptc.begin();
@ -166,7 +166,7 @@ TEST_F(CuckooTableDBTest, Flush) {
ASSERT_OK(Delete("key5"));
ASSERT_OK(Delete("key4"));
ASSERT_OK(dbfull()->TEST_FlushMemTable());
ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
ASSERT_OK(static_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
VerifySstUniqueIds(ptc);
ASSERT_EQ(3U, ptc.size());
row = ptc.begin();
@ -191,7 +191,7 @@ TEST_F(CuckooTableDBTest, FlushWithDuplicateKeys) {
ASSERT_OK(dbfull()->TEST_FlushMemTable());
TablePropertiesCollection ptc;
ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
ASSERT_OK(static_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
VerifySstUniqueIds(ptc);
ASSERT_EQ(1U, ptc.size());
ASSERT_EQ(2U, ptc.begin()->second->num_entries);

@ -3342,8 +3342,7 @@ TEST_F(DBBasicTest, BestEffortsRecoveryWithVersionBuildingFailure) {
SyncPoint::GetInstance()->SetCallBack(
"VersionBuilder::CheckConsistencyBeforeReturn", [&](void* arg) {
ASSERT_NE(nullptr, arg);
*(reinterpret_cast<Status*>(arg)) =
Status::Corruption("Inject corruption");
*(static_cast<Status*>(arg)) = Status::Corruption("Inject corruption");
});
SyncPoint::GetInstance()->EnableProcessing();
@ -4644,7 +4643,7 @@ TEST_F(DBBasicTest, ManifestWriteFailure) {
SyncPoint::GetInstance()->SetCallBack(
"VersionSet::ProcessManifestWrites:AfterSyncManifest", [&](void* arg) {
ASSERT_NE(nullptr, arg);
auto* s = reinterpret_cast<Status*>(arg);
auto* s = static_cast<Status*>(arg);
ASSERT_OK(*s);
// Manually overwrite return status
*s = Status::IOError();
@ -4699,7 +4698,7 @@ TEST_F(DBBasicTest, FailOpenIfLoggerCreationFail) {
SyncPoint::GetInstance()->ClearAllCallBacks();
SyncPoint::GetInstance()->SetCallBack(
"rocksdb::CreateLoggerFromOptions:AfterGetPath", [&](void* arg) {
auto* s = reinterpret_cast<Status*>(arg);
auto* s = static_cast<Status*>(arg);
assert(s);
*s = Status::IOError("Injected");
});

@ -1424,7 +1424,7 @@ TEST_P(DBBlockCacheKeyTest, StableCacheKeys) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"BlockBasedTableBuilder::BlockBasedTableBuilder:PreSetupBaseCacheKey",
[&](void* arg) {
TableProperties* props = reinterpret_cast<TableProperties*>(arg);
TableProperties* props = static_cast<TableProperties*>(arg);
props->orig_file_number = 0;
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();

@ -505,7 +505,7 @@ TEST_F(DBCompactionTest, TestTableReaderForCompaction) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"TableCache::FindTable:0", [&](void* arg) {
assert(arg != nullptr);
bool no_io = *(reinterpret_cast<bool*>(arg));
bool no_io = *(static_cast<bool*>(arg));
if (!no_io) {
// filter out cases for table properties queries.
num_table_cache_lookup++;
@ -681,7 +681,7 @@ TEST_F(DBCompactionTest, CompactRangeBottomPri) {
int bottom_pri_count = 0;
SyncPoint::GetInstance()->SetCallBack(
"ThreadPoolImpl::Impl::BGThread:BeforeRun", [&](void* arg) {
Env::Priority* pri = reinterpret_cast<Env::Priority*>(arg);
Env::Priority* pri = static_cast<Env::Priority*>(arg);
// First time is low pri pool in the test case.
if (low_pri_count == 0 && bottom_pri_count == 0) {
ASSERT_EQ(Env::Priority::LOW, *pri);
@ -4244,7 +4244,7 @@ TEST_F(DBCompactionTest, CompactBottomLevelFilesWithDeletions) {
ASSERT_NE(kMaxSequenceNumber, dbfull()->bottommost_files_mark_threshold_);
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
ASSERT_TRUE(compaction->compaction_reason() ==
CompactionReason::kBottommostFiles);
});
@ -4300,7 +4300,7 @@ TEST_F(DBCompactionTest, DelayCompactBottomLevelFilesWithDeletions) {
std::atomic_int compaction_count = 0;
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
ASSERT_TRUE(compaction->compaction_reason() ==
CompactionReason::kBottommostFiles);
compaction_count++;
@ -4431,7 +4431,7 @@ TEST_F(DBCompactionTest, RoundRobinTtlCompactionNormal) {
SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
auto compaction_reason = compaction->compaction_reason();
if (compaction_reason == CompactionReason::kTtl) {
ttl_compactions++;
@ -4581,7 +4581,7 @@ TEST_F(DBCompactionTest, RoundRobinTtlCompactionUnsortedTime) {
SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
auto compaction_reason = compaction->compaction_reason();
if (compaction_reason == CompactionReason::kTtl) {
ttl_compactions++;
@ -4697,7 +4697,7 @@ TEST_F(DBCompactionTest, LevelCompactExpiredTtlFiles) {
ASSERT_OK(Flush());
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
ASSERT_TRUE(compaction->compaction_reason() == CompactionReason::kTtl);
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
@ -4745,7 +4745,7 @@ TEST_F(DBCompactionTest, LevelCompactExpiredTtlFiles) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
ASSERT_TRUE(compaction->compaction_reason() == CompactionReason::kTtl);
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
@ -4872,7 +4872,7 @@ TEST_F(DBCompactionTest, LevelTtlCascadingCompactions) {
int ttl_compactions = 0;
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
auto compaction_reason = compaction->compaction_reason();
if (compaction_reason == CompactionReason::kTtl) {
ttl_compactions++;
@ -5020,7 +5020,7 @@ TEST_F(DBCompactionTest, LevelPeriodicCompaction) {
int periodic_compactions = 0;
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
auto compaction_reason = compaction->compaction_reason();
if (compaction_reason == CompactionReason::kPeriodicCompaction) {
periodic_compactions++;
@ -5204,7 +5204,7 @@ TEST_F(DBCompactionTest, LevelPeriodicCompactionWithOldDB) {
bool set_creation_time_to_zero = true;
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
auto compaction_reason = compaction->compaction_reason();
if (compaction_reason == CompactionReason::kPeriodicCompaction) {
periodic_compactions++;
@ -5212,7 +5212,7 @@ TEST_F(DBCompactionTest, LevelPeriodicCompactionWithOldDB) {
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"PropertyBlockBuilder::AddTableProperty:Start", [&](void* arg) {
TableProperties* props = reinterpret_cast<TableProperties*>(arg);
TableProperties* props = static_cast<TableProperties*>(arg);
if (set_file_creation_time_to_zero) {
props->file_creation_time = 0;
}
@ -5276,7 +5276,7 @@ TEST_F(DBCompactionTest, LevelPeriodicAndTtlCompaction) {
int ttl_compactions = 0;
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
auto compaction_reason = compaction->compaction_reason();
if (compaction_reason == CompactionReason::kPeriodicCompaction) {
periodic_compactions++;
@ -5459,7 +5459,7 @@ TEST_F(DBCompactionTest, LevelPeriodicCompactionWithCompactionFilters) {
int periodic_compactions = 0;
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
auto compaction_reason = compaction->compaction_reason();
if (compaction_reason == CompactionReason::kPeriodicCompaction) {
periodic_compactions++;
@ -7197,8 +7197,7 @@ TEST_F(DBCompactionTest, ConsistencyFailTest) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"VersionBuilder::CheckConsistency0", [&](void* arg) {
auto p =
reinterpret_cast<std::pair<FileMetaData**, FileMetaData**>*>(arg);
auto p = static_cast<std::pair<FileMetaData**, FileMetaData**>*>(arg);
// just swap the two FileMetaData so that we hit error
// in CheckConsistency funcion
FileMetaData* temp = *(p->first);
@ -7235,8 +7234,7 @@ TEST_F(DBCompactionTest, ConsistencyFailTest2) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"VersionBuilder::CheckConsistency1", [&](void* arg) {
auto p =
reinterpret_cast<std::pair<FileMetaData**, FileMetaData**>*>(arg);
auto p = static_cast<std::pair<FileMetaData**, FileMetaData**>*>(arg);
// just swap the two FileMetaData so that we hit error
// in CheckConsistency funcion
FileMetaData* temp = *(p->first);
@ -8049,7 +8047,7 @@ TEST_F(DBCompactionTest, UpdateLevelSubCompactionTest) {
bool has_compaction = false;
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
ASSERT_TRUE(compaction->max_subcompactions() == 10);
has_compaction = true;
});
@ -8073,7 +8071,7 @@ TEST_F(DBCompactionTest, UpdateLevelSubCompactionTest) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
ASSERT_TRUE(compaction->max_subcompactions() == 2);
has_compaction = true;
});
@ -8101,7 +8099,7 @@ TEST_F(DBCompactionTest, UpdateUniversalSubCompactionTest) {
bool has_compaction = false;
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"UniversalCompactionBuilder::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
ASSERT_TRUE(compaction->max_subcompactions() == 10);
has_compaction = true;
});
@ -8124,7 +8122,7 @@ TEST_F(DBCompactionTest, UpdateUniversalSubCompactionTest) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"UniversalCompactionBuilder::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
ASSERT_TRUE(compaction->max_subcompactions() == 2);
has_compaction = true;
});
@ -10250,8 +10248,7 @@ TEST_F(DBCompactionTest, ErrorWhenReadFileHead) {
SyncPoint::GetInstance()->SetCallBack(
"RandomAccessFileReader::Read::BeforeReturn",
[&count, &error_file](void* pair_ptr) {
auto p =
reinterpret_cast<std::pair<std::string*, IOStatus*>*>(pair_ptr);
auto p = static_cast<std::pair<std::string*, IOStatus*>*>(pair_ptr);
int cur = ++count;
if (cur == error_file) {
IOStatus* io_s = p->second;

@ -338,7 +338,7 @@ TEST_F(DBTestDynamicLevel, DynamicLevelMaxBytesCompactRange) {
std::set<int> output_levels;
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"CompactionPicker::CompactRange:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
output_levels.insert(compaction->output_level());
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();

@ -272,7 +272,7 @@ TEST_F(DBFlushTest, ScheduleOnlyOneBgThread) {
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::MaybeScheduleFlushOrCompaction:AfterSchedule:0", [&](void* arg) {
ASSERT_NE(nullptr, arg);
auto unscheduled_flushes = *reinterpret_cast<int*>(arg);
auto unscheduled_flushes = *static_cast<int*>(arg);
ASSERT_EQ(0, unscheduled_flushes);
++called;
});
@ -1791,7 +1791,7 @@ TEST_F(DBFlushTest, MemPurgeCorrectLogNumberAndSSTFileCreation) {
std::atomic<uint64_t> num_memtable_at_first_flush(0);
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"FlushJob::WriteLevel0Table:num_memtables", [&](void* arg) {
uint64_t* mems_size = reinterpret_cast<uint64_t*>(arg);
uint64_t* mems_size = static_cast<uint64_t*>(arg);
// atomic_compare_exchange_strong sometimes updates the value
// of ZERO (the "expected" object), so we make sure ZERO is indeed...
// zero.
@ -2038,7 +2038,7 @@ TEST_F(DBFlushTest, FireOnFlushCompletedAfterCommittedResult) {
SyncPoint::GetInstance()->SetCallBack(
"FlushJob::WriteLevel0Table", [&listener](void* arg) {
// Wait for the second flush finished, out of mutex.
auto* mems = reinterpret_cast<autovector<MemTable*>*>(arg);
auto* mems = static_cast<autovector<MemTable*>*>(arg);
if (mems->front()->GetEarliestSequenceNumber() == listener->seq1 - 1) {
TEST_SYNC_POINT(
"DBFlushTest::FireOnFlushCompletedAfterCommittedResult:"
@ -2386,7 +2386,7 @@ TEST_F(DBFlushTest, PickRightMemtables) {
});
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::FlushMemTableToOutputFile:AfterPickMemtables", [&](void* arg) {
auto* job = reinterpret_cast<FlushJob*>(arg);
auto* job = static_cast<FlushJob*>(arg);
assert(job);
const auto& mems = job->GetMemTables();
assert(mems.size() == 1);
@ -2634,7 +2634,7 @@ TEST_P(DBAtomicFlushTest, ManualFlushUnder2PC) {
// it means atomic flush didn't write the min_log_number_to_keep to MANIFEST.
cfs.push_back(kDefaultColumnFamilyName);
ASSERT_OK(TryReopenWithColumnFamilies(cfs, options));
DBImpl* db_impl = reinterpret_cast<DBImpl*>(db_);
DBImpl* db_impl = static_cast<DBImpl*>(db_);
ASSERT_TRUE(db_impl->allow_2pc());
ASSERT_NE(db_impl->MinLogNumberToKeep(), 0);
}
@ -3171,7 +3171,7 @@ TEST_P(DBAtomicFlushTest, BgThreadNoWaitAfterManifestError) {
SyncPoint::GetInstance()->SetCallBack(
"VersionSet::ProcessManifestWrites:AfterSyncManifest", [&](void* arg) {
auto* ptr = reinterpret_cast<IOStatus*>(arg);
auto* ptr = static_cast<IOStatus*>(arg);
assert(ptr);
*ptr = IOStatus::IOError("Injected failure");
});

@ -1907,7 +1907,7 @@ struct SuperVersionHandle {
};
static void CleanupSuperVersionHandle(void* arg1, void* /*arg2*/) {
SuperVersionHandle* sv_handle = reinterpret_cast<SuperVersionHandle*>(arg1);
SuperVersionHandle* sv_handle = static_cast<SuperVersionHandle*>(arg1);
if (sv_handle->super_version->Unref()) {
// Job id == 0 means that this is not our background process, but rather
@ -2269,7 +2269,7 @@ Status DBImpl::GetImpl(const ReadOptions& read_options, const Slice& key,
snapshot = get_impl_options.callback->max_visible_seq();
} else {
snapshot =
reinterpret_cast<const SnapshotImpl*>(read_options.snapshot)->number_;
static_cast<const SnapshotImpl*>(read_options.snapshot)->number_;
}
} else {
// Note that the snapshot is assigned AFTER referencing the super
@ -4285,7 +4285,7 @@ void DBImpl::ReleaseSnapshot(const Snapshot* s) {
// inplace_update_support enabled.
return;
}
const SnapshotImpl* casted_s = reinterpret_cast<const SnapshotImpl*>(s);
const SnapshotImpl* casted_s = static_cast<const SnapshotImpl*>(s);
{
InstrumentedMutexLock l(&mutex_);
snapshots_.Delete(casted_s);

@ -1419,10 +1419,9 @@ Status DBImpl::CompactFiles(const CompactionOptions& compact_options,
// Perform CompactFiles
TEST_SYNC_POINT("TestCompactFiles::IngestExternalFile2");
TEST_SYNC_POINT_CALLBACK(
"TestCompactFiles:PausingManualCompaction:3",
reinterpret_cast<void*>(
const_cast<std::atomic<int>*>(&manual_compaction_paused_)));
TEST_SYNC_POINT_CALLBACK("TestCompactFiles:PausingManualCompaction:3",
static_cast<void*>(const_cast<std::atomic<int>*>(
&manual_compaction_paused_)));
{
InstrumentedMutexLock l(&mutex_);
auto* current = cfd->current();
@ -3045,8 +3044,8 @@ void DBImpl::SchedulePendingPurge(std::string fname, std::string dir_to_sync,
}
void DBImpl::BGWorkFlush(void* arg) {
FlushThreadArg fta = *(reinterpret_cast<FlushThreadArg*>(arg));
delete reinterpret_cast<FlushThreadArg*>(arg);
FlushThreadArg fta = *(static_cast<FlushThreadArg*>(arg));
delete static_cast<FlushThreadArg*>(arg);
IOSTATS_SET_THREAD_POOL_ID(fta.thread_pri_);
TEST_SYNC_POINT("DBImpl::BGWorkFlush");
@ -3055,8 +3054,8 @@ void DBImpl::BGWorkFlush(void* arg) {
}
void DBImpl::BGWorkCompaction(void* arg) {
CompactionArg ca = *(reinterpret_cast<CompactionArg*>(arg));
delete reinterpret_cast<CompactionArg*>(arg);
CompactionArg ca = *(static_cast<CompactionArg*>(arg));
delete static_cast<CompactionArg*>(arg);
IOSTATS_SET_THREAD_POOL_ID(Env::Priority::LOW);
TEST_SYNC_POINT("DBImpl::BGWorkCompaction");
auto prepicked_compaction =
@ -3080,12 +3079,12 @@ void DBImpl::BGWorkBottomCompaction(void* arg) {
void DBImpl::BGWorkPurge(void* db) {
IOSTATS_SET_THREAD_POOL_ID(Env::Priority::HIGH);
TEST_SYNC_POINT("DBImpl::BGWorkPurge:start");
reinterpret_cast<DBImpl*>(db)->BackgroundCallPurge();
static_cast<DBImpl*>(db)->BackgroundCallPurge();
TEST_SYNC_POINT("DBImpl::BGWorkPurge:end");
}
void DBImpl::UnscheduleCompactionCallback(void* arg) {
CompactionArg* ca_ptr = reinterpret_cast<CompactionArg*>(arg);
CompactionArg* ca_ptr = static_cast<CompactionArg*>(arg);
Env::Priority compaction_pri = ca_ptr->compaction_pri_;
if (Env::Priority::BOTTOM == compaction_pri) {
// Decrement bg_bottom_compaction_scheduled_ if priority is BOTTOM
@ -3095,7 +3094,7 @@ void DBImpl::UnscheduleCompactionCallback(void* arg) {
ca_ptr->db->bg_compaction_scheduled_--;
}
CompactionArg ca = *(ca_ptr);
delete reinterpret_cast<CompactionArg*>(arg);
delete static_cast<CompactionArg*>(arg);
if (ca.prepicked_compaction != nullptr) {
// if it's a manual compaction, set status to ManualCompactionPaused
if (ca.prepicked_compaction->manual_compaction_state) {
@ -3115,14 +3114,14 @@ void DBImpl::UnscheduleCompactionCallback(void* arg) {
void DBImpl::UnscheduleFlushCallback(void* arg) {
// Decrement bg_flush_scheduled_ in flush callback
reinterpret_cast<FlushThreadArg*>(arg)->db_->bg_flush_scheduled_--;
Env::Priority flush_pri = reinterpret_cast<FlushThreadArg*>(arg)->thread_pri_;
static_cast<FlushThreadArg*>(arg)->db_->bg_flush_scheduled_--;
Env::Priority flush_pri = static_cast<FlushThreadArg*>(arg)->thread_pri_;
if (Env::Priority::LOW == flush_pri) {
TEST_SYNC_POINT("DBImpl::UnscheduleLowFlushCallback");
} else if (Env::Priority::HIGH == flush_pri) {
TEST_SYNC_POINT("DBImpl::UnscheduleHighFlushCallback");
}
delete reinterpret_cast<FlushThreadArg*>(arg);
delete static_cast<FlushThreadArg*>(arg);
TEST_SYNC_POINT("DBImpl::UnscheduleFlushCallback");
}

@ -208,11 +208,11 @@ void DBImpl::TEST_SignalAllBgCv() { bg_cv_.SignalAll(); }
void* DBImpl::TEST_BeginWrite() {
auto w = new WriteThread::Writer();
write_thread_.EnterUnbatched(w, &mutex_);
return reinterpret_cast<void*>(w);
return static_cast<void*>(w);
}
void DBImpl::TEST_EndWrite(void* w) {
auto writer = reinterpret_cast<WriteThread::Writer*>(w);
auto writer = static_cast<WriteThread::Writer*>(w);
write_thread_.ExitUnbatched(writer);
delete writer;
}

@ -169,8 +169,7 @@ Iterator* DBImplReadOnly::NewIterator(const ReadOptions& _read_options,
SequenceNumber latest_snapshot = versions_->LastSequence();
SequenceNumber read_seq =
read_options.snapshot != nullptr
? reinterpret_cast<const SnapshotImpl*>(read_options.snapshot)
->number_
? static_cast<const SnapshotImpl*>(read_options.snapshot)->number_
: latest_snapshot;
ReadCallback* read_callback = nullptr; // No read callback provided.
auto db_iter = NewArenaWrappedDbIterator(
@ -216,8 +215,7 @@ Status DBImplReadOnly::NewIterators(
SequenceNumber latest_snapshot = versions_->LastSequence();
SequenceNumber read_seq =
read_options.snapshot != nullptr
? reinterpret_cast<const SnapshotImpl*>(read_options.snapshot)
->number_
? static_cast<const SnapshotImpl*>(read_options.snapshot)->number_
: latest_snapshot;
autovector<std::tuple<ColumnFamilyData*, SuperVersion*>> cfd_to_sv;

@ -2824,7 +2824,7 @@ TEST_F(DBIterWithMergeIterTest, InnerMergeIteratorDataRace4) {
// Seek() and before calling Prev()
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"MergeIterator::Prev:BeforePrev", [&](void* arg) {
IteratorWrapper* it = reinterpret_cast<IteratorWrapper*>(arg);
IteratorWrapper* it = static_cast<IteratorWrapper*>(arg);
if (it->key().starts_with("z")) {
internal_iter2_->Add("x", kTypeValue, "7", 16u, true);
internal_iter2_->Add("x", kTypeValue, "7", 15u, true);
@ -2875,7 +2875,7 @@ TEST_F(DBIterWithMergeIterTest, InnerMergeIteratorDataRace5) {
// Seek() and before calling Prev()
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"MergeIterator::Prev:BeforePrev", [&](void* arg) {
IteratorWrapper* it = reinterpret_cast<IteratorWrapper*>(arg);
IteratorWrapper* it = static_cast<IteratorWrapper*>(arg);
if (it->key().starts_with("z")) {
internal_iter2_->Add("x", kTypeValue, "7", 16u, true);
internal_iter2_->Add("x", kTypeValue, "7", 15u, true);
@ -2922,7 +2922,7 @@ TEST_F(DBIterWithMergeIterTest, InnerMergeIteratorDataRace6) {
// Seek() and before calling Prev()
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"MergeIterator::Prev:BeforePrev", [&](void* arg) {
IteratorWrapper* it = reinterpret_cast<IteratorWrapper*>(arg);
IteratorWrapper* it = static_cast<IteratorWrapper*>(arg);
if (it->key().starts_with("z")) {
internal_iter2_->Add("x", kTypeValue, "7", 16u, true);
}
@ -2971,7 +2971,7 @@ TEST_F(DBIterWithMergeIterTest, InnerMergeIteratorDataRace7) {
// Seek() and before calling Prev()
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"MergeIterator::Prev:BeforePrev", [&](void* arg) {
IteratorWrapper* it = reinterpret_cast<IteratorWrapper*>(arg);
IteratorWrapper* it = static_cast<IteratorWrapper*>(arg);
if (it->key().starts_with("z")) {
internal_iter2_->Add("x", kTypeValue, "7", 16u, true);
internal_iter2_->Add("x", kTypeValue, "7", 15u, true);
@ -3024,7 +3024,7 @@ TEST_F(DBIterWithMergeIterTest, InnerMergeIteratorDataRace8) {
// before calling Prev()
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"MergeIterator::Prev:BeforePrev", [&](void* arg) {
IteratorWrapper* it = reinterpret_cast<IteratorWrapper*>(arg);
IteratorWrapper* it = static_cast<IteratorWrapper*>(arg);
if (it->key().starts_with("z")) {
internal_iter2_->Add("x", kTypeValue, "7", 16u, true);
internal_iter2_->Add("y", kTypeValue, "7", 17u, true);

@ -2525,7 +2525,7 @@ TEST_P(DBIteratorTest, RefreshWithSnapshot) {
TEST_P(DBIteratorTest, CreationFailure) {
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::NewInternalIterator:StatusCallback", [](void* arg) {
*(reinterpret_cast<Status*>(arg)) = Status::Corruption("test status");
*(static_cast<Status*>(arg)) = Status::Corruption("test status");
});
SyncPoint::GetInstance()->EnableProcessing();
@ -3448,8 +3448,7 @@ TEST_F(DBIteratorTest, ErrorWhenReadFile) {
SyncPoint::GetInstance()->SetCallBack(
"RandomAccessFileReader::Read::BeforeReturn",
[&error_file](void* io_s_ptr) {
auto p =
reinterpret_cast<std::pair<std::string*, IOStatus*>*>(io_s_ptr);
auto p = static_cast<std::pair<std::string*, IOStatus*>*>(io_s_ptr);
if (p->first->find(error_file) != std::string::npos) {
*p->second = IOStatus::IOError();
p->second->SetRetryable(true);
@ -3529,8 +3528,7 @@ TEST_F(DBIteratorTest, ErrorWhenReadFile) {
SyncPoint::GetInstance()->ClearAllCallBacks();
SyncPoint::GetInstance()->SetCallBack(
"RandomAccessFileReader::Read::AnyOffset", [&f1](void* pair_ptr) {
auto p =
reinterpret_cast<std::pair<std::string*, IOStatus*>*>(pair_ptr);
auto p = static_cast<std::pair<std::string*, IOStatus*>*>(pair_ptr);
if (p->first->find(f1) != std::string::npos) {
*p->second = IOStatus::IOError();
p->second->SetRetryable(true);

@ -437,14 +437,13 @@ TEST_P(DbKvChecksumTestMergedBatch, WriteToWALCorrupted) {
// This callback should only be called by the leader thread
SyncPoint::GetInstance()->SetCallBack(
"WriteThread::JoinBatchGroup:Wait2", [&](void* arg_leader) {
auto* leader = reinterpret_cast<WriteThread::Writer*>(arg_leader);
auto* leader = static_cast<WriteThread::Writer*>(arg_leader);
ASSERT_EQ(leader->state, WriteThread::STATE_GROUP_LEADER);
// This callback should only be called by the follower thread
SyncPoint::GetInstance()->SetCallBack(
"WriteThread::JoinBatchGroup:Wait", [&](void* arg_follower) {
auto* follower =
reinterpret_cast<WriteThread::Writer*>(arg_follower);
auto* follower = static_cast<WriteThread::Writer*>(arg_follower);
// The leader thread will wait on this bool and hence wait until
// this writer joins the write group
ASSERT_NE(follower->state, WriteThread::STATE_GROUP_LEADER);
@ -549,14 +548,13 @@ TEST_P(DbKvChecksumTestMergedBatch, WriteToWALWithColumnFamilyCorrupted) {
// This callback should only be called by the leader thread
SyncPoint::GetInstance()->SetCallBack(
"WriteThread::JoinBatchGroup:Wait2", [&](void* arg_leader) {
auto* leader = reinterpret_cast<WriteThread::Writer*>(arg_leader);
auto* leader = static_cast<WriteThread::Writer*>(arg_leader);
ASSERT_EQ(leader->state, WriteThread::STATE_GROUP_LEADER);
// This callback should only be called by the follower thread
SyncPoint::GetInstance()->SetCallBack(
"WriteThread::JoinBatchGroup:Wait", [&](void* arg_follower) {
auto* follower =
reinterpret_cast<WriteThread::Writer*>(arg_follower);
auto* follower = static_cast<WriteThread::Writer*>(arg_follower);
// The leader thread will wait on this bool and hence wait until
// this writer joins the write group
ASSERT_NE(follower->state, WriteThread::STATE_GROUP_LEADER);
@ -662,7 +660,7 @@ TEST_F(DbKVChecksumWALToWriteBatchTest, WriteBatchChecksumHandoff) {
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::RecoverLogFiles:BeforeUpdateProtectionInfo:batch",
[&](void* batch_ptr) {
WriteBatch* batch = reinterpret_cast<WriteBatch*>(batch_ptr);
WriteBatch* batch = static_cast<WriteBatch*>(batch_ptr);
content.assign(batch->Data().data(), batch->GetDataSize());
Slice batch_content = batch->Data();
// Corrupt first bit
@ -672,7 +670,7 @@ TEST_F(DbKVChecksumWALToWriteBatchTest, WriteBatchChecksumHandoff) {
"DBImpl::RecoverLogFiles:BeforeUpdateProtectionInfo:checksum",
[&](void* checksum_ptr) {
// Verify that checksum is produced on the batch content
uint64_t checksum = *reinterpret_cast<uint64_t*>(checksum_ptr);
uint64_t checksum = *static_cast<uint64_t*>(checksum_ptr);
ASSERT_EQ(checksum, XXH3_64bits(content.data(), content.size()));
});
SyncPoint::GetInstance()->EnableProcessing();

@ -121,7 +121,7 @@ class TestPrefixExtractor : public SliceTransform {
private:
const char* separator(const Slice& key) const {
return reinterpret_cast<const char*>(memchr(key.data(), '_', key.size()));
return static_cast<const char*>(memchr(key.data(), '_', key.size()));
}
};
@ -287,7 +287,7 @@ TEST_F(DBMemTableTest, InsertWithHint) {
options.env = env_;
Reopen(options);
MockMemTableRep* rep =
reinterpret_cast<MockMemTableRepFactory*>(options.memtable_factory.get())
static_cast<MockMemTableRepFactory*>(options.memtable_factory.get())
->rep();
ASSERT_OK(Put("foo_k1", "foo_v1"));
ASSERT_EQ(nullptr, rep->last_hint_in());

@ -1390,7 +1390,7 @@ TEST_F(DBOptionsTest, ChangeCompression) {
bool compacted = false;
SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* c = reinterpret_cast<Compaction*>(arg);
Compaction* c = static_cast<Compaction*>(arg);
compression_used = c->output_compression();
compression_opt_used = c->output_compression_opts();
compacted = true;

@ -3605,8 +3605,7 @@ TEST_F(DBRangeDelTest, RangeDelReseekAfterFileReadError) {
SyncPoint::GetInstance()->SetCallBack(
"RandomAccessFileReader::Read::BeforeReturn", [&fname](void* pair_ptr) {
auto p =
reinterpret_cast<std::pair<std::string*, IOStatus*>*>(pair_ptr);
auto p = static_cast<std::pair<std::string*, IOStatus*>*>(pair_ptr);
if (p->first->find(fname) != std::string::npos) {
*p->second = IOStatus::IOError();
p->second->SetRetryable(true);
@ -3666,8 +3665,7 @@ TEST_F(DBRangeDelTest, RangeDelReseekAfterFileReadError) {
SyncPoint::GetInstance()->SetCallBack(
"RandomAccessFileReader::Read::AnyOffset", [&fname](void* pair_ptr) {
auto p =
reinterpret_cast<std::pair<std::string*, IOStatus*>*>(pair_ptr);
auto p = static_cast<std::pair<std::string*, IOStatus*>*>(pair_ptr);
if (p->first->find(fname) != std::string::npos) {
*p->second = IOStatus::IOError();
p->second->SetRetryable(true);

@ -131,7 +131,7 @@ TEST_F(DBSecondaryTest, FailOpenIfLoggerCreationFail) {
SyncPoint::GetInstance()->ClearAllCallBacks();
SyncPoint::GetInstance()->SetCallBack(
"rocksdb::CreateLoggerFromOptions:AfterGetPath", [&](void* arg) {
auto* s = reinterpret_cast<Status*>(arg);
auto* s = static_cast<Status*>(arg);
assert(s);
*s = Status::IOError("Injected");
});
@ -1191,7 +1191,7 @@ TEST_F(DBSecondaryTest, CheckConsistencyWhenOpen) {
"DBImplSecondary::CheckConsistency:AfterFirstAttempt", [&](void* arg) {
ASSERT_NE(nullptr, arg);
called = true;
auto* s = reinterpret_cast<Status*>(arg);
auto* s = static_cast<Status*>(arg);
ASSERT_NOK(*s);
});
SyncPoint::GetInstance()->LoadDependency(
@ -1229,8 +1229,7 @@ TEST_F(DBSecondaryTest, StartFromInconsistent) {
SyncPoint::GetInstance()->SetCallBack(
"VersionBuilder::CheckConsistencyBeforeReturn", [&](void* arg) {
ASSERT_NE(nullptr, arg);
*(reinterpret_cast<Status*>(arg)) =
Status::Corruption("Inject corruption");
*(static_cast<Status*>(arg)) = Status::Corruption("Inject corruption");
});
SyncPoint::GetInstance()->EnableProcessing();
Options options1;
@ -1263,8 +1262,7 @@ TEST_F(DBSecondaryTest, InconsistencyDuringCatchUp) {
SyncPoint::GetInstance()->SetCallBack(
"VersionBuilder::CheckConsistencyBeforeReturn", [&](void* arg) {
ASSERT_NE(nullptr, arg);
*(reinterpret_cast<Status*>(arg)) =
Status::Corruption("Inject corruption");
*(static_cast<Status*>(arg)) = Status::Corruption("Inject corruption");
});
SyncPoint::GetInstance()->EnableProcessing();
Status s = db_secondary_->TryCatchUpWithPrimary();

@ -1679,7 +1679,7 @@ TEST_F(DBSSTTest, OpenDBWithoutGetFileSizeInvocations) {
bool is_get_file_size_called = false;
SyncPoint::GetInstance()->SetCallBack(
"MockFileSystem::GetFileSize:CheckFileType", [&](void* arg) {
std::string* filename = reinterpret_cast<std::string*>(arg);
std::string* filename = static_cast<std::string*>(arg);
if (filename->find(".blob") != std::string::npos) {
is_get_file_size_called = true;
}

@ -74,8 +74,7 @@ TEST_F(DBTablePropertiesTest, GetPropertiesOfAllTablesTest) {
if (table == 3) {
SyncPoint::GetInstance()->SetCallBack(
"BlockBasedTableBuilder::WritePropertiesBlock:Meta", [&](void* meta) {
*reinterpret_cast<const std::string**>(meta) =
&kPropertiesBlockOldName;
*static_cast<const std::string**>(meta) = &kPropertiesBlockOldName;
});
SyncPoint::GetInstance()->EnableProcessing();
}
@ -93,7 +92,7 @@ TEST_F(DBTablePropertiesTest, GetPropertiesOfAllTablesTest) {
// Part of strategy to prevent pinning table files
SyncPoint::GetInstance()->SetCallBack(
"VersionEditHandler::LoadTables:skip_load_table_files",
[&](void* skip_load) { *reinterpret_cast<bool*>(skip_load) = true; });
[&](void* skip_load) { *static_cast<bool*>(skip_load) = true; });
SyncPoint::GetInstance()->EnableProcessing();
// 1. Read table properties directly from file
@ -178,9 +177,7 @@ TEST_F(DBTablePropertiesTest, InvalidIgnored) {
// Inject properties block data that Block considers invalid
SyncPoint::GetInstance()->SetCallBack(
"BlockBasedTableBuilder::WritePropertiesBlock:BlockData",
[&](void* block_data) {
*reinterpret_cast<Slice*>(block_data) = Slice("X");
});
[&](void* block_data) { *static_cast<Slice*>(block_data) = Slice("X"); });
SyncPoint::GetInstance()->EnableProcessing();
// Corrupting the table properties corrupts the unique id.

@ -203,13 +203,13 @@ TEST_P(DBTestTailingIterator, TailingIteratorTrimSeekToNext) {
bool file_iters_renewed_copy = false;
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"ForwardIterator::SeekInternal:Return", [&](void* arg) {
ForwardIterator* fiter = reinterpret_cast<ForwardIterator*>(arg);
ForwardIterator* fiter = static_cast<ForwardIterator*>(arg);
ASSERT_TRUE(!file_iters_deleted ||
fiter->TEST_CheckDeletedIters(&deleted_iters, &num_iters));
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"ForwardIterator::Next:Return", [&](void* arg) {
ForwardIterator* fiter = reinterpret_cast<ForwardIterator*>(arg);
ForwardIterator* fiter = static_cast<ForwardIterator*>(arg);
ASSERT_TRUE(!file_iters_deleted ||
fiter->TEST_CheckDeletedIters(&deleted_iters, &num_iters));
});

@ -2747,7 +2747,7 @@ struct MTThread {
};
static void MTThreadBody(void* arg) {
MTThread* t = reinterpret_cast<MTThread*>(arg);
MTThread* t = static_cast<MTThread*>(arg);
int id = t->id;
DB* db = t->state->test->db_;
int counter = 0;
@ -2932,7 +2932,7 @@ struct GCThread {
};
static void GCThreadBody(void* arg) {
GCThread* t = reinterpret_cast<GCThread*>(arg);
GCThread* t = static_cast<GCThread*>(arg);
int id = t->id;
DB* db = t->db;
WriteOptions wo;
@ -3190,7 +3190,7 @@ class ModelDB : public DB {
return new ModelIter(saved, true);
} else {
const KVMap* snapshot_state =
&(reinterpret_cast<const ModelSnapshot*>(options.snapshot)->map_);
&(static_cast<const ModelSnapshot*>(options.snapshot)->map_);
return new ModelIter(snapshot_state, false);
}
}
@ -3206,7 +3206,7 @@ class ModelDB : public DB {
}
void ReleaseSnapshot(const Snapshot* snapshot) override {
delete reinterpret_cast<const ModelSnapshot*>(snapshot);
delete static_cast<const ModelSnapshot*>(snapshot);
}
Status Write(const WriteOptions& /*options*/, WriteBatch* batch) override {
@ -5247,7 +5247,7 @@ TEST_F(DBTest, DynamicLevelCompressionPerLevel2) {
std::atomic<int> num_no(0);
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
if (compaction->output_level() == 4) {
ASSERT_TRUE(compaction->output_compression() == kLZ4Compression);
num_lz4.fetch_add(1);
@ -5255,7 +5255,7 @@ TEST_F(DBTest, DynamicLevelCompressionPerLevel2) {
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"FlushJob::WriteLevel0Table:output_compression", [&](void* arg) {
auto* compression = reinterpret_cast<CompressionType*>(arg);
auto* compression = static_cast<CompressionType*>(arg);
ASSERT_TRUE(*compression == kNoCompression);
num_no.fetch_add(1);
});
@ -5289,7 +5289,7 @@ TEST_F(DBTest, DynamicLevelCompressionPerLevel2) {
num_no.store(0);
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
if (compaction->output_level() == 4 && compaction->start_level() == 3) {
ASSERT_TRUE(compaction->output_compression() == kZlibCompression);
num_zlib.fetch_add(1);
@ -5300,7 +5300,7 @@ TEST_F(DBTest, DynamicLevelCompressionPerLevel2) {
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"FlushJob::WriteLevel0Table:output_compression", [&](void* arg) {
auto* compression = reinterpret_cast<CompressionType*>(arg);
auto* compression = static_cast<CompressionType*>(arg);
ASSERT_TRUE(*compression == kNoCompression);
num_no.fetch_add(1);
});
@ -6192,7 +6192,7 @@ TEST_F(DBTest, SuggestCompactRangeTest) {
return "CompactionFilterFactoryGetContext";
}
static bool IsManual(CompactionFilterFactory* compaction_filter_factory) {
return reinterpret_cast<CompactionFilterFactoryGetContext*>(
return static_cast<CompactionFilterFactoryGetContext*>(
compaction_filter_factory)
->saved_context.is_manual_compaction;
}
@ -7075,9 +7075,8 @@ TEST_F(DBTest, PinnableSliceAndRowCache) {
ASSERT_OK(Flush());
ASSERT_EQ(Get("foo"), "bar");
ASSERT_EQ(
reinterpret_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
ASSERT_EQ(static_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
{
PinnableSlice pin_slice;
@ -7085,13 +7084,11 @@ TEST_F(DBTest, PinnableSliceAndRowCache) {
ASSERT_EQ(pin_slice.ToString(), "bar");
// Entry is already in cache, lookup will remove the element from lru
ASSERT_EQ(
reinterpret_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
0);
static_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(), 0);
}
// After PinnableSlice destruction element is added back in LRU
ASSERT_EQ(
reinterpret_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
ASSERT_EQ(static_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
}
TEST_F(DBTest, ReusePinnableSlice) {
@ -7104,9 +7101,8 @@ TEST_F(DBTest, ReusePinnableSlice) {
ASSERT_OK(Flush());
ASSERT_EQ(Get("foo"), "bar");
ASSERT_EQ(
reinterpret_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
ASSERT_EQ(static_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
{
PinnableSlice pin_slice;
@ -7116,13 +7112,11 @@ TEST_F(DBTest, ReusePinnableSlice) {
// Entry is already in cache, lookup will remove the element from lru
ASSERT_EQ(
reinterpret_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
0);
static_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(), 0);
}
// After PinnableSlice destruction element is added back in LRU
ASSERT_EQ(
reinterpret_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
ASSERT_EQ(static_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
{
std::vector<Slice> multiget_keys;
@ -7141,13 +7135,11 @@ TEST_F(DBTest, ReusePinnableSlice) {
// Entry is already in cache, lookup will remove the element from lru
ASSERT_EQ(
reinterpret_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
0);
static_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(), 0);
}
// After PinnableSlice destruction element is added back in LRU
ASSERT_EQ(
reinterpret_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
ASSERT_EQ(static_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
{
std::vector<ColumnFamilyHandle*> multiget_cfs;
@ -7168,13 +7160,11 @@ TEST_F(DBTest, ReusePinnableSlice) {
// Entry is already in cache, lookup will remove the element from lru
ASSERT_EQ(
reinterpret_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
0);
static_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(), 0);
}
// After PinnableSlice destruction element is added back in LRU
ASSERT_EQ(
reinterpret_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
ASSERT_EQ(static_cast<LRUCache*>(options.row_cache.get())->TEST_GetLRUSize(),
1);
}
@ -7333,7 +7323,7 @@ TEST_F(DBTest, CreationTimeOfOldestFile) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"PropertyBlockBuilder::AddTableProperty:Start", [&](void* arg) {
TableProperties* props = reinterpret_cast<TableProperties*>(arg);
TableProperties* props = static_cast<TableProperties*>(arg);
if (set_file_creation_time_to_zero) {
if (idx == 0) {
props->file_creation_time = 0;

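Nearly all of the SyncPoint callback changes in this diff share one shape: the producer passes the address of a concrete object as void*, and the callback restores the type. The stand-alone sketch below uses made-up names (Payload, FireCallback) rather than RocksDB APIs; it only illustrates that a T* -> void* -> T* round trip through static_cast is fully defined, so reinterpret_cast buys nothing at these call sites.

#include <cassert>
#include <functional>

// Hypothetical stand-in for a sync-point style hook: the caller hands the
// callback a void* that actually points at a Payload.
struct Payload {
  bool visited = false;
};

void FireCallback(const std::function<void(void*)>& cb) {
  Payload p;
  cb(&p);  // Payload* converts to void* implicitly; no cast needed here
  assert(p.visited);
}

int main() {
  FireCallback([](void* arg) {
    // Exact inverse of the implicit conversion above; well-defined.
    Payload* p = static_cast<Payload*>(arg);
    p->visited = true;
  });
  return 0;
}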

@ -2330,7 +2330,7 @@ TEST_F(DBTest2, MaxCompactionBytesTest) {
}
static void UniqueIdCallback(void* arg) {
int* result = reinterpret_cast<int*>(arg);
int* result = static_cast<int*>(arg);
if (*result == -1) {
*result = 0;
}
@ -6472,7 +6472,7 @@ class RenameCurrentTest : public DBTestBase,
void SetupSyncPoints() {
SyncPoint::GetInstance()->DisableProcessing();
SyncPoint::GetInstance()->SetCallBack(sync_point_, [&](void* arg) {
Status* s = reinterpret_cast<Status*>(arg);
Status* s = static_cast<Status*>(arg);
assert(s);
*s = Status::IOError("Injected IO error.");
});
@ -7174,7 +7174,7 @@ TEST_F(DBTest2, PointInTimeRecoveryWithIOErrorWhileReadingWal) {
"LogReader::ReadMore:AfterReadFile", [&](void* arg) {
if (should_inject_error) {
ASSERT_NE(nullptr, arg);
*reinterpret_cast<Status*>(arg) = Status::IOError("Injected IOError");
*static_cast<Status*>(arg) = Status::IOError("Injected IOError");
}
});
SyncPoint::GetInstance()->EnableProcessing();


@ -2187,7 +2187,7 @@ TEST_F(DBTestUniversalCompaction2, PeriodicCompaction) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"UniversalCompactionPicker::PickPeriodicCompaction:Return",
[&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
ASSERT_TRUE(arg != nullptr);
ASSERT_TRUE(compaction->compaction_reason() ==
CompactionReason::kPeriodicCompaction);
@ -2258,7 +2258,7 @@ TEST_F(DBTestUniversalCompaction2, PeriodicCompactionOffpeak) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"UniversalCompactionPicker::PickPeriodicCompaction:Return",
[&](void* arg) {
Compaction* compaction = reinterpret_cast<Compaction*>(arg);
Compaction* compaction = static_cast<Compaction*>(arg);
ASSERT_TRUE(arg != nullptr);
ASSERT_TRUE(compaction->compaction_reason() ==
CompactionReason::kPeriodicCompaction);


@ -2451,7 +2451,7 @@ TEST_F(DBWALTest, TruncateLastLogAfterRecoverWALEmpty) {
"DBImpl::DeleteObsoleteFileImpl::BeforeDeletion"}});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"PosixWritableFile::Close",
[](void* arg) { *(reinterpret_cast<size_t*>(arg)) = 0; });
[](void* arg) { *(static_cast<size_t*>(arg)) = 0; });
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
// Preallocate space for the empty log file. This could happen if WAL data
// was buffered in memory and the process crashed.
@ -2495,7 +2495,7 @@ TEST_F(DBWALTest, ReadOnlyRecoveryNoTruncate) {
SyncPoint::GetInstance()->SetCallBack(
"PosixWritableFile::Close", [&](void* arg) {
if (!enable_truncate) {
*(reinterpret_cast<size_t*>(arg)) = 0;
*(static_cast<size_t*>(arg)) = 0;
}
});
SyncPoint::GetInstance()->EnableProcessing();


@ -3490,7 +3490,7 @@ TEST_F(UpdateFullHistoryTsLowTest, ConcurrentUpdate) {
VersionEdit* version_edit;
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::IncreaseFullHistoryTsLowImpl:BeforeEdit",
[&](void* arg) { version_edit = reinterpret_cast<VersionEdit*>(arg); });
[&](void* arg) { version_edit = static_cast<VersionEdit*>(arg); });
SyncPoint::GetInstance()->SetCallBack(
"VersionSet::LogAndApply:BeforeWriterWaiting",
[&](void* /*arg*/) { version_edit->SetFullHistoryTsLow(higher_ts_low); });


@ -64,7 +64,7 @@ TEST_F(TimestampCompatibleCompactionTest, UserKeyCrossFileBoundary) {
SyncPoint::GetInstance()->ClearAllCallBacks();
SyncPoint::GetInstance()->SetCallBack(
"LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
const auto* compaction = reinterpret_cast<Compaction*>(arg);
const auto* compaction = static_cast<Compaction*>(arg);
ASSERT_NE(nullptr, compaction);
ASSERT_EQ(0, compaction->start_level());
ASSERT_EQ(1, compaction->num_input_levels());


@ -119,7 +119,7 @@ TEST_P(DBWriteBufferManagerTest, SharedWriteBufferAcrossCFs2) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"WriteThread::WriteStall::Wait", [&](void* arg) {
InstrumentedMutexLock lock(&mutex);
WriteThread::Writer* w = reinterpret_cast<WriteThread::Writer*>(arg);
WriteThread::Writer* w = static_cast<WriteThread::Writer*>(arg);
w_set.insert(w);
// Allow the flush to continue if all writer threads are blocked.
if (w_set.size() == (unsigned long)num_writers) {
@ -368,7 +368,7 @@ TEST_P(DBWriteBufferManagerTest, SharedWriteBufferLimitAcrossDB1) {
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"WriteThread::WriteStall::Wait", [&](void* arg) {
WriteThread::Writer* w = reinterpret_cast<WriteThread::Writer*>(arg);
WriteThread::Writer* w = static_cast<WriteThread::Writer*>(arg);
{
InstrumentedMutexLock lock(&mutex);
w_set.insert(w);
@ -511,7 +511,7 @@ TEST_P(DBWriteBufferManagerTest, MixedSlowDownOptionsSingleDB) {
"WriteThread::WriteStall::Wait", [&](void* arg) {
{
InstrumentedMutexLock lock(&mutex);
WriteThread::Writer* w = reinterpret_cast<WriteThread::Writer*>(arg);
WriteThread::Writer* w = static_cast<WriteThread::Writer*>(arg);
w_slowdown_set.insert(w);
// Allow the flush continue if all writer threads are blocked.
if (w_slowdown_set.size() + (unsigned long)w_no_slowdown.load(
@ -674,7 +674,7 @@ TEST_P(DBWriteBufferManagerTest, MixedSlowDownOptionsMultipleDB) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"WriteThread::WriteStall::Wait", [&](void* arg) {
WriteThread::Writer* w = reinterpret_cast<WriteThread::Writer*>(arg);
WriteThread::Writer* w = static_cast<WriteThread::Writer*>(arg);
InstrumentedMutexLock lock(&mutex);
w_slowdown_set.insert(w);
// Allow the flush continue if all writer threads are blocked.


@ -286,7 +286,7 @@ TEST_P(DBWriteTest, IOErrorOnWALWritePropagateToWriteThreadFollower) {
SyncPoint::GetInstance()->SetCallBack(
"WriteThread::JoinBatchGroup:Wait", [&](void* arg) {
ready_count++;
auto* w = reinterpret_cast<WriteThread::Writer*>(arg);
auto* w = static_cast<WriteThread::Writer*>(arg);
if (w->state == WriteThread::STATE_GROUP_LEADER) {
leader_count++;
while (ready_count < kNumThreads) {
@ -384,7 +384,7 @@ TEST_F(DBWriteTestUnparameterized, PipelinedWriteRace) {
second_write_in_progress = true;
return;
}
auto* w = reinterpret_cast<WriteThread::Writer*>(arg);
auto* w = static_cast<WriteThread::Writer*>(arg);
if (w->state == WriteThread::STATE_GROUP_LEADER) {
active_writers++;
if (leader.load() == nullptr) {
@ -404,7 +404,7 @@ TEST_F(DBWriteTestUnparameterized, PipelinedWriteRace) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"WriteThread::ExitAsBatchGroupLeader:Start", [&](void* arg) {
auto* wg = reinterpret_cast<WriteThread::WriteGroup*>(arg);
auto* wg = static_cast<WriteThread::WriteGroup*>(arg);
if (wg->leader == leader && !finished_WAL_write) {
finished_WAL_write = true;
while (active_writers.load() < 3) {
@ -416,7 +416,7 @@ TEST_F(DBWriteTestUnparameterized, PipelinedWriteRace) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"WriteThread::ExitAsBatchGroupLeader:AfterCompleteWriters",
[&](void* arg) {
auto* wg = reinterpret_cast<WriteThread::WriteGroup*>(arg);
auto* wg = static_cast<WriteThread::WriteGroup*>(arg);
if (wg->leader == leader) {
while (!second_write_in_progress.load()) {
// wait for the old follower thread to start the next write


@ -125,7 +125,7 @@ class DeleteFileTest : public DBTestBase {
}
static void DoSleep(void* arg) {
auto test = reinterpret_cast<DeleteFileTest*>(arg);
auto test = static_cast<DeleteFileTest*>(arg);
test->env_->SleepForMicroseconds(2 * 1000 * 1000);
}


@ -234,7 +234,7 @@ void ErrorHandler::CancelErrorRecovery() {
// recovery gets scheduled at that point
auto_recovery_ = false;
SstFileManagerImpl* sfm =
reinterpret_cast<SstFileManagerImpl*>(db_options_.sst_file_manager.get());
static_cast<SstFileManagerImpl*>(db_options_.sst_file_manager.get());
if (sfm) {
// This may or may not cancel a pending recovery
db_mutex_->Unlock();
@ -534,7 +534,7 @@ Status ErrorHandler::OverrideNoSpaceError(const Status& bg_error,
void ErrorHandler::RecoverFromNoSpace() {
SstFileManagerImpl* sfm =
reinterpret_cast<SstFileManagerImpl*>(db_options_.sst_file_manager.get());
static_cast<SstFileManagerImpl*>(db_options_.sst_file_manager.get());
// Inform SFM of the error, so it can kick-off the recovery
if (sfm) {


@ -1095,7 +1095,7 @@ TEST_F(ExternalSSTFileBasicTest, FadviseTrigger) {
size_t total_fadvised_bytes = 0;
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"SstFileWriter::Rep::InvalidatePageCache", [&](void* arg) {
size_t fadvise_size = *(reinterpret_cast<size_t*>(arg));
size_t fadvise_size = *(static_cast<size_t*>(arg));
total_fadvised_bytes += fadvise_size;
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
@ -1632,7 +1632,7 @@ TEST_P(ExternalSSTFileBasicTest, IngestFileWithBadBlockChecksum) {
bool change_checksum_called = false;
const auto& change_checksum = [&](void* arg) {
if (!change_checksum_called) {
char* buf = reinterpret_cast<char*>(arg);
char* buf = static_cast<char*>(arg);
assert(nullptr != buf);
buf[0] ^= 0x1;
change_checksum_called = true;
@ -1729,10 +1729,10 @@ TEST_P(ExternalSSTFileBasicTest, IngestExternalFileWithCorruptedPropsBlock) {
uint64_t props_block_offset = 0;
size_t props_block_size = 0;
const auto& get_props_block_offset = [&](void* arg) {
props_block_offset = *reinterpret_cast<uint64_t*>(arg);
props_block_offset = *static_cast<uint64_t*>(arg);
};
const auto& get_props_block_size = [&](void* arg) {
props_block_size = *reinterpret_cast<uint64_t*>(arg);
props_block_size = *static_cast<uint64_t*>(arg);
};
SyncPoint::GetInstance()->DisableProcessing();
SyncPoint::GetInstance()->ClearAllCallBacks();


@ -1714,9 +1714,8 @@ TEST_F(ExternalSSTFileTest, WithUnorderedWrite) {
{"DBImpl::WaitForPendingWrites:BeforeBlock",
"DBImpl::WriteImpl:BeforeUnorderedWriteMemtable"}});
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::IngestExternalFile:NeedFlush", [&](void* need_flush) {
ASSERT_TRUE(*reinterpret_cast<bool*>(need_flush));
});
"DBImpl::IngestExternalFile:NeedFlush",
[&](void* need_flush) { ASSERT_TRUE(*static_cast<bool*>(need_flush)); });
Options options = CurrentOptions();
options.unordered_write = true;


@ -289,7 +289,7 @@ struct SVCleanupParams {
// Used in PinnedIteratorsManager to release pinned SuperVersion
void ForwardIterator::DeferredSVCleanup(void* arg) {
auto d = reinterpret_cast<SVCleanupParams*>(arg);
auto d = static_cast<SVCleanupParams*>(arg);
ForwardIterator::SVCleanup(d->db, d->sv,
d->background_purge_on_iterator_cleanup);
delete d;


@ -24,7 +24,7 @@ struct MallocStatus {
};
static void GetJemallocStatus(void* mstat_arg, const char* status) {
MallocStatus* mstat = reinterpret_cast<MallocStatus*>(mstat_arg);
MallocStatus* mstat = static_cast<MallocStatus*>(mstat_arg);
size_t status_len = status ? strlen(status) : 0;
size_t buf_size = (size_t)(mstat->end - mstat->cur);
if (!status_len || status_len > buf_size) {


@ -905,7 +905,7 @@ struct Saver {
static bool SaveValue(void* arg, const char* entry) {
TEST_SYNC_POINT_CALLBACK("Memtable::SaveValue:Begin:entry", &entry);
Saver* s = reinterpret_cast<Saver*>(arg);
Saver* s = static_cast<Saver*>(arg);
assert(s != nullptr);
assert(!s->value || !s->columns);


@ -354,7 +354,7 @@ void testCountersWithFlushAndCompaction(Counters& counters, DB* db) {
});
SyncPoint::GetInstance()->SetCallBack(
"VersionSet::LogAndApply:WakeUpAndDone", [&](void* arg) {
auto* mutex = reinterpret_cast<InstrumentedMutex*>(arg);
auto* mutex = static_cast<InstrumentedMutex*>(arg);
mutex->AssertHeld();
int thread_id = get_thread_id();
ASSERT_EQ(2, thread_id);
@ -380,12 +380,12 @@ void testCountersWithFlushAndCompaction(Counters& counters, DB* db) {
SyncPoint::GetInstance()->EnableProcessing();
port::Thread set_options_thread([&]() {
ASSERT_OK(reinterpret_cast<DBImpl*>(db)->SetOptions(
ASSERT_OK(static_cast<DBImpl*>(db)->SetOptions(
{{"disable_auto_compactions", "false"}}));
});
TEST_SYNC_POINT("testCountersWithCompactionAndFlush:BeforeCompact");
port::Thread compact_thread([&]() {
ASSERT_OK(reinterpret_cast<DBImpl*>(db)->CompactRange(
ASSERT_OK(static_cast<DBImpl*>(db)->CompactRange(
CompactRangeOptions(), db->DefaultColumnFamily(), nullptr, nullptr));
});


@ -118,7 +118,7 @@ TEST_F(ObsoleteFilesTest, RaceForObsoleteFileDeletion) {
});
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::DeleteObsoleteFileImpl:AfterDeletion", [&](void* arg) {
Status* p_status = reinterpret_cast<Status*>(arg);
Status* p_status = static_cast<Status*>(arg);
ASSERT_OK(*p_status);
});
SyncPoint::GetInstance()->SetCallBack(


@ -29,7 +29,7 @@ class PeriodicTaskSchedulerTest : public DBTestBase {
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::StartPeriodicTaskScheduler:Init", [&](void* arg) {
auto periodic_task_scheduler_ptr =
reinterpret_cast<PeriodicTaskScheduler*>(arg);
static_cast<PeriodicTaskScheduler*>(arg);
periodic_task_scheduler_ptr->TEST_OverrideTimer(mock_clock_.get());
});
}


@ -78,11 +78,11 @@ class PinnedIteratorsManager : public Cleanable {
private:
static void ReleaseInternalIterator(void* ptr) {
delete reinterpret_cast<InternalIterator*>(ptr);
delete static_cast<InternalIterator*>(ptr);
}
static void ReleaseArenaInternalIterator(void* ptr) {
reinterpret_cast<InternalIterator*>(ptr)->~InternalIterator();
static_cast<InternalIterator*>(ptr)->~InternalIterator();
}
bool pinning_enabled;


@ -496,8 +496,8 @@ TEST_P(PlainTableDBTest, Flush) {
ASSERT_GT(int_num, 0U);
TablePropertiesCollection ptc;
ASSERT_OK(reinterpret_cast<DB*>(dbfull())->GetPropertiesOfAllTables(
&ptc));
ASSERT_OK(
static_cast<DB*>(dbfull())->GetPropertiesOfAllTables(&ptc));
ASSERT_EQ(1U, ptc.size());
auto row = ptc.begin();
auto tp = row->second;


@ -36,7 +36,7 @@ class SeqnoTimeTest : public DBTestBase {
"DBImpl::StartPeriodicTaskScheduler:Init",
[mock_clock = mock_clock_](void* arg) {
auto periodic_task_scheduler_ptr =
reinterpret_cast<PeriodicTaskScheduler*>(arg);
static_cast<PeriodicTaskScheduler*>(arg);
periodic_task_scheduler_ptr->TEST_OverrideTimer(mock_clock.get());
});
mock_clock_->SetCurrentTime(kMockStartTime);


@ -3098,7 +3098,7 @@ void Version::PrepareAppend(const MutableCFOptions& mutable_cf_options,
bool update_stats) {
TEST_SYNC_POINT_CALLBACK(
"Version::PrepareAppend:forced_check",
reinterpret_cast<void*>(&storage_info_.force_consistency_checks_));
static_cast<void*>(&storage_info_.force_consistency_checks_));
if (update_stats) {
UpdateAccumulatedStats(read_options);


@ -1508,7 +1508,7 @@ TEST_F(VersionSetTest, SameColumnFamilyGroupCommit) {
int count = 0;
SyncPoint::GetInstance()->SetCallBack(
"VersionSet::ProcessManifestWrites:SameColumnFamily", [&](void* arg) {
uint32_t* cf_id = reinterpret_cast<uint32_t*>(arg);
uint32_t* cf_id = static_cast<uint32_t*>(arg);
EXPECT_EQ(0u, *cf_id);
++count;
});
@ -1785,7 +1785,7 @@ TEST_F(VersionSetTest, WalEditsNotAppliedToVersion) {
autovector<Version*> versions;
SyncPoint::GetInstance()->SetCallBack(
"VersionSet::ProcessManifestWrites:NewVersion",
[&](void* arg) { versions.push_back(reinterpret_cast<Version*>(arg)); });
[&](void* arg) { versions.push_back(static_cast<Version*>(arg)); });
SyncPoint::GetInstance()->EnableProcessing();
ASSERT_OK(LogAndApplyToDefaultCF(edits));
@ -1821,7 +1821,7 @@ TEST_F(VersionSetTest, NonWalEditsAppliedToVersion) {
autovector<Version*> versions;
SyncPoint::GetInstance()->SetCallBack(
"VersionSet::ProcessManifestWrites:NewVersion",
[&](void* arg) { versions.push_back(reinterpret_cast<Version*>(arg)); });
[&](void* arg) { versions.push_back(static_cast<Version*>(arg)); });
SyncPoint::GetInstance()->EnableProcessing();
ASSERT_OK(LogAndApplyToDefaultCF(edits));
@ -2029,7 +2029,7 @@ TEST_F(VersionSetTest, WalDeletion) {
std::vector<WalAddition> wal_additions;
SyncPoint::GetInstance()->SetCallBack(
"VersionSet::WriteCurrentStateToManifest:SaveWal", [&](void* arg) {
VersionEdit* edit = reinterpret_cast<VersionEdit*>(arg);
VersionEdit* edit = static_cast<VersionEdit*>(arg);
ASSERT_TRUE(edit->IsWalAddition());
for (auto& addition : edit->GetWalAdditions()) {
wal_additions.push_back(addition);
@ -2570,36 +2570,32 @@ class VersionSetAtomicGroupTest : public VersionSetTestBase,
SyncPoint::GetInstance()->ClearAllCallBacks();
SyncPoint::GetInstance()->SetCallBack(
"AtomicGroupReadBuffer::AddEdit:FirstInAtomicGroup", [&](void* arg) {
VersionEdit* e = reinterpret_cast<VersionEdit*>(arg);
VersionEdit* e = static_cast<VersionEdit*>(arg);
EXPECT_EQ(edits_.front().DebugString(),
e->DebugString()); // compare based on value
first_in_atomic_group_ = true;
});
SyncPoint::GetInstance()->SetCallBack(
"AtomicGroupReadBuffer::AddEdit:LastInAtomicGroup", [&](void* arg) {
VersionEdit* e = reinterpret_cast<VersionEdit*>(arg);
VersionEdit* e = static_cast<VersionEdit*>(arg);
EXPECT_EQ(edits_.back().DebugString(),
e->DebugString()); // compare based on value
EXPECT_TRUE(first_in_atomic_group_);
last_in_atomic_group_ = true;
});
SyncPoint::GetInstance()->SetCallBack(
"VersionEditHandlerBase::Iterate:Finish", [&](void* arg) {
num_recovered_edits_ = *reinterpret_cast<size_t*>(arg);
});
"VersionEditHandlerBase::Iterate:Finish",
[&](void* arg) { num_recovered_edits_ = *static_cast<size_t*>(arg); });
SyncPoint::GetInstance()->SetCallBack(
"AtomicGroupReadBuffer::AddEdit:AtomicGroup",
[&](void* /* arg */) { ++num_edits_in_atomic_group_; });
SyncPoint::GetInstance()->SetCallBack(
"AtomicGroupReadBuffer::AddEdit:AtomicGroupMixedWithNormalEdits",
[&](void* arg) {
corrupted_edit_ = *reinterpret_cast<VersionEdit*>(arg);
});
[&](void* arg) { corrupted_edit_ = *static_cast<VersionEdit*>(arg); });
SyncPoint::GetInstance()->SetCallBack(
"AtomicGroupReadBuffer::AddEdit:IncorrectAtomicGroupSize",
[&](void* arg) {
edit_with_incorrect_group_size_ =
*reinterpret_cast<VersionEdit*>(arg);
edit_with_incorrect_group_size_ = *static_cast<VersionEdit*>(arg);
});
SyncPoint::GetInstance()->EnableProcessing();
}
@ -2966,7 +2962,7 @@ TEST_P(VersionSetTestDropOneCF, HandleDroppedColumnFamilyInAtomicGroup) {
SyncPoint::GetInstance()->SetCallBack(
"VersionSet::ProcessManifestWrites:CheckOneAtomicGroup", [&](void* arg) {
std::vector<VersionEdit*>* tmp_edits =
reinterpret_cast<std::vector<VersionEdit*>*>(arg);
static_cast<std::vector<VersionEdit*>*>(arg);
EXPECT_EQ(kAtomicGroupSize - 1, tmp_edits->size());
for (const auto e : *tmp_edits) {
bool found = false;


@ -220,7 +220,7 @@ TEST_P(WriteCallbackPTest, WriteWithCallbackTest) {
is_last = (cur_threads_linked == write_group.size() - 1);
// check my state
auto* writer = reinterpret_cast<WriteThread::Writer*>(arg);
auto* writer = static_cast<WriteThread::Writer*>(arg);
if (is_leader) {
ASSERT_TRUE(writer->state ==
@ -250,7 +250,7 @@ TEST_P(WriteCallbackPTest, WriteWithCallbackTest) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"WriteThread::JoinBatchGroup:DoneWaiting", [&](void* arg) {
// check my state
auto* writer = reinterpret_cast<WriteThread::Writer*>(arg);
auto* writer = static_cast<WriteThread::Writer*>(arg);
if (!allow_batching_) {
// no batching so everyone should be a leader


@ -92,7 +92,7 @@ int64_t GetOneHotKeyID(double rand_seed, int64_t max_key) {
void PoolSizeChangeThread(void* v) {
assert(FLAGS_compaction_thread_pool_adjust_interval > 0);
ThreadState* thread = reinterpret_cast<ThreadState*>(v);
ThreadState* thread = static_cast<ThreadState*>(v);
SharedState* shared = thread->shared;
while (true) {
@ -127,7 +127,7 @@ void PoolSizeChangeThread(void* v) {
void DbVerificationThread(void* v) {
assert(FLAGS_continuous_verification_interval > 0);
auto* thread = reinterpret_cast<ThreadState*>(v);
auto* thread = static_cast<ThreadState*>(v);
SharedState* shared = thread->shared;
StressTest* stress_test = shared->GetStressTest();
assert(stress_test != nullptr);
@ -154,7 +154,7 @@ void DbVerificationThread(void* v) {
void CompressedCacheSetCapacityThread(void* v) {
assert(FLAGS_compressed_secondary_cache_size > 0 ||
FLAGS_compressed_secondary_cache_ratio > 0.0);
auto* thread = reinterpret_cast<ThreadState*>(v);
auto* thread = static_cast<ThreadState*>(v);
SharedState* shared = thread->shared;
while (true) {
{


@ -15,7 +15,7 @@
namespace ROCKSDB_NAMESPACE {
void ThreadBody(void* v) {
ThreadStatusUtil::RegisterThread(db_stress_env, ThreadStatus::USER);
ThreadState* thread = reinterpret_cast<ThreadState*>(v);
ThreadState* thread = static_cast<ThreadState*>(v);
SharedState* shared = thread->shared;
if (!FLAGS_skip_verifydb && shared->ShouldVerifyAtBeginning()) {


@ -380,7 +380,7 @@ void StressTest::FinishInitDb(SharedState* shared) {
if (FLAGS_enable_compaction_filter) {
auto* compaction_filter_factory =
reinterpret_cast<DbStressCompactionFilterFactory*>(
static_cast<DbStressCompactionFilterFactory*>(
options_.compaction_filter_factory.get());
assert(compaction_filter_factory);
// This must be called only after any potential `SharedState::Restore()` has

env/env_posix.cc

@ -465,7 +465,7 @@ struct StartThreadState {
};
static void* StartThreadWrapper(void* arg) {
StartThreadState* state = reinterpret_cast<StartThreadState*>(arg);
StartThreadState* state = static_cast<StartThreadState*>(arg);
state->user_function(state->arg);
delete state;
return nullptr;

env/env_test.cc

@ -239,13 +239,11 @@ TEST_F(EnvPosixTest, LowerThreadPoolCpuPriority) {
std::atomic<CpuPriority> from_priority(CpuPriority::kNormal);
std::atomic<CpuPriority> to_priority(CpuPriority::kNormal);
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"ThreadPoolImpl::BGThread::BeforeSetCpuPriority", [&](void* pri) {
from_priority.store(*reinterpret_cast<CpuPriority*>(pri));
});
"ThreadPoolImpl::BGThread::BeforeSetCpuPriority",
[&](void* pri) { from_priority.store(*static_cast<CpuPriority*>(pri)); });
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"ThreadPoolImpl::BGThread::AfterSetCpuPriority", [&](void* pri) {
to_priority.store(*reinterpret_cast<CpuPriority*>(pri));
});
"ThreadPoolImpl::BGThread::AfterSetCpuPriority",
[&](void* pri) { to_priority.store(*static_cast<CpuPriority*>(pri)); });
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
env_->SetBackgroundThreads(1, Env::BOTTOM);
@ -447,7 +445,7 @@ TEST_P(EnvPosixTestWithParam, RunMany) {
CB(std::atomic<int>* p, int i) : last_id_ptr(p), id(i) {}
static void Run(void* v) {
CB* cb = reinterpret_cast<CB*>(v);
CB* cb = static_cast<CB*>(v);
int cur = cb->last_id_ptr->load();
ASSERT_EQ(cb->id - 1, cur);
cb->last_id_ptr->store(cb->id);
@ -484,7 +482,7 @@ struct State {
};
static void ThreadBody(void* arg) {
State* s = reinterpret_cast<State*>(arg);
State* s = static_cast<State*>(arg);
s->mu.Lock();
s->val += 1;
s->num_running -= 1;
@ -531,7 +529,7 @@ TEST_P(EnvPosixTestWithParam, TwoPools) {
should_start_(_should_start) {}
static void Run(void* v) {
CB* cb = reinterpret_cast<CB*>(v);
CB* cb = static_cast<CB*>(v);
cb->Run();
}

env/fs_posix.cc

@ -806,7 +806,7 @@ class PosixFileSystem : public FileSystem {
IOStatus UnlockFile(FileLock* lock, const IOOptions& /*opts*/,
IODebugContext* /*dbg*/) override {
PosixFileLock* my_lock = reinterpret_cast<PosixFileLock*>(lock);
PosixFileLock* my_lock = static_cast<PosixFileLock*>(lock);
IOStatus result;
mutex_locked_files.Lock();
// If we are unlocking, then verify that we had locked it earlier,

env/io_posix.cc

@ -968,7 +968,7 @@ IOStatus PosixMmapReadableFile::Read(uint64_t offset, size_t n,
} else if (offset + n > length_) {
n = static_cast<size_t>(length_ - offset);
}
*result = Slice(reinterpret_cast<char*>(mmapped_region_) + offset, n);
*result = Slice(static_cast<char*>(mmapped_region_) + offset, n);
return s;
}
@ -1067,7 +1067,7 @@ IOStatus PosixMmapFile::MapNewRegion() {
}
TEST_KILL_RANDOM("PosixMmapFile::Append:2");
base_ = reinterpret_cast<char*>(ptr);
base_ = static_cast<char*>(ptr);
limit_ = base_ + map_size_;
dst_ = base_;
last_sync_ = base_;


@ -117,8 +117,7 @@ class FullCompactor : public Compactor {
}
static void CompactFiles(void* arg) {
std::unique_ptr<CompactionTask> task(
reinterpret_cast<CompactionTask*>(arg));
std::unique_ptr<CompactionTask> task(static_cast<CompactionTask*>(arg));
assert(task);
assert(task->db);
Status s = task->db->CompactFiles(


@ -64,7 +64,7 @@ const std::vector<std::string>& GetColumnFamilyNames() {
inline bool IsLittleEndian() {
uint32_t x = 1;
return *reinterpret_cast<char*>(&x) != 0;
return *static_cast<char*>(&x) != 0;
}
static std::atomic<int>& ShouldSecondaryWait() {
@ -75,7 +75,7 @@ static std::atomic<int>& ShouldSecondaryWait() {
static std::string Key(uint64_t k) {
std::string ret;
if (IsLittleEndian()) {
ret.append(reinterpret_cast<char*>(&k), sizeof(k));
ret.append(static_cast<char*>(&k), sizeof(k));
} else {
char buf[sizeof(k)];
buf[0] = k & 0xff;


@ -367,7 +367,7 @@ Status DeleteScheduler::DeleteTrashFile(const std::string& path_in_trash,
DirFsyncOptions(DirFsyncOptions::FsyncReason::kFileDeleted));
TEST_SYNC_POINT_CALLBACK(
"DeleteScheduler::DeleteTrashFile::AfterSyncDir",
reinterpret_cast<void*>(const_cast<std::string*>(&dir_to_sync)));
static_cast<void*>(const_cast<std::string*>(&dir_to_sync)));
}
}
if (s.ok()) {


@ -131,7 +131,7 @@ TEST_F(DeleteSchedulerTest, BasicRateLimiting) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"DeleteScheduler::DeleteTrashFile::AfterSyncDir", [&](void* arg) {
dir_synced++;
std::string* dir = reinterpret_cast<std::string*>(arg);
std::string* dir = static_cast<std::string*>(arg);
EXPECT_EQ(dummy_files_dirs_[0], *dir);
});


@ -1550,7 +1550,7 @@ TEST_P(PrefetchTest, DBIterLevelReadAhead) {
SyncPoint::GetInstance()->SetCallBack(
"BlockPrefetcher::SetReadaheadState", [&](void* arg) {
readahead_carry_over_count++;
size_t readahead_size = *reinterpret_cast<size_t*>(arg);
size_t readahead_size = *static_cast<size_t*>(arg);
if (readahead_carry_over_count) {
ASSERT_GT(readahead_size, 8 * 1024);
}
@ -1558,7 +1558,7 @@ TEST_P(PrefetchTest, DBIterLevelReadAhead) {
SyncPoint::GetInstance()->SetCallBack(
"FilePrefetchBuffer::TryReadFromCache", [&](void* arg) {
current_readahead_size = *reinterpret_cast<size_t*>(arg);
current_readahead_size = *static_cast<size_t*>(arg);
ASSERT_GT(current_readahead_size, 0);
});
@ -1659,7 +1659,7 @@ TEST_P(PrefetchTest, DBIterLevelReadAheadWithAsyncIO) {
SyncPoint::GetInstance()->SetCallBack(
"BlockPrefetcher::SetReadaheadState", [&](void* arg) {
readahead_carry_over_count++;
size_t readahead_size = *reinterpret_cast<size_t*>(arg);
size_t readahead_size = *static_cast<size_t*>(arg);
if (readahead_carry_over_count) {
ASSERT_GT(readahead_size, 8 * 1024);
}
@ -1667,7 +1667,7 @@ TEST_P(PrefetchTest, DBIterLevelReadAheadWithAsyncIO) {
SyncPoint::GetInstance()->SetCallBack(
"FilePrefetchBuffer::TryReadFromCache", [&](void* arg) {
current_readahead_size = *reinterpret_cast<size_t*>(arg);
current_readahead_size = *static_cast<size_t*>(arg);
ASSERT_GT(current_readahead_size, 0);
});
@ -2057,7 +2057,7 @@ TEST_P(PrefetchTest1, NonSequentialReadsWithAdaptiveReadahead) {
[&](void* /*arg*/) { set_readahead++; });
SyncPoint::GetInstance()->SetCallBack(
"FilePrefetchBuffer::TryReadFromCache",
[&](void* arg) { readahead_size = *reinterpret_cast<size_t*>(arg); });
[&](void* arg) { readahead_size = *static_cast<size_t*>(arg); });
SyncPoint::GetInstance()->EnableProcessing();
@ -2152,9 +2152,8 @@ TEST_P(PrefetchTest1, DecreaseReadAheadIfInCache) {
SyncPoint::GetInstance()->SetCallBack("FilePrefetchBuffer::Prefetch:Start",
[&](void*) { buff_prefetch_count++; });
SyncPoint::GetInstance()->SetCallBack(
"FilePrefetchBuffer::TryReadFromCache", [&](void* arg) {
current_readahead_size = *reinterpret_cast<size_t*>(arg);
});
"FilePrefetchBuffer::TryReadFromCache",
[&](void* arg) { current_readahead_size = *static_cast<size_t*>(arg); });
SyncPoint::GetInstance()->EnableProcessing();
ReadOptions ro;


@ -221,7 +221,7 @@ int JemallocNodumpAllocator::GetThreadSpecificCache(size_t size) {
size > options_.tcache_size_upper_bound)) {
return MALLOCX_TCACHE_NONE;
}
unsigned* tcache_index = reinterpret_cast<unsigned*>(tcache_.Get());
unsigned* tcache_index = static_cast<unsigned*>(tcache_.Get());
if (UNLIKELY(tcache_index == nullptr)) {
// Instantiate tcache.
tcache_index = new unsigned(0);


@ -17,7 +17,7 @@ struct CustomDeleter {
void operator()(char* ptr) const {
if (allocator) {
allocator->Deallocate(reinterpret_cast<void*>(ptr));
allocator->Deallocate(ptr);
} else {
delete[] ptr;
}


@ -739,7 +739,7 @@ bool InlineSkipList<Comparator>::InsertWithHint(const char* key, void** hint) {
Splice* splice = reinterpret_cast<Splice*>(*hint);
if (splice == nullptr) {
splice = AllocateSplice();
*hint = reinterpret_cast<void*>(splice);
*hint = splice;
}
return Insert<false>(key, splice, true);
}
@ -751,7 +751,7 @@ bool InlineSkipList<Comparator>::InsertWithHintConcurrently(const char* key,
Splice* splice = reinterpret_cast<Splice*>(*hint);
if (splice == nullptr) {
splice = AllocateSpliceOnHeap();
*hint = reinterpret_cast<void*>(splice);
*hint = splice;
}
return Insert<true>(key, splice, true);
}

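The two hunks above only drop casts in the pointer-to-void* direction, which is an implicit conversion. A rough sketch with hypothetical helpers (not the real InlineSkipList code) of the asymmetry:

struct Splice {};

// Storing into a void* slot needs no cast at all...
void StoreHint(void** hint, Splice* splice) {
  *hint = splice;  // implicit Splice* -> void*
}

// ...while reading back out still needs an explicit cast, and static_cast is
// sufficient because the stored pointer really is a Splice*.
Splice* LoadHint(void** hint) {
  return static_cast<Splice*>(*hint);
}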

@ -568,7 +568,7 @@ class TestState {
};
static void ConcurrentReader(void* arg) {
TestState* state = reinterpret_cast<TestState*>(arg);
TestState* state = static_cast<TestState*>(arg);
Random rnd(state->seed_);
int64_t reads = 0;
state->Change(TestState::RUNNING);
@ -581,7 +581,7 @@ static void ConcurrentReader(void* arg) {
}
static void ConcurrentWriter(void* arg) {
TestState* state = reinterpret_cast<TestState*>(arg);
TestState* state = static_cast<TestState*>(arg);
uint32_t k = state->next_writer_++ % ConcurrentTest::K;
state->t_.ConcurrentWriteStep(k, state->use_hint_);
state->AdjustPendingWriters(-1);


@ -340,7 +340,7 @@ class TestState {
};
static void ConcurrentReader(void* arg) {
TestState* state = reinterpret_cast<TestState*>(arg);
TestState* state = static_cast<TestState*>(arg);
Random rnd(state->seed_);
int64_t reads = 0;
state->Change(TestState::RUNNING);


@ -45,7 +45,7 @@ class StatsHistoryTest : public DBTestBase {
SyncPoint::GetInstance()->SetCallBack(
"DBImpl::StartPeriodicTaskScheduler:Init", [&](void* arg) {
auto periodic_task_scheduler_ptr =
reinterpret_cast<PeriodicTaskScheduler*>(arg);
static_cast<PeriodicTaskScheduler*>(arg);
periodic_task_scheduler_ptr->TEST_OverrideTimer(mock_clock_.get());
});
}


@ -73,7 +73,7 @@ bool Customizable::AreEquivalent(const ConfigOptions& config_options,
std::string* mismatch) const {
if (config_options.sanity_level > ConfigOptions::kSanityLevelNone &&
this != other) {
const Customizable* custom = reinterpret_cast<const Customizable*>(other);
const Customizable* custom = static_cast<const Customizable*>(other);
if (custom == nullptr) { // Cast failed
return false;
} else if (GetId() != custom->GetId()) {


@ -1286,8 +1286,7 @@ struct StartThreadState {
};
void* StartThreadWrapper(void* arg) {
std::unique_ptr<StartThreadState> state(
reinterpret_cast<StartThreadState*>(arg));
std::unique_ptr<StartThreadState> state(static_cast<StartThreadState*>(arg));
state->user_function(state->arg);
return nullptr;
}


@ -230,7 +230,7 @@ IOStatus WinMmapReadableFile::Read(uint64_t offset, size_t n,
} else if (offset + n > length_) {
n = length_ - static_cast<size_t>(offset);
}
*result = Slice(reinterpret_cast<const char*>(mapped_region_) + offset, n);
*result = Slice(static_cast<const char*>(mapped_region_) + offset, n);
return s;
}
@ -327,9 +327,9 @@ IOStatus WinMmapFile::MapNewRegion(const IOOptions& options,
offset.QuadPart = file_offset_;
// View must begin at the granularity aligned offset
mapped_begin_ = reinterpret_cast<char*>(
MapViewOfFileEx(hMap_, FILE_MAP_WRITE, offset.HighPart, offset.LowPart,
view_size_, NULL));
mapped_begin_ =
static_cast<char*>(MapViewOfFileEx(hMap_, FILE_MAP_WRITE, offset.HighPart,
offset.LowPart, view_size_, NULL));
if (!mapped_begin_) {
status = IOErrorFromWindowsError(


@ -288,7 +288,8 @@ bool GenerateRfcUuid(std::string* output) {
return false;
}
// rpc_str is nul-terminated
// rpc_str is nul-terminated.
// reinterpret_cast for possible change between signed/unsigned char.
*output = reinterpret_cast<char*>(rpc_str);
status = RpcStringFreeA(&rpc_str);

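The comment added above marks one of the patterns this change deliberately leaves on reinterpret_cast. As a hedged, stand-alone illustration (hypothetical helper, not the Windows RPC API): static_cast cannot convert between unsigned char* and char*, while reinterpret_cast between the char-family pointer types is the supported way to view the same bytes.

#include <cstddef>
#include <string>

// bytes comes from an API that speaks unsigned char; std::string wants char.
std::string BytesToString(unsigned char* bytes, std::size_t len) {
  // static_cast<char*>(bytes) would not compile here.
  return std::string(reinterpret_cast<char*>(bytes), len);
}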

@ -184,7 +184,7 @@ class BlockBasedTableReaderBaseTest : public testing::Test {
&general_table, prefetch_index_and_filter_in_cache);
if (s.ok()) {
table->reset(reinterpret_cast<BlockBasedTable*>(general_table.release()));
table->reset(static_cast<BlockBasedTable*>(general_table.release()));
}
if (status) {


@ -17,8 +17,8 @@
namespace ROCKSDB_NAMESPACE {
void ForceReleaseCachedEntry(void* arg, void* h) {
Cache* cache = reinterpret_cast<Cache*>(arg);
Cache::Handle* handle = reinterpret_cast<Cache::Handle*>(h);
Cache* cache = static_cast<Cache*>(arg);
Cache::Handle* handle = static_cast<Cache::Handle*>(h);
cache->Release(handle, true /* erase_if_last_ref */);
}

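The hunk above is typical of the cache call sites: void* arguments are restored to their real types with static_cast. A small sketch with hypothetical types (not the real Cache classes) of what static_cast still accepts and what it now rejects:

struct Handle {};                     // opaque base type seen by callers
struct ShardHandle : public Handle {  // concrete implementation type
  int refs = 1;
};

void ReleaseHandle(void* opaque) {
  // void* back to the pointee's real type: well-defined with static_cast.
  ShardHandle* h = static_cast<ShardHandle*>(opaque);
  --h->refs;
}

void Demo() {
  ShardHandle h;
  Handle* base = &h;                                       // implicit up-cast
  ShardHandle* derived = static_cast<ShardHandle*>(base);  // related types: OK
  ReleaseHandle(derived);                 // ShardHandle* -> void* is implicit
  // int* bad = static_cast<int*>(base);  // unrelated types: compile error
}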

@ -278,7 +278,7 @@ class BlockFetcherTest : public testing::Test {
0 /* block_protection_bytes_per_key */,
&table_reader, 0 /* tail_size */));
table->reset(reinterpret_cast<BlockBasedTable*>(table_reader.release()));
table->reset(static_cast<BlockBasedTable*>(table_reader.release()));
}
std::string ToInternalKey(const std::string& key) {


@ -3208,7 +3208,7 @@ TEST_P(BlockBasedTableTest, BlockCacheLookupSeqScans) {
c.Finish(options, ioptions, moptions, table_options, internal_comparator,
&keys, &kvmap);
BlockBasedTable* bbt = reinterpret_cast<BlockBasedTable*>(c.GetTableReader());
BlockBasedTable* bbt = static_cast<BlockBasedTable*>(c.GetTableReader());
BlockHandle block_handle;
ReadOptions read_options;
@ -3246,7 +3246,7 @@ TEST_P(BlockBasedTableTest, BlockCacheLookupSeqScans) {
ASSERT_EQ(iter->value().ToString(), kv_iter->second);
FilePrefetchBuffer* prefetch_buffer =
(reinterpret_cast<BlockBasedTableIterator*>(iter.get()))
(static_cast<BlockBasedTableIterator*>(iter.get()))
->prefetch_buffer();
std::vector<std::pair<uint64_t, size_t>> buffer_info(1);
prefetch_buffer->TEST_GetBufferOffsetandSize(buffer_info);
@ -3284,7 +3284,7 @@ TEST_P(BlockBasedTableTest, BlockCacheLookupSeqScans) {
ASSERT_EQ(iter->value().ToString(), kv_iter->second);
FilePrefetchBuffer* prefetch_buffer =
(reinterpret_cast<BlockBasedTableIterator*>(iter.get()))
(static_cast<BlockBasedTableIterator*>(iter.get()))
->prefetch_buffer();
std::vector<std::pair<uint64_t, size_t>> buffer_info(1);
prefetch_buffer->TEST_GetBufferOffsetandSize(buffer_info);
@ -3349,7 +3349,7 @@ TEST_P(BlockBasedTableTest, BlockCacheLookupAsyncScansSeek) {
c.Finish(options, ioptions, moptions, table_options, internal_comparator,
&keys, &kvmap);
BlockBasedTable* bbt = reinterpret_cast<BlockBasedTable*>(c.GetTableReader());
BlockBasedTable* bbt = static_cast<BlockBasedTable*>(c.GetTableReader());
BlockHandle block_handle;
ReadOptions read_options;
@ -3392,7 +3392,7 @@ TEST_P(BlockBasedTableTest, BlockCacheLookupAsyncScansSeek) {
ASSERT_EQ(iter->value().ToString(), kv_iter->second);
FilePrefetchBuffer* prefetch_buffer =
(reinterpret_cast<BlockBasedTableIterator*>(iter.get()))
(static_cast<BlockBasedTableIterator*>(iter.get()))
->prefetch_buffer();
std::vector<std::pair<uint64_t, size_t>> buffer_info(2);
prefetch_buffer->TEST_GetBufferOffsetandSize(buffer_info);
@ -3431,7 +3431,7 @@ TEST_P(BlockBasedTableTest, BlockCacheLookupAsyncScansSeek) {
ASSERT_EQ(iter->value().ToString(), kv_iter->second);
FilePrefetchBuffer* prefetch_buffer =
(reinterpret_cast<BlockBasedTableIterator*>(iter.get()))
(static_cast<BlockBasedTableIterator*>(iter.get()))
->prefetch_buffer();
std::vector<std::pair<uint64_t, size_t>> buffer_info(2);
prefetch_buffer->TEST_GetBufferOffsetandSize(buffer_info);
@ -3482,7 +3482,7 @@ TEST_P(BlockBasedTableTest, BlockCacheLookupAsyncScansSeek) {
ASSERT_EQ(iter->value().ToString(), kv_iter->second);
FilePrefetchBuffer* prefetch_buffer =
(reinterpret_cast<BlockBasedTableIterator*>(iter.get()))
(static_cast<BlockBasedTableIterator*>(iter.get()))
->prefetch_buffer();
{


@ -25,9 +25,9 @@ void MockSystemClock::InstallTimedWaitFixCallback() {
// but is interpreted in real clock time.)
SyncPoint::GetInstance()->SetCallBack(
"InstrumentedCondVar::TimedWaitInternal", [&](void* arg) {
uint64_t time_us = *reinterpret_cast<uint64_t*>(arg);
uint64_t time_us = *static_cast<uint64_t*>(arg);
if (time_us < this->RealNowMicros()) {
*reinterpret_cast<uint64_t*>(arg) = this->RealNowMicros() + 1000;
*static_cast<uint64_t*>(arg) = this->RealNowMicros() + 1000;
}
});
#endif // OS_MACOSX


@ -455,7 +455,7 @@ class SleepingBackgroundTask {
}
static void DoSleepTask(void* arg) {
reinterpret_cast<SleepingBackgroundTask*>(arg)->DoSleep();
static_cast<SleepingBackgroundTask*>(arg)->DoSleep();
}
private:


@ -3908,7 +3908,7 @@ class Benchmark {
};
static void ThreadBody(void* v) {
ThreadArg* arg = reinterpret_cast<ThreadArg*>(v);
ThreadArg* arg = static_cast<ThreadArg*>(v);
SharedState* shared = arg->shared;
ThreadState* thread = arg->thread;
{
@ -7912,7 +7912,7 @@ class Benchmark {
if (FLAGS_optimistic_transaction_db) {
success = inserter.OptimisticTransactionDBInsert(db_.opt_txn_db);
} else if (FLAGS_transaction_db) {
TransactionDB* txn_db = reinterpret_cast<TransactionDB*>(db_.db);
TransactionDB* txn_db = static_cast<TransactionDB*>(db_.db);
success = inserter.TransactionDBInsert(txn_db, txn_options);
} else {
success = inserter.DBInsert(db_.db);


@ -54,7 +54,7 @@ struct DataPumpThread {
};
static void DataPumpThreadBody(void* arg) {
DataPumpThread* t = reinterpret_cast<DataPumpThread*>(arg);
DataPumpThread* t = static_cast<DataPumpThread*>(arg);
DB* db = t->db;
Random rnd(301);
uint64_t i = 0;


@ -703,7 +703,7 @@ TEST_F(LdbCmdTest, ListFileTombstone) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"ListFileRangeDeletesCommand::DoCommand:BeforePrint", [&](void* arg) {
std::string* out_str = reinterpret_cast<std::string*>(arg);
std::string* out_str = static_cast<std::string*>(arg);
// Count number of tombstones printed
int num_tb = 0;
@ -736,7 +736,7 @@ TEST_F(LdbCmdTest, ListFileTombstone) {
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"ListFileRangeDeletesCommand::DoCommand:BeforePrint", [&](void* arg) {
std::string* out_str = reinterpret_cast<std::string*>(arg);
std::string* out_str = static_cast<std::string*>(arg);
// Count number of tombstones printed
int num_tb = 0;
@ -796,7 +796,7 @@ TEST_F(LdbCmdTest, DisableConsistencyChecks) {
SyncPoint::GetInstance()->SetCallBack(
"Version::PrepareAppend:forced_check", [&](void* arg) {
bool* forced = reinterpret_cast<bool*>(arg);
bool* forced = static_cast<bool*>(arg);
ASSERT_TRUE(*forced);
});
SyncPoint::GetInstance()->EnableProcessing();
@ -816,7 +816,7 @@ TEST_F(LdbCmdTest, DisableConsistencyChecks) {
SyncPoint::GetInstance()->SetCallBack(
"Version::PrepareAppend:forced_check", [&](void* arg) {
bool* forced = reinterpret_cast<bool*>(arg);
bool* forced = static_cast<bool*>(arg);
ASSERT_TRUE(*forced);
});
SyncPoint::GetInstance()->EnableProcessing();
@ -837,8 +837,7 @@ TEST_F(LdbCmdTest, DisableConsistencyChecks) {
SyncPoint::GetInstance()->SetCallBack(
"ColumnFamilyData::ColumnFamilyData", [&](void* arg) {
ColumnFamilyOptions* cfo =
reinterpret_cast<ColumnFamilyOptions*>(arg);
ColumnFamilyOptions* cfo = static_cast<ColumnFamilyOptions*>(arg);
ASSERT_FALSE(cfo->force_consistency_checks);
});
SyncPoint::GetInstance()->EnableProcessing();


@ -81,10 +81,9 @@ TEST_F(RepeatableThreadTest, MockEnvTest) {
// time RepeatableThread::wait is called, it is no guarantee that the
// delay + mock_clock->NowMicros will be greater than the current real
// time. However, 1000 seconds should be sufficient in most cases.
uint64_t time_us = *reinterpret_cast<uint64_t*>(arg);
uint64_t time_us = *static_cast<uint64_t*>(arg);
if (time_us < mock_clock_->RealNowMicros()) {
*reinterpret_cast<uint64_t*>(arg) =
mock_clock_->RealNowMicros() + 1000;
*static_cast<uint64_t*>(arg) = mock_clock_->RealNowMicros() + 1000;
}
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();


@ -25,8 +25,8 @@ TEST(SliceTest, StringView) {
// Use this to keep track of the cleanups that were actually performed
void Multiplier(void* arg1, void* arg2) {
int* res = reinterpret_cast<int*>(arg1);
int* num = reinterpret_cast<int*>(arg2);
int* res = static_cast<int*>(arg1);
int* num = static_cast<int*>(arg2);
*res *= *num;
}


@ -79,7 +79,7 @@ class SimulatedBackgroundTask {
}
static void DoSimulatedTask(void* arg) {
reinterpret_cast<SimulatedBackgroundTask*>(arg)->Run();
static_cast<SimulatedBackgroundTask*>(arg)->Run();
}
private:


@ -322,7 +322,7 @@ struct BGThreadMetadata {
};
void ThreadPoolImpl::Impl::BGThreadWrapper(void* arg) {
BGThreadMetadata* meta = reinterpret_cast<BGThreadMetadata*>(arg);
BGThreadMetadata* meta = static_cast<BGThreadMetadata*>(arg);
size_t thread_id = meta->thread_id_;
ThreadPoolImpl::Impl* tp = meta->thread_pool_;
#ifdef ROCKSDB_USING_THREAD_STATUS


@ -1761,7 +1761,7 @@ TEST_F(BackupEngineTest, TableFileWithoutDbChecksumCorruptedDuringBackup) {
"BackupEngineImpl::CopyOrCreateFile:CorruptionDuringBackup",
[&](void* data) {
if (data != nullptr) {
Slice* d = reinterpret_cast<Slice*>(data);
Slice* d = static_cast<Slice*>(data);
if (!d->empty()) {
d->remove_suffix(1);
corrupted = true;
@ -1803,7 +1803,7 @@ TEST_F(BackupEngineTest, TableFileWithDbChecksumCorruptedDuringBackup) {
"BackupEngineImpl::CopyOrCreateFile:CorruptionDuringBackup",
[&](void* data) {
if (data != nullptr) {
Slice* d = reinterpret_cast<Slice*>(data);
Slice* d = static_cast<Slice*>(data);
if (!d->empty()) {
d->remove_suffix(1);
}
@ -3999,7 +3999,7 @@ TEST_F(BackupEngineTest, BackgroundThreadCpuPriority) {
std::atomic<CpuPriority> priority(CpuPriority::kNormal);
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
"BackupEngineImpl::Initialize:SetCpuPriority", [&](void* new_priority) {
priority.store(*reinterpret_cast<CpuPriority*>(new_priority));
priority.store(*static_cast<CpuPriority*>(new_priority));
});
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();


@ -118,9 +118,7 @@ class BlobDBTest : public testing::Test {
}
}
BlobDBImpl *blob_db_impl() {
return reinterpret_cast<BlobDBImpl *>(blob_db_);
}
BlobDBImpl *blob_db_impl() { return static_cast<BlobDBImpl *>(blob_db_); }
Status Put(const Slice &key, const Slice &value,
std::map<std::string, std::string> *data = nullptr) {


@ -163,15 +163,15 @@ class HashTableBenchmark {
// Wrapper functions for thread entry
//
static void WriteMain(void* args) {
reinterpret_cast<HashTableBenchmark*>(args)->RunWrite();
static_cast<HashTableBenchmark*>(args)->RunWrite();
}
static void ReadMain(void* args) {
reinterpret_cast<HashTableBenchmark*>(args)->RunRead();
static_cast<HashTableBenchmark*>(args)->RunRead();
}
static void EraseMain(void* args) {
reinterpret_cast<HashTableBenchmark*>(args)->RunErase();
static_cast<HashTableBenchmark*>(args)->RunErase();
}
HashTableImpl<size_t, std::string>* impl_; // Implementation to test


@ -255,7 +255,7 @@ std::shared_ptr<PersistentCacheTier> MakeTieredCache(
#ifdef OS_LINUX
static void UniqueIdCallback(void* arg) {
int* result = reinterpret_cast<int*>(arg);
int* result = static_cast<int*>(arg);
if (*result == -1) {
*result = 0;
}


@ -282,8 +282,7 @@ Status ReplayerImpl::ReadTrace(Trace* trace) {
}
void ReplayerImpl::BackgroundWork(void* arg) {
std::unique_ptr<ReplayerWorkerArg> ra(
reinterpret_cast<ReplayerWorkerArg*>(arg));
std::unique_ptr<ReplayerWorkerArg> ra(static_cast<ReplayerWorkerArg*>(arg));
assert(ra != nullptr);
std::unique_ptr<TraceRecord> record;


@ -69,7 +69,7 @@ class PointLockManagerTest : public testing::Test {
PessimisticTransaction* NewTxn(
TransactionOptions txn_opt = TransactionOptions()) {
Transaction* txn = db_->BeginTransaction(WriteOptions(), txn_opt);
return reinterpret_cast<PessimisticTransaction*>(txn);
return static_cast<PessimisticTransaction*>(txn);
}
protected:


@ -60,7 +60,7 @@ class RangeLockingTest : public ::testing::Test {
PessimisticTransaction* NewTxn(
TransactionOptions txn_opt = TransactionOptions()) {
Transaction* txn = db->BeginTransaction(WriteOptions(), txn_opt);
return reinterpret_cast<PessimisticTransaction*>(txn);
return static_cast<PessimisticTransaction*>(txn);
}
};


@ -158,7 +158,7 @@ void range_buffer::iterator::reset_current_chunk() {
bool range_buffer::iterator::current(record *rec) {
if (_current_chunk_offset < _current_chunk_max) {
const char *buf = reinterpret_cast<const char *>(_current_chunk_base);
const char *buf = static_cast<const char *>(_current_chunk_base);
rec->deserialize(buf + _current_chunk_offset);
_current_rec_size = rec->size();
return true;
@ -221,7 +221,7 @@ void range_buffer::append_range(const DBT *left_key, const DBT *right_key,
bool is_exclusive) {
size_t record_length =
sizeof(record_header) + left_key->size + right_key->size;
char *buf = reinterpret_cast<char *>(_arena.malloc_from_arena(record_length));
char *buf = static_cast<char *>(_arena.malloc_from_arena(record_length));
record_header h;
h.init(left_key, right_key, is_exclusive);
@ -244,7 +244,7 @@ void range_buffer::append_range(const DBT *left_key, const DBT *right_key,
void range_buffer::append_point(const DBT *key, bool is_exclusive) {
size_t record_length = sizeof(record_header) + key->size;
char *buf = reinterpret_cast<char *>(_arena.malloc_from_arena(record_length));
char *buf = static_cast<char *>(_arena.malloc_from_arena(record_length));
record_header h;
h.init(key, nullptr, is_exclusive);

Some files were not shown because too many files have changed in this diff.