rocksdb/util/repeatable_thread_test.cc
Peter Dillinger 54cb9c77d9 Prefer static_cast in place of most reinterpret_cast (#12308)
Summary:
The following are risks associated with pointer-to-pointer reinterpret_cast:
* Can produce the "wrong result" (crash or memory corruption). IIRC, in theory this can happen for any up-cast or down-cast of a non-standard-layout type, though in practice it would only happen in multiple inheritance cases (where the base class pointer might be "inside" the derived object); see the sketch after this list. We don't use multiple inheritance much, but we do use it.
* Can mask useful compiler errors upon code change, including converting between unrelated pointer types that you are expecting to be related, and converting between pointer and scalar types unintentionally.
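
To illustrate the first risk, here is a minimal sketch with hypothetical types (not from the RocksDB codebase) where the two casts disagree under multiple inheritance:

```cpp
#include <cstdio>

// Hypothetical types for illustration; not from the RocksDB codebase.
struct Base1 { int x = 1; };
struct Base2 { int y = 2; };
struct Derived : public Base1, public Base2 {};

int main() {
  Derived d;
  // static_cast knows the class layout and adjusts the pointer so that it
  // really points at the Base2 subobject "inside" d.
  Base2* good = static_cast<Base2*>(&d);
  // reinterpret_cast reuses the address of d unchanged, which is actually
  // the address of the Base1 subobject; dereferencing it is UB and reads
  // the wrong bytes on typical ABIs.
  Base2* bad = reinterpret_cast<Base2*>(&d);
  std::printf("good->y = %d\n", good->y);  // prints 2
  std::printf("bad->y = %d\n", bad->y);    // likely prints 1: wrong result
  return 0;
}
```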

I can only think of some obscure cases where static_cast could be troublesome when it compiles as a replacement:
* Going through `void*` could plausibly cause unnecessary or broken pointer arithmetic. Suppose we have `struct Derived : public Base1, public Base2`. If we have `Derived*` -> `void*` -> `Base2*` -> `Derived*` through reinterpret_casts, this could plausibly work (though technically UB) as long as the `Base2*` is never dereferenced. Switching to static_cast could introduce pointer arithmetic that breaks this round trip.
* Unnecessary (but safe) pointer arithmetic could arise in a case like `Derived*` -> `Base2*` -> `Derived*` where previously the `Base2*` might never have been dereferenced. This could potentially affect performance; see the sketch below.
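
A second sketch (same hypothetical types as above) makes the pointer arithmetic visible: static_cast moves the address to the Base2 subobject, while the reinterpret_cast round trip leaves it untouched:

```cpp
#include <cstdio>

// Same hypothetical layout as the sketch above.
struct Base1 { int a = 0; };
struct Base2 { int b = 0; };
struct Derived : public Base1, public Base2 {};

int main() {
  Derived d;
  void* as_void = &d;
  // static_cast performs pointer arithmetic: the result points at the
  // Base2 subobject, which sits at a nonzero offset inside d.
  Base2* adjusted = static_cast<Base2*>(&d);
  std::printf("&d       = %p\n", static_cast<void*>(&d));
  std::printf("adjusted = %p\n", static_cast<void*>(adjusted));  // differs
  // The reinterpret_cast round trip performs no arithmetic, so the
  // original address survives, but only because the intermediate Base2*
  // is never dereferenced (and even this is technically UB).
  Derived* round_trip =
      reinterpret_cast<Derived*>(reinterpret_cast<Base2*>(as_void));
  std::printf("round_trip = %p\n", static_cast<void*>(round_trip));
  return 0;
}
```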

With some light scripting, I tried replacing pointer-to-pointer reinterpret_casts with static_cast and kept the cases that still compiled. Most occurrences of reinterpret_cast have been changed successfully (except for java/ and third-party/): 294 changed, 257 remain.

A couple of related interventions included here:
* Previously Cache::Handle was not actually derived from in the implementations and was just used as a `void*` stand-in with reinterpret_cast. Now there is an inheritance relationship to allow static_cast (sketched after this list). In theory, this could introduce pointer arithmetic (as described above) but is unlikely without multiple inheritance AND a non-empty Cache::Handle.
* Remove some unnecessary casts to void* as this is allowed to be implicit (for better or worse).
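
A minimal sketch of the Cache::Handle pattern; the names and members here are illustrative stand-ins, not the actual RocksDB Cache interface:

```cpp
// A minimal sketch of the pattern described above; names and members are
// illustrative, not the actual RocksDB Cache interface.
class Cache {
 public:
  // Opaque, empty handle type exposed to callers. With single inheritance
  // and an empty base, static_cast to/from it generates no pointer
  // arithmetic.
  struct Handle {};
};

class CacheImpl : public Cache {
 public:
  // The concrete entry type now derives from Cache::Handle instead of
  // being laundered through reinterpret_cast from an unrelated type.
  struct Entry : public Cache::Handle {
    void* value = nullptr;
  };

  Cache::Handle* Lookup() {
    return &entry_;  // implicit derived-to-base conversion, no cast needed
  }

  void* Value(Cache::Handle* handle) {
    // Down-cast with static_cast: the compiler verifies the relationship,
    // unlike reinterpret_cast, which would accept any pointer type.
    return static_cast<Entry*>(handle)->value;
  }

 private:
  Entry entry_;
};
```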

Most of the remaining reinterpret_casts are for converting to/from raw bytes of objects. We could consider better idioms for these patterns in follow-up work.
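
For instance, one candidate idiom (an assumption about possible follow-up, not something this PR adopts) is a memcpy-based decode in place of dereferencing a reinterpret_cast pointer:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical helper name; shown only to illustrate the idiom.
inline uint64_t DecodeFixed64Sketch(const char* buf) {
  uint64_t result;
  // memcpy avoids the alignment and strict-aliasing UB of
  // *reinterpret_cast<const uint64_t*>(buf), and compilers typically
  // lower it to a single load.
  std::memcpy(&result, buf, sizeof(result));
  return result;
}
```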

I wish there were a way to implement a template variant of static_cast that would only compile if no pointer arithmetic is generated, but as best I can tell, this is not possible. AFAIK the best you could do is a dynamic check that the void* conversion after the static cast is unchanged.
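
That dynamic check could look something like this hypothetical helper (not included in this PR):

```cpp
#include <cassert>

// Hypothetical helper, not part of the PR: statically cast, then assert
// that no pointer adjustment occurred, i.e. the address is unchanged when
// viewed through void*.
template <typename To, typename From>
To* static_cast_no_offset(From* from) {
  To* to = static_cast<To*>(from);
  // Fires in debug builds if static_cast applied pointer arithmetic
  // (e.g. for a multiple-inheritance up-cast or down-cast).
  assert(static_cast<const void*>(to) == static_cast<const void*>(from));
  return to;
}
```

A call site would look like `auto* d = static_cast_no_offset<Derived>(base);`, paying only a debug-build assertion rather than getting a compile-time guarantee.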

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12308

Test Plan: existing tests, CI

Reviewed By: ltamasi

Differential Revision: D53204947

Pulled By: pdillinger

fbshipit-source-id: 9de23e618263b0d5b9820f4e15966876888a16e2
2024-02-07 10:44:11 -08:00


// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

#include "util/repeatable_thread.h"

#include <atomic>
#include <memory>

#include "db/db_test_util.h"
#include "test_util/mock_time_env.h"
#include "test_util/sync_point.h"
#include "test_util/testharness.h"

class RepeatableThreadTest : public testing::Test {
 public:
  RepeatableThreadTest()
      : mock_clock_(std::make_shared<ROCKSDB_NAMESPACE::MockSystemClock>(
            ROCKSDB_NAMESPACE::SystemClock::Default())) {}

 protected:
  std::shared_ptr<ROCKSDB_NAMESPACE::MockSystemClock> mock_clock_;
};

TEST_F(RepeatableThreadTest, TimedTest) {
  constexpr uint64_t kSecond = 1000000;  // 1s = 1000000us
  constexpr int kIteration = 3;
  const auto& clock = ROCKSDB_NAMESPACE::SystemClock::Default();
  ROCKSDB_NAMESPACE::port::Mutex mutex;
  ROCKSDB_NAMESPACE::port::CondVar test_cv(&mutex);
  int count = 0;
  uint64_t prev_time = clock->NowMicros();
  ROCKSDB_NAMESPACE::RepeatableThread thread(
      [&] {
        ROCKSDB_NAMESPACE::MutexLock l(&mutex);
        count++;
        uint64_t now = clock->NowMicros();
        assert(count == 1 || prev_time + 1 * kSecond <= now);
        prev_time = now;
        if (count >= kIteration) {
          test_cv.SignalAll();
        }
      },
      "rt_test", clock.get(), 1 * kSecond);
  // Wait for execution finish.
  {
    ROCKSDB_NAMESPACE::MutexLock l(&mutex);
    while (count < kIteration) {
      test_cv.Wait();
    }
  }
  // Test cancel
  thread.cancel();
}

TEST_F(RepeatableThreadTest, MockEnvTest) {
  constexpr uint64_t kSecond = 1000000;  // 1s = 1000000us
  constexpr int kIteration = 3;
  mock_clock_->SetCurrentTime(0);  // in seconds
  std::atomic<int> count{0};

#if defined(OS_MACOSX) && !defined(NDEBUG)
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->DisableProcessing();
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->ClearAllCallBacks();
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
      "InstrumentedCondVar::TimedWaitInternal", [&](void* arg) {
        // Obtain the current (real) time in seconds and add 1000 extra
        // seconds to ensure that RepeatableThread::wait invokes TimedWait
        // with a time greater than the (real) current time. This is to
        // prevent the TimedWait function from returning immediately
        // without sleeping and releasing the mutex on certain platforms,
        // e.g. OS X. If TimedWait returns immediately, the mutex will not
        // be released, and RepeatableThread::TEST_WaitForRun never has a
        // chance to execute the callback which, in this case, updates the
        // result returned by mock_clock->NowMicros. Consequently,
        // RepeatableThread::wait cannot break out of the loop, causing the
        // test to hang. The extra 1000 seconds is a best-effort approach
        // because there seems to be no reliable and deterministic way to
        // provide the aforementioned guarantee. By the time
        // RepeatableThread::wait is called, there is no guarantee that
        // delay + mock_clock->NowMicros will be greater than the current
        // real time. However, 1000 seconds should be sufficient in most
        // cases.
        uint64_t time_us = *static_cast<uint64_t*>(arg);
        if (time_us < mock_clock_->RealNowMicros()) {
          *static_cast<uint64_t*>(arg) = mock_clock_->RealNowMicros() + 1000;
        }
      });
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
#endif  // OS_MACOSX && !NDEBUG
  ROCKSDB_NAMESPACE::RepeatableThread thread(
      [&] { count++; }, "rt_test", mock_clock_.get(), 1 * kSecond,
      1 * kSecond);
  for (int i = 1; i <= kIteration; i++) {
    // Bump current time
    thread.TEST_WaitForRun([&] { mock_clock_->SetCurrentTime(i); });
  }
  // The test function should be executed exactly kIteration times.
  ASSERT_EQ(kIteration, count.load());
  // Test cancel
  thread.cancel();
}

int main(int argc, char** argv) {
  ROCKSDB_NAMESPACE::port::InstallStackTraceHandler();
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}