open-vault/vault/ha_test.go
Seth Bunce a6a437a1ce
fix deadlock on core state lock (#10456)
* fix race that can cause deadlock on core state lock

The bug is in the grabLockOrStop function. Under certain interleavings
of concurrent executions, grabLockOrStop can return stopped=true while
the lock is still held. A comment in grabLockOrStop indicates that the
function is only used while the stateLock is held, but grabLockOrStop
is also used to acquire the stateLock. When multiple goroutines use
grabLockOrStop concurrently, some interleavings return stopped=true
even though the lock was acquired; the caller then never releases the
lock, which deadlocks the core state lock.

The fix adds a lock and some shared state through which the parent and
child goroutines in grabLockOrStop coordinate, so that every
interleaving is handled correctly.
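A minimal sketch of that coordination (illustrative only; helper names
such as parentWaiting, locked, and doneCh are assumptions, not the
literal code from this change):

func grabLockOrStop(lockFunc, unlockFunc func(), stopCh chan struct{}) (stopped bool) {
	// l protects state shared by the parent and the child goroutine.
	var l sync.Mutex
	parentWaiting := true
	locked := false

	// doneCh is closed when the child goroutine exits.
	doneCh := make(chan struct{})
	go func() {
		defer close(doneCh)
		lockFunc()
		// The parent may have already returned stopped=true. In that case
		// the child releases the lock itself; otherwise it records that
		// the lock is held so the parent knows the caller owns it.
		l.Lock()
		defer l.Unlock()
		if !parentWaiting {
			unlockFunc()
		} else {
			locked = true
		}
	}()

	select {
	case <-doneCh:
		return false
	case <-stopCh:
	}

	// stopCh fired first. Tell the child the parent is no longer waiting,
	// and if the child already acquired the lock, release it before
	// reporting stopped=true.
	l.Lock()
	defer l.Unlock()
	parentWaiting = false
	if locked {
		unlockFunc()
	}
	return true
}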

This change includes a non-deterministic unit test which reliably
reproduces the problem before the fix.

* use rand instead of time for random test stopCh close

Using time.Now().UnixNano()%2 ends up being system dependent because
different operating systems and hardware have different clock
resolutions. A lower-resolution clock returns the same Unix time for a
longer period, so consecutive iterations keep taking the same branch
instead of closing the stop channel roughly half the time.

It is better to avoid this issue by using a random number generator.
This change uses the math/rand package's default generator. The default
generator is generally best avoided because it adds extra lock
contention, but for a test it is fine.
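
As a side-by-side illustration (the "before" lines are reconstructed
from the description above, not copied from the earlier diff):

// Before: clock-resolution dependent; on a coarse clock, consecutive
// calls see the same nanosecond value, so this is not a fair coin flip.
if time.Now().UnixNano()%2 == 0 {
	close(stop)
}

// After: the default math/rand generator, which is fine for a test.
if rand.Int()%2 == 0 {
	close(stop)
}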
2020-12-10 06:50:11 -05:00


package vault

import (
	"fmt"
	"math/rand"
	"sync"
	"sync/atomic"
	"testing"
	"time"
)

// TestGrabLockOrStop is a non-deterministic test to detect deadlocks in the
// grabLockOrStop function. This test starts a bunch of workers which
// continually lock/unlock and rlock/runlock the same RWMutex. Each worker also
// starts a goroutine which closes the stop channel 1/2 the time, which races
// with acquisition of the lock.
func TestGrabLockOrStop(t *testing.T) {
	// Stop the test early if we deadlock.
	const (
		workers      = 100
		testDuration = time.Second
		testTimeout  = 10 * testDuration
	)
	done := make(chan struct{})
	defer close(done)
	var lockCount int64
	go func() {
		select {
		case <-done:
		case <-time.After(testTimeout):
			panic(fmt.Sprintf("deadlock after %d lock count",
				atomic.LoadInt64(&lockCount)))
		}
	}()

	// lock is locked/unlocked and rlocked/runlocked concurrently.
	var lock sync.RWMutex
	start := time.Now()

	// workerWg is used to wait until all workers exit.
	var workerWg sync.WaitGroup
	workerWg.Add(workers)

	// Start a bunch of worker goroutines.
	for g := 0; g < workers; g++ {
		g := g
		go func() {
			defer workerWg.Done()
			for time.Now().Sub(start) < testDuration {
				stop := make(chan struct{})

				// closerWg waits until the closer goroutine exits before we do
				// another iteration. This makes sure goroutines don't pile up.
				var closerWg sync.WaitGroup
				closerWg.Add(1)
				go func() {
					defer closerWg.Done()
					// Close the stop channel half the time.
					if rand.Int()%2 == 0 {
						close(stop)
					}
				}()

				// Half the goroutines lock/unlock and the other half rlock/runlock.
				if g%2 == 0 {
					if !grabLockOrStop(lock.Lock, lock.Unlock, stop) {
						lock.Unlock()
					}
				} else {
					if !grabLockOrStop(lock.RLock, lock.RUnlock, stop) {
						lock.RUnlock()
					}
				}
				closerWg.Wait()

				// This lets us know how many lock/unlock and rlock/runlock have
				// happened if there's a deadlock.
				atomic.AddInt64(&lockCount, 1)
			}
		}()
	}
	workerWg.Wait()
}