Don't treat a failed recover + successful destroy as a successful recover

This code just seems incorrect: as it stands today, it reports a successful restore if RecoverTask fails and DestroyTask then succeeds. This can result in a really annoying bug where Nomad then calls RecoverTask again, probably gets ErrTaskNotFound, and calls DestroyTask once more. I think the only reason this has not been noticed so far is because most drivers, like Docker, will return success; Nomad will then call RecoverTask, get an error (not found), call DestroyTask again, and get an ErrTaskNotFound error.
parent 7f8e285559
commit e247f8806b
@@ -1136,10 +1136,9 @@ func (tr *TaskRunner) restoreHandle(taskHandle *drivers.TaskHandle, net *drivers
 				"error", err, "task_id", taskHandle.Config.ID)
 			}
 
-			return false
 		}
 
-		return true
+		return false
 	}
 
 	// Update driver handle on task runner
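The corrected behavior, where a failed RecoverTask is never reported as a successful restore even when the cleanup DestroyTask succeeds, can be sketched as a small standalone Go program. The reduced driver interface and fakeDriver below are hypothetical stand-ins for Nomad's real driver plugin, and ErrTaskNotFound mirrors drivers.ErrTaskNotFound:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrTaskNotFound stands in for Nomad's drivers.ErrTaskNotFound.
var ErrTaskNotFound = errors.New("task not found for given id")

// driver is a hypothetical subset of the driver plugin interface,
// reduced to the two calls restoreHandle uses.
type driver interface {
	RecoverTask(id string) error
	DestroyTask(id string, force bool) error
}

// restoreHandle sketches the fixed logic: after a failed RecoverTask,
// DestroyTask is only best-effort cleanup, and the function falls
// through to false instead of returning true when the destroy succeeds.
func restoreHandle(d driver, id string) (success bool) {
	if err := d.RecoverTask(id); err != nil {
		// Ignore ErrTaskNotFound: it just means there is nothing
		// left in the plugin to destroy.
		if derr := d.DestroyTask(id, true); derr != nil && !errors.Is(derr, ErrTaskNotFound) {
			fmt.Println("error destroying unrecoverable task:", derr)
		}
		return false
	}
	return true
}

// fakeDriver fails recovery but destroys cleanly — the exact sequence
// the old code mis-reported as a successful restore.
type fakeDriver struct{ destroyed bool }

func (f *fakeDriver) RecoverTask(id string) error { return errors.New("handle cannot be recovered") }
func (f *fakeDriver) DestroyTask(id string, force bool) error {
	f.destroyed = true
	return nil
}

func main() {
	d := &fakeDriver{}
	ok := restoreHandle(d, "task-1")
	fmt.Printf("restored=%v destroyed=%v\n", ok, d.destroyed)
}
```

With the old `return true` in place, the caller would treat this handle as live and later call RecoverTask on it again, hitting the not-found/destroy loop described in the commit message; returning false instead lets the task runner start the task fresh.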