When starting an allocation that is preempting other allocs, we create a
new group allocation watcher, and then wait for the preempted
allocations to terminate in the allocation PreRun hooks.
If there are no preempted allocations, we simply provide a
NoopAllocWatcher.
The Group Alloc watcher is an implementation of a PrevAllocWatcher that
can wait for multiple previous allocs to terminate.
This is to be used when running an allocation that is preempting upstream
allocations, and thus only supports being run with a local alloc watcher.
It also currently requires all of its child watchers to correctly handle
context cancellation. Should this become a problem, it would be fairly
easy to implement a replacement using channels rather than a waitgroup.
It obeys the PrevAllocWatcher interface for convenience, but it may be
better to extract migration capabilities into a separate interface for
greater clarity.
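A minimal sketch of the waitgroup-based shape described above; the
interface and names here are illustrative, not the actual Nomad client
code:

```go
package watcher

import (
	"context"
	"sync"
)

// prevAllocWatcher stands in for Nomad's PrevAllocWatcher interface;
// only the waiting half matters for this sketch.
type prevAllocWatcher interface {
	Wait(context.Context) error
}

// groupAllocWatcher fans out to several child watchers and returns
// once all of them have seen their alloc terminate.
type groupAllocWatcher struct {
	children []prevAllocWatcher
}

func (g *groupAllocWatcher) Wait(ctx context.Context) error {
	var wg sync.WaitGroup
	for _, w := range g.children {
		wg.Add(1)
		go func(w prevAllocWatcher) {
			defer wg.Done()
			// Each child must honor ctx cancellation, or this
			// Wait blocks forever: the caveat noted above.
			_ = w.Wait(ctx)
		}(w)
	}
	wg.Wait()
	return ctx.Err()
}
```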
As of now, FileRotator uses a bufio.Writer under the hood to write data
to the configured output file. Because of the way bufio handles an I/O
error, saving it into an `err` field and never resetting it
automatically, every subsequent operation like `Write`, `Flush`, etc.
becomes a no-op that returns the very same saved error (e.g. out of
disk space) even after the problem is fixed (e.g. disk space is
available again).
That means FileRotator stops writing any logs, reporting the same error
over and over again, even if it's no longer valid.
This PR fixes it by resetting the bufio.Writer, which clears the saved
error, and retrying the write.
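A minimal sketch of the reset-and-retry idea; `writeWithReset` is an
illustrative name, not FileRotator's actual method:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

// writeWithReset retries a failed buffered write after clearing the
// sticky error: bufio.Writer latches the first error it sees, so
// Reset rebinds the buffer to the (possibly recovered) file.
func writeWithReset(w *bufio.Writer, f *os.File, p []byte) error {
	if _, err := w.Write(p); err != nil {
		w.Reset(f)
		if _, err := w.Write(p); err != nil {
			return err
		}
	}
	return w.Flush()
}

func main() {
	f, _ := os.CreateTemp("", "rotator")
	defer os.Remove(f.Name())
	w := bufio.NewWriter(f)
	fmt.Println(writeWithReset(w, f, []byte("log line\n")))
}
```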
Also, LXC requires target paths to be relative. Container paths in LXC
binds should never be absolute paths, so we strip any leading `/`,
even if a user sets one.
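For illustration, stripping the leading slashes can be as simple as the
following hypothetical snippet (not the driver's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// LXC bind-mount targets must be relative, so drop any
	// leading slashes from the user-supplied container path.
	containerPath := "/var/log/app"
	fmt.Println(strings.TrimLeft(containerPath, "/")) // var/log/app
}
```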
The previous integration test was broken during the client refactor, and
it seems to be some sort of race with state updating.
I'm going to try to construct a replacement test as part of work on
performance, but for now, the underlying behaviour is still being
tested.
This PR fixes an edge case where we could GC an allocation that was in a
desired stop state but had not terminated yet. This can be hit if the
client hasn't shut down the allocation yet or if the allocation is still
shutting down (long kill_timeout).
Fixes https://github.com/hashicorp/nomad/issues/4940
* Add Nomad RA
* Add deployment guide and nav
* Deployment Guide update
* Minor typo fixes
* Update diagrams
* Fixes for review
* Link fixes and typo fix
* Edits following review
- Update image text from "zone" to "datacenter" to match Nomad terminology
- Clean up text based on Preetha's feedback
* Text updates
Based on feedback from Rob
* Update diagrams
* Fix spelling
* Add suggestions from Preetha and Omar
WaitForResult expects the body to fail and retries a few times before
giving up. Assertions inside the testfn body cause it to terminate
abruptly without retrying.
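A hedged sketch of the intended pattern: return an error from the
testfn so WaitForResult can retry, and only fail the test from the
error callback. `currentValue` is a hypothetical accessor standing in
for whatever the test polls:

```go
package example

import (
	"fmt"
	"testing"

	"github.com/hashicorp/nomad/testutil"
)

func TestEventuallyConverges(t *testing.T) {
	testutil.WaitForResult(func() (bool, error) {
		// Return (false, err) so the body is retried, rather
		// than calling t.Fatalf here, which would abort the
		// test before any retry happens.
		if v := currentValue(); v != 42 {
			return false, fmt.Errorf("expected 42, got %d", v)
		}
		return true, nil
	}, func(err error) {
		t.Fatalf("timed out: %v", err)
	})
}

func currentValue() int { return 42 }
```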
The `currentExpiration` field is accessed from multiple goroutines
(Stats and renewal), so it needs locking.
I don't anticipate high contention, so a simple mutex suffices.
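A minimal sketch of the locking, with hypothetical type and method
names:

```go
package vaultclient

import (
	"sync"
	"time"
)

type client struct {
	lock sync.Mutex
	// currentExpiration is read by Stats and written by the
	// renewal loop, so every access goes through lock.
	currentExpiration time.Time
}

func (c *client) extendExpiration(ttl time.Duration) {
	c.lock.Lock()
	defer c.lock.Unlock()
	c.currentExpiration = time.Now().Add(ttl)
}

func (c *client) expiration() time.Time {
	c.lock.Lock()
	defer c.lock.Unlock()
	return c.currentExpiration
}
```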
Some tests have containers that die almost immediately, and may die and
be cleaned up before `driver.WaitUntilStarted` runs.
The cause of the container dying seems specific to each test:
* TestDockerDriver_Cleanup: the `hello-world` image just emits a message and exits immediately
* TestDockerDriver_ForcePull_RepoDigest: the busybox image used in the test didn't support the `-p 0` argument
* TestDockerDriver_Entrypoint: with the entrypoint being `/bin/sh -c`, the command needs to be the entire string; otherwise `sh` ignores everything after the first argument
Currently, the libcontainer-based executor, upon shutdown, kills only
the container's initial process. The children of the killed process
remain running, and the executor is never marked as terminated until
they exit.
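One common way to avoid orphaning the children is to signal the whole
process group rather than only the initial process. A hedged,
Linux-only sketch of that idea, not necessarily how the executor
implements it:

```go
package main

import (
	"os/exec"
	"syscall"
	"time"
)

func main() {
	// Start a child that itself spawns descendants.
	cmd := exec.Command("sh", "-c", "sleep 30 & sleep 30")
	// Put the child in its own process group so a signal sent to
	// the negative PGID reaches its descendants too.
	cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(100 * time.Millisecond)
	// Note the negative PID: this kills the entire group, so no
	// children are left running after shutdown.
	_ = syscall.Kill(-cmd.Process.Pid, syscall.SIGKILL)
	_ = cmd.Wait()
}
```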
Also, fix a case where we treated processes as successful when
`proc.Wait()` fails. In some attempts I was getting "waitid: no child
processes" errors, and such an error shouldn't cause the process to be
considered successful.
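A sketch of the corrected Wait handling, using a hypothetical result
type: when Wait itself fails, the real exit status is unknown, so we
report a non-zero code alongside the error instead of defaulting to
success:

```go
package main

import (
	"fmt"
	"os/exec"
)

// exitResult is an illustrative result type, not the executor's.
type exitResult struct {
	ExitCode int
	Err      error
}

func waitOnProcess(cmd *exec.Cmd) exitResult {
	if err := cmd.Wait(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			// The process ran and exited non-zero.
			return exitResult{ExitCode: ee.ExitCode()}
		}
		// Wait failed (e.g. "waitid: no child processes");
		// don't treat this as a successful exit.
		return exitResult{ExitCode: -1, Err: err}
	}
	return exitResult{ExitCode: 0}
}

func main() {
	cmd := exec.Command("true")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", waitOnProcess(cmd))
}
```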