The rkt port mapping test currently starts redis with --version, which
makes redis exit almost immediately. This means that the container
exits before the network status can be queried, and so the test fails.
The network status poll loop for the rkt driver's `Start` method was a
bit messy, and could not display the last encountered error. Here we
clean it up.
The changes introduced in #3256 require at least rkt 1.27.0 because of
a bug in the JSON output of `rkt status` in previous versions.
Here we upgrade all references to rkt's minimum version, and also make
Travis and Vagrant use this version when running tests.
Finally we add a CHANGELOG notice.
If the rkt driver cannot get the network status for a task with a
configured port mapping, it will now fail the Start() call and kill the
task instead of simply logging a warning. This matches the Docker
driver's behavior.
If no port map is specified, the warnings will be logged but the task
will be allowed to start.
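A minimal sketch of that branch, with hypothetical names (`waitForNetwork`, `DriverNetwork`) standing in for the driver's internals:

```go
// Sketch only: the point is the branch on whether a port map is configured.
package rktsketch

import (
	"fmt"
	"log"
)

type DriverNetwork struct {
	IP      string
	PortMap map[string]int
}

func driverNetworkOrFail(portMap map[string]string, waitForNetwork func() (*DriverNetwork, error)) (*DriverNetwork, error) {
	net, err := waitForNetwork()
	if err != nil {
		if len(portMap) > 0 {
			// A configured port mapping cannot work without network
			// info, so Start() fails and the caller kills the pod.
			return nil, fmt.Errorf("could not determine pod network for port mapping: %v", err)
		}
		// No port map requested: log the problem but let the task start.
		log.Printf("[WARN] driver.rkt: could not determine pod network: %v", err)
		return nil, nil
	}
	return net, nil
}
```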
To test that the rkt driver correctly sets a DriverNetwork, at least
when a port mapping is requested, we amend the
TestRktDriver_PortsMapping test with a small check.
Currently the rkt driver does not expose a DriverNetwork instance after
starting the container, which means that address_mode = 'driver' does
not work.
To get the container network information, we can call `rkt status` on
the UUID of the container and grab the container IP from there.
For the port map, we need to grab the pod manifest as it will tell us
which ports the container exposes. We then cross-reference the
configured port name with the container port names, and use that to
create a correct port mapping.
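A hedged sketch of the IP lookup half of this: it shells out to `rkt status --format=json <uuid>` and pulls an address out of the result. The struct fields below are illustrative only; the real driver unmarshals into the vendored rkt/appc types.

```go
// Illustrative sketch of grabbing the pod IP from `rkt status`.
package rktsketch

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podStatus mirrors only the fields this sketch needs; field names are
// assumptions, not the exact rkt schema.
type podStatus struct {
	Networks []struct {
		NetName string `json:"netName"`
		IP      string `json:"ip"`
	} `json:"networks"`
}

func podIP(uuid string) (string, error) {
	out, err := exec.Command("rkt", "status", "--format=json", uuid).Output()
	if err != nil {
		return "", fmt.Errorf("rkt status failed: %v", err)
	}
	var st podStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return "", fmt.Errorf("could not parse rkt status output: %v", err)
	}
	if len(st.Networks) == 0 {
		return "", fmt.Errorf("pod %s has no network yet", uuid)
	}
	return st.Networks[0].IP, nil
}
```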
To avoid doing a (bad) reimplementation of the appc schema (which rkt
uses for its manifest) and the rkt APIs, we pull those in as vendored
dependencies. The versions used are the same ones that rkt uses in its
glide dependency configuration for version 1.28.0.
The rkt driver currently executes run and asks that the pod UUID is
written to a file that is then polled for changes for up to five
seconds. Many container fetches will take longer than this, so this
method will often not be able to track the pod UUID reliably.
To avoid this problem, rkt allows pods to be first prepared, which will
return their UUID, and then run as a second invocation.
Here we convert the rkt driver's Start method to use this method
instead. This way, the UUID will always be tracked correctly.
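A sketch of the two-step invocation, assuming the standard rkt subcommands: `rkt prepare` prints the pod UUID on stdout, and `rkt run-prepared <uuid>` then starts it, so the UUID is known before the pod ever runs. The helper name is illustrative.

```go
// Illustrative prepare/run-prepared flow; error handling trimmed for brevity.
package rktsketch

import (
	"fmt"
	"os/exec"
	"strings"
)

func prepareAndRun(image string, extraArgs ...string) (string, error) {
	prepareArgs := append([]string{"prepare", image}, extraArgs...)
	out, err := exec.Command("rkt", prepareArgs...).Output()
	if err != nil {
		return "", fmt.Errorf("rkt prepare failed: %v", err)
	}
	uuid := strings.TrimSpace(string(out))

	// The UUID is tracked from here on; run-prepared can take as long as
	// the image fetch needs without the driver losing track of the pod.
	cmd := exec.Command("rkt", "run-prepared", uuid)
	if err := cmd.Start(); err != nil {
		return "", fmt.Errorf("rkt run-prepared failed: %v", err)
	}
	return uuid, nil
}
```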
@dadgar made the excellent observation in #3105 that TaskRunner removes
and re-registers checks on restarts. This means checkWatcher doesn't
need to do *any* internal restart tracking. Individual checks can just
remove themselves and be re-added when the task restarts.
This PR removes locking around commonly accessed node attributes that do
not need to be locked. The locking could cause nodes to TTL as the
heartbeat code path was acquiring a lock that could be held for an
excessively long time. An example of this is when Vault is inaccessible,
since the fingerprint is run with a lock held but the Vault
fingerprinter makes the API calls with a large timeout.
Fixes https://github.com/hashicorp/nomad/issues/2689
This PR fixes an issue introduced in Nomad 0.6.0 due to
https://github.com/shirou/gopsutil/issues/420. The issue arose from the
fact that the Windows stats from gopsutil report CPU usage in
percentages where we expected ticks.
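A rough sketch of the conversion involved, not the exact fix: if gopsutil reports a percentage of total CPU capacity, the equivalent tick count (MHz consumed) can be derived from the node's total compute.

```go
// percentToTicks converts a CPU usage percentage into MHz of compute,
// given the total MHz available on the node (cores * clock speed).
package statsketch

func percentToTicks(percent, totalMHz float64) float64 {
	return (percent / 100.0) * totalMHz
}
```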
* Parse Docker mounts correctly
This PR fixes the parsing of Docker mounts and adds testing to ensure no
regressions.
Fixes https://github.com/hashicorp/nomad/issues/3156
* Review feedback
This PR changes the fingerprint handling of network interfaces that only
contain link local addresses. The new behavior is to prefer globally
routable addresses and if none are detected, to fall back to link local
addresses if the operator hasn't disallowed it. This gives us pre 0.6
behavior for interfaces with only link local addresses but 0.6+ behavior
for IPv6 interfaces that will always have a link-local address.
Fixes https://github.com/hashicorp/nomad/issues/3005
/cc diptanuc
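A sketch of the selection rule using the stdlib net package: prefer globally routable addresses and only fall back to link-local ones when the operator has not disallowed them. The function name and signature are placeholders, not the fingerprinter's real API.

```go
// Illustrative address selection: globally routable first, link-local as a
// fallback when allowed.
package fingerprintsketch

import "net"

func pickAddrs(addrs []net.IP, allowLinkLocal bool) []net.IP {
	var global, linkLocal []net.IP
	for _, ip := range addrs {
		if ip.IsLinkLocalUnicast() || ip.IsLinkLocalMulticast() {
			linkLocal = append(linkLocal, ip)
			continue
		}
		global = append(global, ip)
	}
	if len(global) > 0 {
		return global
	}
	if allowLinkLocal {
		return linkLocal
	}
	return nil
}
```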
interface has 3 implementations:
1. local for blocking and moving data locally
2. remote for blocking and moving data from another node
3. noop for allocs that don't need to block
This PR does the following:
* Adds a mechanism to emit events in the TaskRunner
* Vendors a new version of Consul-Template that allows extraction of
missing dependencies
* Adds logic to our consul_template.go to determine missing dependencies and
emit them in a batched fashion.
* Refactors the consul_template code to split the run method and take in
a config struct rather than many parameters.
Fixes https://github.com/hashicorp/nomad/issues/2578
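An illustration of the batching idea: missing Consul-Template dependencies that accumulate within a short window are emitted as a single task event rather than one event per dependency. The channel, window, and emit callback below are hypothetical, not Nomad's API.

```go
// Illustrative batched emission of missing template dependencies.
package templatesketch

import (
	"fmt"
	"sort"
	"strings"
	"time"
)

func watchMissing(missingCh <-chan string, emit func(msg string)) {
	const batchWindow = 3 * time.Second

	pending := map[string]struct{}{}
	ticker := time.NewTicker(batchWindow)
	defer ticker.Stop()

	for {
		select {
		case dep, ok := <-missingCh:
			if !ok {
				return
			}
			pending[dep] = struct{}{}
		case <-ticker.C:
			if len(pending) == 0 {
				continue
			}
			deps := make([]string, 0, len(pending))
			for d := range pending {
				deps = append(deps, d)
			}
			sort.Strings(deps)
			emit(fmt.Sprintf("Missing: %s", strings.Join(deps, ", ")))
			pending = map[string]struct{}{}
		}
	}
}
```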
Fixes an issue in which the allocation health watcher was checking
allocation health based on un-interpolated services and checks. Change
the interface for retrieving check information from Consul to retrieving
all registered services and checks by allocation. In the future this
will allow us to output nicer messages.
Fixes https://github.com/hashicorp/nomad/issues/2969
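A hypothetical shape of the new lookup, for illustration only: instead of asking Consul about individual (un-interpolated) checks, the health watcher asks for everything that was actually registered on behalf of an allocation.

```go
// Stand-in types sketching a per-allocation view of registered services and
// checks; not the real Nomad/Consul client interface.
package consulsketch

// AllocRegistration captures what was registered in Consul for one
// allocation, with interpolation already applied.
type AllocRegistration struct {
	Services map[string][]string // service ID -> check IDs
}

// ChecksAPI is the narrowed interface the health watcher would need.
type ChecksAPI interface {
	// AllocRegistrations returns all services and checks registered for
	// the given allocation ID.
	AllocRegistrations(allocID string) (*AllocRegistration, error)
}
```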
* alloc_runner
* Random tests
* parallel task_runner and no exec compatible check
* Parallel client
* Fail fast and use random ports
* Fix docker port mapping
* Make concurrent pull less timing dependent
* up parallel
* Fixes
* don't build chroots in parallel on travis
* Reduce parallelism on travis with lxc/rkt
* make java test app not run forever
* drop parallelism a little
* use docker ports that are out of the os's ephemeral port range
* Limit even more on travis
* rkt deadline
When replacing an alloc the new alloc is blocked until the old alloc is
destroyed. This could cause a deadlock:
1. Destroying the old alloc includes a final sync of its status
2. Syncing status causes a GC
3. A GC looks for terminal allocs to cleanup
4. The GC waits for an alloc to stop completely before GC'ing
If the GC chooses the currently-being-destroyed-alloc to GC, the GC
deadlocks. If `client.max_parallel` deadlocks happen the GC is wedged
until the Nomad process is restarted.
Performing the final sync asynchronously is an ugly hack but prevents
the deadlock by allowing the final sync to occur after the alloc runner
has shutdown and been destroyed.
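A sketch of the workaround, with hypothetical names: the final status sync is moved off the destroy path so that a GC triggered by the sync can never wait on the alloc runner that is currently being destroyed.

```go
// Illustrative only: destroy() returns without waiting on the final sync.
package allocsketch

type allocRunner struct {
	syncStatus func() error
	logf       func(format string, args ...interface{})
}

func (r *allocRunner) destroy() {
	// ... stop tasks, clean up the alloc dir, etc.

	// The final sync happens asynchronously so destroy() can return and
	// the runner can be fully torn down without holding up the GC.
	go func() {
		if err := r.syncStatus(); err != nil {
			r.logf("[ERR] client: final status sync failed: %v", err)
		}
	}()
}
```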
Since the AllocRunner.alloc struct can be mutated, most of AllocRunner
needs to acquire a lock to get the alloc's ID. Log lines always need to
include the alloc ID, so we often skipped acquiring a lock just to grab
the ID and accepted the race.
Let's make the race detector a little happier by storing the ID in a
single assignment field.
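A sketch of the pattern (the real struct has many more fields): the alloc ID is copied into its own field exactly once at construction time, so log statements can read it without taking the lock that guards the mutable alloc.

```go
// Illustrative write-once ID field alongside the locked, mutable alloc.
package allocsketch

import "sync"

type Allocation struct{ ID string }

type AllocRunner struct {
	// allocID is written once in NewAllocRunner and never mutated, so it
	// is safe to read without holding allocLock.
	allocID string

	allocLock sync.Mutex
	alloc     *Allocation // may be replaced under allocLock
}

func NewAllocRunner(alloc *Allocation) *AllocRunner {
	return &AllocRunner{
		allocID: alloc.ID,
		alloc:   alloc,
	}
}
```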
Task dir metadata is created in AllocRunner.Run which may not run before
an alloc is sync'd and Nomad exits. There's no reason not to just create
task dir metadata on restore if it doesn't exist.
Fixes #2835
Yet another bug caused by overwriting container and then trying to
reference container.ID in the err handling block. Did a quick audit of
docker.go and it seems to be the last offender. See #2804 for previous
bug.
Fixes #2802
While it's hard to reproduce, the theoretical race is:
1. This goroutine calls ListContainers()
2. Another goroutine removes a container X
3. This goroutine attempts to InspectContainer(X)
However, this bug could be hit in the much simpler case of
InspectContainer() timing out.
In those cases an error is returned and the old code attempted to wrap
the error with the now-nil container.ID. Storing the container ID fixes
that panic.
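A sketch of the pattern, using go-dockerclient; the function itself is illustrative, not the driver's code. The ID is captured before the call that can fail, so the error path never dereferences a container value that was overwritten or left nil.

```go
// Illustrative: keep the container ID in a local before inspecting.
package dockersketch

import (
	"fmt"

	docker "github.com/fsouza/go-dockerclient"
)

func inspect(client *docker.Client, container *docker.Container) (*docker.Container, error) {
	containerID := container.ID // safe copy for error reporting

	inspected, err := client.InspectContainer(containerID)
	if err != nil {
		// containerID is still valid here even though `inspected` is nil.
		return nil, fmt.Errorf("failed to inspect container %s: %v", containerID, err)
	}
	return inspected, nil
}
```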
This PR adds watching of allocation health at the client. The client can
watch for health based on the tasks running for the required time and
also based on the Consul checks passing.
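An illustration of the two signals being combined; the names are placeholders for the client's real health watcher, and the minimum-run-time and checks toggles correspond to the job's update configuration.

```go
// Illustrative health decision combining task run time and Consul checks.
package healthsketch

import "time"

type healthInputs struct {
	tasksHealthyFor time.Duration // how long all tasks have been running
	checksPassing   bool          // whether all Consul checks are passing
}

// healthy reports allocation health once tasks have been running for the
// configured minimum time and, if checks are used, those checks are passing.
func healthy(in healthInputs, minHealthyTime time.Duration, useChecks bool) bool {
	if in.tasksHealthyFor < minHealthyTime {
		return false
	}
	if useChecks && !in.checksPassing {
		return false
	}
	return true
}
```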