This PR fixes:
* An issue in which draining a node that contains a completed batch alloc would cause a replacement to be created
* An issue in which allocations sharing the same name wouldn't be stopped properly during a scale-down/stop event.
* An issue in which batch allocations from previous job versions may not
have been stopped properly.
Fixes https://github.com/hashicorp/nomad/issues/3210
This PR makes new allocation placements count toward the limit. The limit does not restrict how many new placements are made, but placements still consume it. This has the nice effect that if you have a group with count = 5 and max_parallel = 1 but only 3 allocs exist for it and a change is made, you will create 2 more at the new version without also destroying one of the existing allocs, which previously would have temporarily taken you down to two running.
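Roughly, the new accounting can be pictured like this; a minimal sketch only, where `applyUpdates` and its parameters are hypothetical names, not Nomad's actual reconciler code:

```
// Sketch: placements are never blocked by the limit, but they still
// consume it, so destructive updates only get whatever budget remains.
package main

import "fmt"

// applyUpdates is a hypothetical helper illustrating the accounting.
func applyUpdates(maxParallel, missing, destructiveNeeded int) (placed, destroyed int) {
	limit := maxParallel

	// Place everything that is missing, regardless of the limit...
	placed = missing
	// ...but count the placements against it.
	limit -= placed
	if limit < 0 {
		limit = 0
	}

	// Destructive updates only proceed with the remaining budget.
	destroyed = destructiveNeeded
	if destroyed > limit {
		destroyed = limit
	}
	return placed, destroyed
}

func main() {
	// count = 5, max_parallel = 1, only 3 allocs exist and the job changed:
	// 2 placements are made and no existing alloc is destroyed this pass.
	placed, destroyed := applyUpdates(1, 2, 3)
	fmt.Printf("placed=%d destroyed=%d\n", placed, destroyed) // placed=2 destroyed=0
}
```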
Fixes https://github.com/hashicorp/nomad/issues/3053
This PR allows the scheduler to replace lost allocations even if the job
has a failed or paused deployment. The prior behavior was confusing to
users.
Fixes https://github.com/hashicorp/nomad/issues/2958
This PR enhances the distinct_property constraint such that a limit can
be specified in the RTarget/value parameter. This allows constraints
such as:
```
constraint {
  distinct_property = "${meta.rack}"
  value             = "2"
}
```
This restricts any given rack from running more than 2 allocations from
the task group.
Fixes https://github.com/hashicorp/nomad/issues/1146
This drops the testing stdlib pkg from our dependencies. Saves a
whopping 46kb on our binary (was really hoping for more of a win there),
but it also avoids potential ugliness with how testing sets flags.
This PR resolves a bug in which a job with multiple task groups would
create a new deployment object for each task group, clearing out the
deployment state of every other task group.
This PR updates the generic scheduler to handle destructive changes
before handling placements. This is important because the destructive
change may be due to a lowering of resources; in that case, handling the
destructive changes first can free enough room for the placements to
succeed.
To reason about this, imagine there is one node with CPU = 500.
If the group originally had:
* `count = 1`
* `cpu = 400`
And then the job was updated such that the group had:
* `count = 4`
* `cpu = 120`
If the original alloc isn't discounted first, nothing can be placed.
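A standalone sketch of the arithmetic, using the numbers above (illustration only, not scheduler code):

```
package main

import "fmt"

func main() {
	const nodeCPU = 500     // single node in the example
	const oldAllocCPU = 400 // original group: count = 1, cpu = 400
	const newAllocCPU = 120 // updated group:  count = 4, cpu = 120
	const newCount = 4

	// Placements first: the old alloc still reserves 400, so only 100 is
	// free and no 120-CPU allocation fits.
	fmt.Println("free before the destructive update:", nodeCPU-oldAllocCPU) // 100

	// Destructive change first: the 400 is released, leaving the full 500,
	// which is enough for all four allocations at the new size.
	fmt.Println("needed at the new size:", newCount*newAllocCPU) // 480
}
```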
This PR fixes the rolling update limit calculation to avoid a panic when
a deployment has more allocations that haven't yet determined their
health than the task group's max_parallel count.
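The guard is roughly of this shape; a sketch with assumed names (`rollingUpdateLimit`, `undeterminedHealth`), not the actual deployment reconciler code:

```
package scheduler

// rollingUpdateLimit is a hypothetical helper: it computes how many more
// destructive updates may proceed given max_parallel and the number of
// allocations whose health is still undetermined.
func rollingUpdateLimit(maxParallel, undeterminedHealth int) int {
	limit := maxParallel - undeterminedHealth
	if limit < 0 {
		// Clamp at zero: a negative value used later as a count or slice
		// bound is the kind of thing that panics (the exact failure in the
		// real code may differ).
		return 0
	}
	return limit
}
```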
Fixes https://github.com/hashicorp/nomad/issues/2820
This PR adds watching of allocation health at the client. The client can
determine health based on the tasks staying running for the required
amount of time and also based on the Consul checks passing.
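In spirit the watcher looks something like the sketch below; `healthSource`, `TasksRunningFor`, and `ChecksPassing` are assumed names for illustration, not the actual client types:

```
package health

import (
	"context"
	"time"
)

// healthSource is a hypothetical view of what the client can observe.
type healthSource interface {
	TasksRunningFor(d time.Duration) bool // have all tasks been running this long?
	ChecksPassing() bool                  // are the registered Consul checks passing?
}

// watchHealth reports on the channel once the allocation looks healthy:
// tasks have been running for the minimum time and, when enabled, the
// Consul checks are passing.
func watchHealth(ctx context.Context, src healthSource, minHealthyTime time.Duration, useChecks bool, healthy chan<- bool) {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if !src.TasksRunningFor(minHealthyTime) {
				continue
			}
			if useChecks && !src.ChecksPassing() {
				continue
			}
			healthy <- true
			return
		}
	}
}
```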
This PR skips adding in-place updates for successfully completed,
terminal batch allocations to the plan. This avoids extra data in the
plan and avoids triggering updates on every client that has such a
terminal allocation, matching the behavior of the service scheduler.
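The check amounts to something like this; a sketch with assumed names, since the real scheduler works on its own alloc structs:

```
package scheduler

// skipInplaceUpdate is a hypothetical predicate: a batch allocation that
// already completed successfully is terminal, so pushing it to the plan as
// an in-place update would only bloat the plan and wake up its client.
func skipInplaceUpdate(schedulerType, clientStatus string) bool {
	return schedulerType == "batch" && clientStatus == "complete"
}
```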
/cc @armon for review
This PR fixes our vet script and fixes all the vet issues it had been
missing. It also fixes pointer addresses being printed in `nomad stop
<job>` and `nomad node-status <node>` output.
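The pointer issue is the usual class of bug where printing a pointer to a non-struct value with `%v`/`Println` emits an address instead of the value; a tiny illustration only, not the actual CLI code:

```
package main

import "fmt"

func main() {
	status := "running"
	p := &status

	fmt.Println(p)  // prints an address such as 0xc000010250
	fmt.Println(*p) // prints: running
}
```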