When an alloc is due to be rescheduled later (rescheduleLater), it goes through the
reconciler twice: once when it is ignored and a follow-up eval is created, and once
again when that follow-up eval is processed and the alloc appears as rescheduleNow.
Here, we ignore such allocs in the first pass and mark them as stopped in the second,
rather than stopping them twice.
Currently, when an alloc fails and is rescheduled, its desired status remains "run"
and the Nomad client may not free its resources. Here, we ensure that an alloc is
marked as stopped when it is rescheduled.
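A minimal Go sketch of that two-pass flow (the type and function names below are hypothetical, not Nomad's actual reconciler API):

```go
package main

import "fmt"

// reconcileResults is an illustrative stand-in for the reconciler's output buckets.
type reconcileResults struct {
	ignore []string // allocs left alone until their follow-up eval fires
	stop   []string // allocs whose Desired Status becomes "stop"
}

// First pass: a rescheduleLater alloc only gets a follow-up eval, so it is ignored.
// Second pass: when that eval is processed the alloc appears as rescheduleNow and is
// stopped exactly once, with a "rescheduled because it failed" desired description.
func handleReschedules(rescheduleLater, rescheduleNow []string, res *reconcileResults) {
	res.ignore = append(res.ignore, rescheduleLater...)
	res.stop = append(res.stop, rescheduleNow...)
}

func main() {
	var res reconcileResults
	handleReschedules([]string{"02aba49e"}, nil, &res) // first reconciler run
	handleReschedules(nil, []string{"02aba49e"}, &res) // follow-up eval run
	fmt.Println(res.ignore, res.stop)                  // [02aba49e] [02aba49e]
}
```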
Notice the Desired Status and Description before and after this change:
Before:
```
mars-2:nomad notnoop$ nomad alloc status 02aba49e
ID                   = 02aba49e
Eval ID              = bb9ed1d2
Name                 = example-reschedule.nodes[0]
Node ID              = 5853d547
Node Name            = mars-2.local
Job ID               = example-reschedule
Job Version          = 0
Client Status        = failed
Client Description   = Failed tasks
Desired Status       = run
Desired Description  = <none>
Created              = 10s ago
Modified             = 5s ago
Replacement Alloc ID = d6bf872b

Task "payload" is "dead"
Task Resources
CPU        Memory          Disk     Addresses
0/100 MHz  24 MiB/300 MiB  300 MiB

Task Events:
Started At     = 2019-06-06T21:12:45Z
Finished At    = 2019-06-06T21:12:50Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                       Type            Description
2019-06-06T17:12:50-04:00  Not Restarting  Policy allows no restarts
2019-06-06T17:12:50-04:00  Terminated      Exit Code: 1
2019-06-06T17:12:45-04:00  Started         Task started by client
2019-06-06T17:12:45-04:00  Task Setup      Building Task Directory
2019-06-06T17:12:45-04:00  Received        Task received by client
```
After:
```
ID                   = 5001ccd1
Eval ID              = 53507a02
Name                 = example-reschedule.nodes[0]
Node ID              = a3b04364
Node Name            = mars-2.local
Job ID               = example-reschedule
Job Version          = 0
Client Status        = failed
Client Description   = Failed tasks
Desired Status       = stop
Desired Description  = alloc was rescheduled because it failed
Created              = 13s ago
Modified             = 3s ago
Replacement Alloc ID = 7ba7ac20

Task "payload" is "dead"
Task Resources
CPU         Memory          Disk     Addresses
21/100 MHz  24 MiB/300 MiB  300 MiB

Task Events:
Started At     = 2019-06-06T21:22:50Z
Finished At    = 2019-06-06T21:22:55Z
Total Restarts = 0
Last Restart   = N/A

Recent Events:
Time                       Type            Description
2019-06-06T17:22:55-04:00  Not Restarting  Policy allows no restarts
2019-06-06T17:22:55-04:00  Terminated      Exit Code: 1
2019-06-06T17:22:50-04:00  Started         Task started by client
2019-06-06T17:22:50-04:00  Task Setup      Building Task Directory
2019-06-06T17:22:50-04:00  Received        Task received by client
```
This PR changes the scheduler's behavior so that a task group with a failed or
paused deployment no longer causes the scheduler to skip migrations.
The reason for this change is that the old behavior caused a bad UX when draining
nodes with allocations that are part of a failed/paused deployment. These operations
should not be coupled in any way, and this remedies that.
The prior behavior was still correct, but it required either the job to transition
to a healthy state or the node to hit its drain deadline before the allocations
would migrate.
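An illustrative Go sketch of the decoupling described above (not the actual scheduler code; the status strings are only for illustration):

```go
package main

import "fmt"

// skipMigrationsForDeployment sketches the conceptual change: the task group's
// deployment state used to gate node-drain migrations, and now it does not.
func skipMigrationsForDeployment(deploymentStatus string) bool {
	// Before this change (conceptually):
	//   return deploymentStatus == "failed" || deploymentStatus == "paused"
	// After: drains and deployments are decoupled, so migrations are never
	// skipped on this basis.
	return false
}

func main() {
	fmt.Println(skipMigrationsForDeployment("failed")) // false: the drain proceeds
}
```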
This PR fixes:
* An issue in which draining a node that contains a completed batch alloc would
cause a replacement to be created (see the sketch after this list).
* An issue in which allocations with the same name would not be properly stopped
during a scale down/stop event.
* An issue in which batch allocations from previous job versions may not have been
stopped properly.
Fixes https://github.com/hashicorp/nomad/issues/3210
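A hypothetical Go helper sketching the first fix above (this is not Nomad's actual drain or reconciler API):

```go
package main

import "fmt"

// needsMigration illustrates the rule: a batch allocation that has already
// completed should not be migrated (and therefore replaced) just because its
// node is being drained.
func needsMigration(isBatch bool, clientStatus string, nodeDraining bool) bool {
	if !nodeDraining {
		return false
	}
	if isBatch && clientStatus == "complete" {
		return false // the work finished; draining the node must not cause a replacement
	}
	return true
}

func main() {
	fmt.Println(needsMigration(true, "complete", true)) // false: no replacement alloc
	fmt.Println(needsMigration(false, "running", true)) // true: service alloc migrates
}
```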
This PR makes placing new allocations count towards the update limit. The limit does
not restrict how many new placements are made, but those placements still count
against it. This has the nice effect that if you have a group with count = 5 and
max_parallel = 1 but only 3 allocs exist for it when a change is made, you will
create 2 more at the new version without also destroying an existing one, which
previously would have taken you down to two running.
Fixes https://github.com/hashicorp/nomad/issues/3053
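A small Go sketch of that accounting (the function is illustrative, not the reconciler's actual code), using the count = 5, max_parallel = 1, 3-existing-allocs example:

```go
package main

import "fmt"

// remainingLimit sketches the idea: new placements are never blocked by
// max_parallel, but they consume its budget, so destructive updates only use
// whatever budget is left over.
func remainingLimit(maxParallel, desiredCount, existing int) (placements, limit int) {
	placements = desiredCount - existing // allocs that must be placed regardless
	limit = maxParallel - placements     // budget left for destructive updates
	if limit < 0 {
		limit = 0
	}
	return placements, limit
}

func main() {
	// count = 5, max_parallel = 1, only 3 allocs exist:
	// 2 new allocs are placed and the remaining budget is 0, so no existing
	// alloc at the old version is destroyed.
	placements, limit := remainingLimit(1, 5, 3)
	fmt.Println(placements, limit) // 2 0
}
```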