It would be preferable to only generate these for UI PRs, but Netlify does
not appear to offer that flexibility. I tried setting up manual deployments
from a Travis environment but gave up on the experiment; it could probably be
made to work eventually if deployment failures become a nuisance.
Currently, when an alloc fails and is rescheduled, the alloc's desired
state remains "run" and the Nomad client may not free its resources.
Here, we ensure that an alloc is marked as stopped when it is
rescheduled.
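A minimal sketch of the idea, assuming a hypothetical helper; the field and
constant names come from `nomad/structs`, but the real change lives in the
scheduler rather than in a standalone function:

```go
package scheduler

import "github.com/hashicorp/nomad/nomad/structs"

// markRescheduled is a hypothetical helper: when a failed alloc is being
// replaced, return a copy of it explicitly marked for stopping so the client
// cleans up its resources, instead of leaving the desired status as "run".
func markRescheduled(existing *structs.Allocation) *structs.Allocation {
	stopped := existing.Copy()
	stopped.DesiredStatus = structs.AllocDesiredStatusStop
	stopped.DesiredDescription = "alloc was rescheduled because it failed"
	return stopped
}
```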
Notice the Desired Status and Description before and after this change:
Before:
```
mars-2:nomad notnoop$ nomad alloc status 02aba49e
ID = 02aba49e
Eval ID = bb9ed1d2
Name = example-reschedule.nodes[0]
Node ID = 5853d547
Node Name = mars-2.local
Job ID = example-reschedule
Job Version = 0
Client Status = failed
Client Description = Failed tasks
Desired Status = run
Desired Description = <none>
Created = 10s ago
Modified = 5s ago
Replacement Alloc ID = d6bf872b
Task "payload" is "dead"
Task Resources
CPU        Memory          Disk     Addresses
0/100 MHz  24 MiB/300 MiB  300 MiB
Task Events:
Started At = 2019-06-06T21:12:45Z
Finished At = 2019-06-06T21:12:50Z
Total Restarts = 0
Last Restart = N/A
Recent Events:
Time                       Type            Description
2019-06-06T17:12:50-04:00  Not Restarting  Policy allows no restarts
2019-06-06T17:12:50-04:00  Terminated      Exit Code: 1
2019-06-06T17:12:45-04:00  Started         Task started by client
2019-06-06T17:12:45-04:00  Task Setup      Building Task Directory
2019-06-06T17:12:45-04:00  Received        Task received by client
```
After:
```
ID = 5001ccd1
Eval ID = 53507a02
Name = example-reschedule.nodes[0]
Node ID = a3b04364
Node Name = mars-2.local
Job ID = example-reschedule
Job Version = 0
Client Status = failed
Client Description = Failed tasks
Desired Status = stop
Desired Description = alloc was rescheduled because it failed
Created = 13s ago
Modified = 3s ago
Replacement Alloc ID = 7ba7ac20
Task "payload" is "dead"
Task Resources
CPU         Memory          Disk     Addresses
21/100 MHz  24 MiB/300 MiB  300 MiB
Task Events:
Started At = 2019-06-06T21:22:50Z
Finished At = 2019-06-06T21:22:55Z
Total Restarts = 0
Last Restart = N/A
Recent Events:
Time                       Type            Description
2019-06-06T17:22:55-04:00  Not Restarting  Policy allows no restarts
2019-06-06T17:22:55-04:00  Terminated      Exit Code: 1
2019-06-06T17:22:50-04:00  Started         Task started by client
2019-06-06T17:22:50-04:00  Task Setup      Building Task Directory
2019-06-06T17:22:50-04:00  Received        Task received by client
```
Fix `TestServiceSched_NodeDown` so it checks that migrated allocs are
actually marked to be stopped. The boolean logic in the test skipped the
client-status check entirely whenever the desired status was stop.
Here, we mark some jobs for migration while leaving others running, and
we check that the lost client status is only set for non-migrated allocs.
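A hedged sketch of the corrected assertion pattern; the helper and the
`migrated` map are hypothetical, while the constants come from
`nomad/structs` and the assertions use testify's `require`:

```go
package scheduler

import (
	"testing"

	"github.com/hashicorp/nomad/nomad/structs"
	"github.com/stretchr/testify/require"
)

// checkNodeDownAllocs is a hypothetical helper showing the corrected logic:
// desired status and client status are asserted independently, so a "stop"
// desired status no longer short-circuits the lost-status check.
func checkNodeDownAllocs(t *testing.T, allocs []*structs.Allocation, migrated map[string]bool) {
	for _, alloc := range allocs {
		if migrated[alloc.ID] {
			// Allocs marked for migration should be stopped, not lost.
			require.Equal(t, structs.AllocDesiredStatusStop, alloc.DesiredStatus)
		} else {
			// The remaining allocs on the down node should be marked lost.
			require.Equal(t, structs.AllocClientStatusLost, alloc.ClientStatus)
		}
	}
}
```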
Enterprise only.
Disable preemption for service and batch jobs by default.
Maintain backward compatibility in an x.y.Z (point) release. Consider
switching the default for new clusters in the future.
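A hedged sketch of the default this implies; the struct and field names are
taken from `nomad/structs`, but treat the exact shape as an assumption since
the batch and service toggles are Enterprise-only:

```go
package nomad

import "github.com/hashicorp/nomad/nomad/structs"

// defaultSchedulerConfig sketches the implied default: preemption stays
// enabled for system jobs, while service and batch preemption are off
// unless an operator explicitly enables them.
var defaultSchedulerConfig = &structs.SchedulerConfiguration{
	PreemptionConfig: structs.PreemptionConfig{
		SystemSchedulerEnabled:  true,
		BatchSchedulerEnabled:   false,
		ServiceSchedulerEnabled: false,
	},
}
```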
Revert plan_apply.go changes from #5411
Since non-Command Raft messages do not update the StateStore index,
SnapshotAfter may block unnecessarily and ultimately fail in idle
clusters where the last Raft message is a non-Command message.
This is trivially reproducible with the dev agent and a job that has 2
tasks, 1 of which fails.
The correct logic is to SnapshotAfter the previous plan's index to
ensure consistency. New clusters or newly elected leaders will not have
a previous plan, so the index at which the leader was elected should be
used instead.
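A hedged sketch of that index selection; the variable names are hypothetical
and the exact `SnapshotAfter` signature is an assumption:

```go
package nomad

import (
	"context"

	"github.com/hashicorp/nomad/nomad/state"
)

// snapshotForPlan waits for the state store to catch up to the previous
// plan's index; when no previous plan exists (new cluster or fresh leader),
// it falls back to the index at which leadership was obtained.
func snapshotForPlan(ctx context.Context, store *state.StateStore,
	prevPlanIndex, leaderElectedIndex uint64) (*state.StateSnapshot, error) {

	minIndex := leaderElectedIndex
	if prevPlanIndex > 0 {
		minIndex = prevPlanIndex
	}
	return store.SnapshotAfter(ctx, minIndex)
}
```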
This exposes a client flag to disable Nomad's remote exec support in
environments where access to tasks ought to be restricted.
I used a `disable_remote_exec` client flag that defaults to allowing
remote exec, and opted for a client config option so remote exec can be
disabled globally, or for just a subset of the cluster if necessary.
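A minimal, self-contained sketch of the guard such a flag implies; the
`Config` type and functions here are illustrative stand-ins, not the client's
actual types:

```go
package main

import (
	"errors"
	"fmt"
)

// Config mirrors only the piece of client configuration relevant here; the
// real client is assumed to populate a similar field from the
// disable_remote_exec key.
type Config struct {
	DisableRemoteExec bool
}

// allowExec rejects exec requests up front when the operator has disabled
// remote exec on this client.
func allowExec(cfg *Config) error {
	if cfg.DisableRemoteExec {
		return errors.New("remote exec is disabled on this client")
	}
	return nil
}

func main() {
	cfg := &Config{DisableRemoteExec: true}
	if err := allowExec(cfg); err != nil {
		fmt.Println("exec rejected:", err)
	}
}
```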