The shortlink /s/port-plan-failure is logged when a plan for a node is
rejected, to help users debug and mitigate repeated `plan for node
rejected` failures.
The current link to #9506 is... less than useful. It is not clear to
users what steps they should take to either fix their cluster or
contribute to the issue.
While .../monitoring-nomad#progess isn't as comprehensive as it could
be, it's a much gentler introduction to this class of bug than the
original issue.
* Seed-stabilization by default
* Hide right-column of topology viz route
* Remove seedless run from the test:* suite
* Related evals paths render too late
* Vis:Hidden another topo viz unstable item
* test: use `T.TempDir` to create temporary test directory
This commit replaces `ioutil.TempDir` with `t.TempDir` in tests. The
directory created by `t.TempDir` is automatically removed when the test
and all its subtests complete.
Prior to this commit, a temporary directory created using
`ioutil.TempDir` had to be removed manually by calling `os.RemoveAll`,
which was omitted in some tests. The error handling boilerplate, e.g.
```
defer func() {
	if err := os.RemoveAll(dir); err != nil {
		t.Fatal(err)
	}
}()
```
is also tedious, but `t.TempDir` handles this for us nicely.
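For illustration, a minimal test using `t.TempDir` (the test name and
file contents here are arbitrary):
```
package example

import (
	"os"
	"path/filepath"
	"testing"
)

func TestTempDir(t *testing.T) {
	// t.TempDir creates a unique directory and removes it automatically
	// when the test and all its subtests complete.
	dir := t.TempDir()

	if err := os.WriteFile(filepath.Join(dir, "data.txt"), []byte("ok"), 0o644); err != nil {
		t.Fatal(err)
	}
}
```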
Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
* test: fix TestLogmon_Start_restart on Windows
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
* test: fix failing TestConsul_Integration
`t.TempDir` fails to perform the cleanup properly because the folder is
still in use:
testing.go:967: TempDir RemoveAll cleanup: unlinkat /tmp/TestConsul_Integration2837567823/002/191a6f1a-5371-cf7c-da38-220fe85d10e5/web/secrets: device or resource busy
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
Remove the step to automatically backport `backport/website` PRs to the
latest release. This will be done manually by adding the proper tags.
Also use squash backports to match the pattern we use for `main`.
During the release there are several files that need to be modified:
- .release/ci.hcl: the notification channel needs to be updated to a
channel with greater team visibility during the release.
- version/version.go: the Version and VersionPrerelease variables
need to be set so they match the release version.
After the release these files need to be reverted.
For GA releases the following additional changes also need to happen:
- version/version.go: the Version variable needs to be bumped to the
next version number.
- GNUMakefile: the LAST_RELEASE variable needs to be set to the
version that was just released.
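As a rough sketch, the version/version.go edits described above look
something like this (the version numbers are examples, and the actual
file layout may differ):
```
package version

var (
	// Version is set to the release version for the release commit and,
	// for GA releases, bumped to the next version number afterwards.
	Version = "1.3.1"

	// VersionPrerelease is cleared for the release ("" instead of
	// "dev") and reverted afterwards.
	VersionPrerelease = ""
)
```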
Since the release process will commit file changes to the branch being
used for the release, it should _never_ run on main, so the first step
is now to protect against that.
It also adds a validation to make sure the user-provided version is
correct.
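A hypothetical sketch of what that validation might look like (the
actual release tooling may differ):
```
package release

import (
	"fmt"
	"regexp"
)

// versionRE matches a semantic version with an optional prerelease
// label, e.g. "1.3.0" or "1.3.0-beta.1".
var versionRE = regexp.MustCompile(`^\d+\.\d+\.\d+(-[0-9A-Za-z.]+)?$`)

func validateVersion(v string) error {
	if !versionRE.MatchString(v) {
		return fmt.Errorf("invalid version %q: expected MAJOR.MINOR.PATCH", v)
	}
	return nil
}
```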
After looking at the different release options and steps, I noticed
that automatic CHANGELOG generation is actually the exception, so it
would be better to have the default be false.
* Sample percy test added
* Node engine up to 14.x for UI prep
* Force ui test rerun
* Updated config.yml
* Node v upgraded to 14 for docker image
* Expect length in test
* Running ember tests under percy exec
* Percy exec format
* Percy cli added
* Noop to rerun tests with updated percy_token
* Evals full list and details open snapshots
* Pretty legit use of assert so disable the warning
* Jobs list tests
* Snapshots for top-level clients, servers, ACL, topology, and storage lists
* Expect caveat for Topology test
* Stabilizing tests with faker seeded to 1
* Seed-stabilizing any tests with percySnapshots
* Faker import
* Drop unused param
* Assets and test audit using an older node version
* New strategy: avoid seeding, just use percyCSS to hide certain things
This PR modifies raw_exec and exec to ensure the cgroup for a task
they are driving still exists during a task restart. These drivers
have the same bug, but with different root causes.
For raw_exec, we were removing the cgroup in 2 places: the cpuset
manager, and the unix containment implementation (the thing that
uses the freezer cgroup to clean house). During a task restart, the
containment would remove the cgroup, and when the task runner hooks
went to start again they would block waiting for the cgroup to exist,
which would never happen, because it gets created by the cpuset manager,
which only runs as an alloc pre-start hook. The fix here is to simply
not delete the cgroup in the containment implementation; killing the
PIDs is enough. The removal happens in the cpuset manager later anyway.
For exec, it's the same idea, except DestroyTask is called on task
failure, which in turn calls into libcontainer, which in turn deletes
the cgroup. In this case we do not have control over the deletion of
the cgroup, so instead we hack the cgroup back into life after the
call to DestroyTask.
All of this only applies to cgroups v2.
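A minimal sketch of the exec workaround (the path and helper below are
hypothetical, not the actual Nomad code): on cgroups v2, re-creating
the directory after `DestroyTask` removes it is enough to give the
restarting task a cgroup to join.
```
package cgutil

import (
	"os"
	"path/filepath"
)

// cgroupRoot is the cgroups v2 unified hierarchy mount point.
const cgroupRoot = "/sys/fs/cgroup"

// restoreScope re-creates a task's cgroup directory after libcontainer
// has deleted it, so the task runner can restart the task.
func restoreScope(scope string) error {
	return os.MkdirAll(filepath.Join(cgroupRoot, "nomad.slice", scope), 0o755)
}
```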
In #12324 we made it so that plugins wait until the node drain is
complete, as we do for system jobs. But we neglected to mark the node
drain as complete once only plugins (or system jobs) remain, which
means that the node drain is left in a draining state until the
`deadline` time expires. This was incorrectly documented as expected
behavior in #12324.
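A hypothetical sketch of the missing check (the types and field names
are illustrative): a drain can be marked complete once every remaining
allocation belongs to a plugin or a system job.
```
package drainer

// drainAlloc is a stand-in for the real allocation type.
type drainAlloc struct {
	IsPlugin    bool
	IsSystemJob bool
}

// drainComplete reports whether a node drain can be marked complete:
// only plugin and system-job allocations may remain on the node.
func drainComplete(remaining []drainAlloc) bool {
	for _, a := range remaining {
		if !a.IsPlugin && !a.IsSystemJob {
			return false
		}
	}
	return true
}
```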
notably:
- name of the compiled binary is 'nomad-device-nvidia', not 'nvidia-gpu'
- link to Nvidia docs for installing the container runtime toolkit
- list docker v19.03 as minimum version, to track with nvidia's new container runtime toolkit
The capacity fields for `create volume` set bounds on the resulting
size of the volume, but the ultimate size of the volume will be
determined by the storage provider (between the min and max). Clarify
this in the documentation and provide a suggestion for how to set an
exact size.
* chore: run prettier on hbs files
* ui: ensure a real job object is passed to the task-group link
* chore: add changelog entry
* chore: prettify template
* ui: template helper for formatting jobId in LinkTo component
* ui: handle async relationship
* ui: pass in job id to model arg instead of job model
* update test for serialized namespace
* ui: defend against null in tests
* ui: prettified template added whitespace
* ui: rollback ember-data to 3.24 because watcher returns undefined on abort
* ui: use format-job-helper instead of job model via alloc
* ui: fix whitespace in template caused by prettier using template helper
* ui: update test for new namespace
* ui: revert prettier change
Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
* Linear and Branching mock evaluations
* De-comment
* test-trigger
* Making evaluation trees dynamic
* Reinstated job relationship on eval mock
* Dasherize job prefix back to normal
* Handle bug where UUIDKey is not present on job
* Appending node to eval
* Job ID as a passed property
* Remove unused import
* Branching evals set up as generatable
This test exercises upgrades between Nomad 0.8 and versions greater
than 0.9. We have not supported 0.8.x in a very long time and, in any
case, the test has been marked to skip because the downloader doesn't
work.
Fixes #10200
**The bug**
A user reported receiving the following error when an alloc was placed
that needed to preempt existing allocs:
```
[ERROR] client.alloc_watcher: error querying previous alloc:
alloc_id=28... previous_alloc=8e... error="rpc error: alloc lookup
failed: index error: UUID must be 36 characters"
```
The previous alloc (8e) was already complete on the client. This is
possible if an alloc stops *after* the scheduling decision was made to
preempt it, but *before* the node running both allocations was able to
pull and start the preemptor. While that is hopefully a narrow window of
time, you can expect it to occur in systems with high-throughput batch
scheduling.
However, the RPC error made no sense! `previous_alloc` in the logs was
a valid 36-character UUID!
**The fix**
The fix is:
```
- prevAllocID: c.Alloc.PreviousAllocation,
+ prevAllocID: watchedAllocID,
```
The alloc watcher constructor (`new` func) used for preemption
improperly referenced `Alloc.PreviousAllocation` instead of the
passed-in `watchedAllocID`. When multiple allocs are preempted, a
watcher is created for each with `watchedAllocID` set properly by the
caller. In this case `Alloc.PreviousAllocation` was "" -- which is
where the `UUID must be 36 characters` error was coming from! Sadly,
we were properly referencing `watchedAllocID` in the log line, which
is what made the error so baffling.
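A minimal sketch of where the confusing message comes from (this
mimics the state store's UUID index validation; the function name is
illustrative): an empty `PreviousAllocation` fails the length check
before any alloc lookup happens.
```
package state

import "fmt"

// uuidIndexKey mimics the UUID index validation: an empty string from
// Alloc.PreviousAllocation fails long before any alloc is looked up.
func uuidIndexKey(id string) ([]byte, error) {
	if len(id) != 36 {
		return nil, fmt.Errorf("UUID must be 36 characters")
	}
	return []byte(id), nil
}
```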
**The repro**
I was able to reproduce this with a dev agent with [preemption enabled](https://gist.github.com/schmichael/53f79cbd898afdfab76865ad8c7fc6a0#file-preempt-hcl)
and [lowered limits](https://gist.github.com/schmichael/53f79cbd898afdfab76865ad8c7fc6a0#file-limits-hcl)
for ease of repro.
First I started a [low priority count 3 job](https://gist.github.com/schmichael/53f79cbd898afdfab76865ad8c7fc6a0#file-preempt-lo-nomad),
then a [high priority job](https://gist.github.com/schmichael/53f79cbd898afdfab76865ad8c7fc6a0#file-preempt-hi-nomad)
that evicts 2 low priority jobs. Everything worked as expected.
However if I force it to use the [remotePrevAlloc implementation](https://github.com/hashicorp/nomad/blob/v1.3.0-beta.1/client/allocwatcher/alloc_watcher.go#L147),
it reproduces the bug because the watcher references PreviousAllocation
instead of watchedAllocID.