This closes #9495. As detailed there, the collection query GET
/v1/volumes?type=csi doesn’t return ReadAllocs and WriteAllocs, so the
# Allocs cell always showed 0 on first load because it was derived
from the lengths of those arrays. This change uses the previously ignored
CurrentReaders and CurrentWriters values to calculate the total instead.
The single-resource query GET /v1/volume/csi%2F:id also doesn’t return
CurrentReaders and CurrentWriters, so this ensures that their absence
doesn’t override the stored values when visiting an individual item.
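For illustration, a minimal sketch of the derived count, assuming Ember-style attribute names `currentReaders`/`currentWriters` on the volume model (the actual property names in the UI may differ):

```js
// app/models/volume.js — illustrative only
import Model, { attr } from '@ember-data/model';

export default class VolumeModel extends Model {
  @attr('number') currentReaders;
  @attr('number') currentWriters;

  // Derive the claim count from the counters the collection endpoint does
  // return, instead of the lengths of ReadAllocs/WriteAllocs.
  get allocCount() {
    return (this.currentReaders || 0) + (this.currentWriters || 0);
  }
}
```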
Thanks to @apollo13 for reporting this and to @tgross for the API logs
and suggestion.
The API is missing values for the `ReadAllocs` and `WriteAllocs` fields, resulting
in allocation claims not being populated in the web UI. These fields mirror
the fields in `nomad/structs.CSIVolume`. Returning a separate list of stubs
for read and write would be ideal, but this can't be done without either
bloating the API response with repeated full `Allocation` data, or causing a
panic in previous versions of the CLI.
The `nomad/structs` fields are persisted with nil values and are populated
during RPC, so we'll do the same in the HTTP API and populate the `ReadAllocs`
and `WriteAllocs` fields with a map from allocation IDs to null values. The web
UI will then build its `ReadAllocations` and `WriteAllocations` fields by
mapping from those IDs to the corresponding entries in `Allocations`, instead
of flattening the map into a list.
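As a rough sketch of that UI-side mapping (the function name and payload handling here are illustrative, not the exact serializer code):

```js
// Resolve a { [allocID]: null } map against the Allocation stubs that are
// already included in the payload.
function idsToAllocations(idMap, allocations = []) {
  const byId = new Map(allocations.map((alloc) => [alloc.ID, alloc]));
  return Object.keys(idMap || {})
    .map((id) => byId.get(id))
    .filter(Boolean);
}

// volume.ReadAllocations  = idsToAllocations(payload.ReadAllocs, payload.Allocations);
// volume.WriteAllocations = idsToAllocations(payload.WriteAllocs, payload.Allocations);
```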
Without this, visiting any job detail page on Nomad OSS would crash with
an error like this:
```
Error: Ember Data Request GET /v1/recommendations?job=ping%F0%9F%A5%B3&namespace=default returned a 404 Payload (text/xml)
```
The problem was twofold.
1. The recommendation ability didn’t include any check for whether the
feature was present. This adds a request to /v1/operator/license on
application load to determine which features are available and stores them in
the system service. The ability now looks for 'Dynamic Application Sizing' in
that feature list; a sketch of the resulting check follows this list.
2. Permissions weren’t checked at all in the job-fetching or job detail
templates.
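An illustrative sketch of the feature-gated ability, assuming an ember-can ability class and a `features` list on the system service populated from /v1/operator/license; the class and property names are assumptions, not the exact implementation:

```js
// app/abilities/recommendation.js — illustrative only
import { Ability } from 'ember-can';
import { inject as service } from '@ember/service';

export default class RecommendationAbility extends Ability {
  @service system;

  get canAccept() {
    // Only request recommendations when the license advertises the
    // Dynamic Application Sizing feature, avoiding the 404 on OSS clusters.
    return (this.system.features || []).includes('Dynamic Application Sizing');
  }
}
```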
The job factory will now accept an array of resourceSpecs that is a shorthand
notation for memory, cpu, disk, and iops requirements.
These specs get passed down to task groups. The task group factory splits the
resource requirements near evenly (within a variance threshold) across all
expected tasks, as sketched below.
Allocations then construct task-resource objects based on the resources
from the matching task.
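As a minimal sketch of the near-even split, assuming a simple ±10% variance (the helper name and exact threshold are illustrative):

```js
// Split a resource total across N tasks, jittering each share slightly.
function divideNearEvenly(total, taskCount, variance = 0.1) {
  const base = Math.floor(total / taskCount);
  return Array.from({ length: taskCount }, () => {
    const jitter = Math.round(base * variance * (Math.random() * 2 - 1));
    return base + jitter;
  });
}

// e.g. divideNearEvenly(1024, 4) might yield [245, 262, 270, 251]
```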
This partially addresses #7799.
Task state filesystems live in a subdirectory of their parent allocation's
filesystem, so almost everything that existed for browsing task state
filesystems applies to browsing allocations, just without the task name
prepended to the path. I aimed to confine this differential handling to as
few places as possible.
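An illustrative sketch of the one place the two cases differ, the path sent to the client filesystem API; the helper name is hypothetical:

```js
// Browsing a task only differs by the task name prefixed to the requested path.
function fsPath(path, taskState = null) {
  const prefix = taskState ? `${taskState.name}/` : '';
  return `${prefix}${path.replace(/^\//, '')}`;
}

// Both cases then hit the same endpoint:
//   GET /v1/client/fs/ls/:alloc_id?path=<fsPath(...)>
```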
The tests also have significant overlap, so this includes an extracted
behavior to run the same tests for allocations and task states.
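A rough sketch of what such a shared behavior could look like, with hypothetical module and parameter names:

```js
// tests/behaviors/fs.js — illustrative only
import { test } from 'qunit';
import { visit, currentURL } from '@ember/test-helpers';

// Called once from the allocation fs acceptance test and once from the task
// fs acceptance test, each passing its own URL builder.
export default function browseFilesystem({ pathUrlFor }) {
  test('visiting the filesystem root shows a directory listing', async function (assert) {
    await visit(pathUrlFor('/'));
    assert.ok(currentURL().includes('fs'));
  });
}
```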