* debug: add pprof duration CLI argument
* debug: add CSI plugin details
* update help text with ACL requirements
* debug: provide ACL hints upon permission failures
* debug: only write file when pprof retrieve is successful
* debug: add helper function to clean bad characters from dynamic filenames
* debug: ensure files are unable to escape the capture directory
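A minimal sketch of the containment check the last two items describe, with hypothetical helper and argument names (the real implementation may differ):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// sanitizedPath is a hypothetical helper: it joins a dynamically generated
// file name onto the capture directory and rejects any result that would
// escape that directory.
func sanitizedPath(captureDir, name string) (string, error) {
	cleaned := filepath.Join(captureDir, name) // Join also runs filepath.Clean
	root := filepath.Clean(captureDir) + string(filepath.Separator)
	if !strings.HasPrefix(cleaned, root) {
		return "", fmt.Errorf("%q escapes the capture directory", name)
	}
	return cleaned, nil
}

func main() {
	fmt.Println(sanitizedPath("/tmp/nomad-debug", "client-1/profile.pprof")) // kept
	fmt.Println(sanitizedPath("/tmp/nomad-debug", "../../etc/passwd"))       // rejected
}
```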
* upsertaclpolicies
* delete acl policies msgtype
* upsert acl policies msgtype
* delete acl tokens msgtype
* acl bootstrap msgtype
wip unsubscribe on token delete
test that subscriptions are closed after an ACL token has been deleted
Start writing policyupdated test
* update test to use before/after policy
* add SubscribeWithACLCheck to run acl checks on subscribe
* update rpc endpoint to use broker acl check
* Add and use subscriptions.closeSubscriptionFunc
This fixes the issue of not being able to defer unlocking the event broker's
mutex inside the for loop (a rough sketch of the resulting pattern follows).
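A rough illustration of that pattern, with hypothetical type and field names rather than the actual broker code: moving the lock into a per-subscription helper lets `defer` release the mutex on every iteration instead of only when the outer function returns.

```go
package broker

import "sync"

type Subscription struct{ closed bool }

func (s *Subscription) forceClose() { s.closed = true }

type EventBroker struct {
	mu sync.Mutex
}

// Deferring the unlock directly in the caller's loop would hold the mutex
// until the whole function returned; wrapping it in a helper scopes the
// defer to a single subscription.
func (b *EventBroker) closeSubscriptions(subs []*Subscription) {
	for _, sub := range subs {
		b.closeSubscription(sub)
	}
}

func (b *EventBroker) closeSubscription(sub *Subscription) {
	b.mu.Lock()
	defer b.mu.Unlock() // released when this helper returns, i.e. once per iteration
	sub.forceClose()
}
```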
handle acl policy updates
* rpc endpoint test for terminating acl change
* add comments
Co-authored-by: Kris Hicks <khicks@hashicorp.com>
* Remove Managed Sinks from Nomad
Managed Sinks were a beta feature in Nomad 1.0-beta2. During the beta
period it was determined that this was not a scalable approach to supporting
community and third-party sinks.
* update comment
* changelog
Before, upstreams could only be defined using the default datacenter.
Now, the `datacenter` field can be set in a connect upstream definition,
informing Consul of the desire for an instance of the upstream service
in the specified datacenter. The field is optional and continues to
default to the local datacenter.
Closes #8964
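A minimal sketch of what this looks like through the Go `api` package, assuming a `Datacenter` field on `api.ConsulUpstream` as the change describes (treat the exact field names as assumptions):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/nomad/api"
)

func main() {
	// Ask Consul for an instance of the "db" upstream from dc2; leaving
	// Datacenter empty keeps the old behavior of using the local datacenter.
	upstream := api.ConsulUpstream{
		DestinationName: "db",
		LocalBindPort:   5432,
		Datacenter:      "dc2",
	}
	fmt.Printf("%+v\n", upstream)
}
```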
The API is missing values for `ReadAllocs` and `WriteAllocs` fields, resulting
in allocation claims not being populated in the web UI. These fields mirror
the fields in `nomad/structs.CSIVolume`. Returning a separate list of stubs
for read and write would be ideal, but this can't be done without either
bloating the API response with repeated full `Allocation` data, or causing a
panic in previous versions of the CLI.
The `nomad/structs` fields are persisted with nil values and are populated
during RPC, so we'll do the same in the HTTP API and populate the `ReadAllocs`
and `WriteAllocs` fields with a map of allocation IDs, but with null
values. The web UI will then create its `ReadAllocations` and
`WriteAllocations` fields by mapping from those IDs to the values in
`Allocations`, instead of flattening the map into a list.
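A minimal sketch of the approach with simplified stand-in types (not the actual API structs): the maps carry allocation IDs as keys and deliberately nil values, so the payload stays small and older CLIs that decode the fields don't break.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Allocation stands in for the full api.Allocation struct here.
type Allocation struct{}

// stubAllocMap builds a map keyed by allocation ID with nil values; the nil
// entries serialize as JSON null rather than repeating full Allocation data.
func stubAllocMap(allocIDs []string) map[string]*Allocation {
	out := make(map[string]*Allocation, len(allocIDs))
	for _, id := range allocIDs {
		out[id] = nil
	}
	return out
}

func main() {
	// Example allocation IDs, for illustration only.
	readAllocs := stubAllocMap([]string{"8a2087a6", "f81bd2b0"})
	b, _ := json.Marshal(readAllocs)
	fmt.Println(string(b)) // {"8a2087a6":null,"f81bd2b0":null}
}
```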
* `nomad operator keyring` was missing the general options section
* `nomad operator metrics` was missing a page in the docs entirely
Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
The `nomad alloc logs` command does not remove terminal escape sequences for
color from the log outputs of a task. Clarify that the standard `-no-color`
flag, which does apply to Nomad's error responses from `nomad alloc logs`,
does not apply to the log output.
The `nomad alloc status -verbose` command returns a 404 when querying CSI volumes
because the volume mount block in the task points back to the
`job.group.volume` block. As a result, the `Name` field used for the query is the name
as seen in the jobspec, and not the name of the volume that we need for querying.
Show both the job-specific name and the volume ID in the resulting output,
which clarifies the difference between the two fields and is more consistent
with the web UI.
This PR adds the ability to set HTTP headers when downloading
an artifact from an `http` or `https` resource.
The implementation in `go-getter` is such that a new `HTTPGetter`
must be created for each artifact that sets headers (as opposed
to conveniently setting headers per-request). This PR maintains
the memoization of the default Getter objects, creating new ones
only for artifacts where headers are set.
Closes #9306
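A rough sketch of that approach, under the stated assumption that go-getter's `HttpGetter` exposes a `Header` field; the function and variable names here are illustrative, not the actual Nomad code.

```go
package getterutil

import (
	"net/http"

	gg "github.com/hashicorp/go-getter"
)

// httpGetterFor reuses the shared, memoized getter when an artifact sets no
// headers, and only builds a fresh HttpGetter when headers are present,
// since go-getter attaches headers to the Getter rather than per request.
func httpGetterFor(defaultGetter gg.Getter, headers http.Header) gg.Getter {
	if len(headers) == 0 {
		return defaultGetter
	}
	return &gg.HttpGetter{
		Netrc:  true,
		Header: headers,
	}
}
```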
* operator debug - add client node filtering arguments
* add WaitForClient helper function
* use RPC in WaitForClient to avoid unnecessary imports
* guard against nil values
* move initialization up and shorten test duration
* cleanup nodeLookupFailCount logic
* only display max node notice if we actually tried to capture nodes
Previously, connect sidecars would be re-registered with Consul every cycle
of Nomad's reconciliation loop around Consul service registrations. This is
because part of the comparison used `reflect.DeepEqual` on []string objects,
which returns false when one object is `[]string{}` and the other is `[]string(nil)`.
Unfortunately, this was always the case, and every Connect sidecar service
would be re-registered on every iteration, which happens every 30 seconds.
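The root cause is easy to reproduce with the standard library alone:

```go
package main

import (
	"fmt"
	"reflect"
)

func main() {
	var a []string  // nil slice, e.g. a field left unset when decoding
	b := []string{} // empty but non-nil slice, e.g. built by an append loop
	fmt.Println(reflect.DeepEqual(a, b)) // false: DeepEqual distinguishes nil from empty
	fmt.Println(len(a) == len(b))        // true: a length-based comparison does not
}
```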
The unpublish workflow requires that we know the mode (RW vs RO) if we want to
unpublish the node. Update the hook and the Unpublish RPC so that we mark the
claim for release in a new state but leave the mode alone. This fixes a bug
where RO claims were failing node unpublish.
The core job GC doesn't know the mode, but we don't need it for that workflow,
so add a mode specifically for GC; the volumewatcher uses this as a sentinel
to check whether claims (with their specific RW vs RO modes) need to be released.
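Purely as an illustration of the sentinel idea (the constant names below are hypothetical, not the actual `structs` identifiers):

```go
package structsketch

// ClaimMode is a simplified stand-in for the volume claim mode described above.
type ClaimMode int

const (
	// ClaimRead and ClaimWrite are the modes an allocation claims with.
	ClaimRead ClaimMode = iota
	ClaimWrite

	// ClaimGC is the sentinel used by core job GC, which doesn't know
	// whether the original claim was RO or RW; the volumewatcher treats it
	// as "release whatever claims exist".
	ClaimGC
)
```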
The only user of monitor(evalID, true) was command/eval_status, and
eval_status had a duplicate of the prefix-handling code inside it, so in
all cases the complete evalID was being passed to monitor.
Given that, we can remove the prefix code from command/monitor, and
remove the boolean arg.
state store: call-out to generic update of job recommendations from job update method
recommendations API work, and http endpoint errors for OSS
support for scaling policies in task block of job spec
add query filters for ScalingPolicy list endpoint
command: nomad scaling policy list: added -job and -type
* Process to send events to configured sinks
This PR adds a SinkManager to the server; the SinkManager is responsible for
managing managed sinks. Managed sinks subscribe to the event broker and send
events to a sink writer (webhook). When changes to the event store are
made, the SinkManager and managed sink are responsible for reloading or
starting a new managed sink (a rough sketch of this flow follows).
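An illustrative-only sketch of that flow, with all type and method names hypothetical: a managed sink drains events from its broker subscription and hands them to the configured webhook writer until something fails or the context is cancelled.

```go
package sinksketch

import "context"

// Event, Subscription, and SinkWriter are simplified stand-ins for the real
// event broker and webhook writer types.
type Event struct{ Index uint64 }

type Subscription interface {
	Next(ctx context.Context) ([]Event, error)
}

type SinkWriter interface {
	Send(ctx context.Context, events []Event) error
}

// runManagedSink loops until the subscription or writer fails, or the
// context is cancelled; the SinkManager would restart it on reload.
func runManagedSink(ctx context.Context, sub Subscription, w SinkWriter) error {
	for {
		events, err := sub.Next(ctx)
		if err != nil {
			return err
		}
		if err := w.Send(ctx, events); err != nil {
			return err
		}
	}
}
```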
* periodically check in sink progress to raft
Save progress on the last successfully sent index to raft. This allows a
managed sink to resume close to where it left off in the event of a lost
server or leadership change.
dereference eventsink so we can accurately use the watchCh
When using a pointer to the event sink struct, it was updated in place
immediately, so our reload logic would never trigger (a sketch follows).
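A small sketch of the pointer pitfall being described, with simplified types: holding the shared pointer means the comparison target mutates along with the stored sink, so a change is never detected; holding a dereferenced copy keeps a stable snapshot to compare against.

```go
package sinksketch

// EventSink is a simplified stand-in for the stored sink configuration.
type EventSink struct {
	Address string
}

type managedSink struct {
	// sink is a copy taken when the managed sink started, not a *EventSink;
	// if it were the shared pointer, an in-place update to the stored sink
	// would change both sides of the comparison at once.
	sink EventSink
}

// needsReload reports whether the stored sink has drifted from the snapshot
// this managed sink was started with.
func (m *managedSink) needsReload(latest *EventSink) bool {
	return m.sink != *latest
}
```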
* network sink rpc/api plumbing
state store methods and restore
upsert sink test
get sink
delete sink
event sink list and tests
go generate new msg types
validate sink on upsert
* go generate
* remove event durability
temporarily removing go-memdb event durability until a new strategy is developed for how best to handle increased durability needs
* drop events table schema and state store methods
* fix neweventbuffer invocations