## Development Environment Changes
* Added stringer to build deps
## New HTTP APIs
* Added scheduler worker config API
* Added scheduler worker info API (request sketch below)
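A minimal stdlib sketch of reading the two endpoints above. The paths are
assumptions here (`/v1/agent/schedulers` for worker info,
`/v1/agent/schedulers/config` for worker config); check the docs added with
this change.
```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumed paths and default agent address.
	for _, url := range []string{
		"http://127.0.0.1:4646/v1/agent/schedulers",        // worker info
		"http://127.0.0.1:4646/v1/agent/schedulers/config", // worker config
	} {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %s\n", url, body)
	}
}
```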
## New Internals
* (Scheduler) Worker API refactor: Start(), Stop(), Pause(), Resume() (sketched after this list)
* Update shutdown to use context
* Add mutexes for contended server data:
  - `workerLock` for the `workers` slice
  - `workerConfigLock` for the `Server.Config.NumSchedulers` and
    `Server.Config.EnabledSchedulers` values
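A minimal sketch of the lifecycle and locking described above; apart from
`workerLock`, `workerConfigLock`, `NumSchedulers`, and `EnabledSchedulers`,
the names are illustrative, not the actual Nomad implementation.
```go
package scheduler

import (
	"context"
	"sync"
)

// Worker owns one scheduling goroutine.
type Worker struct {
	ctx    context.Context
	cancel context.CancelFunc

	mu     sync.Mutex
	cond   *sync.Cond
	paused bool
}

func NewWorker(parent context.Context) *Worker {
	ctx, cancel := context.WithCancel(parent)
	w := &Worker{ctx: ctx, cancel: cancel}
	w.cond = sync.NewCond(&w.mu)
	return w
}

func (w *Worker) Start() { go w.run() }

// Stop shuts down via context cancellation (the "shutdown to use
// context" item) and wakes a paused worker so it can observe it.
func (w *Worker) Stop() {
	w.cancel()
	w.cond.Broadcast()
}

func (w *Worker) Pause() {
	w.mu.Lock()
	w.paused = true
	w.mu.Unlock()
}

func (w *Worker) Resume() {
	w.mu.Lock()
	w.paused = false
	w.mu.Unlock()
	w.cond.Broadcast()
}

func (w *Worker) run() {
	for {
		w.mu.Lock()
		for w.paused && w.ctx.Err() == nil {
			w.cond.Wait()
		}
		w.mu.Unlock()

		select {
		case <-w.ctx.Done():
			return
		default:
			// Dequeue and process one evaluation here.
		}
	}
}

// Separate mutexes guard independently contended server data.
type Server struct {
	workerLock sync.RWMutex
	workers    []*Worker

	workerConfigLock sync.RWMutex
	numSchedulers     int
	enabledSchedulers []string
}
```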
## Other
* Add docs for the scheduler worker API
* Add changelog message
Co-authored-by: Derek Strickland <1111455+DerekStrickland@users.noreply.github.com>
Some tests assert on the number of servers, e.g. TestHTTP_AgentSetServers and
TestHTTP_AgentListServers_ACL. However, in dev and test modes, the agent
starts with a server list containing duplicate entries for the advertised and
normalized RPC values, then settles on a single unique value after Raft/Serf
re-sets the servers.
This leads to flakiness: the test fails if the assertion runs before the Serf
update takes effect.
Here, we update the initial dev handling so it only adds a single value when
the advertised and normalized values are the same (sketched after the log
excerpt below).
Sample log lines illustrating the problem:
```
=== CONT TestHTTP_AgentSetServers
TestHTTP_AgentSetServers: testlog.go:34: 2020-04-06T21:47:51.016Z [INFO] nomad.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:127.0.0.1:9008 Address:127.0.0.1:9008}]"
TestHTTP_AgentSetServers: testlog.go:34: 2020-04-06T21:47:51.016Z [INFO] nomad: serf: EventMemberJoin: TestHTTP_AgentSetServers.global 127.0.0.1
TestHTTP_AgentSetServers: testlog.go:34: 2020-04-06T21:47:51.035Z [DEBUG] client.server_mgr: new server list: new_servers=[127.0.0.1:9008, 127.0.0.1:9008] old_servers=[]
...
TestHTTP_AgentSetServers: agent_endpoint_test.go:759:
Error Trace: agent_endpoint_test.go:759
http_test.go:1089
agent_endpoint_test.go:705
Error: "[127.0.0.1:9008 127.0.0.1:9008]" should have 1 item(s), but has 2
Test: TestHTTP_AgentSetServers
```
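A minimal sketch of the fix; the function is hypothetical but the behavior is
as described above.
```go
package agent

// initialServers builds the initial dev-mode server list without the
// duplicate entry: when the advertised address normalizes to the same
// RPC value, add it once instead of twice.
func initialServers(advertised, normalized string) []string {
	if advertised == normalized {
		return []string{advertised}
	}
	return []string{advertised, normalized}
}
```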
Tests set Agent.client=nil, which prevented the client from being shut down.
This leaked goroutines and could cause panics when the leaked client
goroutines logged after their parent test had finished.
Removed ACLs from the server test because I couldn't get them to work with
the test agent, and they tested very little.
Fixes a panic when accessing a.agent.Server() when the agent is a client
instead. This PR removes a redundant ACL check, since ACLs are validated
at the RPC layer. It also nil-checks the agent server and uses Client()
when appropriate.
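A sketch of the shape of the fix with stand-in types (the structs and the
Region() accessor here are hypothetical, not the real agent wiring):
```go
package agent

// Hypothetical stand-ins for the real agent types.
type Server struct{ region string }
type Client struct{ region string }

func (s *Server) Region() string { return s.region }
func (c *Client) Region() string { return c.region }

type Agent struct {
	server *Server
	client *Client
}

func (a *Agent) Server() *Server { return a.server }
func (a *Agent) Client() *Client { return a.client }

// regionName nil-checks the server and falls back to the client
// instead of panicking on client-only agents.
func regionName(a *Agent) string {
	if srv := a.Server(); srv != nil {
		return srv.Region()
	}
	if c := a.Client(); c != nil {
		return c.Region()
	}
	return ""
}
```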
Addresses feedback on the monitor implementation:
* Subselect on stopCh to prevent blocking forever (see the sketch below).
* Set up a separate goroutine to check every 3 seconds for dropped messages.
* Rename the returned ch to avoid confusion.
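An illustrative sketch of those two points, not the actual monitor code:
every send subselects on stopCh so a slow or departed consumer cannot block
the sender forever, and a side goroutine reports dropped frames every 3
seconds.
```go
package monitor

import (
	"log"
	"sync/atomic"
	"time"
)

func stream(frames chan []byte, stopCh <-chan struct{}) {
	var dropped int64

	// Periodically surface how many frames were dropped.
	go func() {
		ticker := time.NewTicker(3 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				if n := atomic.SwapInt64(&dropped, 0); n > 0 {
					log.Printf("monitor dropped %d frames", n)
				}
			case <-stopCh:
				return
			}
		}
	}()

	for {
		frame := nextFrame() // hypothetical producer
		select {
		case frames <- frame:
		case <-stopCh:
			return
		default:
			// Consumer is behind; drop rather than block.
			atomic.AddInt64(&dropped, 1)
		}
	}
}

func nextFrame() []byte { return nil }
```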
Adds the nomad monitor command. Like consul monitor, this command allows you
to stream logs from a Nomad agent in real time at a specified log level.
Add endpoint tests.
Upgrade go-hclog to the latest version.
The current version of go-hclog pads log level prefixes to equal lengths,
so info becomes [INFO ] and debug becomes [DEBUG]. This breaks the Check
function in hashicorp/logutils/level.go. Upgrading to the latest version
removes this padding and fixes log filtering that uses logutils' Check.
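A small demo of the breakage, assuming hashicorp/logutils' LevelFilter
(whose Check treats an unrecognized level string as unfiltered):
```go
package main

import (
	"fmt"
	"os"

	"github.com/hashicorp/logutils"
)

func main() {
	filter := &logutils.LevelFilter{
		Levels:   []logutils.LogLevel{"DEBUG", "INFO", "WARN", "ERROR"},
		MinLevel: logutils.LogLevel("WARN"),
		Writer:   os.Stderr,
	}

	// Unpadded tag: correctly filtered out below MinLevel.
	fmt.Println(filter.Check([]byte("[INFO] hello"))) // false

	// Padded tag from the pre-upgrade go-hclog: "INFO " is not a
	// known level, so the line slips past the filter.
	fmt.Println(filter.Check([]byte("[INFO ] hello"))) // true
}
```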
Currently, hitting the /v1/agent/self API with ACL Replication enabled
returns the replication token in the response. This commit redacts that
information, as it should be treated as a shared secret.
This PR sorts the output of the endpoint, since its results are used as part
of Consul checks; sorting avoids the value changing unnecessarily.
Fixes https://github.com/hashicorp/nomad/issues/3211
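The change amounts to something like this hypothetical helper: sort a copy
before returning so repeated reads are byte-identical for the Consul checks.
```go
package agent

import "sort"

// sortedServers returns a sorted copy so the endpoint's output is
// stable across calls.
func sortedServers(addrs []string) []string {
	out := make([]string, len(addrs))
	copy(out, addrs)
	sort.Strings(out)
	return out
}
```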
If Config.Vault.Token is defined, /v1/agent/self will return the string
`<redacted>`. If the token is not set, the endpoint will continue to
return the empty string.
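A sketch of the redaction pattern in the last two notes, with hypothetical
config shapes; the point is to redact a copy so the running agent's config
is untouched.
```go
package agent

// Hypothetical config shapes for illustration.
type VaultConfig struct{ Token string }
type Config struct{ Vault *VaultConfig }

// redactedForSelf returns a copy safe to serve from /v1/agent/self:
// a set token becomes "<redacted>", an unset token stays "".
func redactedForSelf(c *Config) *Config {
	out := *c
	if c.Vault != nil {
		v := *c.Vault
		if v.Token != "" {
			v.Token = "<redacted>"
		}
		out.Vault = &v
	}
	return &out
}
```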
* Added the keygen command (see the sketch after this list)
* Added support for gossip encryption
* Changed the URL for keyring management
* Fixed the CLI
* Added some tests
* Added tests for keyring operations
* Added a test for removal of keys
* Added some docs
* Fixed some docs
* Added general options
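A minimal sketch of what a keygen command does: generate random key material
and base64-encode it. Serf's gossip encryption accepts 16-, 24-, or 32-byte
AES keys; 32 is shown here.
```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

func main() {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	fmt.Println(base64.StdEncoding.EncodeToString(key))
}
```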
When an agent is running a server, the list of servers includes the
Raft peers. When the agent is running a client (which is always the
case?), include a list of the servers found in the Client's RpcProxy.
Dedupe and provide a unique list back to the caller.
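A sketch of the dedupe-and-merge described above, with hypothetical inputs:
union the Raft peer set and the RpcProxy list via a map, then sort for a
stable result.
```go
package agent

import "sort"

// uniqueServers merges the Raft peer addresses with the client's
// RPC-proxy servers and returns a sorted, deduplicated list.
func uniqueServers(raftPeers, proxyServers []string) []string {
	seen := make(map[string]struct{})
	for _, list := range [][]string{raftPeers, proxyServers} {
		for _, addr := range list {
			seen[addr] = struct{}{}
		}
	}
	out := make([]string, 0, len(seen))
	for addr := range seen {
		out = append(out, addr)
	}
	sort.Strings(out) // stable order for callers
	return out
}
```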