* Updating the Helm chart to include ACL parameter and examples.
* Updates based on feedback.
* Update website/source/docs/platform/k8s/helm.html.md
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Start adding tests for cluster override
* Refactor tests for clusters
* Passing tests for custom upstream cluster override
* Added capability to customise local app cluster
* Rename config for local cluster override
Also in acl_endpoint_test.go:
* convert logical blocks in some token tests to subtests
* remove use of require.New
This removes a lot of noise in a later PR.
This way we can avoid unnecessary panics that cause other tests not to run.
This doesn't remove every possibility of a panic preventing other tests from running; it just fixes the TestAgent.
This is a followup to the testing feature added in #5304.
While it is very nice to have test agent logs go through testing.T.Logf
so that only logs from failed tests are emitted, if a test is hanging or
taking a long time those logs are delayed. In that case, setting
NOLOGBUFFER=1 will temporarily redirect them to stdout so you can get
immediate feedback about hangs when running the tests in verbose mode.
One current example of a test where this can be relevant is:
$ go test ./agent/consul -run 'TestCatalog_ListServices_Stale' -v
* Confirm RA against Consul 1.3
Change product_version frontmatter to ea_version and increase to 1.3
* Confirm DG against Consul 1.3
Change product_version frontmatter to ea_version and increase to 1.3
Currently the gRPC server assumes that if you have configured TLS
certs on the agent (for RPC), you want gRPC to be encrypted.
If gRPC is bound to localhost this can be overkill. For the API we
let the user choose to offer HTTP or HTTPS API endpoints
independently of the TLS cert configuration for a similar reason.
With this change, TLS is only enabled on gRPC when the HTTPS API port
is enabled, so someone can encrypt RPC traffic with TLS while leaving
local gRPC traffic unencrypted if that is what they want.
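As a rough sketch of the resulting behavior (the port numbers and file paths below are illustrative, not taken from this change), an agent configured like this would now serve gRPC over TLS because the HTTPS API port is enabled; dropping the `https` port while keeping `grpc` would leave a localhost-bound gRPC listener in plaintext even though RPC TLS certs are configured:
```
{
  "ca_file": "/etc/consul.d/tls/ca.pem",
  "cert_file": "/etc/consul.d/tls/agent.pem",
  "key_file": "/etc/consul.d/tls/agent-key.pem",
  "verify_outgoing": true,
  "ports": {
    "https": 8501,
    "grpc": 8502
  }
}
```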
There was an errant early-return in PolicyDelete() that bypassed the
rest of the function. This was ok because the only caller of this
function ignores the results.
This removes the early return so that it structurally behaves like
TokenDelete(), and for both PolicyDelete and TokenDelete the lone
callers are clarified to indicate that the return values are ignored.
We may wish to drop the return value entirely as well, but this patch
doesn't go that far.
`establishLeadership`, which is invoked during leadership monitoring, may use autopilot to do promotions and other work. There was a race between doing that and autopilot being initialized; this fixes it.
The guide currently uses exact-match node and service rules for the UI policy.
This results in a practically useless UI. This patch uses the _prefix
variants instead, which have the intended behavior.
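For illustration, the prefix-based UI policy rules would look something like the following (shown in JSON rule syntax; the guide itself may use HCL, and the empty prefixes simply match all node and service names):
```
{
  "node_prefix": {
    "": { "policy": "read" }
  },
  "service_prefix": {
    "": { "policy": "read" }
  }
}
```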
It appears that the `read` command for ACL policies was used to template the `read` command for ACL tokens, and an invalid option was not dropped from the docs.
Given a query like:
```
{
  "Name": "tagged-connect-query",
  "Service": {
    "Service": "foo",
    "Tags": ["tag"],
    "Connect": true
  }
}
```
And a Consul configuration like:
```
{
  "services": [
    {
      "name": "foo",
      "port": 8080,
      "connect": { "sidecar_service": {} },
      "tags": ["tag"]
    }
  ]
}
```
If you executed the query, it would always return 0 results. This was because the sidecar service was being created without any tags. You could instead make your config look like:
```
{
  "services": [
    {
      "name": "foo",
      "port": 8080,
      "connect": {
        "sidecar_service": {
          "tags": ["tag"]
        }
      },
      "tags": ["tag"]
    }
  ]
}
```
However, that is a bit redundant for most cases. This PR ensures that the tags and service meta of the parent service get copied to the sidecar service. If any tags or service meta are set in the sidecar service definition, this copying does not take place. After the change, the query returns the expected results.
A second change made to prepared queries in this PR is to allow filtering on ServiceMeta, just like we already allow filtering on NodeMeta.
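As an illustrative sketch, a query definition using the new filter might look like this, where the `ServiceMeta` block mirrors the existing `NodeMeta` filter and the meta key/value are invented for the example:
```
{
  "Name": "tagged-connect-query",
  "Service": {
    "Service": "foo",
    "Tags": ["tag"],
    "ServiceMeta": { "version": "v2" },
    "Connect": true
  }
}
```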
When tests fail, only the logs for the failing run are dumped to the
console, which helps in diagnosis. This is easily added to other test
scenarios as they come up.