A previous commit introduced an internally-managed server certificate
to use for peering-related purposes.
Now the peering token has been updated to match that behavior:
- The server name matches the structure of the server cert
- The CA PEMs correspond to the Connect CA
Note that if Connect is disabled, and by extension the Connect CA, we
fall back to the previous behavior of returning the manually configured
certs and local server SNI.
Several tests were updated to use the gRPC TLS port since they enable
Connect by default. This means that the peering token will embed the
Connect CA, and the dialer will expect a TLS listener.
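A minimal Go sketch of that fallback, purely for illustration (the function and parameter names here are hypothetical, not Consul's actual internals):
```
package main

import "fmt"

// peeringTokenTLSInfo mirrors the behavior described above. All names are
// illustrative, not Consul's real peering backend.
func peeringTokenTLSInfo(
	connectEnabled bool,
	connectCARootPEMs []string, // roots of the Connect CA
	manualCertPEMs []string, // manually configured server certs
	internalServerName string, // matches the internally managed server cert
	localServerSNI string, // SNI of the local server
) (serverName string, caPEMs []string) {
	if connectEnabled {
		// Connect is enabled: embed the Connect CA roots and the server
		// name that matches the internally managed server certificate.
		return internalServerName, connectCARootPEMs
	}
	// Connect (and by extension the Connect CA) is disabled: fall back to
	// the manually configured certs and the local server SNI.
	return localServerSNI, manualCertPEMs
}

func main() {
	name, cas := peeringTokenTLSInfo(false, nil,
		[]string{"<manual cert PEM>"}, "server.dc1.peering.example", "server.dc1.consul")
	fmt.Printf("serverName=%q caPEMs=%d\n", name, len(cas))
}
```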
* Update empty-state copy throughout app
Update empty states throughout the app so they only mention ACLs when ACLs are enabled.
* Update peers empty state copy
Flip the empty state copy logic for peers. Small typo fixes on other empty states.
* Update Node empty state with docs
* Update intentions empty state
Make the ACL copy dependent on whether ACLs are enabled.
* Update Nodes empty state learn copy
* Fix binding rule copy key
* Use postcss instead of ember-cli-sass
This will make it possible to work with tailwindcss.
* configure postcss to compile sass
* add "sub-app" css into app/styles tree
* pin node@14 via volta
Only used by people who use Volta.
* Install tailwind and autoprefixer
* Create tailwind config
* Use tailwind via postcss
* Fix: tailwind changes current styling
Adding tailwind at the bottom of app.scss apparently changes
the way the application looks. Import it first instead to make
sure we don't change the current styling of the application
right now.
* Automatic import of HDS colors in tailwind
* Install @hashicorp/design-system-components
* install add-on
* set up postcss scss pipeline to include tokens css
* import add-on css
* Install ember-auto-import v2
HDS depends on v2 of ember-auto-import so we need to upgrade.
* Upgrade ember-cli-yadda
v0.6.0 of ember-cli-yadda adds configuration for webpack.
This configuration is incompatible with webpack v5,
which ember-auto-import v2 uses.
We need to upgrade ember-cli-yadda to the latest version,
which fixes this incompatibility with auto-import v2.
* Install ember-flight-icons
HDS components use the addon internally.
* Document HDS usage in engineering docs
* Upgrade ember-cli-api-double
* fix new linting errors
* Wrap service names on show and instance routes
Moves the trailing type/kind/actions to the second row of the header
regardless of the service name's length. Wraps the service name text.
* Change grid format of AppView globally
* Add tooltips to the last element of breadcrumbs
* updating to serf v0.10.1 and memberlist v0.5.0 to get memberlist size metrics and the memberlist broadcast queue depth metric
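In go.mod terms, the dependency bump amounts to:
```
require (
	github.com/hashicorp/serf v0.10.1
	github.com/hashicorp/memberlist v0.5.0
)
```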
* update changelog
* update changelog
* correcting changelog
* adding "QueueCheckInterval" for memberlist to test
* updating integration test containers to grab latest api
The bats test output is now more readable in CI:
```
Running primary verification step for case-ingress-gateway-multiple-services...
\e[34;1mverify.bats
\e[0m\e[1G ingress proxy admin is up on :20000\e[K\e[75G 1/12\e[2G\e[1G ✓ ingress proxy admin is up on :20000\e[K
\e[0m\e[1G s1 proxy admin is up on :19000\e[K\e[75G 2/12\e[2G\e[1G ✓ s1 proxy admin is up on :19000\e[K
\e[0m\e[1G s2 proxy admin is up on :19001\e[K\e[75G 3/12\e[2G\e[1G ✓ s2 proxy admin is up on :19001\e[K
\e[0m\e[1G s1 proxy listener should be up and have right cert\e[K\e[75G 4/12\e[2G\e[1G ✓ s1 proxy listener should be up and have right cert\e[K
\e[0m\e[1G s2 proxy listener should be up and have right cert\e[K\e[75G 5/12\e[2G\e[1G ✓ s2 proxy listener should be up and have right cert\e[K
\e[0m\e[1G ingress-gateway should have healthy endpoints for s1\e[K\e[75G 6/12\e[2G\e[31;1m\e[1G ✗ ingress-gateway should have healthy endpoints for s1\e[K
\e[0m\e[31;22m (from function `assert_upstream_has_endpoints_in_status' in file /workdir/primary/bats/helpers.bash, line 385,
```
versus
```
Running primary verification step for case-ingress-gateway-multiple-services...
1..12
ok 1 ingress proxy admin is up on :20000
ok 2 s1 proxy admin is up on :19000
ok 3 s2 proxy admin is up on :19001
ok 4 s1 proxy listener should be up and have right cert
ok 5 s2 proxy listener should be up and have right cert
not ok 6 ingress-gateway should have healthy endpoints for s1
not ok 7 s1 proxy should have been configured with max_connections in services
ok 8 ingress-gateway should have healthy endpoints for s2
```