* Only show instances and tags tabs for peered services
* Adapt show-with-slashes test to peering changes
Tests always have the peering feature turned on, and the default service
we load from the mock-api will be peered. This is why the topology
view of the service.show page will not be accessible in the updated
test; it will show the instances instead. This change does not alter
what the test is actually testing, so pointing it at the now
different URL is fine.
* don't show partition / peer at the same time in bucket-list
* use bucket-list in intentions table
* add bucket-list tests
* Simplify bucket list - match old behavior
Refactor the bucket-list component to be easier to grok and to match
how the old template-based approach worked, i.e. do not surface the
partition or namespace when it matches the passed nspace or partition
property.
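A sketch of that matching behavior (the argument and property names here are assumptions, not the component's actual API):

```js
import Component from '@glimmer/component';

// Illustrative sketch only, not the real bucket-list implementation
export default class BucketList extends Component {
  // Only surface the partition when it differs from the one passed in
  get partition() {
    const { item, partition } = this.args;
    return item.Partition !== partition ? item.Partition : undefined;
  }

  // Same idea for the namespace
  get nspace() {
    const { item, nspace } = this.args;
    return item.Namespace !== nspace ? item.Namespace : undefined;
  }
}
```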
* Update docs for bucket-list
* fix linting
* Only display dc dropdown when more than one dc is available
* Add wan federation message to dc dropdown
* Add test for conditionally displaying dc dropdown
* Move single datacenter indicator into datacenter selector
* Add `DATACENTERS` separator to dc dropdown
* "fix" unnecessary margin-top in dc dropdown
* Don't request nodes/services `with-peers` anymore
This will be automatic - no need for the query-param anymore.
* Return peering data from the mock-api services/nodes endpoints based on the feature flag
* Update tests to reflect removed with-peers query-param
* set up cookie for turning the peering feature flag on in the mock-api during testing
* Add missing `S` for renamed PEERING feature-flag cookie
* ui: Add peer searching and sorting
Initial name search and sort only, more to come here
* Remove old peerings::search component
* Use @model peers
* ui: Peer listing with dc/ns/partition/name based unique IDs and polling deletion (#13648)
* ui: Add peer repo with listing datasource
* ui: Use data-loader component to use the data-source
* ui: Remove ember-data REST things and Route.model hook
* 10 second not 1 second poll
* Fill out Datacenter and Partition
* route > routeName
* Faker randomised mocks for peering endpoint
* ui: Adds initial peer detail page plus address tab (#13651)
* add peers route
* add peers to nav
* use regular app UI patterns in peers template
* use empty state in peers UI
* mock `v1/peerings` request
* implement custom adapter/serializer for `peers`-model
* index request for peerings on peers route
* update peers list to show as proper list
* Use tailwind for easier styling
* Unique ids in peerings response mock-api
* Add styling peerings list
* Allow creating empty tooltip
To make it easier to iterate over a set of items where some items
should not display a tooltip and others should.
* Add tooltip Peerings:Badge
* Add undefined peering state badge
* Remove imported/exported services count peering
This won't be included in the initial version of the API response
* Implement Peerings::Search
* Make it possible to filter peerings by name
* Install ember-keyboard
For idiomatic handling of key-presses.
* Clear peering search input when pressing `Escape`
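A minimal sketch of what that can look like, assuming ember-keyboard's `@keyResponder`/`@onKey` decorator API (the component and action names here are hypothetical):

```js
import Component from '@glimmer/component';
import { keyResponder, onKey } from 'ember-keyboard';

// Hypothetical search component; the Escape handling is the point here
@keyResponder
export default class PeeringsSearch extends Component {
  @onKey('Escape')
  clearSearch() {
    // Hand an empty value back up so the peering list is unfiltered again
    this.args.onsearch('');
  }
}
```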
* use peers.index instead of peers for peerings listing
* Allow including peered services in services-query
* update services mock to add peerName
* add Consul::Peer component
To surface peering information on a resource
* add PeerName as attribute to service model
* surface peering information in service list
* Add tooltip to Consul::Peer
* Make services searchable by peer-name
* Allow passing optional query-params to href-to
* Add peer query-param to dc.services.show
* Pass peer as query-param services listing
* support optional peer route-param
* set peer-name undefined in services serializer when empty
* update peer route-param when navigating to peered service
* request service with peer-name if need be
* make sure to reset peer route-param when leaving service.show
* componentize services.peer-info
* surface peer info services.show
* make sure to reset peer route-param in main nav
* fix services breadcrumb on services.intentions
We need to reset the peer route-param here so as not to break the app
* surface peer when querying for it on service api call
* query for peer info service-instance api calls
* surface peer info service-instance.show
* Camelize peer attributes to match rest of app
* Refactor peers.index to reflect camelized attributes for peer
* Remove unused query-params services.show
* make logo href reset peer route-param
* Cleanup optional peer param query service-instance
* Use replace decorator instead of serializer for empty peerName
* make sure to only send peer info when correct qp is passed
* Always send qp for querying peers on services request
* rename with-imports to with-peers
* Use css for peer-icon
* Refactor bucket-list component to surface peer-info
* Remove Consul::Peer component
This info is now displayed via the bucket-list component
* Fix bucket-list component to surface service again
* Update bucket-list docs to reflect peer-info addition
* Remove tailwind related styles
* Remove consul-tailwind package
We won't be using tailwind for now
* Fix typo badge scss
* Add with-import handling to mock-api nodes
* Add peerName to node attributes
* include peers when querying nodes
* reflect api updates in node list mock
* Create consul::node::peer-info component
* Surface peer-info in nodes list
* Mock peer response for node request
* Make it possible to add peer-name to node request
* Update peer route-param when linking to node
* Reset peers route-param when leaving nodes.show
We need to reset the route-param to not introduce a bug - otherwise
subsequent node.show requests would be made with the old peer query-param
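In Ember terms this is the standard `resetController` hook; a sketch, with the `peer` param name taken from the commits above:

```js
import Route from '@ember/routing/route';

export default class NodesShowRoute extends Route {
  resetController(controller, isExiting) {
    if (isExiting) {
      // Drop the peer route-param so the next node.show
      // request doesn't reuse the stale peer
      controller.set('peer', undefined);
    }
  }
}
```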
* Add sourcePeer intentions api mock
* add SourcePeer attr to intentions model
* Surface peering info on intentions list
* Request peered intentions differently in intentions.edit
* Handle peer info in intentions/exact mock
* Surface peering info intention view
* Add randomized peer data topology mock
* Surface peer info topology view
* fix service/peer-info styling
We aren't using tailwind anymore - we need to create a custom scss file
* Update peerings api mocks
* Update peerings::badge with updated styling
* cleanup intentions/exact mock
* Create watcher component to declaratively register polling
* Poll peers in background when on peers route
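A minimal sketch of such a watcher, assuming a passed-in `fetch` action and optional `interval` (the real component wires into the app's data-source layer):

```js
import Component from '@glimmer/component';
import { registerDestructor } from '@ember/destroyable';

export default class Watcher extends Component {
  constructor(owner, args) {
    super(owner, args);
    // Re-fetch on an interval while the component is rendered
    const id = setInterval(() => args.fetch(), args.interval ?? 10000);
    // Stop polling when the component is torn down
    registerDestructor(this, () => clearInterval(id));
  }
}
```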
* use existing colors for peering-badge
* Add test for requesting service with `with-peers`-query
* add imported/exported count to peers model
* update mock-api to surface exported/imported count on peers
* Show exported/imported peers count on peers list
* Use translations for service import/export UI peers
* Make sure to ask for nodes with peers
* Add match-url step for easier url testing of service urls
* Add test for peer-name on peered services
* Add test for service navigation peered service
* Implement feature-flag handling
* Enable peering feature in test and development
* Redirect peers to services.index when feature-flag is disabled
* Only query for peers when feature is enabled
* Only show peers in nav when feature is enabled
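Roughly, the guard described in these commits could look like this (the `features` service and the `'peering'` flag name are assumptions about the app's feature-flag layer):

```js
import Route from '@ember/routing/route';
import { inject as service } from '@ember/service';

export default class PeersRoute extends Route {
  @service features;
  @service router;

  beforeModel() {
    if (!this.features.isEnabled('peering')) {
      // Fall back to the services listing when peering is disabled
      this.router.replaceWith('dc.services.index');
    }
  }
}
```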
* Componentize peering service count detail
* Handle non-state Peerings::Badge
* Use Peerings::ServiceCount in peerings list
* Only send peer query for peered service-instances.
* Add step to visit url directly
* add test for accessing peered service directly
* Remove unused service import peers.index
* Only query for peer when a peer is provided to the node-adapter
* fix tests
* Add %panel CSS component
* Deprecate old menu-panel component
* Various smallish tweaks to disclosure-menu
* Move all menus in the app chrome to use new DisclosureMenu
* Follow up CSS to move all app chrome menus to new components
* Don't prevent default any events from anchors
* Add a tick to click steps
This commit excludes the health of any service instances from the Node Listing page. This means that if you are viewing the Node listing page you will only see failing nodes if any Node Checks are failing; Service Instance health checks are no longer taken into account.
Co-authored-by: Jamie White <jamie@jgwhite.co.uk>
We noticed that the Service Instance listing on both Node and Service views was not taking into account proxy instance health. This fixes that up so that the small health check information in each Service Instance row includes the proxy instance's health checks when displaying Service Instance health (after all, if the proxy instance is unhealthy then so is the service instance that it should be proxying).
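The idea, as a hedged sketch in plain JS (the data shapes are assumptions):

```js
// An instance's displayed health combines its own checks with its
// sidecar proxy's checks: an unhealthy proxy means an unhealthy instance.
function instanceChecks(instance, proxyInstance) {
  return [...instance.Checks, ...(proxyInstance?.Checks ?? [])];
}
```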
* Refactor Consul::InstanceChecks with docs
* Add to-hash helper, which will return an object keyed by a prop
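Such a helper is simple enough to sketch in a few lines (the exact signature in the app may differ):

```js
// Key a list of objects by one of their properties (sketch only)
function toHash(items, prop) {
  return items.reduce((hash, item) => {
    hash[item[prop]] = item;
    return hash;
  }, {});
}

// e.g. toHash(instances, 'ServiceName').web => the instance named 'web'
```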
* Stop using/relying on ember-data type things, just use a hash lookup
* For the moment add an equivalent "just give me proxies" model prop
* Start stitching things together; this one requires an extra HTTP request
...previously we weren't even requesting proxy instances here
* Finish up the stitching
* Document Consul::ServiceInstance::List while I'm here
* Fix up navigation mocks Name > Service
The fix here is two fold:
- We shouldn't be providing the DataSource (which loads the data) with an id when we are creating from within a folder (in the buggy code we are providing the parentKey of the new KV you are creating)
- Being able to provide an empty id to the DataSource/KV repository and have that repository respond with a newly created object is more in line with the "new way of doing forms", hence the corresponding code to return a newly created ember-data object. As we fixed the actual bug in point 1 here, we need to make sure the repository responds with an empty object when the requested id is empty.
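A sketch of the repository behavior described in point 2 (the method and model names are assumptions, not the app's exact API):

```js
class KVRepository {
  async findBySlug({ id, dc }) {
    if (id === '') {
      // An empty id means "creating": respond with an empty, newly
      // created object instead of making an HTTP request
      return this.create({ Datacenter: dc });
    }
    return this.fetch({ id, dc }); // the normal HTTP-backed lookup
  }
  create(attrs) {
    return { Key: '', ...attrs };
  }
  async fetch({ id, dc }) {
    // stand-in for the real HTTP request
    return { Key: id, Datacenter: dc };
  }
}
```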
* Make sure the mocks reflect the requested partition/namespace
* Ensure partition is passed through to the HTTP adapter
* Pass AuthMethod object through to TokenSource in order to use Partition
* Change up docs and add potential improvements for future
* Pass the query partition back onto the response
* Make sure the OIDC callback mock returns a Partition
* Enable OIDC provider mock overwriting during acceptance testing
* Make sure we can enable partitions and SSO post-bootup (only required for now)
* Wire up oidc provider mocking
* Add SSO full auth flow acceptance tests
* Add some less fake API data
* Rename the models class so as to not be confused with JS Proxies
* Rearrange routlets slightly and add some initial outletFor tests
* Move away from a MeshChecks computed property and just use a helper
* Just use ServiceChecks for healthiness filtering for the moment
* Make TProxy cookie configurable
* Amend exposed paths and upstreams so they know about meta AND proxy
* Slight bit of TaggedAddresses refactor while I was checking for `meta` etc
* Document CONSUL_TPROXY_ENABLE
We recently changed the intentions form to take a full model of a dc rather than just the string identifier (so `{Name: 'dc', Primary: true}` vs just `'dc'`) in order to know whether the DC is the primary or not.
Unfortunately, we only did this on the global intentions page, not the per-service intentions page. This makes it impossible to save an intention from the per-service intentions page (whilst you can still save intentions from the global intentions page as normal).
The fix here pretty much copy/pastes the approach taken in the global intention edit template over to the per service intention edit template.
Tests have been added for creation in the per-service intentions section, which again are pretty much just copied from the global ones; unfortunately these didn't exist previously, which would have helped prevent this.
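Illustratively, the shape change is:

```js
// before: the form received only the datacenter's name
const dcBefore = 'dc-1';
// after: it receives a model carrying primary-ness, so the form can
// branch on whether the DC is the primary
const dcAfter = { Name: 'dc-1', Primary: true };
```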
- Move AuthDialog to use a Glimmer Component plus native named blocks/slots.
- Unravel the Auth* contextual components, there wasn't a lot of point having them as contextual components and now the AuthDialog (non-view-specific state machine component) can be used entirely separately from the view-specific components (AuthForm and AuthProfile).
- Move all the ACL related components that are in the main app chrome/navigation (our HashicorpConsul component) in our consul-acls sub package/module (which will eventually be loaded on demand only when ACLs are enabled)
* Update disco fixtures now we have partitions
* Add virtual-admin-6 fixture with partition 'redirects' and failovers
* Properly cope with extra partition segment for splitters and resolvers
* Make 'redirects' and failovers look/act consistently
* Fixup some unit tests
This commit applies all our new ways of doing things to Lock Sessions and their interactions with KV and Nodes. This is mostly around our new under-the-hood things, but I also took the opportunity to upgrade some of the CSS to reuse some of our CSS utils that have been made over the past few months (%csv-list and %horizontal-kv-list).
Also added (and worked on existing) documentation for Lock Session related components.
This sounds a bit 'backwards' as the end goal here is to add an improved UX to partitions, not namespaces. The reason for doing it this way is that Namespaces already has a type of 'improved UX' CRUD in that it has a one-to-many relationship in the form when saving your namespaces (the end goal for partitions). In moving Namespaces to use the same approach as partitions we:
- Ensure the new approach works with one-to-many forms.
- Test the new approach without writing a single test (we already have a bunch of tests for namespaces which are now testing the approach used by both namespaces and partitions)
Additionally:
- Fixes issue with missing default nspace in the nspace selector
- While checking that things were consistent between the two, I found a few minor problems with the Admin Partition CRUD, so fixed those up here also.
- Removed the old style Nspace notifications
Most HTTP API calls will use the default namespace of the calling token to additionally filter/select the data used for the response if one is not specified by the frontend.
The internal permissions/authorize endpoint does not do this (you can ask for permissions from different namespaces in one request).
Therefore this PR adds the token's default namespace in the frontend only to our calls to the authorize endpoint. I tried to do it in a place that made it feel like it's getting added in the backend, i.e. in a place which was least likely to ever require changing or thinking about.
Note: We are probably going to change this internal endpoint to also inspect the token's default namespace on the backend. At which point we can revert this commit/PR.
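A sketch of that frontend-only defaulting (the property names on the token are assumptions):

```js
// Mirror what most backend endpoints already do: fall back to the
// calling token's default namespace when none was specified
function authorizeParams(params, token) {
  return {
    ...params,
    ns: params.ns ?? token.Namespace ?? 'default',
  };
}
```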
* Add the same support for the token's default partition
* ui: Filter global intentions list by namespace and partition
Filters global intention listing by the current partition rather than trying to use a wildcard.
* ui: Ensure dc selector correctly shows the currently selected dc
* ui: Restrict access to non-default partitions in non-primaries (#11420)
This PR restricts access via the UI to only the default partition when in a non-primary datacenter i.e. you can only have multiple (non-default) partitions in the primary datacenter.
> In the future, this should all be moved to each individual repository now, which will mean we can finally get rid of this service.
This PR moves reconciliation to 'each individual repository'. I stopped short of getting rid of the service, but it's so small now we pretty much don't need it. I'd rather wait until I look at the equivalent DataSink service and see if we can get rid of both equivalent services together (this is also currently dependent on work soon to be merged).
Reconciliation of models (basically doing the extra work to clean up the ember-data store and bring our frontend 'truth' into line with the actual backend truth) when blocking/long-polling on different views/filters of data is slightly more complicated due to figuring out what should be cleaned up and what should be left in the store. This is especially apparent for KVs.
I built it in such a way as to hopefully make sure it will all make sense for the future. I also checked that this all worked nicely with all our models, even KV, which has never supported blocking queries. I left all that work in so that if we want to enable blocking queries/live updates for KV it now just involves deleting a couple of lines of code.
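The gist of reconciliation, as a sketch (the `uid` key and the `matchesView` predicate are assumptions):

```js
// After each blocking-query response, evict store records that match the
// current view/filter but no longer appear in the backend's response.
function reconcile(store, current, incoming, matchesView) {
  const seen = new Set(incoming.map((item) => item.uid));
  current
    .filter((item) => matchesView(item) && !seen.has(item.uid))
    .forEach((item) => store.unloadRecord(item));
}
```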
There is a tonne of old stuff that we can clean up here now (our 'fake headers' that we pass around) and I've added that to my list of things for a 'Big Cleanup PR' that will remove lots of code that we no longer require.