These changes are primarily for Consul's UI, where we want to be more
specific about the state a peering is in.
- The "initial" state was renamed to pending, and no longer applies to
peerings being established from a peering token.
- Upon request to establish a peering from a peering token, peerings
will be set as "establishing". This will help distinguish between the
two roles: the cluster that generates the peering token and the
cluster that establishes the peering.
- When marked for deletion, the peering state will be set to "deleting".
This way the UI can determine that a peering is being deleted from its
state rather than from the "DeletedAt" field.
Co-authored-by: freddygv <freddy@hashicorp.com>
* ui: Add peer searching and sorting
Initial name search and sort only, more to come here
* Remove old peerings::search component
* Use @model peers
* ui: Peer listing with dc/ns/partition/name based unique IDs and polling deletion (#13648)
* ui: Add peer repo with listing datasource
* ui: Use data-loader component to use the data-source
* ui: Remove ember-data REST things and Route.model hook
* 10 second not 1 second poll
* Fill out Datacenter and Partition
* route > routeName
* Faker randomised mocks for peering endpoint
* ui: Adds initial peer detail page plus address tab (#13651)
This is the OSS portion of enterprise PR 2157.
It builds on the local blocking query work in #13438 to implement the
proxycfg.IntentionUpstreams interface using server-local data.
Also moves the ACL filtering logic from agent/consul into the acl/filter
package so that it can be reused here.
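As a loose illustration of why that filtering logic was made reusable, here is a hedged sketch of a server-local data source applying ACL filtering before returning results; the Authorizer, Upstream, and filterUpstreams names are hypothetical, not Consul's actual acl/filter API:

```go
package proxycfgglue

// Hypothetical stand-ins for the real ACL authorizer and result types.
type Authorizer interface {
	ServiceReadAllowed(name string) bool
}

type Upstream struct{ Service string }

// filterUpstreams keeps only the upstreams the requesting token can read.
// A server-local data source must apply the same filtering the RPC endpoints
// in agent/consul already perform, which is why that logic now lives in a
// shared package.
func filterUpstreams(authz Authorizer, upstreams []Upstream) []Upstream {
	filtered := make([]Upstream, 0, len(upstreams))
	for _, u := range upstreams {
		if authz.ServiceReadAllowed(u.Service) {
			filtered = append(filtered, u)
		}
	}
	return filtered
}
```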
This is the OSS portion of enterprise PR 2141.
This commit provides a server-local implementation of the `proxycfg.Intentions`
interface that sources data from streaming events.
It adds events for the `service-intentions` config entry type, and then consumes
event streams (via materialized views) for the service's explicit intentions and
any applicable wildcard intentions, merging them into a single list of intentions.
An alternative approach I considered was to consume _all_ intention events (via
`SubjectWildcard`) and filter out the irrelevant ones. This would admittedly
remove some complexity in the `agent/proxycfg-glue` package but at the expense
of considerable overhead from waking potentially many thousands of connect
proxies every time any intention is updated.
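A hedged sketch of the merge step, with placeholder types (the real views operate on service-intentions config entries and a more involved precedence sort):

```go
package proxycfgglue

// Intention is a placeholder for the entries produced by the two materialized
// views: one watching the service's explicit intentions, one watching any
// applicable wildcard intentions.
type Intention struct {
	SourceName string
	Action     string // "allow" or "deny"
}

// mergeIntentions combines the two streams into the single list handed to the
// proxy. Explicit intentions are listed first since more-specific sources take
// precedence; the actual precedence sorting in Consul is more involved.
func mergeIntentions(explicit, wildcard []Intention) []Intention {
	merged := make([]Intention, 0, len(explicit)+len(wildcard))
	merged = append(merged, explicit...)
	merged = append(merged, wildcard...)
	return merged
}
```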
This is the OSS portion of enterprise PR 2056.
This commit provides server-local implementations of the proxycfg.ConfigEntry
and proxycfg.ConfigEntryList interfaces that source data from streaming events.
It makes use of the LocalMaterializer type introduced for peering replication,
adding the necessary support for authorization.
It also adds support for "wildcard" subscriptions (within a topic) to the event
publisher, as this is needed to fetch service-resolvers for all services when
configuring mesh gateways.
Currently, events will be emitted for just the ingress-gateway, service-resolver,
and mesh config entry types, as these are the only entries required by proxycfg.
The events will be emitted on the IngressGateway, ServiceResolver, and MeshConfig
topics respectively.
Though these events will only be consumed "locally" for now, they can also be
consumed via the gRPC endpoint (confirmed using grpcurl) so using them from
client agents should be a case of swapping the LocalMaterializer for an
RPCMaterializer.
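To make the wildcard-subscription idea concrete, here is a rough sketch with locally defined stand-in types (the real subscription types live in agent/consul/stream and differ from this):

```go
package proxycfgglue

// Topic is a stand-in for the event publisher's topic type.
type Topic string

const (
	TopicIngressGateway  Topic = "IngressGateway"
	TopicServiceResolver Topic = "ServiceResolver"
	TopicMeshConfig      Topic = "MeshConfig"
)

// SubjectWildcard requests events for every subject within a topic, which is
// what a mesh gateway needs to watch service-resolvers for all services.
const SubjectWildcard = "*"

// SubscribeRequest is an illustrative shape of a subscription; the ACL token
// is what the LocalMaterializer's added authorization support checks before
// delivering events.
type SubscribeRequest struct {
	Topic   Topic
	Subject string // a specific config entry name, or SubjectWildcard
	Token   string
}

// subscribeAllServiceResolvers shows the wildcard subscription a mesh
// gateway's config would use.
func subscribeAllServiceResolvers(token string) SubscribeRequest {
	return SubscribeRequest{
		Topic:   TopicServiceResolver,
		Subject: SubjectWildcard,
		Token:   token,
	}
}
```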
For initial cluster peering TProxy support we consider all imported services of a partition to be potential upstreams.
We leverage the VirtualIP table because it stores plain service names (e.g. "api", not "api-sidecar-proxy").
When the protocol is http-like, and an intention has a peered source,
then the normal RBAC mTLS SAN field check is replaced with a joint check:
the mTLS SAN field must be the service's local mesh gateway leaf cert
AND
the first XFCC header (from the MGW) must have a URI field that matches the original intention source.
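A hedged Go sketch of that combined check; in practice this is compiled into Envoy RBAC filter rules rather than evaluated in Go, and the function and parameter names here are illustrative:

```go
package rbacsketch

import "strings"

// allowPeeredHTTPRequest sketches the joint check for http-like traffic
// arriving from a peer:
//   - downstreamSAN is the URI SAN from the mTLS connection, which must be the
//     local mesh gateway's leaf cert SpiffeID.
//   - xfcc is the X-Forwarded-Client-Cert header added by the mesh gateway.
//   - allowedSource is the SpiffeID of the original intention source.
func allowPeeredHTTPRequest(downstreamSAN, localMeshGatewaySpiffeID, xfcc, allowedSource string) bool {
	if downstreamSAN != localMeshGatewaySpiffeID {
		return false
	}
	// XFCC carries one comma-separated element per hop; the first element was
	// set by the mesh gateway and holds the original client's URI SAN.
	first := strings.Split(xfcc, ",")[0]
	for _, kv := range strings.Split(first, ";") {
		if uri, ok := strings.CutPrefix(kv, "URI="); ok {
			return uri == allowedSource
		}
	}
	return false
}
```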
Also:
- Update the regex program limit to be much higher than the teeny
defaults, since the RBAC regex constructions are more complicated now.
- Fix a few stray panics in xds generation.
When traversing an exported peered service, the discovery chain
evaluation at the other side may re-route the request to a variety of
endpoints. Furthermore we intend to terminate mTLS at the mesh gateway
for arriving peered traffic that is http-like (L7), so the caller needs
to know the mesh gateway's SpiffeID in that case as well.
The following new SpiffeID values will be shipped back in the peerstream
replication:
- tcp: all possible SpiffeIDs resulting from the service-resolver
component of the exported discovery chain
- http-like: the SpiffeID of the mesh gateway
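A minimal sketch of that selection rule, assuming illustrative names rather than Consul's actual peerstream types:

```go
package peerstreamsketch

// spiffeIDsForExport sketches which SpiffeIDs get shipped back in peerstream
// replication for an exported service.
func spiffeIDsForExport(protocol string, resolverTargetIDs []string, meshGatewayID string) []string {
	switch protocol {
	case "tcp":
		// tcp: the caller verifies the upstream's certificate end to end, so
		// it needs every SpiffeID the exported discovery chain's
		// service-resolvers could route to.
		return resolverTargetIDs
	default:
		// http-like (L7): mTLS is terminated at the exporting side's mesh
		// gateway, so its SpiffeID is what the caller must expect.
		return []string{meshGatewayID}
	}
}
```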
Adds fine-grained node.[node] entries to the index table, allowing blocking queries to return per-node indexes so they no longer return immediately when unrelated nodes/services are updated.
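Roughly, the per-node entry and the way a blocking query would use it look like the following sketch; the key format and helper names are assumptions, not the exact state-store code:

```go
package statesketch

import "fmt"

// nodeIndexName sketches the per-node entry added to the index table; the
// exact key format in Consul's state store may differ.
func nodeIndexName(node string) string {
	return fmt.Sprintf("node.%s", node)
}

// A blocking query scoped to one node can then watch
// max(index["nodes"], index["node.<node>"]) instead of the table-wide index,
// so writes to unrelated nodes do not bump the index it is watching.
func maxIndex(indexes map[string]uint64, keys ...string) uint64 {
	var highest uint64
	for _, k := range keys {
		if idx := indexes[k]; idx > highest {
			highest = idx
		}
	}
	return highest
}
```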
Co-authored-by: kisunji <ckim@hashicorp.com>
We have many indexer functions in Consul which take interface{} and type assert before building the index. We can use generics to get rid of the initial plumbing and pass around functions with better-defined signatures. This has two benefits: 1) less verbosity; 2) developers can read the argument types in memdb schemas without having to introspect the function for the type assertion.
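As an illustration of the pattern (names are hypothetical, not the exact Consul indexer plumbing):

```go
package indexersketch

import "fmt"

// ServiceNode is a placeholder record type being indexed.
type ServiceNode struct {
	Node    string
	Service string
}

// Before: every indexer accepted interface{} and had to type assert itself.
func indexServiceNodeLegacy(raw interface{}) ([]byte, error) {
	sn, ok := raw.(*ServiceNode)
	if !ok {
		return nil, fmt.Errorf("unexpected type %T", raw)
	}
	return []byte(sn.Node + "\x00" + sn.Service + "\x00"), nil
}

// After: the indexer takes the concrete type, so memdb schemas show the
// expected argument type in the signature.
func indexServiceNode(sn *ServiceNode) ([]byte, error) {
	return []byte(sn.Node + "\x00" + sn.Service + "\x00"), nil
}

// indexerSingle adapts a typed indexer to the interface{}-based API that memdb
// still expects, doing the type assertion exactly once in shared plumbing.
func indexerSingle[T any](fn func(T) ([]byte, error)) func(interface{}) ([]byte, error) {
	return func(raw interface{}) ([]byte, error) {
		v, ok := raw.(T)
		if !ok {
			return nil, fmt.Errorf("unexpected type %T", raw)
		}
		return fn(v)
	}
}
```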
* add peers route
* add peers to nav
* use regular app ui patterns peers template
* use empty state in peers UI
* mock `v1/peerings` request
* implement custom adapter/serializer for `peers`-model
* index request for peerings on peers route
* update peers list to show as proper list
* Use tailwind for easier styling
* Unique ids in peerings response mock-api
* Add styling peerings list
* Allow creating empty tooltip
To make it easier to iterate over a set of items where some items
should not display a tooltip and others should.
* Add tooltip Peerings:Badge
* Add undefined peering state badge
* Remove imported/exported services count peering
This won't be included in the initial version of the API response
* Implement Peerings::Search
* Make it possible to filter peerings by name
* Install ember-keyboard
For idiomatic handling of key-presses.
* Clear peering search input when pressing `Escape`
* use peers.index instead of peers for peerings listing
* Allow including peered services in services-query
* update services mock to add peerName
* add Consul::Peer component
To surface peering information on a resource
* add PeerName as attribute to service model
* surface peering information in service list
* Add tooltip to Consul::Peer
* Make services searchable by peer-name
* Allow passing optional query-params to href-to
* Add peer query-param to dc.services.show
* Pass peer as query-param services listing
* support optional peer route-param
* set peer-name undefined in services serializer when empty
* update peer route-param when navigating to peered service
* request service with peer-name if need be
* make sure to reset peer route-param when leaving service.show
* componentize services.peer-info
* surface peer info services.show
* make sure to reset peer route-param in main nav
* fix services breadcrumb services.intentions
we need to reset peer route-param here to not break the app
* surface peer when querying for it on service api call
* query for peer info service-instance api calls
* surface peer info service-instance.show
* Camelize peer attributes to match rest of app
* Refactor peers.index to reflect camelized attributes for peer
* Remove unused query-params services.show
* make logo href reset peer route-param
* Cleanup optional peer param query service-instance
* Use replace decorator instead of serializer for empty peerName
* make sure to only send peer info when correct qp is passed
* Always send qp for querying peers services request
* rename with-imports to with-peers
* Use css for peer-icon
* Refactor bucket-list component to surface peer-info
* Remove Consul::Peer component
This info is now displayed via the bucket-list component
* Fix bucket-list component to surface service again
* Update bucket-list docs to reflect peer-info addition
* Remove tailwind related styles
* Remove consul-tailwind package
We won't be using tailwind for now
* Fix typo badge scss
* Add with-import handling to mock-api nodes
* Add peerName to node attributes
* include peers when querying nodes
* reflect api updates node list mock
* Create consul::node::peer-info component
* Surface peer-info in nodes list
* Mock peer response for node request
* Make it possible to add peer-name to node request
* Update peer route-param when linking to node
* Reset peers route-param when leaving nodes.show
We need to reset the route-param to not introduce a bug - otherwise
subsequent node show requests would use the old peer query-param
* Add sourcePeer intentions api mock
* add SourcePeer attr to intentions model
* Surface peering info on intentions list
* Request peered intentions differently intentions.edit
* Handle peer info in intentions/exact mock
* Surface peering info intention view
* Add randomized peer data topology mock
* Surface peer info topology view
* fix service/peer-info styling
We aren't using tailwind anymore - we need to create a custom scss file
* Update peerings api mocks
* Update peerings::badge with updated styling
* cleanup intentions/exact mock
* Create watcher component to declaratively register polling
* Poll peers in background when on peers route
* use existing colors for peering-badge
* Add test for requesting service with `with-peers`-query
* add imported/exported count to peers model
* update mock-api to surface exported/imported count on peers
* Show exported/imported peers count on peers list
* Use translations for service import/export UI peers
* Make sure to ask for nodes with peers
* Add match-url step for easier url testing of service urls
* Add test for peer-name on peered services
* Add test for service navigation peered service
* Implement feature-flag handling
* Enable peering feature in test and development
* Redirect peers to services.index when feature-flag is disabled
* Only query for peers when feature is enabled
* Only show peers in nav when feature is enabled
* Componentize peering service count detail
* Handle non-state Peerings::Badge
* Use Peerings::ServiceCount in peerings list
* Only send peer query for peered service-instances.
* Add step to visit url directly
* add test for accessing peered service directly
* Remove unused service import peers.index
* Only query for peer when a peer is provided to the node-adapter
* fix tests