We noticed that the Service Instance listings on both the Node and Service views were not taking proxy instance health into account. This fixes that up so that the small health check summary in each Service Instance row includes the proxy instance's health checks when displaying Service Instance health (after all, if the proxy instance is unhealthy then so is the service instance it should be proxying).
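As a rough illustration of the intended behaviour (a minimal sketch, not the component's actual code; `combinedCheckInfo` and the data shapes are hypothetical names for this example), the row's health summary is now derived from both sets of checks:

```javascript
// Derive the small health summary for a Service Instance row from the
// instance's own checks plus its proxy instance's checks (if it has one).
// Check statuses follow Consul's convention: passing / warning / critical.
function combinedCheckInfo(instance, proxy) {
  const checks = [...instance.Checks, ...(proxy ? proxy.Checks : [])];
  return checks.reduce(
    (info, check) => {
      info[check.Status] = (info[check.Status] || 0) + 1;
      return info;
    },
    { passing: 0, warning: 0, critical: 0 }
  );
}
```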
* Refactor Consul::InstanceChecks with docs
* Add a to-hash helper, which returns an object keyed by a given prop (sketched below, after this list)
* Stop using/relying on ember-data-specific things; just use a hash lookup
* For the moment add an equivalent "just give me proxies" model prop
* Start stitching things together; this one requires an extra HTTP request (see the second sketch below)
...previously we weren't even requesting proxy instances here
* Finish up the stitching
* Document Consul::ServiceInstance::List while I'm here
* Fix up navigation mocks (Name > Service)
* ui: Add the most basic workspace root in /ui
* We already have a LICENSE file in the repository root
* Change directory path in build scripts ui-v2 -> ui
* Make yarn install flags configurable from elsewhere
* Minimal workspace root makefile
* Call the new docker specific target
* Update yarn in the docker build image
* Reconfigure the netlify target and move to the higher makefile
* Move ui-v2 -> ui/packages/consul-ui
* Change repo root to reflect new folder structure
* Temporarily don't hoist consul-api-double
* Fixup CI configuration
* Fixup lint errors
* Fixup Netlify target
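The to-hash helper mentioned above could look something like this minimal sketch (the name `toHash`, its signature, and the property names in the usage comment are assumptions for illustration, not the repository's actual API):

```javascript
// Turn an array into an object keyed by one of each item's properties,
// so consumers can do a plain O(1) hash lookup instead of relying on
// ember-data record lookups.
function toHash(items, prop) {
  return items.reduce((hash, item) => {
    hash[item[prop]] = item;
    return hash;
  }, {});
}

// e.g. key proxy instances by the service instance they front, then look
// one up per row (the property names here are illustrative):
// const proxies = toHash(proxyInstances, 'DestinationServiceID');
// const proxy = proxies[instance.Service.ID];
```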
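And a rough sketch of the extra HTTP request mentioned above. It assumes Consul's `/v1/health/service/:name` endpoint and the default `<name>-sidecar-proxy` naming convention for sidecar registrations; the UI's real data layer goes through its own adapters rather than raw `fetch`:

```javascript
// Fetch a service's instances and its proxy instances together, so the
// proxies' checks can be folded into each instance row's health summary.
async function fetchInstancesAndProxies(dc, service) {
  const [instances, proxies] = await Promise.all([
    fetch(`/v1/health/service/${service}?dc=${dc}`).then((res) => res.json()),
    fetch(`/v1/health/service/${service}-sidecar-proxy?dc=${dc}`).then((res) => res.json()),
  ]);
  return { instances, proxies };
}
```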
Renamed from ui-v2/app/components/consul/instance-checks/index.hbs