The rationale behind removing them is that all of our own code (xDS, builtin connect proxy) uses the cache notification mechanism, which ensures that the blocking fetch behind the scenes is always executing. Therefore, the only cases where fetching a certificate requires waiting are when 1) no request has ever been made for that cert before, or 2) you are retrieving the cert yourself via the v1/agent/connect/ca/leaf API.
In the first case, the refresh change doesn't alter the behavior. In the second case, the wait can be mitigated by using blocking queries with that API, which, just like the normal cache notification mechanism, will cause the background blocking fetch to be initiated so that new leaf certs are available as soon as they are needed.
If you are not using blocking queries, Envoy/xDS, or the builtin connect proxy, but are retrieving the certs yourself, then the HTTP endpoint might take a little longer to respond.
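For anyone in case 2, blocking queries against that endpoint follow the standard Consul pattern: take the X-Consul-Index from one response and pass it back as the index query parameter on the next request. A minimal sketch in Go (the agent address and the "web" service name are placeholders, not from this change):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// watchLeafCert loops a blocking query against the leaf cert endpoint so the
// cache's background fetch stays active and rotated certs arrive promptly.
func watchLeafCert() error {
	index := "0"
	for {
		url := fmt.Sprintf(
			"http://127.0.0.1:8500/v1/agent/connect/ca/leaf/web?index=%s", index)
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return err
		}
		fmt.Printf("leaf cert payload (%d bytes)\n", len(body))

		// Reuse the returned index so the next request blocks until the cert changes.
		index = resp.Header.Get("X-Consul-Index")
		if index == "" {
			index = "0"
		}
	}
}
```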
This also renames the RefreshTimeout field on the register options to QueryTimeout to more accurately reflect that it is used for any type that supports blocking queries.
The fallback method would still work, but it would get into a state where it let the certificate expire for 10s before getting a new one, and that new one was fetched over the less secure RPC endpoint.
This is also a pretty large refactoring of the auto encrypt code. I was going to write some tests around the certificate monitoring, but there was no way to configure a TestAgent such that a test exercising that functionality would run in less than an hour or two.
Moving the certificate monitoring into its own package will allow for dependency injection, and in particular mocking the cache types to control how they hand back certificates and how long those certificates should live. This will allow the main loop to be exercised far more than is possible with it coupled so tightly to the Agent.
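A minimal sketch of the seam this opens up, with entirely hypothetical names (the real package and types may differ):

```go
package certmon

import (
	"context"
	"time"
)

// CertUpdate is a stand-in for whatever the cache hands back for a leaf cert.
type CertUpdate struct {
	PEM       string
	ExpiresAt time.Time
}

// CertSource is the narrow interface the monitor would depend on, so tests
// can substitute a fake instead of a fully configured TestAgent cache.
type CertSource interface {
	Notify(ctx context.Context, ch chan<- CertUpdate) error
}

// Monitor only sees the interface, so a fake source that emits certificates
// expiring in milliseconds can drive the renewal loop through many rotations.
type Monitor struct {
	source CertSource
}

func (m *Monitor) Run(ctx context.Context) error {
	updates := make(chan CertUpdate, 1)
	if err := m.source.Notify(ctx, updates); err != nil {
		return err
	}
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case u := <-updates:
			_ = u // react to rotation: persist the cert, reload TLS config, etc.
		}
	}
}
```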
Closes #8294. Set overflow to hidden for both the x and y axis. This prevents overflow-y defaulting to auto and creating scrollbars. Given that text overflow is set to ellipsis, this doesn't change the UI functionality.
CheckAlias already had a waitGroup, but the Add() call was happening too late, which was causing a race in tests. The add must happen before the goroutine is started.
CheckHTTP did not have a waitGroup, so I added it to match CheckAlias.
It looks like a lot of the implementation could be shared, and it may not need all of the channel, waitgroup, and bool, but I will leave that refactor for another time.
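The shape of the fix, as a generic illustration rather than the actual check code:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	// Add must happen here, before the goroutine starts. If it were inside
	// the goroutine, a concurrent Wait() could observe a zero counter and
	// return early, which is the race the tests were catching.
	wg.Add(1)
	go func() {
		defer wg.Done()
		fmt.Println("check loop running")
	}()

	wg.Wait()
}
```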
When the race detector is enabled we see this test fail occasionally. The reordering of execution seems to make it possible for the snapshot splice to happen before any events are published to the topicBuffers.
We can handle this case in the test the same way it is handled by a subscription, by proceeding to the next event.
* ui: Add new consul-nspace-list component
* ui: Use new consul-nspace-list component
* Fix up other components to use linkable list-collection action
* ui: Remove some dead CSS
* Add components for KV form, KV list and Session form
* Pass through a @label attribute for a human label + don't require error
* Ignore transition aborted errors if you are re-transitioning
* Make old confirmation dialog more ember-like and tagless
* Make sure data-source and data-sink supports KV and sessions
* Use new components and delete all the things
* Fix up tests
* Make list component tagless
* Add component pageobject and fixup tests from that
* Add eslint warning back in
Running every test with the race detector would add significant time to
CI. That additional time won't provide much value, as many of the integration tests use
much of the same code.
For now we can run -race on some of the smaller packages. As we move
more code into smaller packages we should be able to add more packages
to the list that runs with '-race'.
For now this is running without parallelism, but we can enable that as
well when we need it.
boltdb fails the 'checkptr' check, which is automatically enabled by
'-race', so I've disabled checkptr as well.
* Add uri identifiers to all data source things and make them the same
1. Add uri identifier to data-source service
2. Make <EventSource /> and <DataSource /> as close as possible
3. Add extra `.closed` method to get a list of inactive/closed/closing
data-sources from elsewhere
* Make the connections clean up the least-worst connection when required
* Pass the uri/request id through all the things
* Better user erroring
* Make event sources close on error
* Allow <DataLoader /> data slot to be configurable
* Allow the <DataWriter /> removed state to be configurable
* Don't error if meta is undefined
* Stitch together all the repositories into the data-source/sink
* Use data.source over repositories
* Add missing <EventSource /> components
* Fix up the views/templates
* Disable all the old route based blocking query things
* We still need the repo for the mixin for the moment
* Don't default to default, default != ''
This change was mostly automated with the following
First generate a list of functions with:
git grep -o 'Store) \([^(]\+\)(tx \*txn' ./agent/consul/state | awk '{print $2}' | grep -o '^[^(]\+'
Then the list was curated a bit with trial/error to remove and add funcs
as necessary.
Finally the replacement was done with:
dir=agent/consul/state
file=${1-funcnames}
while read fn; do
  echo "$fn"
  # drop the (s *Store) receiver from the declaration
  sed -i -e "s/(s \*Store) $fn(/$fn(/" $dir/*.go
  # rewrite method calls s.Fn(...) and s.store.Fn(...) to plain calls
  sed -i -e "s/s\.$fn(/$fn(/" $dir/*.go
  sed -i -e "s/s\.store\.$fn(/$fn(/" $dir/*.go
done < $file
Making these plain functions (instead of methods) allows them to be used without introducing
an artificial dependency on the struct. Many of these will be called
from streaming Event processors, which do not have a store.
This change is being made ahead of the streaming work to reduce
the size of the streaming diff.
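As an illustration of the shape of that change (the names below are made up, not the real state store functions):

```go
package state

// Illustrative only; the real functions operate on the state store's
// memdb transactions and have more involved bodies.

type txn struct{} // stand-in for the state store's write transaction type

// Before the change this was a method whose receiver carried no state:
//
//	func (s *Store) ensureThingTxn(tx *txn, idx uint64) error
//
// After the change it is a plain function, so callers (such as streaming
// Event processors) only need a transaction, not a *Store.
func ensureThingTxn(tx *txn, idx uint64) error {
	_ = tx
	_ = idx
	return nil
}
```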
Move the subscription context to Next. context.Context should generally
never be stored in a struct because it makes that struct only valid
while the context is valid. This is rarely obvious at the call site.
Adds a forceClosed channel in place of the old context, and uses the new
context as a way for the caller to stop the Subscription blocking.
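A rough sketch of the pattern, with hypothetical names rather than the real stream package types:

```go
package stream

import (
	"context"
	"errors"
)

// ErrSubscriptionClosed is a hypothetical sentinel for a force-closed subscription.
var ErrSubscriptionClosed = errors.New("subscription closed")

// Event is a stand-in for the real event type.
type Event struct{}

// Subscription stores no context: the publisher force-closes it via a channel,
// and the caller controls blocking through the context passed to Next.
type Subscription struct {
	forceClosed chan struct{}
	events      chan Event
}

// Next blocks until an event is available, the caller's ctx is cancelled,
// or the subscription is force-closed by the publisher.
func (s *Subscription) Next(ctx context.Context) (Event, error) {
	select {
	case <-ctx.Done():
		return Event{}, ctx.Err()
	case <-s.forceClosed:
		return Event{}, ErrSubscriptionClosed
	case e := <-s.events:
		return e, nil
	}
}
```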
Remove some recursion from bufferItem.Next. The caller is already looping, so we can continue
in that loop instead of recursing. This ensures currentItem is updated immediately (which probably
does not matter in practice), and also removes the chance that we overflow the stack.
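A simplified sketch of that loop shape (the types and fields here are illustrative stand-ins, not the real buffer implementation):

```go
package stream

import "context"

// bufferItem here is a stripped-down stand-in: it holds some events and a
// channel that delivers the next item once it exists.
type bufferItem struct {
	Events []string
	next   chan *bufferItem
}

// Next blocks for the following item and returns it without recursing,
// even if that item turns out to be one the caller wants to skip.
func (i *bufferItem) Next(ctx context.Context) (*bufferItem, error) {
	select {
	case <-ctx.Done():
		return nil, ctx.Err()
	case n := <-i.next:
		return n, nil
	}
}

// nextEvents shows the caller's loop: current advances on every pass, so
// skipping an empty item is a plain continue rather than a recursive call,
// and a long run of skips cannot grow the stack.
func nextEvents(ctx context.Context, current *bufferItem) (*bufferItem, []string, error) {
	for {
		next, err := current.Next(ctx)
		if err != nil {
			return current, nil, err
		}
		current = next
		if len(next.Events) == 0 {
			continue
		}
		return current, next.Events, nil
	}
}
```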
NextNoBlock and FollowAfter do not need to handle bufferItem.Err, the caller already
handles it.
Moves filter to a method to simplify Next and to more explicitly separate filtering from looping.
Also improves some godoc.
Only unwrap bufferItem.Err when necessary.