Deadlock scenario:
1. Due to scheduling, the state runner sends one snapshot into
snapCh and then attempts to send a second. The first send succeeds
because the channel is buffered, but the second blocks.
2. Separately, Manager.Watch is called by the xDS server after
getting a discovery request from Envoy. This function acquires the
manager lock and then blocks on receiving the CurrentSnapshot from
the state runner.
3. Separately, there is a Manager goroutine that reads the snapshots
from the channel in step 1. These reads are done to notify proxy
watchers, but they require holding the manager lock. This goroutine
goes to acquire that lock, but can't because it is held by step 2.
Now, the goroutine from step 3 is waiting on the one from step 2 to
release the lock. The goroutine from step 2 won't release the lock until
the goroutine in step 1 advances. But the goroutine in step 1 is waiting
for the one in step 3. Deadlock.
By making this send non-blocking, step 1 above can proceed. The coalesce
timer will be reset and a new valid snapshot will be delivered after it
elapses or when one is requested by xDS.
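A minimal sketch of the fix, assuming snapCh is a buffered channel (the
snapshot type and function name here are illustrative, not Consul's
actual code):

    // snapshot is a stand-in for the real snapshot type.
    type snapshot struct{ /* proxy configuration state */ }

    // deliverLatest performs a non-blocking send so the state runner
    // (step 1) can always make progress.
    func deliverLatest(snapCh chan snapshot, snap snapshot) bool {
        select {
        case snapCh <- snap:
            return true
        default:
            // Buffer full: skip this send instead of blocking; the
            // coalesce timer will deliver a fresh snapshot later.
            return false
        }
    }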
* Use NodeName, not Node, for cross-checking proxies/instances
* Also copy over the metadata to keep the correct cursor/index
* When we sync checks to the ProxyInstance, replace rather than
accumulate (see the sketch below)
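A rough sketch of that last point, assuming a ProxyInstance with a
Checks field (the type and field names here are assumptions, not
Consul's actual code):

    // Check and ProxyInstance are stand-ins for the real types.
    type Check struct{ /* health check definition */ }

    type ProxyInstance struct{ Checks []Check }

    // syncChecks replaces the instance's checks wholesale so stale
    // checks from earlier syncs do not accumulate.
    func (p *ProxyInstance) syncChecks(checks []Check) {
        p.Checks = checks // replace, not append(p.Checks, checks...)
    }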
This commit makes a number of changes that should make
TestLoad_FullConfig easier to work with, and make the test more like
real-world scenarios.
* use separate files in the testdata/ dir to store the config source.
Separate files are much easier to edit because editors can syntax
highlight json/hcl, and they make strings easier to find. Previously,
searching for a string would also match strings used in other tests.
* use the exported config.Load interface instead of internal NewBuilder
and BuildAndValidate.
* remove the tail config overrides, which are only necessary because
of how nonZero works.
This commit reduces the interface to Load() a bit, in preparation for
unexporting NewBuilder and having everything call Load.
The three arguments are reduced to a single argument by moving the other
two into the options struct.
The three return values are reduced to two by moving the RuntimeConfig
and Warnings into a LoadResult struct.
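A rough sketch of the reduced signature (only Load, LoadResult,
RuntimeConfig, and Warnings come from this change; the struct bodies
and the LoadOpts name are placeholders):

    // RuntimeConfig holds the resolved agent configuration.
    type RuntimeConfig struct{ /* ... */ }

    // LoadOpts absorbs what used to be the second and third arguments.
    type LoadOpts struct{ /* config file paths, overrides, ... */ }

    // LoadResult bundles what used to be two separate return values.
    type LoadResult struct {
        RuntimeConfig *RuntimeConfig
        Warnings      []string
    }

    // Load goes from three arguments and three return values to a
    // single argument and two return values.
    func Load(opts LoadOpts) (LoadResult, error) {
        // ... build and validate the configuration ...
        return LoadResult{}, nil
    }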
There are many places in the API where we receive a property set to
`null`, which can lead to defensive code deeper in the app to guard
against this, when usually we are expecting an array or for the
property to be undefined (via omitempty on the backend).
Previously we had two places where we would deal with this in the
serializer using our 'remove-null' util (KV and Intentions).
This new decorator lets you declaratively define this type of data
using a decorator, @NullValue([]), which replaces a null value with [].
@NullValue in turn uses a more generic @replace helper, which we
currently don't need on its own, but which would let you replace any
value with another, not just null.
An additional benefit here is that the guard/replacement is executed
lazily when we get the property instead of serializing all the values
when they come in via the API. On super large datasets, where we only
visualize part of the dataset (say in our scroll panes), this feels like
a good improvement on the previous approach.
We've always had this idea of being able to mark up information
semantically without thinking about what it should look like, then
applying our %h* placeholder styles to control what the information
should look like.
Back when we originally made our set of %h* placeholders, we tried to
follow Structure as much as possible, which defined the largest header
(which we thought would have been the h1 style) as a super large 3.5rem.
Therefore we made our set of %h* placeholders the same as Structure's,
beginning at a huge 3.5 size. We then overrode those sizes only in
Consul-specific CSS files, thinking the discrepancy was due to our CSS
existing before Structure did.
Lately we saw an extra clue in Structure: the extra-large 3.5 header
was called 'h0'.
This commit moves all our headers to use a zero-based scale, and
additionally uses our 3-digit scale as opposed to 1 digit (h100 vs h1),
similar to our color scales (note we don't use a hyphen, which we can
alter later if need be), which means we can insert additional steps
such as h150 if need be.
Additionally, we stop styling our headers globally (h1 { @extend
%h100; }). This means there is no reason not to use headers for marking
up content according to what it is rather than what it should look
like, and as a consequence we can be more purposeful in ordering h*
tags.
Lastly, we use the new scale over the entire codebase and update a
couple of places where we were using header tags because of what their
styling looked like rather than what their meaning/order was.
* CSS for moving from a horizontal main menu to a side/vertical one
* Add <App /> Component and rearrange <HashicorpConsul /> to use it
1. HashicorpConsul now uses <App />
2. <App /> is now translated and adds 'skip to main content' functionality
3. Adds the ember-in-viewport addon to visibly hide main navigation
items and take them out of focus/tabbing
4. Slight amends to the dom service while I was there
Previously the ServiceManager had to run a separate goroutine so that it could block on a channel
send/receive instead of a lock. Using this mutex with TryLock allows us to cancel a pending lock
acquisition when the serviceConfigWatch is stopped.
Without this change removing the ServiceManager.Start goroutine would not be possible because
when AddService is called it acquires the stateLock. While that lock is held, if there are
existing watches for the service, the old watch will be stopped, and the goroutine holding the
lock will attempt to wait for that watcher goroutine to exit.
If the goroutine is handling an update (serviceConfigWatch.handleUpdate) then it can block on
acquiring the stateLock and deadlock the agent. With this change the context is cancelled
and the goroutine exits instead of waiting on the stateLock.
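A minimal sketch of such a cancellable mutex, assuming a channel-based
implementation (these names are illustrative, not Consul's actual
types):

    import "context"

    // Mutex is a lock whose acquisition can be abandoned via a context.
    type Mutex struct{ ch chan struct{} }

    func NewMutex() *Mutex { return &Mutex{ch: make(chan struct{}, 1)} }

    // TryLock blocks until the lock is acquired or ctx is cancelled,
    // letting a stopped serviceConfigWatch abandon its lock attempt
    // instead of deadlocking against the goroutine holding stateLock.
    func (m *Mutex) TryLock(ctx context.Context) error {
        select {
        case m.ch <- struct{}{}:
            return nil
        case <-ctx.Done():
            return ctx.Err()
        }
    }

    func (m *Mutex) Unlock() { <-m.ch }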