Switch from /v1/agent/self to /v1/status/leader when checking if the test server has come up successfully in the waitForAPI function.
Previously, the test server was relying (probably not intentionally) on the default value of acl_enforce_version_8 in TestConfig, which was false. So if you created a test server with ACLs enabled, they would not be enforced and the server could come up quickly, because /v1/agent/self would return a 200 status almost as soon as the agent was running, most likely before leader election had finished.
Now that we have removed the acl_enforce_version_8 property (equivalent to it being true by default), a test server created with ACLs enabled needs to wait for leader election and for ACLs to be initialized before it gets a successful response from /v1/agent/self.
Note: with this change, the waitForAPI function no longer requires a 200 response status from the /v1/status/leader endpoint. This is because some tests, namely TestAPI_AgentLeave, only run clients, and on a client this endpoint returns a 500 status.
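For illustration, here is a minimal sketch of what such a readiness check can look like (the function shape, timings, and retry loop below are assumptions, not the actual waitForAPI code):

```go
package testutil

import (
	"fmt"
	"net/http"
	"time"
)

// waitForAPI polls /v1/status/leader until the agent answers at all. Any HTTP
// response is accepted, since client-only agents return a 500 from this endpoint.
func waitForAPI(httpAddr string) error {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://" + httpAddr + "/v1/status/leader")
		if err == nil {
			resp.Body.Close()
			return nil // the API is reachable, regardless of status code
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for the agent HTTP API at %s", httpAddr)
}
```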
* ui: Split up client/http and replace $.ajax
This splits up the client/http service further, in the following ways:
1. Connections are now split out into its own service
2. The transport is now split out into its own service that returns a
   listener-based HTTP transport
3. Various string parsing/stringifying functions are now split out into
utils
* Remove jQuery from our production build
* Move the coverage serving to the server.js file
* Self review amends
* Add X-Requested-With header
* Move some files around, externalize some functions
* Move connection tracking to use native Set
* Ensure HTTP parsing doesn't encode headers
In the future this will change so that all HTTP parsing is dealt with in one
place, hence the commented-out METHOD_PARSING etc.
* Start to fix up integration tests to use requestParams
A port can be sent in the Host header, as defined in the HTTP RFC, so for every
host we want to match traffic on we also add a variant of that host with the
listener port appended.
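As a rough sketch of that idea (the helper below is hypothetical and not taken from the test suite):

```go
package testutil

import "fmt"

// hostsWithPort returns, for every host we want to match, both the bare host
// and a "host:port" variant, since the Host header may legitimately carry the
// listener port.
func hostsWithPort(hosts []string, port int) []string {
	out := make([]string, 0, len(hosts)*2)
	for _, h := range hosts {
		out = append(out, h, fmt.Sprintf("%s:%d", h, port))
	}
	return out
}
```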
Also fix an issue where the Envoy integration tests were not running the
case-ingress-gateway-tls test.
PRs #7610 and #7962 changed the locations/URLs for the gateway docs,
which resulted in an HTTP 404 Not Found being returned when accessing
the previous URLs.
Update URLs for gateway docs to point to new URLs.
PR #8243 adds corresponding redirects on consul.io.
Fixes #7527
I want to highlight this, explain what I think the implications are, and make sure we are aware of them:
* `HTTPConnStateFunc` closes the connection when it is beyond the limit. `Close` does not block.
* `HTTPConnStateFuncWithDefault429Handler(10 * time.Millisecond)` blocks until the following is done (worst case):
1) `conn.SetDeadline` is set 10ms into the future, so that
2) `conn.Write(429error)` is guaranteed to time out after 10ms, so that the HTTP 429 can be written, and
3) `conn.Close` can happen
The implication of this change is that accepting any new connection is delayed by up to 10ms in the worst case, but only after a client has already reached the limit.
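A stdlib-only sketch of that worst-case sequence (illustrative only, not the actual handler code):

```go
package connlimitsketch

import (
	"net"
	"time"
)

// reject429 sketches the behaviour described above: once a client is over the
// connection limit, set a short write deadline, attempt to write a raw HTTP
// 429, then close the connection.
func reject429(conn net.Conn, writeTimeout time.Duration) {
	// Worst case, the Write blocks for writeTimeout before the deadline fires,
	// which is where the "up to 10ms delay" above comes from.
	_ = conn.SetWriteDeadline(time.Now().Add(writeTimeout))
	_, _ = conn.Write([]byte("HTTP/1.1 429 Too Many Requests\r\nConnection: close\r\n\r\n"))
	_ = conn.Close()
}
```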
A query made with AllowNotModifiedResponse and a MinIndex, where the
result has the same Index as MinIndex, will return an empty response
with QueryMeta.NotModified set to true.
Co-authored-by: Pierre Souchay <pierresouchay@users.noreply.github.com>
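Roughly, the caller-side contract looks like this (a sketch only: the types and the query function are hypothetical stand-ins, and only the field names MinIndex, AllowNotModifiedResponse, and NotModified come from the entry above):

```go
package example

// Hypothetical stand-in types used to illustrate the not-modified contract.
type QueryRequest struct {
	MinIndex                 uint64
	AllowNotModifiedResponse bool
}

type QueryMeta struct {
	Index       uint64
	NotModified bool
}

// refresh asks for anything newer than lastIndex and opts in to empty
// "not modified" replies; on NotModified it keeps the previously fetched data.
func refresh(lastIndex uint64, query func(QueryRequest) ([]byte, QueryMeta, error)) (uint64, []byte, error) {
	req := QueryRequest{
		MinIndex:                 lastIndex,
		AllowNotModifiedResponse: true,
	}
	body, meta, err := query(req)
	if err != nil {
		return lastIndex, nil, err
	}
	if meta.NotModified {
		// The result's Index equals MinIndex: the response body is empty,
		// so keep the data we already have.
		return lastIndex, nil, nil
	}
	return meta.Index, body, nil
}
```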
* Fix typos on command-line flags, update config opts
- Added anchors to https://github.com/hashicorp/consul/pull/8223
- Fixed typos
Updated to include config file options as well as CLI.
* Upgrade consul-api-double to version 3.1.3
* Create ConsulInstanceChecks component with test
* Redesign: Service Instances tab for a Node
* Update Node tests to work with the ConsulServiceInstancesList
* Style fix to the copy button in the composite-row details
* Delete helper and move logic to ConsulInstanceChecks component
* Delete unused component consul-node-service-list
In https://github.com/hashicorp/consul/pull/8065 we attempted to reduce
the amount of times that the UI requests the discovery chain endpoint
when connect is disabled on a datacenter.
Currently we can only tell if connect is disabled on a datacenter by
detecting a 500 error from a connect related endpoint.
In the above PR we mistakenly returned from a catch instead of
rethrowing the error, which meant that when a non-500 error was caught
the discovery chain data would be removed. At first glance this doesn't
seem like a big problem, since the endpoint is erroring anyway, but we
also receive a 0 error when we abort endpoints during blocking queries.
This means that in certain cases we can remove cached data for the
discovery chain and then delay reloading it via a blocking query.
This PR replaces the return with a throw, which means that everything is
dealt with correctly via the blocking query error detection/logic.
Also fix a bug where Consul could segfault if TLS was enabled but no client certificate was provided. I'm not sure how no one has reported this as a problem before.
The initial auto-encrypt CSR didn't contain the user-supplied IP and DNS SANs. This fixes that. We were also configuring a default :: IP SAN; this should be ::1 instead and has been fixed.
This provides the user with a better experience, confirming that the command
worked as expected. The output of the write/delete CLI commands is not
going to be used in a bash script; in fact, previously a success
produced no output, so we do not have to worry about spurious text being
injected into bash pipelines.
We'd assumed that TTL check outputs shouldn't be shown as it seemed like
they never had outputs, but they can be submitted with notes, which are
then converted into the output.
This unhides the output for TTLs and treats them exactly the same as
other healthchecks.
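For context, a small sketch of submitting a TTL check update with a note via the Go API client (the check ID and message are made up; the output argument becomes the check's Output, which is what now gets displayed):

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// "web-ttl" is a hypothetical TTL check ID; the note/output string is
	// stored on the check and shown like any other health check output.
	if err := client.Agent().UpdateTTL("web-ttl", "heartbeat received", "pass"); err != nil {
		log.Fatal(err)
	}
}
```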