* Allow RSA CA certs for consul and vault providers to correctly sign EC leaf certs.
* Ensure key type and bits are populated from the CA cert and clean up tests
* Add integration test and fix error when initializing secondary CA with RSA key.
* Add more tests, fix review feedback
* Update docs with key type config and output
* Apply suggestions from code review
Co-Authored-By: R.B. Boyer <rb@hashicorp.com>
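As a rough illustration of populating key type and bits from the CA cert itself, here is a minimal Go sketch (not the actual provider code; the helper name and return shape are assumptions) that inspects a parsed certificate's public key:

```go
package caexample

import (
	"crypto/ecdsa"
	"crypto/rsa"
	"crypto/x509"
	"fmt"
)

// keyTypeAndBits is an illustrative helper (not Consul's actual code) that
// derives the key type and bit length from a CA certificate's public key,
// which is the idea behind populating those fields from the cert.
func keyTypeAndBits(cert *x509.Certificate) (string, int, error) {
	switch pub := cert.PublicKey.(type) {
	case *ecdsa.PublicKey:
		return "ec", pub.Curve.Params().BitSize, nil
	case *rsa.PublicKey:
		return "rsa", pub.N.BitLen(), nil
	default:
		return "", 0, fmt.Errorf("unsupported public key type %T", pub)
	}
}
```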
Updated all .io Community sites to direct practitioners to the Forum as the first medium for communicating with other users and HashiCorp employees. Deleted the Gitter and Google Group links, as these will be phased out over the next few months. Fixed what appeared to be a typo in the page description. Chatted with Nic Jackson before submitting the PR.
* Changed Guides to Learn in the top nav and added UTM parameters to the guide index page
* Update website/source/docs/guides/index.html.md
* Update website/source/docs/guides/index.html.md
* Update website/source/layouts/layout.erb
A check may be set to become passing/critical only after a specified number of successive
check runs return passing/critical in a row. The status remains unchanged until
the threshold is reached.
This feature is available for HTTP, TCP, gRPC, Docker, and Monitor checks.
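For illustration, a minimal Go sketch of this threshold behavior (not the agent's actual implementation; type and field names are made up):

```go
package checkexample

// statusCounter sketches the "N identical results in a row before the status
// flips" behavior described above. Field and type names are illustrative.
type statusCounter struct {
	status                 string // currently visible status: "passing" or "critical"
	successBeforePassing   int
	failuresBeforeCritical int
	successes              int
	failures               int
}

// record applies one raw check result and only changes the visible status
// once the configured number of successive identical results is reached.
func (c *statusCounter) record(result string) string {
	switch result {
	case "passing":
		c.failures = 0
		c.successes++
		if c.status != "passing" && c.successes >= c.successBeforePassing {
			c.status = "passing"
		}
	case "critical":
		c.successes = 0
		c.failures++
		if c.status != "critical" && c.failures >= c.failuresBeforeCritical {
			c.status = "critical"
		}
	}
	return c.status
}
```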
Add text listing Consul's L7 features (via Envoy). Re-organize the text to
flow similarly to the Istio section.
Co-Authored-By: Judith Malnick <judith.patudith@gmail.com>
Fixes #2742
Previously the docs didn't clarify that if a server restarts as a client, force-leave won't remove the node from the Raft configuration. This is because the node, which is alive after the restart, will refute messages about it having left. These leave messages are in turn what trigger Consul's leader to remove a server from Raft.
Fixes: #5396
This PR adds a proxy configuration stanza called expose. These flags register
listeners in Connect sidecar proxies to allow requests to specific HTTP paths from outside of the node. This allows services to protect themselves by only
listening on the loopback interface, while still accepting traffic from
non-Connect-enabled services.
Under expose there is a boolean checks flag that automatically exposes all
registered HTTP and gRPC check paths.
This stanza also accepts a paths list to expose individual paths. The primary
use case for this functionality is to expose paths for third parties like
Prometheus or the kubelet.
Listeners for requests to exposed paths are configured dynamically at run
time. Any time a proxy or check can be registered, a listener can also be
created.
In this initial implementation, requests to these paths are not
authenticated or encrypted.
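As a sketch of the configuration shape described above (illustrative Go types; field names approximate the real stanza but are not copied from Consul's source):

```go
package exposeexample

// ExposeConfig mirrors the `expose` stanza described above (illustrative only).
type ExposeConfig struct {
	// Checks automatically exposes all registered HTTP and gRPC check paths.
	Checks bool
	// Paths lists individual HTTP paths to expose through the sidecar.
	Paths []ExposePath
}

// ExposePath describes one exposed path, e.g. a /metrics endpoint scraped by
// Prometheus or a health endpoint polled by the kubelet.
type ExposePath struct {
	// ListenerPort is the port the sidecar proxy listens on for this path.
	ListenerPort int
	// Path is the HTTP path to allow, such as "/metrics".
	Path string
	// LocalPathPort is the port the local service serves the path on.
	LocalPathPort int
	// Protocol is the protocol of the listener, e.g. "http" or "http2".
	Protocol string
}
```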
The fields in the certs are meant to hold the original binary
representation of this data, not some ASCII-encoded version.
The only time we should be colon-hex-encoding fields is for display
purposes or marshaling through non-TLS mediums (like RPC).
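For reference, a minimal Go sketch of the display-only colon-hex encoding mentioned above (the helper is illustrative, not the function used in the codebase):

```go
package certdisplay

import (
	"encoding/hex"
	"strings"
)

// colonHex renders raw bytes in the colon-separated hex form used for display
// (e.g. "be:ef:01"); the stored field itself keeps the raw binary value.
func colonHex(raw []byte) string {
	parts := make([]string, len(raw))
	for i, b := range raw {
		parts[i] = hex.EncodeToString([]byte{b})
	}
	return strings.Join(parts, ":")
}
```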
- fix instructions for CoreDNS (it updated)
- fix instructions for new component names
- recommend installing with the name 'consul'
- add disclaimer that catalog sync is not always required
- clean up example values.yaml files
* website: Update middleman-hashicorp container and Gemfile.lock
Time marches on, and so do security vulnerabilities in Nokogiri. So it's time
for a new container.
As with last time, here's a reminder for the next person who needs to update
this:
- You shouldn't just update the dependency in Gemfile.lock, because your build
times will go to heck as you compile Nokogiri from source on every run. So you
need an updated container with all the dependencies.
- To update the container, you need to push a new tag to the middleman-hashicorp
repo. TeamCity does the rest, and will ship a new container to Docker Hub
(unless its credentials are out of date, in which case go ask team-eng-serv).
- Once that's pushed:
- Update Makefile
- Update the Gemfile
- Delete Gemfile.lock
- `make website` until it comes up, then ctrl-C
- Commit the changes
* website: Specify a different json version in Gemfile.lock
The Consul website uses different containers for preview and deploy, and this
oddball JSON version was causing issues. This commit sacrifices a little bit
of preview startup speed for (hopefully) working deploys.
- Bootstrap escape hatches are OK.
- Public listener/cluster escape hatches are OK.
- Upstream listener/cluster escape hatches are not supported.
If an unsupported escape hatch is configured and the discovery chain is
activated, log a warning and act as if it were not configured.
Fixes #6160
* website: update the vs. envoy and proxies page
This is the second result on Google for "consul envoy" and
it seemed like it needed a bit of an upgrade to help clarify the
current state.
* Update website/source/intro/vs/proxies.html.md
Co-Authored-By: Judith Malnick <judith.patudith@gmail.com>
* Update website/source/intro/vs/proxies.html.md
Co-Authored-By: Judith Malnick <judith.patudith@gmail.com>
* Update website/source/intro/vs/proxies.html.md
Co-Authored-By: Judith Malnick <judith.patudith@gmail.com>
* Update website/source/intro/vs/proxies.html.md
Co-Authored-By: Judith Malnick <judith.patudith@gmail.com>
* Apply suggestions from code review
Co-Authored-By: Judith Malnick <judith.patudith@gmail.com>
Add a local-only parameter to operator keyring list requests to force queries to hit only local servers (no WAN traffic).
HTTP API: GET /operator/keyring?local-only=true
CLI: consul keyring -list -local-only
Sending the local-only flag with any non-GET/list request will result in an error.
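A minimal sketch of issuing a local-only keyring list query from Go, assuming a default local agent on 127.0.0.1:8500 and the standard /v1 API prefix:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Assumes a local agent on the default HTTP port; the address and the
	// /v1 prefix are assumptions about a stock setup.
	resp, err := http.Get("http://127.0.0.1:8500/v1/operator/keyring?local-only=true")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```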
The main change is that we no longer filter service instances by health,
preferring instead to render all results down into EDS endpoints in
Envoy and merely label the endpoints as HEALTHY or UNHEALTHY.
When OnlyPassing is set to true, we will force Consul checks in a
'warning' state to render as UNHEALTHY in Envoy.
Fixes #6171
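A rough Go sketch of the labeling decision described above (string labels stand in for Envoy's health enum; this is not the actual xDS code):

```go
package edsexample

// endpointHealth maps a Consul check status to the EDS health label described
// above: instances are no longer filtered out, only labeled. With onlyPassing
// set, a "warning" state is forced to UNHEALTHY.
func endpointHealth(checkStatus string, onlyPassing bool) string {
	switch checkStatus {
	case "passing":
		return "HEALTHY"
	case "warning":
		if onlyPassing {
			return "UNHEALTHY"
		}
		return "HEALTHY"
	default: // "critical" or anything unknown
		return "UNHEALTHY"
	}
}
```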
* Update go-bexpr to v0.1.1
This brings in:
- `in`/`not in` operators to do substring matching
- `matches`/`not matches` operators to perform regex string matching
* Add the capability to auto-generate the filtering selector ops tables for our docs
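A small, hedged example of the new operators with go-bexpr; the CreateEvaluator/Evaluate calls follow the library's documented usage, but treat the exact signatures for v0.1.1 as an assumption:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/go-bexpr"
)

type service struct {
	Name string
	Tags []string
}

func main() {
	datum := service{Name: "web-primary", Tags: []string{"prod", "v2"}}

	for _, expr := range []string{
		`"prod" in Tags`,              // containment / substring matching
		`Name matches "^web-"`,        // regex matching
		`Name not matches "-canary$"`, // negated regex matching
	} {
		eval, err := bexpr.CreateEvaluator(expr)
		if err != nil {
			log.Fatal(err)
		}
		match, err := eval.Evaluate(datum)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%-30s => %v\n", expr, match)
	}
}
```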
This fixes pathological cases where the write throughput and snapshot size are both so large that more than 10k log entries are written in the time it takes to restore the snapshot from disk. In that case, followers that restart can never catch up with leader replication again: they enter a loop of downloading a full snapshot and restoring it, only to find the snapshot is already out of date because the leader has truncated its logs, so a new snapshot is sent, and so on.
In general, if you need to adjust this, you are probably abusing Consul for purposes outside its design envelope and should reconsider your usage to reduce data size and/or write volume.