Commit Graph

583 Commits

Author SHA1 Message Date
Derek Menteer 19f9de2224
Backport of Add grpc keepalive configuration into release/1.16.x (#19339) (#19346)
Add grpc keepalive configuration. (#19339)

Prior to the introduction of this configuration, gRPC keepalive messages were
sent only after 2 hours of inactivity on the stream. This posed issues in various
scenarios where the server-side xDS connection balancing was unaware that Envoy
instances had been uncleanly killed or force-closed, since the connections would
only be cleaned up after roughly 5 minutes of TCP timeouts. Setting this
config to a 30-second interval with a 20-second timeout ensures that a dead
xDS connection is closed within at most 50 seconds.
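
As a point of reference, the sketch below shows how such server-side keepalive settings are expressed with the standard grpc-go `keepalive` package; it only mirrors the 30-second/20-second figures above and is not the actual Consul configuration keys or wiring.

```go
package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	// Ping idle streams every 30s and drop the connection if no ack arrives
	// within 20s, so a dead xDS peer is detected in at most ~50s.
	_ = grpc.NewServer(
		grpc.KeepaliveParams(keepalive.ServerParameters{
			Time:    30 * time.Second,
			Timeout: 20 * time.Second,
		}),
	)
}
```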
2023-10-24 08:52:05 -05:00
Gerard Nguyen 3ad17ed6b8
Show latest config in /v1/agent/self (#18716)
* Show latest config in /v1/agent/self

* remove license string in backport
2023-09-12 09:38:17 -04:00
hc-github-team-consul-core 1eb12f79c0
Backport of fix: emit consul version metric on a regular interval into release/1.16.x (#18728)
* backport of commit a4f77878e74d5f4f5c737554ee0b7ebf15ae6872

* backport of commit 29da4339bff6a7d2269551f9e7d63d65b2575eea

---------

Co-authored-by: Semir Patel <semir.patel@hashicorp.com>
2023-09-08 18:55:29 +00:00
Dan Bond 042f4a33e1
Manual Backport of Add TCP+TLS Healthchecks into release/1.16.x (#18678)
Add TCP+TLS Healthchecks (#18381)

* Begin adding TCPUseTLS

* More TCP with TLS plumbing

* Making forward progress

* Keep on adding TCP+TLS support for healthchecks

* Removed too many lines

* Unit tests for TCP+TLS

* Update tlsutil/config.go

Co-authored-by: Samantha <hello@entropy.cat>

* Working on the tcp+tls unit test

* Updated the runtime integration tests

* Progress

* Revert this file back to HEAD

* Remove debugging lines

* Implement TLS enabled TCP socket server and make a successful TCP+TLS healthcheck on it

* Update docs

* Update agent/agent_test.go

Co-authored-by: Samantha <hello@entropy.cat>

* Update website/content/docs/ecs/configuration-reference.mdx

Co-authored-by: Samantha <hello@entropy.cat>

* Update website/content/docs/ecs/configuration-reference.mdx

Co-authored-by: Samantha <hello@entropy.cat>

* Update agent/checks/check.go

Co-authored-by: Samantha <hello@entropy.cat>

* Address comments

* Remove extraneous bracket

* Update agent/agent_test.go

Co-authored-by: Samantha <hello@entropy.cat>

* Update agent/agent_test.go

Co-authored-by: Samantha <hello@entropy.cat>

* Update website/content/docs/ecs/configuration-reference.mdx

Co-authored-by: Samantha <hello@entropy.cat>

* Update the mockTLSServer

* Remove trailing newline

* Address comments

* Fix merge problem

* Add changelog entry

---------

Co-authored-by: Samantha <hello@entropy.cat>
(cherry picked from commit 7ea986783d1bec75779225cd358bec042d6f020e)

Co-authored-by: Phil Porada <pgporada@users.noreply.github.com>
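
To make the feature concrete, here is a minimal, hypothetical sketch of the core of a TCP+TLS health check: dial the target over TCP, complete a TLS handshake, and treat any failure as a critical check. The address, server name, and helper name are illustrative only, not the actual Consul check implementation.

```go
package main

import (
	"crypto/tls"
	"log"
	"net"
	"time"
)

// checkTCPTLS is a hypothetical helper: the check passes only if both the TCP
// connection and the TLS handshake succeed within the timeout.
func checkTCPTLS(addr, serverName string, timeout time.Duration) error {
	conf := &tls.Config{ServerName: serverName}
	conn, err := tls.DialWithDialer(&net.Dialer{Timeout: timeout}, "tcp", addr, conf)
	if err != nil {
		return err // connect or handshake failure => check critical
	}
	return conn.Close()
}

func main() {
	if err := checkTCPTLS("backend.example.com:8443", "backend.example.com", 10*time.Second); err != nil {
		log.Println("tcp+tls check failed:", err)
	}
}
```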
2023-09-05 14:13:59 -07:00
hc-github-team-consul-core 8fbef46443
Backport of [NET-4958] Fix issue where envoy endpoints would fail to populate after snapshot restore into release/1.16.x (#18645)
* backport of commit 029a517e1cfe7103e7f620ef07fc3b2d6b995d9e

* backport of commit 5f43acf568856ba58e98344897fa8038bb1915eb

---------

Co-authored-by: Derek Menteer <derek.menteer@hashicorp.com>
2023-09-01 15:42:42 +00:00
Semir Patel 3fb8dda960
[BACKPORT] 1.16.x manual backport of OSS->CE branch (#18549) 2023-08-23 11:53:44 -05:00
hc-github-team-consul-core ea93c7b29c
Backport of Displays Consul version of each nodes in UI nodes section into release/1.16.x (#18113)
## Backport

This PR is auto-generated from #17754 to be assessed for backporting due to the inclusion of the label backport/1.16.


🚨
> **Warning** automatic cherry-pick of commits failed. If the first commit failed, you will see a blank no-op commit below. If at least one commit succeeded, you will see the cherry-picked commits up to, _not including_, the commit where the merge conflict occurred.

The person who merged in the original PR is: @WenInCode
This person should manually cherry-pick the original PR into a new backport PR, and close this one when the manual backport PR is merged in.

> merge conflict error: unable to process merge commit: "1c757b8a2c1160ad53421b7b8bd7f74b205c4b89", automatic backport requires rebase workflow



The below text is copied from the body of the original PR.

---

Fixes #17097: display the Consul version of each node in the UI nodes section.

@jkirschner-hashicorp @huikang @team @Maintainers

Updated the Consul version in the request to register Consul, added it as node metadata, and the UI now fetches this new metadata.

<img width="1512" alt="Screenshot 2023-06-15 at 4 21 33 PM"
src="https://github.com/hashicorp/consul/assets/3139634/94f7cf6b-701f-4230-b9f7-d8c4342d0737">

Also made this backward compatible and tested.

Backward compatible in this context means: if a Consul binary with these changes is deployed to one node and the UI is served from that node, the version is displayed not only for the current (upgraded) node but also for older nodes, provided they are Consul servers. For older client (non-server) nodes the version is not added to the node metadata, so no version is displayed for them. If an old node is a Consul server, its version is displayed, because the endpoint "v1/internal/ui/nodes?dc=dc1" already returned the version in the service meta, which the current UI changes make use of.

<img width="1480" alt="Screenshot 2023-06-16 at 6 58 32 PM"
src="https://github.com/hashicorp/consul/assets/3139634/257942f4-fbed-437d-a492-37849d2bec4c">




---

<details>
<summary> Overview of commits </summary>

- 931fdfc7ecdc26bb7cc20b698c5e14c1b65fcc6e
- b3e2ec1ccaca3832a088ffcac54257fa6653c6c1
- 8d0e9a54907039c09330c6cd7b9e761566af6856
- 04e5d88cca37821f6667be381c16aaa5958b5c92
- 28286a2e98f8cd66ef8593c2e2893b4db6080417
- 43e50ad38207952a9c4d04d45d08b6b8f71b31fe
- 0cf1b7077cdf255596254d9dc1624a269c42b94d
- 27f34ce1c2973591f75b1e38a81ccbe7cee6cee3
- 2ac76d62b8cbae76b1a903021aebb9b865e29d6e
- 3d618df9ef1d10dd5056c8b1ed865839c553a0e0
- 1c757b8a2c1160ad53421b7b8bd7f74b205c4b89
- 23ce82b4cee8f74dd634dbe145313e9a56c0077d
- 4dc1c9b4c5aafdb8883ef977dfa9b39da138b6cb
- 85a12a92528bfa267a039a9bb258170be914abf7
- 25d30a3fa980d130a30d445d26d47ef2356cb553
- 7f1d6192dce3352e92307175848b89f91e728c24
- 5174cbff84b0795d4cb36eb8980d0d5336091ac9

</details>

---------

Co-authored-by: Vijay Srinivas <vijayraghav22@gmail.com>
Co-authored-by: John Murret <john.murret@hashicorp.com>
Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>
2023-07-17 17:27:50 +00:00
hc-github-team-consul-core 5e3ad77fc0
Backport of feature - [NET - 4005] - [Supportability] Reloadable Configuration - enable_debug into release/1.16.x (#17969)
* backport of commit 10f500e895d92cc3691ade7b74a33db755d22039

* backport of commit e08c30910167fa2c6dd5a639b92338d2389457e4

* backport of commit 58638deeb3ac16b6556bba6a2b24034d7f886cce

* merge conf resolve

---------

Co-authored-by: Ashesh Vidyut <ashesh.vidyut@hashicorp.com>
Co-authored-by: Ashesh Vidyut <134911583+absolutelightning@users.noreply.github.com>
2023-06-30 18:40:20 +05:30
hc-github-team-consul-core 68e191dd50
Backport of Fix issue with streaming service health watches. into release/1.16.x (#17776)
* backport of commit 92bb96727fe781df8926f38e02db1d489d9163f4

* backport of commit 3ea67c04a6a9993b859d7e338c996b72a9793fb4

---------

Co-authored-by: Derek Menteer <derek.menteer@hashicorp.com>
2023-06-15 18:06:09 +00:00
hc-github-team-consul-core 2a51cb64dc
Backport of agent: remove agent cache dependency from service mesh leaf certificate management into release/1.16.x (#17704)
* backport of commit 558a8677ce0bd7ae01abda9652952a51f43a7c0c

* backport of commit 5cd06e00cc30eff34f88ab7992437b783ddaeeea

---------

Co-authored-by: R.B. Boyer <rb@hashicorp.com>
2023-06-13 16:12:43 +00:00
hc-github-team-consul-core 81eafc221b
Backport of Add writeAuditRPCEvent to agent_oss into release/1.16.x (#17608)
* backport of commit d77784ba51fd6a5d598ea2b87cb6e36e0fed8e72

* backport of commit f5a557dd7a5995094b3af96f1c522d49acfe795b

* backport of commit 1d782d63c437ab16e30d5bd00a6b8c3cbad08845

---------

Co-authored-by: Ronald Ekambi <ronekambi@gmail.com>
2023-06-07 19:01:05 -04:00
cskh 8a8913317f
chore: fix the error message format (#17554) 2023-06-02 13:37:44 +00:00
skpratt 82946aebf9
issue a warning if major FIPS assumptions are broken (#17524) 2023-05-31 09:01:44 -05:00
Dan Bond 5d07624e80
agent: don't write server metadata in dev mode (#17383)
Signed-off-by: Dan Bond <danbond@protonmail.com>
2023-05-16 02:50:27 -07:00
Dan Bond 6bb7782745
agent: prevent very old servers from re-joining a cluster with stale data (#17171)
* agent: configure server lastseen timestamp

Signed-off-by: Dan Bond <danbond@protonmail.com>

* use correct config

Signed-off-by: Dan Bond <danbond@protonmail.com>

* add comments

Signed-off-by: Dan Bond <danbond@protonmail.com>

* use default age in test golden data

Signed-off-by: Dan Bond <danbond@protonmail.com>

* add changelog

Signed-off-by: Dan Bond <danbond@protonmail.com>

* fix runtime test

Signed-off-by: Dan Bond <danbond@protonmail.com>

* agent: add server_metadata

Signed-off-by: Dan Bond <danbond@protonmail.com>

* update comments

Signed-off-by: Dan Bond <danbond@protonmail.com>

* correctly check if metadata file does not exist

Signed-off-by: Dan Bond <danbond@protonmail.com>

* follow instructions for adding new config

Signed-off-by: Dan Bond <danbond@protonmail.com>

* add comments

Signed-off-by: Dan Bond <danbond@protonmail.com>

* update comments

Signed-off-by: Dan Bond <danbond@protonmail.com>

* Update agent/agent.go

Co-authored-by: Dan Upton <daniel@floppy.co>

* agent/config: add validation for duration with min

Signed-off-by: Dan Bond <danbond@protonmail.com>

* docs: add new server_rejoin_age_max config definition

Signed-off-by: Dan Bond <danbond@protonmail.com>

* agent: add unit test for checking server last seen

Signed-off-by: Dan Bond <danbond@protonmail.com>

* agent: log continually for 60s before erroring

Signed-off-by: Dan Bond <danbond@protonmail.com>

* pr comments

Signed-off-by: Dan Bond <danbond@protonmail.com>

* remove unneeded todo

* agent: fix error message

Signed-off-by: Dan Bond <danbond@protonmail.com>

---------

Signed-off-by: Dan Bond <danbond@protonmail.com>
Co-authored-by: Dan Upton <daniel@floppy.co>
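
The sketch below illustrates the last-seen check the commits above describe, under the assumption that the agent persists a timestamp in its server metadata file and compares it against the new server_rejoin_age_max setting on startup; the names and error text are illustrative, not the actual Consul code.

```go
package main

import (
	"fmt"
	"time"
)

// checkServerLastSeen is a hypothetical version of the startup check: refuse
// to rejoin the cluster if this server was last seen too long ago.
func checkServerLastSeen(lastSeen time.Time, rejoinAgeMax time.Duration) error {
	if age := time.Since(lastSeen); age > rejoinAgeMax {
		return fmt.Errorf("server has not been seen for %s (server_rejoin_age_max is %s); refusing to rejoin with stale data",
			age.Round(time.Hour), rejoinAgeMax)
	}
	return nil
}

func main() {
	// A server last seen 10 days ago with a 7-day limit fails the check.
	if err := checkServerLastSeen(time.Now().Add(-10*24*time.Hour), 7*24*time.Hour); err != nil {
		fmt.Println(err)
	}
}
```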
2023-05-15 04:05:47 -07:00
Freddy 29d5811f0d
Update HCP bootstrapping to support existing clusters (#16916)
* Persist HCP management token from server config

We want to move away from injecting an initial management token into
Consul clusters linked to HCP. The reasoning is that by using a separate
class of token we can have more flexibility in terms of allowing HCP's
token to co-exist with the user's management token.

Down the line we can also more easily adjust the permissions attached to
HCP's token to limit its scope.

With these changes, the cloud management token is like the initial
management token in that it has the same global management policy and
if it is created it effectively bootstraps the ACL system.

* Update SDK and mock HCP server

The HCP management token will now be sent in a special field rather than
as Consul's "initial management" token configuration.

This commit also updates the mock HCP server to more accurately reflect
the behavior of the CCM backend.

* Refactor HCP bootstrapping logic and add tests

We want to allow users to link Consul clusters that already exist to
HCP. Existing clusters need care when bootstrapped by HCP, since we do
not want to do things like change ACL/TLS settings for a running
cluster.

Additional changes:

* Deconstruct MaybeBootstrap so that it can be tested. The HCP Go SDK
  requires HTTPS to fetch a token from the Auth URL, even if the backend
  server is mocked. By pulling the hcp.Client creation out we can modify
  its TLS configuration in tests while keeping the secure behavior in
  production code.

* Add light validation for data received/loaded.

* Sanitize initial_management token from received config, since HCP will
  only ever use the CloudConfig.ManagementToken.

* Add changelog entry
2023-04-27 22:27:39 +02:00
Michael Wilkerson 4edb1b553d
* added Sameness Group to proto files (#16998)
- added Sameness Group to config entries
- added Sameness Group to subscriptions

* generated proto files

* added Sameness Group events to the state store
- added test cases

* Refactored health RPC Client
- moved code that is common to rpcclient under rpcclient common.go. This will help set us up to support future RPC clients

* Refactored proxycfg glue views
- Moved views to rpcclient config entry. This will allow us to reuse this code for a config entry client

* added config entry RPC Client
- Copied most of the testing code from rpcclient/health

* hooked up new rpcclient in agent

* fixed documentation and comments for clarity
2023-04-14 09:24:46 -07:00
Poonam Jadhav c8d21de074
feat: add reporting config with reload (#16890) 2023-04-11 15:04:02 -04:00
Ronald dd0e8eec14
copyright headers for agent folder (#16704)
* copyright headers for agent folder

* Ignore test data files

* fix proto files and remove headers in agent/uiserver folder

* ignore deep-copy files
2023-03-28 14:39:22 -04:00
Eric Haberkorn 0351f48bfd
allow setting locality on services and nodes (#16581) 2023-03-10 09:36:15 -05:00
Eric Haberkorn 1d9a09f276
add agent locality and replicate it across peer streams (#16522) 2023-03-07 14:05:23 -05:00
R.B. Boyer b089f93292
proxycfg: ensure that an irrecoverable error in proxycfg closes the xds session and triggers a replacement proxycfg watcher (#16497)
Receiving an "acl not found" error from an RPC in the agent cache and the
streaming/event components will cause any request loops to cease under the
assumption that they will never work again if the token was destroyed. This
prevents log spam (#14144, #9738).

Unfortunately due to things like:

- authz requests going to stale servers that may not have witnessed the token
  creation yet

- authz requests in a secondary datacenter happening before the tokens get
  replicated to that datacenter

- authz requests from a primary TO a secondary datacenter happening before the
  tokens get replicated to that datacenter

The caller will get an "acl not found" *before* the token exists, rather than
just after. The machinery added above in the linked PRs will kick in and
prevent the request loop from looping around again once the tokens actually
exist.

For `consul-dataplane` usages, where xDS is served by the Consul servers
rather than the clients ultimately this is not a problem because in that
scenario the `agent/proxycfg` machinery is on-demand and launched by a new xDS
stream needing data for a specific service in the catalog. If the watching
goroutines are terminated it ripples down and terminates the xDS stream, which
CDP will eventually re-establish and restart everything.

For Consul client usages, the `agent/proxycfg` machinery is ahead-of-time
launched at service registration time (called "local" in some of the proxycfg
machinery) so when the xDS stream comes in the data is already ready to go. If
the watching goroutines terminate it should terminate the xDS stream, but
there's no mechanism to re-spawn the watching goroutines. If the xDS stream
reconnects it will see no `ConfigSnapshot` and will not get one again until
the client agent is restarted, or the service is re-registered with something
changed in it.

This PR fixes a few things in the machinery:

- there was an inadvertent deadlock in fetching snapshot from the proxycfg
  machinery by xDS, such that when the watching goroutine terminated the
  snapshots would never be fetched. This caused some of the xDS machinery to
  get indefinitely paused and not finish the teardown properly.

- Every 30s we now attempt to re-insert all locally registered services into
  the proxycfg machinery.

- When services are re-inserted into the proxycfg machinery we special case
  "dead" ones such that we unilaterally replace them rather that doing that
  conditionally.
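
As a rough illustration of the periodic re-insertion described in the last bullets, a loop of this shape re-syncs local services into the proxycfg manager every 30 seconds; the function signatures are hypothetical stand-ins, not the actual proxycfg API.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// syncLoop periodically re-inserts all locally registered services so that
// watchers that died (e.g. after an "ACL not found" error) get replaced.
func syncLoop(ctx context.Context, localServices func() []string, register func(string)) {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			for _, svc := range localServices() {
				register(svc) // dead entries are replaced unconditionally
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	syncLoop(ctx,
		func() []string { return []string{"web-sidecar-proxy"} },
		func(svc string) { fmt.Println("re-registering", svc) })
}
```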
2023-03-03 14:27:53 -06:00
Dan Upton 118ffb1e95
grpc: fix data race in balancer registration (#16229)
Registering gRPC balancers is thread-unsafe because they are stored in a
global map variable that is accessed without holding a lock. Therefore,
it's expected that balancers are registered _once_ at the beginning of
your program (e.g. in a package `init` function) and certainly not after
you've started dialing connections, etc.

> NOTE: this function must only be called during initialization time
> (i.e. in an init() function), and is not thread-safe.

While this is fine for us in production, it's challenging for tests that
spin up multiple agents in-memory. We currently register a balancer
per-agent which holds agent-specific state that cannot safely be shared.

This commit introduces our own registry that _is_ thread-safe, and
implements the Builder interface such that we can call gRPC's `Register`
method once, on start-up. It uses the same pattern as our resolver
registry where we use the dial target's host (aka "authority"), which is
unique per-agent, to determine which builder to use.
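
A stripped-down sketch of that pattern, assuming a plain map guarded by a mutex and keyed by the dial target's authority; the real registry implements gRPC's balancer.Builder interface, which is omitted here for brevity.

```go
package main

import (
	"fmt"
	"sync"
)

// registry is a hypothetical thread-safe stand-in for the per-authority
// balancer registry: Register is safe to call at any time, and lookups pick
// the builder for the agent identified by the dial target's host.
type registry struct {
	mu       sync.RWMutex
	builders map[string]string // authority -> builder (real code stores balancer.Builder values)
}

func (r *registry) Register(authority, builder string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.builders == nil {
		r.builders = make(map[string]string)
	}
	r.builders[authority] = builder
}

func (r *registry) Lookup(authority string) (string, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	b, ok := r.builders[authority]
	return b, ok
}

func main() {
	var r registry
	r.Register("agent-1.consul.local", "balancer-for-agent-1")
	fmt.Println(r.Lookup("agent-1.consul.local"))
}
```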
2023-02-28 10:18:38 +00:00
cskh 806d63e7fc
fix: add tls config to unix socket when https is used (#16301)
* fix: add tls config to unix socket when https is used

* unit test and changelog
2023-02-21 08:28:13 -05:00
Matt Keeler f3c80c4eef
Protobuf Refactoring for Multi-Module Cleanliness (#16302)
Protobuf Refactoring for Multi-Module Cleanliness

This commit includes the following:

* Moves all packages that were within proto/ to proto/private
* Rewrites imports to account for the packages being moved
* Adds in buf.work.yaml to enable buf workspaces
* Names the proto-public buf module so that we can override the Go package imports within proto/buf.yaml
* Bumps the buf version dependency to 1.14.0 (I was trying out the version to see if it would get around an issue - it didn't but it also doesn't break things and it seemed best to keep up with the toolchain changes)

Why:

* In the future we will need to consume other protobuf dependencies such as the Google HTTP annotations for openapi generation or grpc-gateway usage.
* There were some recent changes to have our own ratelimiting annotations.
* The two combined were not working when I was trying to use them together (attempting to rebase another branch).
* Buf workspaces should be the solution to the problem.
* Buf workspaces means that each module will have generated Go code that embeds proto file names relative to the proto dir and not the top level repo root.
* This resulted in proto file name conflicts in the Go global protobuf type registry.
* The solution to that was to add in a private/ directory into the path within the proto/ directory.
* That then required rewriting all the imports.

Is this safe?

* AFAICT yes
* The gRPC wire protocol doesn't seem to care about the proto file names (although the Go grpc code does tack on the proto file name as Metadata in the ServiceDesc)
* Other than imports, there were no changes to any generated code as a result of this.
2023-02-17 16:14:46 -05:00
Paul Banks 50c600f93b
Adding experimental support for a more efficient LogStore implementation (#16176)
* Adding experimental support for a more efficient LogStore implementation

* Adding changelog entry

* Fix go mod tidy issues
2023-02-08 16:50:22 +00:00
cskh 58e3ea5b52
Apply agent partition to load services and agent api (#16024)
* Apply agent partition to load services and agent api

changelog
2023-01-20 12:59:26 -05:00
Dan Upton 618deae657
xds: don't attempt to load-balance sessions for local proxies (#15789)
Previously, we'd begin a session with the xDS concurrency limiter
regardless of whether the proxy was registered in the catalog or in
the server's local agent state.

This caused problems for users who run `consul connect envoy` directly
against a server rather than a client agent, as the server's locally
registered proxies wouldn't be included in the limiter's capacity.

Now, the `ConfigSource` is responsible for beginning the session and we
only do so for services in the catalog.

Fixes: https://github.com/hashicorp/consul/issues/15753
2023-01-18 12:33:21 -06:00
Paul Glass 1bf1686ebc
Add new config_file_service_registration token (#15828) 2023-01-10 10:24:02 -06:00
Chris S. Kim 82d6d12a13
Output user-friendly name for anonymous token (#15884) 2023-01-09 12:28:53 -06:00
Dhia Ayachi f17bc5ed73
inject logger and create logdrop sink (#15822)
* inject logger and create logdrop sink

* init sink with an empty struct instead of nil

* wrap a logger instead of a sink and add a discard logger to avoid double logging

* fix compile errors

* fix linter errors

* Fix bug where log arguments aren't properly formatted

* Move log sink construction outside of handler

* Add prometheus definition and docs for log drop counter

Co-authored-by: Daniel Upton <daniel@floppy.co>
2023-01-06 11:33:53 -07:00
Dan Upton 006138beb4
Wire in rate limiter to handle internal and external gRPC calls (#15857) 2022-12-23 13:42:16 -06:00
John Murret 2a0aeb2349
Rate Limit Handler - ensure rate limiting is not in the code path when not configured (#15819)
* Rate limiting handler - ensure configuration has changed before modifying limiters

* Updating test to validate arguments to UpdateConfig

* Removing duplicate test.  Updating mock.

* Renaming NullRateLimiter to NullRequestLimitsHandler

* Rate Limit Handler - ensure rate limiting is not in the code path when not configured

* Update agent/consul/rate/handler.go

Co-authored-by: Dhia Ayachi <dhia@hashicorp.com>

* formatting handler.go

* Rate limiting handler - ensure configuration has changed before modifying limiters

* Updating test to validate arguments to UpdateConfig

* Removing duplicate test.  Updating mock.

* adding logging for when UpdateConfig is called but the config has not changed.

* Update agent/consul/rate/handler.go

Co-authored-by: Dhia Ayachi <dhia@hashicorp.com>

* Update agent/consul/rate/handler_test.go

Co-authored-by: Dan Upton <daniel@floppy.co>

* modifying existing variable name based on pr feedback

* updating a broken merge conflict;

Co-authored-by: Dhia Ayachi <dhia@hashicorp.com>
Co-authored-by: Dan Upton <daniel@floppy.co>
2022-12-20 15:00:22 -07:00
Semir Patel 1f82e82e04
Pass remote addr of incoming HTTP requests through to RPC(..) calls (#15700) 2022-12-14 09:24:22 -06:00
John Murret 700c693b33
adding config for request_limits (#15531)
* server: add placeholder glue for rate limit handler

This commit adds a no-op implementation of the rate-limit handler and
adds it to the `consul.Server` struct and setup code.

This allows us to start working on the net/rpc and gRPC interceptors and
config logic.

* Add handler errors

* Set the global read and write limits

* fixing multilimiter moving packages

* Fix typo

* Simplify globalLimit usage

* add multilimiter and tests

* exporting LimitedEntity

* Apply suggestions from code review

Co-authored-by: John Murret <john.murret@hashicorp.com>

* add config update and rename config params

* add doc string and split config

* Apply suggestions from code review

Co-authored-by: Dan Upton <daniel@floppy.co>

* use timer to avoid go routine leak and change the interface

* add comments to tests

* fix failing test

* add prefix with config edge, refactor tests

* Apply suggestions from code review

Co-authored-by: Dan Upton <daniel@floppy.co>

* refactor to apply configs for limiters under a prefix

* add fuzz tests and fix bugs found. Refactor reconcile loop to have a simpler logic

* make KeyType an exported type

* split the config and limiter trees to fix race conditions in config update

* rename variables

* fix race in test and remove dead code

* fix reconcile loop to not create a timer on each loop

* add extra benchmark tests and fix tests

* fix benchmark test to pass value to func

* server: add placeholder glue for rate limit handler

This commit adds a no-op implementation of the rate-limit handler and
adds it to the `consul.Server` struct and setup code.

This allows us to start working on the net/rpc and gRPC interceptors and
config logic.

* Set the global read and write limits

* fixing multilimiter moving packages

* add server configuration for global rate limiting.

* remove agent test

* remove added stuff from handler

* remove added stuff from multilimiter

* removing unnecessary TODOs

* Removing TODO comment from handler

* adding in defaulting to infinite

* add disabled status in there

* adding in documentation for disabled mode.

* make disabled the default.

* Add mock and agent test

* addig documentation and missing mock file.

* Fixing test TestLoad_IntegrationWithFlags

* updating docs based on PR feedback.

* Updating Request Limits mode to use int based on PR feedback.

* Adding RequestLimits struct so we have a nested struct in ReloadableConfig.

* fixing linting references

* Update agent/consul/rate/handler.go

Co-authored-by: Dan Upton <daniel@floppy.co>

* Update agent/consul/config.go

Co-authored-by: Dan Upton <daniel@floppy.co>

* removing the ignore of the request limits in JSON. adding builder logic to convert any read rate or write rate less than 0 to rate.Inf

* added conversion function to convert request limits object to handler config.

* Updating docs to reflect gRPC and RPC are rate limit and as a result, HTTP requests are as well.

* Updating values for TestLoad_FullConfig() so that they were different and discernable.

* Updating TestRuntimeConfig_Sanitize

* Fixing TestLoad_IntegrationWithFlags test

* putting nil check in place

* fixing rebase

* removing change for missing error checks.  will put in another PR

* Rebasing after default multilimiter config change

* resolving rebase issues

* updating reference for incomingRPCLimiter to use interface

* updating interface

* Updating interfaces

* Fixing mock reference

Co-authored-by: Daniel Upton <daniel@floppy.co>
Co-authored-by: Dhia Ayachi <dhia@hashicorp.com>
2022-12-13 13:09:55 -07:00
Dan Upton c73707ca3c
grpc: add rate-limiting middleware (#15550)
Implements the gRPC middleware for rate-limiting as a tap.ServerInHandle
function (executed before the request is unmarshaled).

Mappings between gRPC methods and their operation type are generated by
a protoc plugin introduced by #15564.
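
The sketch below shows the general shape of a rate limit installed as a tap.ServerInHandle in grpc-go; the limiter type and rejection message are assumptions for illustration, and the per-method operation-type mapping generated by the protoc plugin is not reproduced here.

```go
package main

import (
	"context"

	"golang.org/x/time/rate"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"google.golang.org/grpc/tap"
)

// newTapHandle returns a tap handle that rejects calls over the limit before
// the request message is even unmarshaled.
func newTapHandle(l *rate.Limiter) tap.ServerInHandle {
	return func(ctx context.Context, info *tap.Info) (context.Context, error) {
		if !l.Allow() {
			return nil, status.Error(codes.ResourceExhausted, "rate limit exceeded: "+info.FullMethodName)
		}
		return ctx, nil
	}
}

func main() {
	limiter := rate.NewLimiter(rate.Limit(100), 100) // 100 calls/s, burst of 100
	_ = grpc.NewServer(grpc.InTapHandle(newTapHandle(limiter)))
}
```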
2022-12-13 15:01:56 +00:00
John Murret fe0432ade5
agent: Fix assignment of error when auto-reloading cert and key file changes. (#15769)
* Adding the setting of errors missing in config file watcher code in agent.

* add changelog
2022-12-12 12:24:39 -07:00
Eric Haberkorn 5dd131fee8
Remove the `connect.enable_serverless_plugin` agent configuration option (#15710) 2022-12-08 14:46:42 -05:00
Dhia Ayachi 219a3c5bd3
Leadership transfer cmd (#14132)
* add leadership transfer command

* add RPC call test (flaky)

* add missing import

* add changelog

* add command registration

* Apply suggestions from code review

Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>

* add the possibility of providing an id to raft leadership transfer. Add few tests.

* delete old file from cherry pick

* rename changelog filename to PR #

* rename changelog and fix import

* fix failing test

* check for OperatorWrite

Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>

* rename from leader-transfer to transfer-leader

* remove version check and add test for operator read

* move struct to operator.go

* first pass

* add code for leader transfer in the grpc backend and tests

* wire the http endpoint to the new grpc endpoint

* remove the RPC endpoint

* remove non needed struct

* fix naming

* add mog glue to API

* fix comment

* remove dead code

* fix linter error

* change package name for proto file

* remove error wrapping

* fix failing test

* add command registration

* add grpc service mock tests

* fix receiver to be pointer

* use defined values

Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>

* reuse MockAclAuthorizer

* add documentation

* remove usage of external.TokenFromContext

* fix failing tests

* fix proto generation

* Apply suggestions from code review

Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>

* Apply suggestions from code review

* add more context in doc for the reason

* Apply suggestions from docs code review

Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>

* regenerate proto

* fix linter errors

Co-authored-by: github-team-consul-core <github-team-consul-core@hashicorp.com>
Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
2022-11-14 15:35:12 -05:00
Derek Menteer 0c07a36408
Prevent serving TLS via ports.grpc (#15339)
Prevent serving TLS via ports.grpc

We remove the ability to run the ports.grpc in TLS mode to avoid
confusion and to simplify configuration. This breaking change
ensures that any user currently using ports.grpc in an encrypted
mode will receive an error message indicating that ports.grpc_tls
must be explicitly used.

The suggested action for these users is to simply swap their ports.grpc
to ports.grpc_tls in the configuration file. If both ports are defined,
or if the user has not configured TLS for grpc, then the error message
will not be printed.
2022-11-11 14:29:22 -06:00
Kyle Schochenmaier 2b1e5f69e2
removes ioutil usage everywhere which was deprecated in go1.16 (#15297)
* update go version to 1.18 for api and sdk, go mod tidy
* removes ioutil usage everywhere which was deprecated in go1.16 in favour of io and os packages. Also introduces a lint rule which forbids use of ioutil going forward.
Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>
2022-11-10 10:26:01 -06:00
Derek Menteer 58f15db4c4 Allow peering endpoints to bypass verify_incoming. 2022-10-31 09:56:30 -05:00
Chris S. Kim e4c20ec190
Refactor client RPC timeouts (#14965)
Fix an issue where rpc_hold_timeout was being used as the timeout for non-blocking queries. Users should be able to tune read timeouts without fiddling with rpc_hold_timeout. A new configuration `rpc_read_timeout` is created.

Refactor some implementation from the original PR 11500 to remove the misleading linkage between RPCInfo's timeout (used to retry in case of certain modes of failures) and the client RPC timeouts.
2022-10-18 15:05:09 -04:00
Chris S. Kim 58c041eb6e
Merge pull request #13388 from deblasis/feature/health-checks_windows_service
Feature: Health checks windows service
2022-10-17 09:26:19 -04:00
Dan Upton 3b9297f95a
proxycfg: rate-limit delivery of config snapshots (#14960)
Adds a user-configurable rate limiter to proxycfg snapshot delivery,
with a default limit of 250 updates per second.

This addresses a problem observed in our load testing of Consul
Dataplane where updating a "global" resource such as a wildcard
intention or the proxy-defaults config entry could starve the Raft or
Memberlist goroutines of CPU time, causing general cluster instability.
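
A minimal sketch of such a delivery throttle, assuming the widely used golang.org/x/time/rate limiter; 250 updates per second is the default quoted above, and the surrounding delivery loop is simplified.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Throttle snapshot delivery so a global change (e.g. a wildcard
	// intention update) cannot flood every proxy watcher at once.
	limiter := rate.NewLimiter(rate.Limit(250), 1) // default: 250 updates/sec
	for i := 0; i < 3; i++ {
		if err := limiter.Wait(context.Background()); err != nil {
			return
		}
		fmt.Println("deliver config snapshot update", i)
	}
}
```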
2022-10-14 15:52:00 +01:00
Freddy 909fc33271
Merge pull request #14935 from hashicorp/fix/alias-leak 2022-10-13 16:31:15 -06:00
Riddhi Shah 474d9cfcdc
Service http checks data source for agentless proxies (#14924)
Adds another datasource for proxycfg.HTTPChecks, for use on server agents. Typically these checks are performed by local client agents and there is no equivalent of this in agentless (where servers configure consul-dataplane proxies).
Hence, the data source is mostly a no-op on servers but in the case where the service is present within the local state, it delegates to the cache data source.
2022-10-12 07:49:56 -07:00
Paul Glass 8cf430140a
gRPC server metrics (#14922)
* Move stats.go from grpc-internal to grpc-middleware
* Update grpc server metrics with server type label
* Add stats test to grpc-external
* Remove global metrics instance from grpc server tests
2022-10-11 17:00:32 -05:00
freddygv 9f0ab69aef Fix alias check leak
Previously, when an alias check was removed it would not be stopped nor
cleaned up from the associated aliasChecks map.

This means that any time an alias check was deregistered we would
leak a goroutine for CheckAlias.run() because the stopCh would never
be closed.

This issue mostly affects service mesh deployments on platforms where
the client agent is mostly static but proxy services come and go
regularly, since by default sidecars are registered with an alias check.
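
For context, the leak follows the common stop-channel pattern sketched below; the type and field names are hypothetical, but the fix is the same idea: close the check's stop channel and delete its map entry when the check is deregistered.

```go
package main

import (
	"fmt"
	"sync"
)

// aliasCheck is a hypothetical stand-in: its run loop exits only when stopCh
// is closed, so forgetting to close it on deregistration leaks the goroutine.
type aliasCheck struct{ stopCh chan struct{} }

func (c *aliasCheck) run() {
	<-c.stopCh
	fmt.Println("alias check goroutine exited")
}

type checkSet struct{ aliasChecks map[string]*aliasCheck }

func (s *checkSet) remove(id string) {
	if c, ok := s.aliasChecks[id]; ok {
		close(c.stopCh)           // stop the running goroutine (this was the missing step)
		delete(s.aliasChecks, id) // and drop it from the aliasChecks map
	}
}

func main() {
	s := &checkSet{aliasChecks: map[string]*aliasCheck{"web-sidecar": {stopCh: make(chan struct{})}}}
	c := s.aliasChecks["web-sidecar"]
	var wg sync.WaitGroup
	wg.Add(1)
	go func() { defer wg.Done(); c.run() }()
	s.remove("web-sidecar")
	wg.Wait()
}
```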
2022-10-10 16:42:29 -06:00
DanStough df94470e76 feat: xDS updates for peerings control plane through mesh gw 2022-10-07 08:46:42 -06:00