include all fields when fuzzing in tests
split tests by struct type
Ensure the new value for the field is different
fuzzer.Fuzz could produce the same value again in some cases.
Use a custom fuzz function for QueryOptions. That type is an embedded struct in the request types,
but only one of its fields is important to include in the cache key.
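As a rough illustration, assuming the google/gofuzz package that drives `fuzzer.Fuzz`, a custom fuzz function for an embedded options struct could look like the sketch below; the struct and field names are hypothetical stand-ins, not the exact types in this change.
```go
package example

import (
	fuzz "github.com/google/gofuzz"
)

// QueryOptions stands in for the real embedded options struct; Filter is a
// hypothetical stand-in for the one field that matters for the cache key.
type QueryOptions struct {
	Filter     string
	AllowStale bool
	// ... other fields deliberately excluded from the cache key
}

// newFuzzer installs a custom fuzz function so only the cache-key-relevant
// field of QueryOptions receives a random value; everything else stays zero.
func newFuzzer() *fuzz.Fuzzer {
	return fuzz.New().Funcs(
		func(o *QueryOptions, c fuzz.Continue) {
			o.Filter = c.RandString()
		},
	)
}
```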
Move enterpriseMetaField to an oss file so that we can change it in enterprise.
go test will only run tests in parallel within a single package. In this case the package test run time is exactly the same with or without t.Parallel() (~0.7s).
In general we should avoid t.Parallel(), as it causes a number of problems with `go test` not reporting failure messages correctly. I encountered one of these problems, which is what prompted this change. Since `t.Parallel` provides no benefit in this package, this commit removes it.
The change was automated with:
git grep -l 't.Parallel' | xargs sed -i -e '/t.Parallel/d'
* Add Ingress gateway config entry and other relevant structs (a sketch using the api package follows below)
* Add api package tests for ingress gateways
* Embed EnterpriseMeta into ingress service struct
* Add namespace fields to api module and test consul config write decoding
* Don't require a port for ingress gateways
* Add snakeJSON and camelJSON cases in command test
* Run Normalize on service's ent metadata
Sadly cannot think of a way to test this in OSS.
* Every protocol requires at least 1 service
* Validate ingress protocols
* Update agent/structs/config_entry_gateways.go
Co-authored-by: Chris Piraino <cpiraino@hashicorp.com>
Co-authored-by: Freddy <freddygv@users.noreply.github.com>
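For reference, a minimal sketch of writing an ingress-gateway config entry through the Go api package, assuming the types added in this change; the service name and port are illustrative.
```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	entry := &api.IngressGatewayConfigEntry{
		Kind: api.IngressGateway,
		Name: "ingress-gateway",
		Listeners: []api.IngressListener{
			{
				Port:     8080,
				Protocol: "http",
				// Every protocol requires at least one service.
				Services: []api.IngressService{{Name: "web"}},
			},
		},
	}

	if _, _, err := client.ConfigEntries().Set(entry, nil); err != nil {
		log.Fatal(err)
	}
}
```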
This is like a Möbius strip of code due to the fact that low-level components (serf/memberlist) are connected to high-level components (the catalog and mesh-gateways) in a twisty maze of references which makes it hard to dive into. With that in mind here's a high level summary of what you'll find in the patch:
There are several distinct chunks of code that are affected:
* new flags and config options for the server
* retry join WAN is slightly different
* retry join code is shared to discover primary mesh gateways from secondary datacenters
* because retry join logic runs in the *agent* and the results of that
operation for primary mesh gateways are needed in the *server* there are
some methods like `RefreshPrimaryGatewayFallbackAddresses` that must occur
at multiple layers of abstraction just to pass the data down to the right
layer.
* new cache type `FederationStateListMeshGatewaysName` for use in `proxycfg/xds` layers
* the function signature for RPC dialing picked up a new required field (the
node name of the destination)
* several new RPCs for manipulating a FederationState object:
`FederationState:{Apply,Get,List,ListMeshGateways}`
* 3 read-only internal APIs for debugging use to invoke those RPCs from curl
* raft and fsm changes to persist these FederationStates
* replication for FederationStates as they are canonically stored in the
Primary and replicated to the Secondaries.
* a special derivative of anti-entropy that runs in secondaries to snapshot
their local mesh gateway `CheckServiceNodes` and sync them into their upstream
FederationState in the primary (this works in conjunction with the
replication to distribute addresses for all mesh gateways in all DCs to all
other DCs)
* a "gateway locator" convenience object to make use of this data to choose
the addresses of gateways to use for any given RPC or gossip operation to a
remote DC. This gets data from the "retry join" logic in the agent and also
directly calls into the FSM.
* RPC (`:8300`) on the server sniffs the first byte of a new connection to
determine if it's actually doing native TLS. If so it checks the ALPN header
for protocol determination (just like how the existing system uses the
type-byte marker). A rough sketch of this sniffing step appears after this list.
* 2 new kinds of protocols are exclusively decoded via this native TLS
mechanism: one for ferrying "packet" operations (udp-like) from the gossip
layer and one for "stream" operations (tcp-like). The packet operations
re-use sockets (using length-prefixing) to cut down on TLS re-negotiation
overhead.
* the server instances specially wrap the `memberlist.NetTransport` when running
with gateway federation enabled (in a `wanfed.Transport`). The general gist is
that if it tries to dial a node in the SAME datacenter (deduced by looking
at the suffix of the node name) there is no change. If dialing a DIFFERENT
datacenter it is wrapped up in a TLS+ALPN blob and sent through some mesh
gateways to eventually end up in a server's :8300 port.
* a new flag when launching a mesh gateway via `consul connect envoy` to
indicate that the servers are to be exposed. This sets a special service
meta when registering the gateway into the catalog.
* `proxycfg/xds` notices this metadata blob to activate additional watches for
the FederationState objects as well as the location of all of the consul
servers in that datacenter.
* `xds:` if the extra metadata is in place additional clusters are defined in a
DC to bulk sink all traffic to another DC's gateways. For the current
datacenter we listen on a wildcard name (`server.<dc>.consul`) that load
balances all servers as well as one mini-cluster per node
(`<node>.server.<dc>.consul`)
* the `consul tls cert create` command got a new flag (`-node`) to help create
an additional SAN in certs that can be used with this flavor of federation.
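As referenced in the RPC (`:8300`) bullet above, here is a rough sketch of the first-byte sniffing plus ALPN dispatch. The handler names and ALPN identifiers are hypothetical, for illustration only, and do not reflect Consul's actual implementation.
```go
package example

import (
	"bufio"
	"crypto/tls"
	"net"
)

const tlsRecordTypeHandshake = 0x16 // first byte of a TLS ClientHello record

// bufferedConn routes reads through the bufio.Reader that holds the peeked byte.
type bufferedConn struct {
	net.Conn
	r *bufio.Reader
}

func (c *bufferedConn) Read(p []byte) (int, error) { return c.r.Read(p) }

// handleConn peeks at the first byte without consuming it. If it looks like a
// TLS handshake, the connection is terminated locally and the negotiated ALPN
// protocol decides how to route it; otherwise it falls through to the existing
// type-byte multiplexing.
func handleConn(conn net.Conn, tlsCfg *tls.Config) {
	br := bufio.NewReader(conn)
	first, err := br.Peek(1)
	if err != nil {
		conn.Close()
		return
	}
	wrapped := &bufferedConn{Conn: conn, r: br}

	if first[0] != tlsRecordTypeHandshake {
		handleLegacyTypeByte(wrapped) // existing type-byte marker path
		return
	}

	tconn := tls.Server(wrapped, tlsCfg)
	if err := tconn.Handshake(); err != nil {
		tconn.Close()
		return
	}
	switch tconn.ConnectionState().NegotiatedProtocol {
	case "wanfed-packet": // illustrative ALPN names, not Consul's real identifiers
		handlePacket(tconn)
	case "wanfed-stream":
		handleStream(tconn)
	default:
		tconn.Close()
	}
}

func handleLegacyTypeByte(net.Conn) {}
func handlePacket(net.Conn)         {}
func handleStream(net.Conn)         {}
```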
We set RawToString=true so that []uint8 => string when decoding an interface{}.
We set the MapType so that map[interface{}]interface{} decodes to map[string]interface{}.
Add tests to ensure that this doesn't break existing usages.
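A minimal sketch of that handle configuration, assuming the hashicorp/go-msgpack codec package:
```go
package example

import (
	"reflect"

	"github.com/hashicorp/go-msgpack/codec"
)

// newMsgpackHandle configures the two decode options described above:
// []uint8 decodes to string, and map[interface{}]interface{} decodes to
// map[string]interface{} when the target is an interface{}.
func newMsgpackHandle() *codec.MsgpackHandle {
	h := &codec.MsgpackHandle{RawToString: true}
	h.MapType = reflect.TypeOf(map[string]interface{}(nil))
	return h
}
```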
Fixes #7223
Fixes: #5396
This PR adds a proxy configuration stanza called expose. These flags register
listeners in Connect sidecar proxies to allow requests to specific HTTP paths from outside of the node. This allows services to protect themselves by only
listening on the loopback interface, while still accepting traffic from
non-Connect-enabled services.
Under expose there is a boolean checks flag that would automatically expose all
registered HTTP and gRPC check paths.
This stanza also accepts a paths list to expose individual paths. The primary
use case for this functionality would be to expose paths for third parties like
Prometheus or the kubelet.
Listeners for requests to exposed paths are configured dynamically at run
time. Any time a proxy or check can be registered, a listener can also be
created.
In this initial implementation requests to these paths are not
authenticated/encrypted.
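For illustration, a sketch of what the expose stanza looks like through the Go api package, assuming the ExposeConfig/ExposePath types added for this feature; the path and ports are illustrative, and the value would be set on the sidecar proxy's registration.
```go
package example

import "github.com/hashicorp/consul/api"

// exposeConfig sketches the expose stanza on a Connect proxy registration:
// Checks exposes every registered HTTP/gRPC check path, and Paths exposes an
// individual path (here a Prometheus-style /metrics endpoint).
func exposeConfig() api.ExposeConfig {
	return api.ExposeConfig{
		Checks: true,
		Paths: []api.ExposePath{
			{
				Path:          "/metrics", // path served by the local service
				LocalPathPort: 8080,       // port the service listens on (loopback)
				ListenerPort:  21500,      // port the proxy exposes externally
				Protocol:      "http",
			},
		},
	}
}
```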
* connect: validate upstreams and prevent duplicates
* Actually run Upstream.Validate() instead of ignoring it as dead code.
* Prevent two upstreams from declaring the same bind address and port.
It wouldn't work anyway.
* Prevent two upstreams from being declared that use the same
type+name+namespace+datacenter. Due to how the Upstream.Identity()
function worked this ended up mostly being enforced in xDS at use-time,
but it should be enforced more clearly at register-time (see the sketch
after this list).
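A hypothetical sketch of those register-time duplicate checks (trimmed stand-in struct, not the actual Validate implementation):
```go
package example

import (
	"fmt"
	"net"
	"strconv"
)

// Upstream is a trimmed stand-in for the real struct; validateUpstreams
// illustrates the two checks described above.
type Upstream struct {
	DestinationType      string
	DestinationNamespace string
	DestinationName      string
	Datacenter           string
	LocalBindAddress     string
	LocalBindPort        int
}

func validateUpstreams(upstreams []Upstream) error {
	seenBind := make(map[string]struct{})
	seenDest := make(map[string]struct{})

	for _, u := range upstreams {
		// No two upstreams may share a bind address and port.
		bind := net.JoinHostPort(u.LocalBindAddress, strconv.Itoa(u.LocalBindPort))
		if _, ok := seenBind[bind]; ok {
			return fmt.Errorf("upstreams cannot share the bind address %s", bind)
		}
		seenBind[bind] = struct{}{}

		// No two upstreams may share the same type+name+namespace+datacenter.
		dest := u.DestinationType + "/" + u.DestinationNamespace + "/" +
			u.DestinationName + "/" + u.Datacenter
		if _, ok := seenDest[dest]; ok {
			return fmt.Errorf("upstreams cannot duplicate destination %q", dest)
		}
		seenDest[dest] = struct{}{}
	}
	return nil
}
```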
* Ensure we MapWalk the proxy config in the NodeService and ServiceNode structs
This gets rid of some json encoder errors in the catalog endpoints
* Allow passing explicit bind addresses to envoy
* Move map walking to the ConnectProxyConfig struct
Any place where this struct gets JSON encoded will benefit, as opposed to having to implement the walk everywhere. (A rough sketch of the walk follows below.)
* Fail when a non-empty address is provided and not bindable
* camel case
* Update command/connect/envoy/envoy.go
Co-Authored-By: Paul Banks <banks@banksco.de>
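The map walking mentioned above converts msgpack-decoded map[interface{}]interface{} values into JSON-encodable map[string]interface{}; a simplified, hypothetical walker might look like this (not the actual helper):
```go
package example

import "fmt"

// walkMap recursively converts map[interface{}]interface{} keys to strings so
// that the resulting value can be encoded by encoding/json.
func walkMap(v interface{}) interface{} {
	switch t := v.(type) {
	case map[interface{}]interface{}:
		out := make(map[string]interface{}, len(t))
		for k, val := range t {
			out[fmt.Sprintf("%v", k)] = walkMap(val)
		}
		return out
	case map[string]interface{}:
		for k, val := range t {
			t[k] = walkMap(val)
		}
		return t
	case []interface{}:
		for i, val := range t {
			t[i] = walkMap(val)
		}
		return t
	default:
		return v
	}
}
```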
This allows addresses to be tagged at the service level similar to what we allow for nodes already. The address translation that can be enabled with the `translate_wan_addrs` config was updated to take these new addresses into account as well.
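A sketch of a service-level tagged address via the Go api package, assuming the TaggedAddresses/ServiceAddress fields added alongside this change; addresses and ports are illustrative.
```go
package example

import "github.com/hashicorp/consul/api"

// exampleRegistration registers a service with a WAN tagged address, mirroring
// what was previously only possible at the node level.
func exampleRegistration() *api.AgentServiceRegistration {
	return &api.AgentServiceRegistration{
		Name:    "web",
		Address: "10.0.0.12",
		Port:    8080,
		TaggedAddresses: map[string]api.ServiceAddress{
			"wan": {Address: "198.51.100.12", Port: 8080},
		},
	}
}
```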
* Add integration test for central config; fix central config WIP
* Add integration test for central config; fix central config WIP
* Set proxy protocol correctly and begin adding upstream support
* Add upstreams to service config cache key and start new notify watcher if they change.
This doesn't update the tests to pass though.
* Fix some merging logic get things working manually with a hack (TODO fix properly)
* Simplification to not allow enabling sidecars centrally - it makes no sense without upstreams anyway
* Test compile again and obvious ones pass. Lots of failures locally not debugged yet but may be flakes. Pushing up to see what CI does
* Fix up service manager and API test failures
* Remove the enable command since it no longer makes much sense without being able to turn on sidecar proxies centrally
* Remove version.go hack - will make integration test fail until release
* Remove unused code from commands and upstream merge
* Re-bump version to 1.5.0
* bugfix: use ServiceTags to generate cache key hash (see the sketch after this list)
* update unit test
* update
* remove print log
* Update .gitignore
* Completely deprecate ServiceTag field internally for clarity
* Add explicit test for CacheInfo cases
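A hypothetical sketch of hashing the plural ServiceTags field into a cache key, using mitchellh/hashstructure for illustration; this is not necessarily the exact mechanism used by CacheInfo.
```go
package example

import (
	"fmt"

	"github.com/mitchellh/hashstructure"
)

// cacheKeyFields is a hypothetical subset of a request struct; the point is
// that the plural ServiceTags slice (not the deprecated singular field)
// participates in the hash.
type cacheKeyFields struct {
	ServiceName string
	ServiceTags []string
}

func cacheKey(name string, tags []string) (string, error) {
	h, err := hashstructure.Hash(cacheKeyFields{
		ServiceName: name,
		ServiceTags: tags,
	}, nil)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h), nil
}
```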
* Add leader token upgrade test and fix various ACL enablement bugs
* Update the leader ACL initialization tests.
* Add a StateStore ACL tests for ACLTokenSet and ACLTokenGetBy* functions
* Advertise the agents acl support status with the agent/self endpoint.
* Make batch token upsert CAS’able to prevent consistency issues with token auto-upgrade
* Finish up the ACL state store token tests
* Finish the ACL state store unit tests
Also rename some things to make them more consistent.
* Do as much ACL replication testing as I can.
This PR is almost a complete rewrite of the ACL system within Consul. It brings the features more in line with other HashiCorp products. Obviously there is quite a bit left to do here, but most of it is related to docs, testing, and finishing the last few commands in the CLI. I will update the PR description and check off the todos as I finish them over the next few days/week.
Description
At a high level this PR is mainly to split ACL tokens from Policies and to split the concepts of Authorization from Identities. A lot of this PR is mostly just to support CRUD operations on ACLTokens and ACLPolicies. These in and of themselves are not particularly interesting. The bigger conceptual changes are in how tokens get resolved, how backwards compatibility is handled and the separation of policy from identity which could lead the way to allowing for alternative identity providers.
On the surface and with a new cluster the ACL system will look very similar to Nomad's. Both have tokens and policies. Both have local tokens. The ACL management APIs for both are very similar. I even ripped off Nomad's ACL bootstrap resetting procedure. There are a few key differences though.
Nomad requires token and policy replication, whereas Consul only requires policy replication, with token replication being opt-in. In Consul, however, local tokens only work when token replication is enabled.
All policies in Nomad are globally applicable. In Consul all policies are stored and replicated globally but can be scoped to a subset of the datacenters. This allows for more granular access management.
Unlike Nomad, Consul has legacy baggage in the form of the original ACL system. The ramifications of this are:
A server running the new system must still support other clients using the legacy system.
A client running the new system must be able to use the legacy RPCs when the servers in its datacenter are running the legacy system.
If the primary ACL DC's servers are running in legacy mode, that needs to act as a gate that keeps everything else in the entire multi-DC cluster running in legacy mode.
So not only does this PR implement the new ACL system, but it also has a legacy mode built in for when the cluster isn't ready for new ACLs. Detecting that new ACLs can be used is automatic and requires no configuration on the part of administrators. This process is detailed more in the "Transitioning from Legacy to New ACL Mode" section below.
* [Performance on large clusters] Checks only update services/nodes when they are really modified, to avoid too many updates on very large clusters
In a large cluster with a few thousand nodes, the anti-entropy
mechanism performs lots of changes (several per second) even when
there is no real change. This patch improves this in order
to increase Consul scalability when using many blocking requests on
health endpoints, for instance.
* [Performance for large clusters] Only updates index of service if service is really modified (see the sketch after this list)
* [Performance for large clusters] Only updates index of nodes if node is really modified
* Added comments / ensure IsSame() has clear semantics
* Avoid having a modified boolean; return nil directly if structures are the same
* Fixed unstable unit tests TestLeader_ChangeServerID
* Rewrite TestNode_IsSame() for better readability as suggested by @banks
* Rename ServiceNode.IsSame() into IsSameService() + added unit tests
* Do not duplicate TestStructs_ServiceNode_Conversions() and increase test coverage of IsSameService
* Clearer documentation in IsSameService
* Take into account ServiceProxy into ServiceNode.IsSameService()
* Fixed IsSameService() with all new structures
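A hypothetical sketch of the skip-if-unchanged pattern these commits introduce, as referenced above (trimmed stand-in structs, not the actual state store code):
```go
package example

// ServiceNode is a trimmed stand-in for the real struct; IsSameService is
// assumed to compare every field that matters (including ServiceProxy).
type ServiceNode struct {
	ServiceID   string
	ServiceTags []string
	ModifyIndex uint64
}

func (s *ServiceNode) IsSameService(other *ServiceNode) bool {
	if s.ServiceID != other.ServiceID || len(s.ServiceTags) != len(other.ServiceTags) {
		return false
	}
	for i, t := range s.ServiceTags {
		if other.ServiceTags[i] != t {
			return false
		}
	}
	return true
}

// ensureService only bumps the index (and therefore wakes blocking queries)
// when the incoming service actually differs from what is stored.
func ensureService(existing, svc *ServiceNode, nextIndex uint64, write func(*ServiceNode)) {
	if existing != nil && existing.IsSameService(svc) {
		return // nothing really changed: keep the old index, no watch fires
	}
	svc.ModifyIndex = nextIndex
	write(svc)
}
```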
* Added new Config for SidecarService in ServiceDefinitions.
* WIP: all the code needed for SidecarService is written... none of it is tested other than config :). Need API updates too.
* Test coverage for the new sidecarServiceFromNodeService method.
* Test API registration with SidecarService
* Recursive Key Translation 🤦
* Add tests for nested sidecar definition arrays to ensure they are translated correctly
* Use dedicated internal state rather than Service Meta for tracking sidecars for deregistration.
Add tests for deregistration.
* API struct for agent register. No other endpoint should be affected yet.
* Additional test cases to cover updates to API registrations
* Refactor Service Definition ProxyDestination.
This includes:
- Refactoring all internal structs used
- Updated tests for both deprecated and new input for:
- Agent Services endpoint response
- Agent Service endpoint response
- Agent Register endpoint
- Unmanaged deprecated field
- Unmanaged new fields
- Managed deprecated upstreams
- Managed new
- Catalog Register
- Unmanaged deprecated field
- Unmanaged new fields
- Managed deprecated upstreams
- Managed new
- Catalog Services endpoint response
- Catalog Node endpoint response
- Catalog Service endpoint response
- Updated API tests for all of the above too (both deprecated and new forms of register)
TODO:
- config package changes for on-disk service definitions
- proxy config endpoint
- built-in proxy support for new fields
* Agent proxy config endpoint updated with upstreams
* Config file changes for upstreams.
* Add upstream opaque config and update all tests to ensure it works everywhere. (A sketch of the new upstream form appears after this list.)
* Built in proxy working with new Upstreams config
* Command fixes and deprecations
* Fix key translation, upstream type defaults and a spate of other subtle bugs found with end to end test scripts...
TODO: tests still failing on one case that needs a fix. I think it's key translation for upstreams nested in Managed proxy struct.
* Fix translated keys in API registration.
* Fixes from docs
- omit some empty undocumented fields in API
- Bring back ServiceProxyDestination in Catalog responses to not break backwards compat - this was removed assuming it was only used internally.
* Documentation updates for Upstreams in service definition
* Fixes for tests broken by many refactors.
* Enable travis on f-connect branch in this branch too.
* Add consistent Deprecation comments to ProxyDestination uses
* Update version number on deprecation notices, and correct upstream datacenter field with explanation in docs
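As referenced in the upstream config bullet above, here is a sketch of registering a service with a sidecar whose proxy declares an upstream in the new form, via the Go api package; names, ports, and the opaque config key are illustrative.
```go
package example

import "github.com/hashicorp/consul/api"

// registerWebWithSidecar registers "web" together with a default-named sidecar
// proxy that declares one upstream in the new Upstreams form, including opaque
// per-upstream config.
func registerWebWithSidecar(client *api.Client) error {
	return client.Agent().ServiceRegister(&api.AgentServiceRegistration{
		Name: "web",
		Port: 8080,
		Connect: &api.AgentServiceConnect{
			SidecarService: &api.AgentServiceRegistration{
				Proxy: &api.AgentServiceConnectProxyConfig{
					Upstreams: []api.Upstream{
						{
							DestinationType: api.UpstreamDestTypeService,
							DestinationName: "db",
							LocalBindPort:   9191,
							// Opaque, proxy-specific configuration travels alongside.
							Config: map[string]interface{}{"connect_timeout_ms": 1000},
						},
					},
				},
			},
		},
	})
}
```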
* Implementation of Weights Data structures
Adding this data structure will allow us to resolve
issues #1088 and #4198
This new structure defaults to values:
```
{ Passing: 1, Warning: 0 }
```
This means a weight of 0 is used for a service in the warning state,
while a weight of 1 is used for a healthy service.
Thus it remains compatible with previous Consul versions.
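A sketch of setting these weights through the Go api package, assuming the AgentWeights field on service registrations; the values are illustrative.
```go
package example

import "github.com/hashicorp/consul/api"

// weightedRegistration registers a service with explicit weights so that
// healthy instances receive more SRV weight than instances in warning.
func weightedRegistration() *api.AgentServiceRegistration {
	return &api.AgentServiceRegistration{
		Name: "web",
		Port: 8080,
		Weights: &api.AgentWeights{
			Passing: 10, // healthy instances get ten times the weight
			Warning: 1,  // instances in warning still receive some traffic
		},
	}
}
```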
* Implemented weights for DNS SRV Records
* DNS properly supports agents with weight support while the server does not (backwards compatibility)
* Use a Warning value of 1 for Weights by default
When using the DNS interface with only_passing = false, all nodes
with a non-Critical healthcheck used to have a weight value of 1.
Having weight.Warning = 0 as the default value is probably
a bad idea as it breaks backwards compatibility.
Thus, we put a default value of 1 to be consistent with existing behaviour.
* Added documentation for new weight field in service description
* Better documentation about weights as suggested by @banks
* Return weight = 1 for unknown Check states as suggested by @banks
* Fixed typo (of -> or) in error message as requested by @mkeeler
* Fixed unstable unit test TestRetryJoin
* Fixed unstable tests
* Fixed wrong Fatalf format in `testrpc/wait.go`
* Added notes regarding DNS SRV lookup limitations on the number of instances
* Documentation fixes and clarification regarding SRV records with weights as requested by @banks
* Rephrase docs