open-consul/agent/consul/leader.go

// Copyright (c) HashiCorp, Inc.
// SPDX-License-Identifier: MPL-2.0
package consul
import (
"context"
"fmt"
"net"
"reflect"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/armon/go-metrics"
"github.com/armon/go-metrics/prometheus"
"github.com/hashicorp/go-hclog"
"github.com/hashicorp/go-uuid"
"github.com/hashicorp/go-version"
"github.com/hashicorp/raft"
"github.com/hashicorp/serf/serf"
"golang.org/x/time/rate"
"github.com/hashicorp/consul/acl"
"github.com/hashicorp/consul/agent/metadata"
"github.com/hashicorp/consul/agent/structs"
"github.com/hashicorp/consul/agent/structs/aclfilter"
"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/lib"
"github.com/hashicorp/consul/logging"
"github.com/hashicorp/consul/types"
)
var LeaderSummaries = []prometheus.SummaryDefinition{
{
Name: []string{"leader", "barrier"},
Help: "Measures the time spent waiting for the raft barrier upon gaining leadership.",
},
{
Name: []string{"leader", "reconcileMember"},
Help: "Measures the time spent updating the raft store for a single serf member's information.",
},
{
Name: []string{"leader", "reapTombstones"},
Help: "Measures the time spent clearing tombstones.",
},
}
const (
newLeaderEvent = "consul:new-leader"
barrierWriteTimeout = 2 * time.Minute
defaultDeletionRoundBurst int = 5 // number of replication round bursts
defaultDeletionApplyRate rate.Limit = 10 // raft applies per second
)
var (
// caRootPruneInterval is how often we check for stale CARoots to remove.
caRootPruneInterval = time.Hour
// minCentralizedConfigVersion is the minimum Consul version in which centralized
// config is supported
minCentralizedConfigVersion = version.Must(version.NewVersion("1.5.0"))
)
// monitorLeadership is used to monitor if we acquire or lose our role
// as the leader in the Raft cluster. There is some work the leader is
// expected to do, so we must react to changes
func (s *Server) monitorLeadership() {
// We use the notify channel we configured Raft with, NOT Raft's
// leaderCh, which is only notified best-effort. Doing this ensures
// that we get all notifications in order, which is required for
// cleanup and to ensure we never run multiple leader loops.
raftNotifyCh := s.raftNotifyCh
var weAreLeaderCh chan struct{}
var leaderLoop sync.WaitGroup
for {
select {
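// Periodically report a gauge indicating whether this server currently believes it is the leader.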
case <-time.After(s.config.MetricsReportingInterval):
if s.IsLeader() {
metrics.SetGauge([]string{"server", "isLeader"}, float32(1))
} else {
metrics.SetGauge([]string{"server", "isLeader"}, float32(0))
}
case isLeader := <-raftNotifyCh:
switch {
case isLeader:
if weAreLeaderCh != nil {
s.logger.Error("attempted to start the leader loop while running")
continue
}
weAreLeaderCh = make(chan struct{})
leaderLoop.Add(1)
go func(ch chan struct{}) {
defer leaderLoop.Done()
s.leaderLoop(ch)
}(weAreLeaderCh)
s.logger.Info("cluster leadership acquired")
default:
if weAreLeaderCh == nil {
s.logger.Error("attempted to stop the leader loop while not running")
continue
}
s.logger.Debug("shutting down leader loop")
close(weAreLeaderCh)
leaderLoop.Wait()
weAreLeaderCh = nil
s.logger.Info("cluster leadership lost")
}
case <-s.shutdownCh:
return
}
}
}
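// leadershipTransfer attempts to hand raft leadership off to another
// server, retrying up to retryCount times before giving up.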
func (s *Server) leadershipTransfer() error {
retryCount := 3
for i := 0; i < retryCount; i++ {
future := s.raft.LeadershipTransfer()
if err := future.Error(); err != nil {
s.logger.Error("failed to transfer leadership attempt, will retry",
"attempt", i,
"retry_limit", retryCount,
"error", err,
)
} else {
s.logger.Info("successfully transferred leadership",
"attempt", i,
"retry_limit", retryCount,
)
return nil
}
}
return fmt.Errorf("failed to transfer leadership in %d attempts", retryCount)
}
// leaderLoop runs as long as we are the leader, performing various
// maintenance activities.
func (s *Server) leaderLoop(stopCh chan struct{}) {
stopCtx := &lib.StopChannelContext{StopCh: stopCh}
// Fire a user event indicating a new leader
payload := []byte(s.config.NodeName)
if err := s.LANSendUserEvent(newLeaderEvent, payload, false); err != nil {
s.logger.Warn("failed to broadcast new leader event", "error", err)
}
// Reconcile channel is only used once initial reconcile
// has succeeded
var reconcileCh chan serf.Member
establishedLeader := false
RECONCILE:
// Setup a reconciliation timer
reconcileCh = nil
interval := time.After(s.config.ReconcileInterval)
// Apply a raft barrier to ensure our FSM is caught up
start := time.Now()
barrier := s.raft.Barrier(barrierWriteTimeout)
if err := barrier.Error(); err != nil {
s.logger.Error("failed to wait for barrier", "error", err)
goto WAIT
}
metrics.MeasureSince([]string{"leader", "barrier"}, start)
// Check if we need to handle initial leadership actions
if !establishedLeader {
if err := s.establishLeadership(stopCtx); err != nil {
s.logger.Error("failed to establish leadership", "error", err)
// Immediately revoke leadership since we didn't successfully
// establish leadership.
s.revokeLeadership()
// attempt to transfer leadership. If successful it is
// time to leave the leaderLoop since this node is no
// longer the leader. If leadershipTransfer() fails, we
// will try to acquire it again after
// 5 seconds.
if err := s.leadershipTransfer(); err != nil {
s.logger.Error("failed to transfer leadership", "error", err)
interval = time.After(5 * time.Second)
goto WAIT
}
return
}
establishedLeader = true
defer s.revokeLeadership()
}
// Reconcile any missing data
if err := s.reconcile(); err != nil {
s.logger.Error("failed to reconcile", "error", err)
goto WAIT
}
// Initial reconcile worked, now we can process the channel
// updates
reconcileCh = s.reconcileCh
WAIT:
// Poll the stop channel to give it priority so we don't waste time
// trying to perform the other operations if we have been asked to shut
// down.
select {
case <-stopCh:
return
default:
}
// Periodically reconcile as long as we are the leader,
// or when Serf events arrive
for {
select {
case <-stopCh:
return
case <-s.shutdownCh:
return
case <-interval:
goto RECONCILE
case member := <-reconcileCh:
s.reconcileMember(member)
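// Reap expired tombstones in a background goroutine so the leader loop is not blocked.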
case index := <-s.tombstoneGC.ExpireCh():
go s.reapTombstones(index)
case errCh := <-s.reassertLeaderCh:
// We can get into this state when the initial
// establishLeadership has failed as well as the follow-up
// leadershipTransfer. Afterwards we will be waiting
// for the interval to trigger a reconciliation and can
// potentially end up here. There is no point in
// reasserting because this agent was never the leader in
// the first place.
if !establishedLeader {
errCh <- fmt.Errorf("leadership has not been established")
continue
}
// continue to reassert only if we previously were the
// leader, which means revokeLeadership followed by an
// establishLeadership().
s.revokeLeadership()
err := s.establishLeadership(stopCtx)
errCh <- err
// in case establishLeadership failed, we will try to
// transfer leadership. At this time raft thinks we are
// the leader, but consul disagrees.
if err != nil {
if err := s.leadershipTransfer(); err != nil {
// establishedLeader was true before, but this server
// has since revoked leadership and the leadership
// transfer also failed. That is why it stays in the
// leaderLoop, but establishedLeader now needs to be
// reset to false.
establishedLeader = false
interval = time.After(5 * time.Second)
goto WAIT
}
// leadershipTransfer was successful and it is
// time to leave the leaderLoop.
return
}
}
}
}
// establishLeadership is invoked once we become leader and are able
// to invoke an initial barrier. The barrier is used to ensure any
// previously inflight transactions have been committed and that our
// state is up-to-date.
func (s *Server) establishLeadership(ctx context.Context) error {
start := time.Now()
if err := s.initializeACLs(ctx); err != nil {
return err
}
// Hint the tombstone expiration timer. When we freshly establish leadership
// we become the authoritative timer, and so we need to start the clock
// on any pending GC events.
s.tombstoneGC.SetEnabled(true)
lastIndex := s.raft.LastIndex()
s.tombstoneGC.Hint(lastIndex)
// Setup the session timers. This is done both when starting up and when
// a leader failover happens. Since the timers are maintained by the leader
// node alone, effectively this means all the timers are renewed at the
// time of failover. The TTL contract is that the session will not be expired
// before the TTL, so expiring it later is allowable.
//
// This MUST be done after the initial barrier to ensure the latest Sessions
// are available to be initialized. Otherwise initialization may use stale
// data.
if err := s.initializeSessionTimers(); err != nil {
return err
}
if err := s.establishEnterpriseLeadership(ctx); err != nil {
return err
}
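// Ensure an autopilot configuration exists in the state store and let
// autopilot begin reconciling the set of servers now that we are leader.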
s.getOrCreateAutopilotConfig()
s.autopilot.EnableReconciliation()
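// Start the background routines that replicate config entries and
// federation state from the primary datacenter.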
s.startConfigReplication(ctx)
s.startFederationStateReplication(ctx)
s.startFederationStateAntiEntropy(ctx)
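// Start peering stream synchronization (when enabled) and the deferred
// deletion routine.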
if s.config.PeeringEnabled {
s.startPeeringStreamSync(ctx)
}
s.startDeferredDeletion(ctx)
if err := s.startConnectLeader(ctx); err != nil {
return err
}
// Attempt to bootstrap config entries. We wait until after starting the
// Connect leader tasks so we hopefully have transitioned to supporting
// service-intentions.
if err := s.bootstrapConfigEntries(s.config.ConfigEntryBootstrap); err != nil {
return err
}
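// Core leader setup is complete; mark this server as ready to serve
// consistent reads.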
s.setConsistentReadReady()
if s.config.LogStoreConfig.Verification.Enabled {
s.startLogVerification(ctx)
}
if s.config.Reporting.License.Enabled && s.reportingManager != nil {
s.reportingManager.StartReportingAgent()
}
s.logger.Debug("successfully established leadership", "duration", time.Since(start))
return nil
}
// revokeLeadership is invoked once we step down as leader.
// This is used to cleanup any state that may be specific to a leader.
func (s *Server) revokeLeadership() {
s.stopLogVerification()
// Disable the tombstone GC, since it is only useful as a leader
s.tombstoneGC.SetEnabled(false)
// Clear the session timers on either shutdown or step down, since we
// are no longer responsible for session expirations.
s.clearAllSessionTimers()
s.revokeEnterpriseLeadership()
s.stopDeferredDeletion()
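// Stop the remaining leader-only background routines and mark consistent
// reads as unavailable again.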
s.stopFederationStateAntiEntropy()
s.stopFederationStateReplication()
s.stopConfigReplication()
s.stopACLReplication()
s.stopPeeringStreamSync()
s.stopConnectLeader()
s.stopACLTokenReaping()
s.resetConsistentReadReady()
s.autopilot.DisableReconciliation()
s.reportingManager.StopReportingAgent()
}
// initializeACLs is used to set up the ACLs if we are the leader
// and need to do this.
func (s *Server) initializeACLs(ctx context.Context) error {
if !s.config.ACLsEnabled {
return nil
}
// Purge the cache, since it could've changed while we were not the
// leader.
s.ACLResolver.cache.Purge()
// Purge the auth method validators since they could've changed while we
// were not leader.
s.aclAuthMethodValidators.Purge()
// Remove any token affected by CVE-2019-8336
if !s.InPrimaryDatacenter() {
_, token, err := s.fsm.State().ACLTokenGetBySecret(nil, aclfilter.RedactedToken, nil)
if err == nil && token != nil {
req := structs.ACLTokenBatchDeleteRequest{
TokenIDs: []string{token.AccessorID},
}
_, err := s.raftApply(structs.ACLTokenDeleteRequestType, &req)
if err != nil {
return fmt.Errorf("failed to remove token with a redacted secret: %v", err)
}
}
}
if s.InPrimaryDatacenter() {
s.logger.Info("initializing acls")
--------- Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com> Co-authored-by: David Yu <dyu@hashicorp.com> * Update compatibility.mdx (#17713) * Remove extraneous version info for Config entries (#17716) * Update terminating-gateway.mdx * Update exported-services.mdx * Update mesh.mdx * fix: typo in link to section (#17527) Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * Bump Alpine to 3.18 (#17719) * Update Dockerfile * Create 17719.txt * NET-1825: New ACL token creation docs (#16465) Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com> * [NET-3865] [Supportability] Additional Information in the output of 'consul operator raft list-peers' (#17582) * init * fix tests * added -detailed in docs * added change log * fix doc * checking for entry in map * fix tests * removed detailed flag * removed detailed flag * revert unwanted changes * removed unwanted changes * updated change log * pr review comment changes * pr comment changes single API instead of two * fix change log * fix tests * fix tests * fix test operator raft endpoint test * Update .changelog/17582.txt Co-authored-by: Semir Patel <semir.patel@hashicorp.com> * nits * updated docs --------- Co-authored-by: Semir Patel <semir.patel@hashicorp.com> * OSS merge: Update error handling login when applying extensions (#17740) * Bump atlassian/gajira-transition from 3.0.0 to 3.0.1 (#17741) Bumps [atlassian/gajira-transition](https://github.com/atlassian/gajira-transition) from 3.0.0 to 3.0.1. - [Release notes](https://github.com/atlassian/gajira-transition/releases) - [Commits](https://github.com/atlassian/gajira-transition/compare/4749176faf14633954d72af7a44d7f2af01cc92b...38fc9cd61b03d6a53dd35fcccda172fe04b36de3) --- updated-dependencies: - dependency-name: atlassian/gajira-transition dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Add truncation to body (#17723) * docs: Failover overview minor fix (#17743) * Incorrect symbol * Clarification * slight edit for clarity * docs - update Envoy and Dataplane compat matrix (#17752) * Update envoy.mdx added more detail around default versus other compatible versions * validate localities on agent configs and registration endpoints (#17712) * Updated docs added explanation. (#17751) * init * fix tests * added -detailed in docs * added change log * fix doc * checking for entry in map * fix tests * removed detailed flag * removed detailed flag * revert unwanted changes * removed unwanted changes * updated change log * pr review comment changes * pr comment changes single API instead of two * fix change log * fix tests * fix tests * fix test operator raft endpoint test * Update .changelog/17582.txt Co-authored-by: Semir Patel <semir.patel@hashicorp.com> * nits * updated docs * explanation added --------- Co-authored-by: Semir Patel <semir.patel@hashicorp.com> * Update index.mdx (#17749) * added redirects and updated links (#17764) * Add transparent proxy enhancements changelog (#17757) * docs - remove use of consul leave during upgrade instructions (#17758) * Fix issue with streaming service health watches. (#17775) Fix issue with streaming service health watches. This commit fixes an issue where the health streams were unaware of service export changes. 
Whenever an exported-services config entry is modified, it is effectively an ACL change. The bug would be triggered by the following situation: - no services are exported - an upstream watch to service X is spawned - the streaming backend filters out data for service X (due to lack of exports) - service X is finally exported In the situation above, the streaming backend does not trigger a refresh of its data. This means that any events that were supposed to have been received prior to the export are NOT backfilled, and the watches never see service X spawning. We currently have decided to not trigger a stream refresh in this situation due to the potential for a thundering herd effect (touching exports would cause a re-fetch of all watches for that partition, potentially). Therefore, a local blocking-query approach was added by this commit for agentless. It's also worth noting that the streaming subscription is currently bypassed most of the time with agentful, because proxycfg has a `req.Source.Node != ""` which prevents the `streamingEnabled` check from passing. This means that while agents should technically have this same issue, they don't experience it with mesh health watches. Note that this is a temporary fix that solves the issue for proxycfg, but not service-discovery use cases. * Property Override validation improvements (#17759) * Reject inbound Prop Override patch with Services Services filtering is only supported for outbound TrafficDirection patches. * Improve Prop Override unexpected type validation - Guard against additional invalid parent and target types - Add specific error handling for Any fields (unsupported) * Fixes (#17765) * Update license get explanation (#17782) This PR is to clarify what happens if the license get command is run on a follower if the leader hasn't been updated with a newer license. * Add Patch index to Prop Override validation errors (#17777) When a patch is found invalid, include its index for easier debugging when multiple patches are provided. * Stop referenced jwt providers from being deleted (#17755) * Stop referenced jwt providers from being deleted * Implement a Catalog Controllers Lifecycle Integration Test (#17435) * Implement a Catalog Controllers Lifecycle Integration Test * Prevent triggering the race detector. This allows defining some variables for protobuf constants and using those in comparisons. Without that, something internal in the fmt package ended up looking at the protobuf message size cache and triggering the race detector. * HCP Add node id/name to config (#17750) * Catalog V2 Container Based Integration Test (#17674) * Implement the Catalog V2 controller integration container tests This now allows the container tests to import things from the root module. However for now we want to be very restrictive about which packages we allow importing. * Add an upgrade test for the new catalog Currently this should be dormant and not executed. However its put in place to detect breaking changes in the future and show an example of how to do an upgrade test with integration tests structured like catalog v2. * Make testutil.Retry capable of performing cleanup operations These cleanup operations are executed after each retry attempt. * Move TestContext to taking an interface instead of a concrete testing.T This allows this to be used on a retry.R or generally anything that meets the interface. 
* Move to using TestContext instead of background contexts Also this forces all test methods to implement the Cleanup method now instead of that being an optional interface. Co-authored-by: Daniel Upton <daniel@floppy.co> * Fix Docs for Trails Leader By (#17763) * init * fix tests * added -detailed in docs * added change log * fix doc * checking for entry in map * fix tests * removed detailed flag * removed detailed flag * revert unwanted changes * removed unwanted changes * updated change log * pr review comment changes * pr comment changes single API instead of two * fix change log * fix tests * fix tests * fix test operator raft endpoint test * Update .changelog/17582.txt Co-authored-by: Semir Patel <semir.patel@hashicorp.com> * nits * updated docs * explanation added * fix doc * fix docs --------- Co-authored-by: Semir Patel <semir.patel@hashicorp.com> * Improve Prop Override docs examples (#17799) - Provide more realistics examples for setting properties not already supported natively by Consul - Remove superfluous commas from HCL, correct target service name, and fix service defaults vs. proxy defaults in examples - Align existing integration test to updated docs * Test permissive mTLS filter chain not configured with tproxy disabled (#17747) * Add documentation for remote debugging of integration tests. (#17800) * Add documentation for remote debugging of integration tests. * add link from main docs page. * changes related to PR feedback * Clarify limitations of Prop Override extension (#17801) Explicitly document the limitations of the extension, particularly what kind of fields it is capable of modifying. * Fix formatting for webhook-certs Consul tutorial (#17810) * Fix formatting for webhook-certs Consul tutorial * Make a small grammar change to also pick up whitespace changes necessary for formatting --------- Co-authored-by: David Yu <dyu@hashicorp.com> * Add jwt-authn metrics to jwt-provider docs (#17816) * [NET-3095] add jwt-authn metrics docs * Change URLs for redirects from RC to default latest (#17822) * Set GOPRIVATE for all hashicorp repos in CI (#17817) Consistently set GOPRIVATE to include all hashicorp repos, s.t. private modules are successfully pulled in enterprise CI. * Make locality aware routing xDS changes (#17826) * Fixup consul-container/test/debugging.md (#17815) Add missing `-t` flag and fix minor typo. * fixes #17732 - AccessorID in request body should be optional when updating ACL token (#17739) * AccessorID in request body should be optional when updating ACL token * add a test case * fix test case * add changelog entry for PR #17739 * CA provider doc updates and Vault provider minor update (#17831) Update CA provider docs Clarify that providers can differ between primary and secondary datacenters Provide a comparison chart for consul vs vault CA providers Loosen Vault CA provider validation for RootPKIPath Update Vault CA provider documentation * ext-authz Envoy extension: support `localhost` as a valid target URI. (#17821) * CI Updates (#17834) * Ensure that git access to private repos uses the ELEVATED_GITHUB_TOKEN * Bump the runner size for the protobuf generation check This has failed previously when the runner process that communicates with GitHub gets starved causing the job to fail. 
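The leaf-certificate refactor described above (#17075) mentions replacing the old waiter `chan struct{}` with a `singleflight.Group` around cache updates. As a rough sketch of that pattern only (not Consul's actual `leafcert.Manager` code), a singleflight group coalesces concurrent requests for the same key so only one fetch is in flight per service at a time:

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

// certCache is a stand-in type for illustration; the real manager issues a
// signing RPC and tracks expiry, which is omitted here.
type certCache struct {
	group singleflight.Group
}

// Get coalesces concurrent callers asking for the same service name: only one
// fetch function runs, and every caller receives its result.
func (c *certCache) Get(service string) (string, error) {
	v, err, _ := c.group.Do(service, func() (interface{}, error) {
		return "leaf-cert-for-" + service, nil // placeholder for the real fetch
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

func main() {
	c := &certCache{}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			cert, _ := c.Get("web")
			fmt.Println(cert)
		}()
	}
	wg.Wait()
}
```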
* counter part of ent pr (#17618) * watch: support -filter for consul watch: checks, services, nodes, service (#17780) * watch: support -filter for watch checks * Add filter for watch nodes, services, and service - unit test added - Add changelog - update doc * Trigger OSS => ENT merge for all release branches (#17853) Previously, this only triggered for release/*.*.x branches; however, our release process involves cutting a release/1.16.0 branch, for example, at time of code freeze these days. Any PRs to that branch after code freeze today do not make their way to consul-enterprise. This will make behavior for a .0 branch consistent with current behavior for a .x branch. * Update service-mesh.mdx (#17845) Deleted two commas which looks quite like some leftovers. * Add docs for sameness groups with resolvers. (#17851) * docs: add note about path prefix matching behavior for HTTPRoute config (#17860) * Add note about path prefix matching behavior for HTTPRoute config * Update website/content/docs/connect/gateways/api-gateway/configuration/http-route.mdx Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * docs: update upgrade to consul-dataplane docs on k8s (#17852) * resource: add `AuthorizerContext` helper method (#17393) * resource: enforce consistent naming of resource types (#17611) For consistency, resource type names must follow these rules: - `Group` must be snake case, and in most cases a single word. - `GroupVersion` must be lowercase, start with a "v" and end with a number. - `Kind` must be pascal case. These were chosen because they map to our protobuf type naming conventions. * tooling: generate protoset file (#17364) Extends the `proto` make target to generate a protoset file for use with grpcurl etc. * Fix a bug that wrongly trims domains when there is an overlap with DC name (#17160) * Fix a bug that wrongly trims domains when there is an overlap with DC name Before this change, when DC name and domain/alt-domain overlap, the domain name incorrectly trimmed from the query. Example: Given: datacenter = dc-test, alt-domain = test.consul. Querying for "test-node.node.dc-test.consul" will faile, because the code was trimming "test.consul" instead of just ".consul" This change, fixes the issue by adding dot (.) 
before trimming * trimDomain: ensure domain trimmed without modyfing original domains * update changelog --------- Co-authored-by: Dhia Ayachi <dhia@hashicorp.com> * deps: aws-sdk-go v1.44.289 (#17876) Signed-off-by: Dan Bond <danbond@protonmail.com> * api-gateway: add operation cannot be fulfilled error to common errors (#17874) * add error message * Update website/content/docs/api-gateway/usage/errors.mdx Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com> * fix formating issues --------- Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com> * api-gateway: add step to upgrade instructions for creating intentions (#17875) * Changelog - add 1.13.9, 1.14.8, and 1.15.4 (#17889) * docs: update config enable_debug (#17866) * update doc for config enable_debug * Update website/content/docs/agent/config/config-files.mdx Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * Update wording on WAN fed and intermediate_pki_path (#17850) * Allow service identity tokens the ability to read jwt-providers (#17893) * Allow service identity tokens the ability to read jwt-providers * more tests * service_prefix tests * Update docs (#17476) Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * Add emit_tags_as_labels to envoy bootstrap config when using Consul Telemetry Collector (#17888) * Fix command from kg to kubectl get (#17903) * Create and update release notes for 1.16 and 1.2 (#17895) * update release notes for 1.16 and 1.2 * update latest consul core release * Propose new changes to APIgw upgrade instructions (#17693) * Propose new changes to APIgw upgrade instructions * fix build error * update callouts to render correctly * Add hideClipboard to log messages * Added clarification around consul k8s and crds * Add workflow to verify linux release packages (#17904) * adding docker files to verify linux packages. * add verifr-release-linux.yml * updating name * pass inputs directly into jobs * add other linux package platforms * remove on push * fix TARGETARCH on debian and ubuntu so it can check arm64 and amd64 * fixing amazon to use the continue line * add ubuntu i386 * fix comment lines * working * remove commented out workflow jobs * Apply suggestions from code review Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com> * update fedora and ubuntu to use latest tag --------- Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com> * Reference hashicorp/consul instead of consul for Docker image (#17914) * Reference hashicorp/consul instead of consul for Docker image * Update Make targets that pull consul directly * Update Consul K8s Upgrade Doc Updates (#17921) Updating upgrade procedures to encompass expected errors during upgrade process from v1.13.x to v1.14.x. * Update sameness-group.mdx (#17915) * Update create-sameness-groups.mdx (#17927) * deps: coredns v1.10.1 (#17912) * Ensure RSA keys are at least 2048 bits in length (#17911) * Ensure RSA keys are at least 2048 bits in length * Add changelog * update key length check for FIPS compliance * Fix no new variables error and failing to return when error exists from validating * clean up code for better readability * actually return value * tlsutil: Fix check TLS configuration (#17481) * tlsutil: Fix check TLS configuration * Rewording docs. 
* Update website/content/docs/services/configuration/checks-configuration-reference.mdx Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * Fix typos and add changelog entry. --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * docs: Deprecations for connect-native SDK and specific connect native APIs (#17937) * Update v1_16_x.mdx * Update connect native golang page --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * Revert "Add workflow to verify linux release packages (#17904)" (#17942) This reverts commit 3368f14fab500ebe9f6aeab5631dd1d5f5a453e5. * Fixes Secondary ConnectCA update (#17846) This fixes a bug that was identified which resulted in subsequent ConnectCA configuration update not to persist in the cluster. * fixing typo in link to jwt-validations-with-intentions doc (#17955) * Fix streaming backend link (#17958) * Fix streaming backend link * Update health.mdx * Dynamically create jwks clusters for jwt-providers (#17944) * website: remove deprecated agent rpc docs (#17962) * Fix missing BalanceOutboundConnections in v2 catalog. (#17964) * feature - [NET - 4005] - [Supportability] Reloadable Configuration - enable_debug (#17565) * # This is a combination of 9 commits. # This is the 1st commit message: init without tests # This is the commit message #2: change log # This is the commit message #3: fix tests # This is the commit message #4: fix tests # This is the commit message #5: added tests # This is the commit message #6: change log breaking change # This is the commit message #7: removed breaking change # This is the commit message #8: fix test # This is the commit message #9: keeping the test behaviour same * # This is a combination of 12 commits. 
# This is the 1st commit message: init without tests # This is the commit message #2: change log # This is the commit message #3: fix tests # This is the commit message #4: fix tests # This is the commit message #5: added tests # This is the commit message #6: change log breaking change # This is the commit message #7: removed breaking change # This is the commit message #8: fix test # This is the commit message #9: keeping the test behaviour same # This is the commit message #10: made enable debug atomic bool # This is the commit message #11: fix lint # This is the commit message #12: fix test true enable debug * parent 10f500e895d92cc3691ade7b74a33db755d22039 author absolutelightning <ashesh.vidyut@hashicorp.com> 1687352587 +0530 committer absolutelightning <ashesh.vidyut@hashicorp.com> 1687352592 +0530 init without tests change log fix tests fix tests added tests change log breaking change removed breaking change fix test keeping the test behaviour same made enable debug atomic bool fix lint fix test true enable debug using enable debug in agent as atomic bool test fixes fix tests fix tests added update on correct locaiton fix tests fix reloadable config enable debug fix tests fix init and acl 403 * revert commit * Fix formatting codeblocks on APIgw docs (#17970) * fix formatting codeblocks * remove unnecessary indents * Remove POC code (#17974) * update doc (#17910) * update doc * update link * Remove duplicate and unused newDecodeConfigEntry func (#17979) * docs: samenessGroup YAML examples (#17984) * configuration entry syntax * Example config * Add changelog entry for 1.16.0 (#17987) * Fix typo (#17198) servcies => services * Expose JWKS cluster config through JWTProviderConfigEntry (#17978) * Expose JWKS cluster config through JWTProviderConfigEntry * fix typos, rename trustedCa to trustedCA * Integration test for ext-authz Envoy extension (#17980) * Fix incorrect protocol for transparent proxy upstreams. (#17894) This PR fixes a bug that was introduced in: https://github.com/hashicorp/consul/pull/16021 A user setting a protocol in proxy-defaults would cause tproxy implicit upstreams to not honor the upstream service's protocol set in its `ServiceDefaults.Protocol` field, and would instead always use the proxy-defaults value. Due to the fact that upstreams configured with "tcp" can successfully contact upstream "http" services, this issue was not recognized until recently (a proxy-defaults with "tcp" and a listening service with "http" would make successful requests, but not the opposite). As a temporary work-around, users experiencing this issue can explicitly set the protocol on the `ServiceDefaults.UpstreamConfig.Overrides`, which should take precedence. The fix in this PR removes the proxy-defaults protocol from the wildcard upstream that tproxy uses to configure implicit upstreams. When the protocol was included, it would always overwrite the value during discovery chain compilation, which was not correct. The discovery chain compiler also consumes proxy defaults to determine the protocol, so simply excluding it from the wildcard upstream config map resolves the issue. 
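The reloadable `enable_debug` change above (#17565) notes that the flag was moved to an atomic bool so it can be flipped on config reload without racing readers. A minimal sketch of that pattern, assuming nothing about Consul's actual agent types:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// runtimeConfig is a hypothetical holder for a hot-reloadable flag such as
// enable_debug; atomic.Bool lets readers and the reload path avoid a mutex.
type runtimeConfig struct {
	enableDebug atomic.Bool
}

func (c *runtimeConfig) Reload(enabled bool) { c.enableDebug.Store(enabled) }
func (c *runtimeConfig) DebugEnabled() bool  { return c.enableDebug.Load() }

func main() {
	cfg := &runtimeConfig{}
	cfg.Reload(true) // e.g. triggered by a config reload / SIGHUP
	if cfg.DebugEnabled() {
		fmt.Println("debug endpoints enabled without restarting the agent")
	}
}
```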
* feat: include nodes count in operator usage endpoint and cli command (#17939) * feat: update operator usage api endpoint to include nodes count * feat: update operator usange cli command to includes nodes count * [OSS] Improve Gateway Test Coverage of Catalog Health (#18011) * fix(cli): remove failing check from 'connect envoy' registration for api gateway * test(integration): add tests to check catalog statsus of gateways on startup * remove extra sleep comment * Update test/integration/consul-container/libs/assert/service.go * changelog * Fixes Traffic rate limitting docs (#17997) * Fix removed service-to-service peering links (#17221) * docs: fix removed service-to-service peering links * docs: extend peering-via-mesh-gateways intro (thanks @trujillo-adam) --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * docs: Sameness "beta" warning (#18017) * Warning updates * .x * updated typo in tab heading (#18022) * updated typo in tab heading * updated tab group typo, too * Document that DNS lookups can target cluster peers (#17990) Static DNS lookups, in addition to explicitly targeting a datacenter, can target a cluster peer. This was added in 95dc0c7b301b70a6b955a8b7c9737c9b86f03df6 but didn't make the documentation. The driving function for the change is `parseLocality` here: https://github.com/hashicorp/consul/blob/0b1299c28d8127129d61310ee4280055298438e0/agent/dns_oss.go#L25 The biggest change in this is to adjust the standard lookup syntax to tie `.<datacenter>` to `.dc` as required-together, and to append in the similar `.<cluster-peer>.peer` optional argument, both to A record and SRV record lookups. Co-authored-by: David Yu <dyu@hashicorp.com> * Add first integration test for jwt auth with intention (#18005) * fix stand-in text for name field (#18030) * removed sameness conf entry from failover nav (#18033) * docs - add service sync annotations and k8s service weight annotation (#18032) * Docs for https://github.com/hashicorp/consul-k8s/pull/2293 * remove versions for enterprise features since they are old --------- Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com> * docs - add jobs use case for service mesh k8s (#18037) * docs - add jobs use case for service mesh k8s * add code blocks * address feedback (#18045) * Add verify server hostname to tls default (#17155) * [OSS] Fix initial_fetch_timeout to wait for all xDS resources (#18024) * fix(connect): set initial_fetch_time to wait indefinitely * changelog * PR feedback 1 * ui: fix typos for peer service imports (#17999) * test: fix FIPS inline cert test message (#18076) * Fix a couple typos in Agent Telemetry Metrics docs (#18080) * Fix metrics docs * Add changelog Signed-off-by: josh <josh.timmons@hashicorp.com> --------- Signed-off-by: josh <josh.timmons@hashicorp.com> * docs updates - cluster peering and virtual services (#18069) * Update route-to-virtual-services.mdx * Update establish-peering.mdx * Update service-mesh-compare.mdx (#17279) grammar change * Update helm docs on main (#18085) * ci: use gotestsum v1.10.1 [NET-4042] (#18088) * Docs: Update proxy lifecycle annotations and consul-dataplane flags (#18075) * Update proxy lifecycle annotations and consul-dataplane flags * Pass configured role name to Vault for AWS auth in Connect CA (#17885) * Docs for dataplane upgrade on k8s (#18051) * Docs for dataplane upgrade on k8s --------- Co-authored-by: David Yu <dyu@hashicorp.com> Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * docs - 
update upgrade index page to not recommend consul leave. (#18100) * Displays Consul version of each nodes in UI nodes section (#17754) * update UINodes and UINodeInfo response with consul-version info added as NodeMeta, fetched from serf members * update test cases TestUINodes, TestUINodeInfo * added nil check for map * add consul-version in local agent node metadata * get consul version from serf member and add this as node meta in catalog register request * updated ui mock response to include consul versions as node meta * updated ui trans and added version as query param to node list route * updates in ui templates to display consul version with filter and sorts * updates in ui - model class, serializers,comparators,predicates for consul version feature * added change log for Consul Version Feature * updated to get version from consul service, if for some reason not available from serf * updated changelog text * updated dependent testcases * multiselection version filter * Update agent/consul/state/catalog.go comments updated Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com> --------- Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com> * api gw 1.16 updates (#18081) * api gw 1.16 updates * Apply suggestions from code review Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com> * update CodeBlockConfig filename * Apply suggestions from code review Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com> * remove non-standard intentions page * Update website/content/docs/api-gateway/configuration/index.mdx Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> --------- Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com> Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * [NET-4103] ci: build s390x (#18067) * ci: build s390x * ci: test s390x * ci: dev build s390x * no GOOS * ent only * build: publish s390x * fix syntax error * fix syntax error again * fix syntax error again x2 * test branch * Move s390x conditionals to step level * remove test branch --------- Co-authored-by: emilymianeil <eneil@hashicorp.com> * :ermahgerd "Sevice Mesh" -> "Service Mesh" (#18116) Just a typo in the docs. * Split pbmesh.UpstreamsConfiguration as a resource out of pbmesh.Upstreams (#17991) Configuration that previously was inlined into the Upstreams resource applies to both explicit and implicit upstreams and so it makes sense to split it out into its own resource. It also has other minor changes: - Renames `proxy.proto` proxy_configuration.proto` - Changes the type of `Upstream.destination_ref` from `pbresource.ID` to `pbresource.Reference` - Adds comments to fields that didn't have them * [NET-4895] ci - api tests and consul container tests error because of dependency bugs with go 1.20.6. Pin go to 1.20.5. (#18124) ### Description The following jobs started failing when go 1.20.6 was released: - `go-test-api-1-19` - `go-test-api-1-20` - `compatibility-integration-tests` - `upgrade-integration-tests` `compatibility-integration-tests` and `compatibility-integration-tests` to this testcontainers issue: https://github.com/testcontainers/testcontainers-go/issues/1359. This issue calls for testcontainers to release a new version when one of their dependencies is fixed. 
When that is done, we will unpin the go versions in `compatibility-integration-tests` and `compatibility-integration-tests`. ### Testing & Reproduction steps See these jobs broken in CI and then see them work with this PR. --------- Co-authored-by: Chris Thain <32781396+cthain@users.noreply.github.com> * Add ingress gateway deprecation notices to docs (#18102) ### Description This adds notices, that ingress gateway is deprecated, to several places in the product docs where ingress gateway is the topic. ### Testing & Reproduction steps Tested with a local copy of the website. ### Links Deprecation of ingress gateway was announced in the Release Notes for Consul 1.16 and Consul-K8s 1.2. See: [https://developer.hashicorp.com/consul/docs/release-notes/consul/v1_16_x#what-s-deprecated](https://developer.hashicorp.com/consul/docs/release-notes/consul/v1_16_x#what-s-deprecated ) [https://developer.hashicorp.com/consul/docs/release-notes/consul-k8s/v1_2_x#what-s-deprecated](https://developer.hashicorp.com/consul/docs/release-notes/consul-k8s/v1_2_x#what-s-deprecated) ### PR Checklist * [N/A] updated test coverage * [X] external facing docs updated * [X] appropriate backport labels added * [X] not a security concern --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * Add docs for jwt cluster configuration (#18004) ### Description <!-- Please describe why you're making this change, in plain English. --> - Add jwt-provider docs for jwks cluster configuration. The configuration was added here: https://github.com/hashicorp/consul/pull/17978 * Docs: fix unmatched bracket for health checks page (#18134) * NET-4657/add resource service client (#18053) ### Description <!-- Please describe why you're making this change, in plain English. --> Dan had already started on this [task](https://github.com/hashicorp/consul/pull/17849) which is needed to start building the HTTP APIs. This just needed some cleanup to get it ready for review. Overview: - Rename `internalResourceServiceClient` to `insecureResourceServiceClient` for name consistency - Configure a `secureResourceServiceClient` with auth enabled ### PR Checklist * [ ] ~updated test coverage~ * [ ] ~external facing docs updated~ * [x] appropriate backport labels added * [ ] ~not a security concern~ * Fix bug with Vault CA provider (#18112) Updating RootPKIPath but not IntermediatePKIPath would not update leaf signing certs with the new root. Unsure if this happens in practice but manual testing showed it is a bug that would break mesh and agent connections once the old root is pruned. * [NET-4897] net/http host header is now verified and request.host that contains socked now error (#18129) ### Description This is related to https://github.com/hashicorp/consul/pull/18124 where we pinned the go versions in CI to 1.20.5 and 1.19.10. go 1.20.6 and 1.19.11 now validate request host headers for validity, including the hostname cannot be prefixed with slashes. For local communications (npipe://, unix://), the hostname is not used, but we need valid and meaningful hostname. Prior versions go Go would clean the host header, and strip slashes in the process, but go1.20.6 and go1.19.11 no longer do, and reject the host header. Around the community we are seeing that others are intercepting the req.host and if it starts with a slash or ends with .sock, they changing the host to localhost or another dummy value. 
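As a minimal sketch of the dummy-hostname workaround described above (the socket path and endpoint are illustrative assumptions, not taken from this PR), a Go client can dial a unix socket while presenting a well-formed Host header that the newer net/http validation accepts:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Hypothetical agent socket path for illustration only.
	const socketPath = "/var/run/consul/consul.sock"

	tr := &http.Transport{
		// Every connection goes to the unix socket regardless of the URL host.
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "unix", socketPath)
		},
	}
	client := &http.Client{Transport: tr}

	// Go 1.20.6 / 1.19.11 reject malformed Host headers (such as a socket
	// path), so use a dummy but valid hostname; it is never used for dialing.
	resp, err := client.Get("http://localhost/v1/agent/self")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```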
[client: define a "dummy" hostname to use for local connections by thaJeztah · Pull Request #45942 · moby/moby](https://github.com/moby/moby/pull/45942) ### Testing & Reproduction steps Check CI tests. ### Links * [ ] updated test coverage * [ ] external facing docs updated * [ ] appropriate backport labels added * [ ] not a security concern * add a conditional around setting LANFilter.AllSegments to make sure it is valid (#18139) ### Description This is to correct a code problem because this assumes all segments, but when you get to Enterprise, you can be in partition that is not the default partition, in which case specifying all segments does not validate and fails. This is to correct the setting of this filter with `AllSegments` to `true` to only occur when in the the `default` partition. ### Testing & Reproduction steps <!-- * In the case of bugs, describe how to replicate * If any manual tests were done, document the steps and the conditions to replicate * Call out any important/ relevant unit tests, e2e tests or integration tests you have added or are adding --> ### Links <!-- Include any links here that might be helpful for people reviewing your PR (Tickets, GH issues, API docs, external benchmarks, tools docs, etc). If there are none, feel free to delete this section. Please be mindful not to leak any customer or confidential information. HashiCorp employees may want to use our internal URL shortener to obfuscate links. --> ### PR Checklist * [ ] updated test coverage * [ ] external facing docs updated * [ ] appropriate backport labels added * [ ] not a security concern * chore: bump upgrade integrations tests to 1.15, 116 [NET-4743] (#18130) * re org resource type registry (#18133) * fix: update delegateMock used in ENT (#18149) ### Description <!-- Please describe why you're making this change, in plain English. --> The mock is used in `http_ent_test` file which caused lint failures. For OSS->ENT parity adding the same change here. ### Links <!-- Include any links here that might be helpful for people reviewing your PR (Tickets, GH issues, API docs, external benchmarks, tools docs, etc). If there are none, feel free to delete this section. Please be mindful not to leak any customer or confidential information. HashiCorp employees may want to use our internal URL shortener to obfuscate links. --> Identified in OSS->ENT [merge PR](https://github.com/hashicorp/consul-enterprise/pull/6328) ### PR Checklist * [ ] ~updated test coverage~ * [ ] ~external facing docs updated~ * [x] appropriate backport labels added * [ ] ~not a security concern~ * Use JWT-auth filter in metadata mode & Delegate validation to RBAC filter (#18062) ### Description <!-- Please describe why you're making this change, in plain English. --> - Currently the jwt-auth filter doesn't take into account the service identity when validating jwt-auth, it only takes into account the path and jwt provider during validation. This causes issues when multiple source intentions restrict access to an endpoint with different JWT providers. - To fix these issues, rather than use the JWT auth filter for validation, we use it in metadata mode and allow it to forward the successful validated JWT token payload to the RBAC filter which will make the decisions. This PR ensures requests with and without JWT tokens successfully go through the jwt-authn filter. The filter however only forwards the data for successful/valid tokens. On the RBAC filter level, we check the payload for claims and token issuer + existing rbac rules. 
### Testing & Reproduction steps <!-- * In the case of bugs, describe how to replicate * If any manual tests were done, document the steps and the conditions to replicate * Call out any important/ relevant unit tests, e2e tests or integration tests you have added or are adding --> - This test covers a multi level jwt requirements (requirements at top level and permissions level). It also assumes you have envoy running, you have a redis and a sidecar proxy service registered, and have a way to generate jwks with jwt. I mostly use: https://www.scottbrady91.com/tools/jwt for this. - first write your proxy defaults ``` Kind = "proxy-defaults" name = "global" config { protocol = "http" } ``` - Create two providers ``` Kind = "jwt-provider" Name = "auth0" Issuer = "https://ronald.local" JSONWebKeySet = { Local = { JWKS = "eyJrZXlzIjog....." } } ``` ``` Kind = "jwt-provider" Name = "okta" Issuer = "https://ronald.local" JSONWebKeySet = { Local = { JWKS = "eyJrZXlzIjogW3...." } } ``` - add a service intention ``` Kind = "service-intentions" Name = "redis" JWT = { Providers = [ { Name = "okta" }, ] } Sources = [ { Name = "*" Permissions = [{ Action = "allow" HTTP = { PathPrefix = "/workspace" } JWT = { Providers = [ { Name = "okta" VerifyClaims = [ { Path = ["aud"] Value = "my_client_app" }, { Path = ["sub"] Value = "5be86359073c434bad2da3932222dabe" } ] }, ] } }, { Action = "allow" HTTP = { PathPrefix = "/" } JWT = { Providers = [ { Name = "auth0" }, ] } }] } ] ``` - generate 3 jwt tokens: 1 from auth0 jwks, 1 from okta jwks with different claims than `/workspace` expects and 1 with correct claims - connect to your envoy (change service and address as needed) to view logs and potential errors. You can add: `-- --log-level debug` to see what data is being forwarded ``` consul connect envoy -sidecar-for redis1 -grpc-addr 127.0.0.1:8502 ``` - Make the following requests: ``` curl -s -H "Authorization: Bearer $Auth0_TOKEN" --insecure --cert leaf.cert --key leaf.key --cacert connect-ca.pem https://localhost:20000/workspace -v RBAC filter denied curl -s -H "Authorization: Bearer $Okta_TOKEN_with_wrong_claims" --insecure --cert leaf.cert --key leaf.key --cacert connect-ca.pem https://localhost:20000/workspace -v RBAC filter denied curl -s -H "Authorization: Bearer $Okta_TOKEN_with_correct_claims" --insecure --cert leaf.cert --key leaf.key --cacert connect-ca.pem https://localhost:20000/workspace -v Successful request ``` ### TODO * [x] Update test coverage * [ ] update integration tests (follow-up PR) * [x] appropriate backport labels added * Support Consul Connect Envoy Command on Windows (#17694) ### Description Add support for consul connect envoy command on windows. This PR fixes the comments of PR - https://github.com/hashicorp/consul/pull/15114 ### Testing * Built consul.exe from this branch on windows and hosted here - [AWS S3](https://asheshvidyut-bucket.s3.ap-southeast-2.amazonaws.com/consul.zip) * Updated the [tutorial](https://developer.hashicorp.com/consul/tutorials/developer-mesh/consul-windows-workloads) and changed the `consul_url.default` value to [AWS S3](https://asheshvidyut-bucket.s3.ap-southeast-2.amazonaws.com/consul.zip) * Followed the steps in the tutorial and verified that everything is working as described. 
### PR Checklist * [x] updated test coverage * [ ] external facing docs updated * [x] appropriate backport labels added * [x] not a security concern --------- Co-authored-by: Franco Bruno Lavayen <cocolavayen@gmail.com> Co-authored-by: Jose Ignacio Lorenzo <74208929+joselo85@users.noreply.github.com> Co-authored-by: Jose Ignacio Lorenzo <joseignaciolorenzo85@gmail.com> Co-authored-by: Dhia Ayachi <dhia@hashicorp.com> * Change docs to say 168h instead of 7d for server_rejoin_age_max (#18154) ### Description Addresses https://github.com/hashicorp/consul/pull/17171#issuecomment-1636930705 * [OSS] test: improve xDS listener code coverage (#18138) test: improve xDS listener code coverage * Re-order expected/actual for assertContainerState in consul container tests (#18157) Re-order expected/actual, consul container tests * group and document make file (#17943) * group and document make file * Add `testing/deployer` (neé `consul-topology`) [NET-4610] (#17823) Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com> Co-authored-by: R.B. Boyer <rb@hashicorp.com> Co-authored-by: Freddy <freddygv@users.noreply.github.com> * [NET-4792] Add integrations tests for jwt-auth (#18169) * Add FIPS reference to consul enterprise docs (#18028) * Add FIPS reference to consul enterprise docs * Update website/content/docs/enterprise/index.mdx Co-authored-by: David Yu <dyu@hashicorp.com> * remove support for ecs client (fips) --------- Co-authored-by: David Yu <dyu@hashicorp.com> * add peering_commontopo tests [NET-3700] (#17951) Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com> Co-authored-by: R.B. Boyer <rb@hashicorp.com> Co-authored-by: Freddy <freddygv@users.noreply.github.com> Co-authored-by: NiniOak <anita.akaeze@hashicorp.com> * docs - remove Sentinel from enterprise features list (#18176) * Update index.mdx * Update kv.mdx * Update docs-nav-data.json * delete sentinel.mdx * Update redirects.js --------- Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com> * [NET-4865] Bump golang.org/x/net to 0.12.0 (#18186) Bump golang.org/x/net to 0.12.0 While not necessary to directly address CVE-2023-29406 (which should be handled by using a patched version of Go when building), an accompanying change to HTTP/2 error handling does impact agent code. See https://go-review.googlesource.com/c/net/+/506995 for the HTTP/2 change. Bump this dependency across our submodules as well for the sake of potential indirect consumers of `x/net/http`. * Call resource mutate hook before validate hook (NET-4907) (#18178) * [NET-4865] security: Update Go version to 1.20.6 (#18190) Update Go version to 1.20.6 This resolves [CVE-2023-29406] (https://nvd.nist.gov/vuln/detail/CVE-2023-29406) for uses of the `net/http` standard library. Note that until the follow-up to #18124 is done, the version of Go used in those impacted tests will need to remain on 1.20.5. * Improve XDS test coverage: JWT auth edition (#18183) * Improve XDS test coverage: JWT auth edition more tests * test: xds coverage for jwt listeners --------- Co-authored-by: DanStough <dan.stough@hashicorp.com> * update readme.md (#18191) u[date readme.md * Update submodules to latest following 1.16.0 (#18197) Align all our internal use of submodules on the latest versions. * SEC-090: Automated trusted workflow pinning (2023-07-18) (#18174) Result of tsccr-helper -log-level=info -pin-all-workflows . 
Co-authored-by: hashicorp-tsccr[bot] <hashicorp-tsccr[bot]@users.noreply.github.com> * Fix Backport Assistant PR commenting (#18200) * Fix Backport Assistant failure PR commenting For general comments on a PR, it looks like you have to use the `/issue` endpoint rather than `/pulls`, which requires commit/other review-specific target details. This matches the endpoint used in `backport-reminder.yml`. * Remove Backport Reminder workflow This is noisy (even when adding multiple labels, individual comments per label are generated), and likely no longer needed: we haven't had this work in a long time due to an expired GH token, and we now have better automation for backport PR assignment. * resource: Pass resource to Write ACL hook instead of just resource Id [NET-4908] (#18192) * Explicitly enable WebSocket upgrades (#18150) This PR explicitly enables WebSocket upgrades in Envoy's UpgradeConfig for all proxy types. (API Gateway, Ingress, and Sidecar.) Fixes #8283 * docs: fix the description of client rpc (#18206) * NET-4804: Add dashboard for monitoring consul-k8s (#18208) * [OSS] Improve xDS Code Coverage - Clusters (#18165) test: improve xDS cluster code coverage * NET-4222 take config file consul container (#18218) Net 4222 take config file consul container * Envoy Integration Test Windows (#18007) * [CONSUL-395] Update check_hostport and Usage (#40) * [CONSUL-397] Copy envoy binary from Image (#41) * [CONSUL-382] Support openssl in unique test dockerfile (#43) * [CONSUL-405] Add bats to single container (#44) * [CONSUL-414] Run Prometheus Test Cases and Validate Changes (#46) * [CONSUL-410] Run Jaeger in Single container (#45) * [CONSUL-412] Run test-sds-server in single container (#48) * [CONSUL-408] Clean containers (#47) * [CONSUL-384] Rebase and sync fork (#50) * [CONSUL-415] Create Scenarios Troubleshooting Docs (#49) * [CONSUL-417] Update Docs Single Container (#51) * [CONSUL-428] Add Socat to single container (#54) * [CONSUL-424] Replace pkill in kill_envoy function (#52) * [CONSUL-434] Modify Docker run functions in Helper script (#53) * [CONSUL-435] Replace docker run in set_ttl_check_state & wait_for_agent_service_register functions (#55) * [CONSUL-438] Add netcat (nc) in the Single container Dockerfile (#56) * [CONSUL-429] Replace Docker run with Docker exec (#57) * [CONSUL-436] Curl timeout and run tests (#58) * [CONSUL-443] Create dogstatsd Function (#59) * [CONSUL-431] Update Docs Netcat (#60) * [CONSUL-439] Parse nc Command in function (#61) * [CONSUL-463] Review curl Exec and get_ca_root Func (#63) * [CONSUL-453] Docker hostname in Helper functions (#64) * [CONSUL-461] Test wipe volumes without extra cont (#66) * [CONSUL-454] Check ports in the Server and Agent containers (#65) * [CONSUL-441] Update windows dockerfile with version (#62) * [CONSUL-466] Review case-grpc Failing Test (#67) * [CONSUL-494] Review case-cfg-resolver-svc-failover (#68) * [CONSUL-496] Replace docker_wget & docker_curl (#69) * [CONSUL-499] Cleanup Scripts - Remove nanoserver (#70) * [CONSUL-500] Update Troubleshooting Docs (#72) * [CONSUL-502] Pull & Tag Envoy Windows Image (#73) * [CONSUL-504] Replace docker run in docker_consul (#76) * [CONSUL-505] Change admin_bind * [CONSUL-399] Update envoy to 1.23.1 (#78) * [CONSUL-510] Support case-wanfed-gw on Windows (#79) * [CONSUL-506] Update troubleshooting Documentation (#80) * [CONSUL-512] Review debug_dump_volumes Function (#81) * [CONSUL-514] Add zipkin to Docker Image (#82) * [CONSUL-515] Update Documentation (#83) * [CONSUL-529] Support 
// Create/Upgrade the builtin policies
for _, policy := range structs.ACLBuiltinPolicies {
if err := s.writeBuiltinACLPolicy(policy); err != nil {
return err
}
}
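// The loop above re-applies every builtin policy (such as global-management)
// each time this server becomes leader, so policies added or changed in a new
// Consul version take effect after an upgrade. A minimal sketch of the upsert
// shape, assuming a hypothetical policy type and write hook rather than the
// real writeBuiltinACLPolicy internals:
//
//	type builtinPolicy struct {
//		ID, Name, Rules string
//	}
//
//	func upsertBuiltin(existing map[string]builtinPolicy, p builtinPolicy, write func(builtinPolicy) error) error {
//		if cur, ok := existing[p.ID]; ok && cur == p {
//			return nil // already current; skip the write
//		}
//		return write(p) // create the policy or refresh its rules
//	}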
// Check for configured initial management token.
if initialManagement := s.config.ACLInitialManagementToken; len(initialManagement) > 0 {
err := s.initializeManagementToken("Initial Management Token", initialManagement)
if err != nil {
return fmt.Errorf("failed to initialize initial management token: %w", err)
}
}
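// initializeManagementToken is used for both the user-configured initial
// management token (here) and the HCP-provided token (below), so it is
// expected to behave as an idempotent "create if missing" rather than an
// overwrite. A rough sketch of that contract, with hypothetical lookup and
// create hooks standing in for the real Raft-backed implementation:
//
//	func initManagementToken(description, secret string,
//		exists func(secret string) (bool, error),
//		create func(description, secret string) error) error {
//		ok, err := exists(secret)
//		if err != nil {
//			return fmt.Errorf("failed to lookup management token: %w", err)
//		}
//		if ok {
//			return nil // never clobber a token that is already present
//		}
//		return create(description, secret)
//	}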
// Check for configured management token from HCP. It MUST NOT override the user-provided initial management token.
if hcpManagement := s.config.Cloud.ManagementToken; len(hcpManagement) > 0 {
err := s.initializeManagementToken("HCP Management Token", hcpManagement)
if err != nil {
return fmt.Errorf("failed to initialize HCP management token: %w", err)
}
}
// Insert the anonymous token if it does not exist.
if err := s.insertAnonymousToken(); err != nil {
return err
}
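// The anonymous token is the identity applied to requests that present no
// token at all, which is why the leader guarantees it exists once ACLs are
// enabled. A hedged sketch of the record being inserted; the field values
// reflect Consul's documented well-known anonymous token and are assumptions
// here, not code copied from insertAnonymousToken:
//
//	anon := structs.ACLToken{
//		AccessorID:  "00000000-0000-0000-0000-000000000002", // well-known anonymous accessor
//		SecretID:    "anonymous",
//		Description: "Anonymous Token",
//	}
//	// ...written through Raft only when no token with that accessor exists yet.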
} else {
s.startACLReplication(ctx)
}
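// Only the primary datacenter bootstraps policies and tokens; in a secondary
// datacenter the leader instead starts ACL replication and pulls that data
// from the primary. In spirit the replication routine is a background poll
// keyed on a remote index — a simplified sketch under that assumption, not
// the actual startACLReplication internals:
//
//	go func() {
//		var lastIndex uint64
//		for ctx.Err() == nil {
//			idx, err := fetchACLsFromPrimary(lastIndex) // hypothetical RPC wrapper
//			if err == nil {
//				lastIndex = idx // only advance on a successful round
//			}
//			select {
//			case <-ctx.Done():
//			case <-time.After(replicationInterval):
//			}
//		}
//	}()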
// Generate or rotate the server management token on leadership transitions.
// This token is used by Consul servers for authn/authz when making
// requests to themselves through public APIs such as the agent cache.
// It is stored as system metadata because it is internally
// managed and users are not meant to see it or interact with it.
secretID, err := lib.GenerateUUID(nil)
if err != nil {
return fmt.Errorf("failed to generate the secret ID for the server management token: %w", err)
}
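// lib.GenerateUUID accepts an optional callback for rejecting IDs that are
// already in use; passing nil skips that check, which is reasonable here
// because the value is an opaque secret stored in system metadata rather than
// a token accessor that must stay unique. For contrast, a caller that does
// need uniqueness could pass a check function — a sketch assuming the
// callback approves an ID by returning true, with a hypothetical idIsFree
// lookup:
//
//	checkFn := func(id string) (bool, error) {
//		return idIsFree(id) // hypothetical: true means the ID is unused
//	}
//	id, err := lib.GenerateUUID(checkFn)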
if err := s.SetSystemMetadataKey(structs.ServerManagementTokenAccessorID, secretID); err != nil {
return fmt.Errorf("failed to persist server management token: %w", err)
}
s.startACLTokenReaping(ctx)
return nil
}
* Move to using TestContext instead of background contexts Also this forces all test methods to implement the Cleanup method now instead of that being an optional interface. Co-authored-by: Daniel Upton <daniel@floppy.co> * Fix Docs for Trails Leader By (#17763) * init * fix tests * added -detailed in docs * added change log * fix doc * checking for entry in map * fix tests * removed detailed flag * removed detailed flag * revert unwanted changes * removed unwanted changes * updated change log * pr review comment changes * pr comment changes single API instead of two * fix change log * fix tests * fix tests * fix test operator raft endpoint test * Update .changelog/17582.txt Co-authored-by: Semir Patel <semir.patel@hashicorp.com> * nits * updated docs * explanation added * fix doc * fix docs --------- Co-authored-by: Semir Patel <semir.patel@hashicorp.com> * Improve Prop Override docs examples (#17799) - Provide more realistics examples for setting properties not already supported natively by Consul - Remove superfluous commas from HCL, correct target service name, and fix service defaults vs. proxy defaults in examples - Align existing integration test to updated docs * Test permissive mTLS filter chain not configured with tproxy disabled (#17747) * Add documentation for remote debugging of integration tests. (#17800) * Add documentation for remote debugging of integration tests. * add link from main docs page. * changes related to PR feedback * Clarify limitations of Prop Override extension (#17801) Explicitly document the limitations of the extension, particularly what kind of fields it is capable of modifying. * Fix formatting for webhook-certs Consul tutorial (#17810) * Fix formatting for webhook-certs Consul tutorial * Make a small grammar change to also pick up whitespace changes necessary for formatting --------- Co-authored-by: David Yu <dyu@hashicorp.com> * Add jwt-authn metrics to jwt-provider docs (#17816) * [NET-3095] add jwt-authn metrics docs * Change URLs for redirects from RC to default latest (#17822) * Set GOPRIVATE for all hashicorp repos in CI (#17817) Consistently set GOPRIVATE to include all hashicorp repos, s.t. private modules are successfully pulled in enterprise CI. * Make locality aware routing xDS changes (#17826) * Fixup consul-container/test/debugging.md (#17815) Add missing `-t` flag and fix minor typo. * fixes #17732 - AccessorID in request body should be optional when updating ACL token (#17739) * AccessorID in request body should be optional when updating ACL token * add a test case * fix test case * add changelog entry for PR #17739 * CA provider doc updates and Vault provider minor update (#17831) Update CA provider docs Clarify that providers can differ between primary and secondary datacenters Provide a comparison chart for consul vs vault CA providers Loosen Vault CA provider validation for RootPKIPath Update Vault CA provider documentation * ext-authz Envoy extension: support `localhost` as a valid target URI. (#17821) * CI Updates (#17834) * Ensure that git access to private repos uses the ELEVATED_GITHUB_TOKEN * Bump the runner size for the protobuf generation check This has failed previously when the runner process that communicates with GitHub gets starved causing the job to fail. 
* counter part of ent pr (#17618) * watch: support -filter for consul watch: checks, services, nodes, service (#17780) * watch: support -filter for watch checks * Add filter for watch nodes, services, and service - unit test added - Add changelog - update doc * Trigger OSS => ENT merge for all release branches (#17853) Previously, this only triggered for release/*.*.x branches; however, our release process involves cutting a release/1.16.0 branch, for example, at time of code freeze these days. Any PRs to that branch after code freeze today do not make their way to consul-enterprise. This will make behavior for a .0 branch consistent with current behavior for a .x branch. * Update service-mesh.mdx (#17845) Deleted two commas which looks quite like some leftovers. * Add docs for sameness groups with resolvers. (#17851) * docs: add note about path prefix matching behavior for HTTPRoute config (#17860) * Add note about path prefix matching behavior for HTTPRoute config * Update website/content/docs/connect/gateways/api-gateway/configuration/http-route.mdx Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * docs: update upgrade to consul-dataplane docs on k8s (#17852) * resource: add `AuthorizerContext` helper method (#17393) * resource: enforce consistent naming of resource types (#17611) For consistency, resource type names must follow these rules: - `Group` must be snake case, and in most cases a single word. - `GroupVersion` must be lowercase, start with a "v" and end with a number. - `Kind` must be pascal case. These were chosen because they map to our protobuf type naming conventions. * tooling: generate protoset file (#17364) Extends the `proto` make target to generate a protoset file for use with grpcurl etc. * Fix a bug that wrongly trims domains when there is an overlap with DC name (#17160) * Fix a bug that wrongly trims domains when there is an overlap with DC name Before this change, when DC name and domain/alt-domain overlap, the domain name incorrectly trimmed from the query. Example: Given: datacenter = dc-test, alt-domain = test.consul. Querying for "test-node.node.dc-test.consul" will faile, because the code was trimming "test.consul" instead of just ".consul" This change, fixes the issue by adding dot (.) 
before trimming * trimDomain: ensure domain trimmed without modyfing original domains * update changelog --------- Co-authored-by: Dhia Ayachi <dhia@hashicorp.com> * deps: aws-sdk-go v1.44.289 (#17876) Signed-off-by: Dan Bond <danbond@protonmail.com> * api-gateway: add operation cannot be fulfilled error to common errors (#17874) * add error message * Update website/content/docs/api-gateway/usage/errors.mdx Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com> * fix formating issues --------- Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com> * api-gateway: add step to upgrade instructions for creating intentions (#17875) * Changelog - add 1.13.9, 1.14.8, and 1.15.4 (#17889) * docs: update config enable_debug (#17866) * update doc for config enable_debug * Update website/content/docs/agent/config/config-files.mdx Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * Update wording on WAN fed and intermediate_pki_path (#17850) * Allow service identity tokens the ability to read jwt-providers (#17893) * Allow service identity tokens the ability to read jwt-providers * more tests * service_prefix tests * Update docs (#17476) Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * Add emit_tags_as_labels to envoy bootstrap config when using Consul Telemetry Collector (#17888) * Fix command from kg to kubectl get (#17903) * Create and update release notes for 1.16 and 1.2 (#17895) * update release notes for 1.16 and 1.2 * update latest consul core release * Propose new changes to APIgw upgrade instructions (#17693) * Propose new changes to APIgw upgrade instructions * fix build error * update callouts to render correctly * Add hideClipboard to log messages * Added clarification around consul k8s and crds * Add workflow to verify linux release packages (#17904) * adding docker files to verify linux packages. * add verifr-release-linux.yml * updating name * pass inputs directly into jobs * add other linux package platforms * remove on push * fix TARGETARCH on debian and ubuntu so it can check arm64 and amd64 * fixing amazon to use the continue line * add ubuntu i386 * fix comment lines * working * remove commented out workflow jobs * Apply suggestions from code review Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com> * update fedora and ubuntu to use latest tag --------- Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com> * Reference hashicorp/consul instead of consul for Docker image (#17914) * Reference hashicorp/consul instead of consul for Docker image * Update Make targets that pull consul directly * Update Consul K8s Upgrade Doc Updates (#17921) Updating upgrade procedures to encompass expected errors during upgrade process from v1.13.x to v1.14.x. * Update sameness-group.mdx (#17915) * Update create-sameness-groups.mdx (#17927) * deps: coredns v1.10.1 (#17912) * Ensure RSA keys are at least 2048 bits in length (#17911) * Ensure RSA keys are at least 2048 bits in length * Add changelog * update key length check for FIPS compliance * Fix no new variables error and failing to return when error exists from validating * clean up code for better readability * actually return value * tlsutil: Fix check TLS configuration (#17481) * tlsutil: Fix check TLS configuration * Rewording docs. 
* Update website/content/docs/services/configuration/checks-configuration-reference.mdx Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * Fix typos and add changelog entry. --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * docs: Deprecations for connect-native SDK and specific connect native APIs (#17937) * Update v1_16_x.mdx * Update connect native golang page --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * Revert "Add workflow to verify linux release packages (#17904)" (#17942) This reverts commit 3368f14fab500ebe9f6aeab5631dd1d5f5a453e5. * Fixes Secondary ConnectCA update (#17846) This fixes a bug that was identified which resulted in subsequent ConnectCA configuration update not to persist in the cluster. * fixing typo in link to jwt-validations-with-intentions doc (#17955) * Fix streaming backend link (#17958) * Fix streaming backend link * Update health.mdx * Dynamically create jwks clusters for jwt-providers (#17944) * website: remove deprecated agent rpc docs (#17962) * Fix missing BalanceOutboundConnections in v2 catalog. (#17964) * feature - [NET - 4005] - [Supportability] Reloadable Configuration - enable_debug (#17565) * # This is a combination of 9 commits. # This is the 1st commit message: init without tests # This is the commit message #2: change log # This is the commit message #3: fix tests # This is the commit message #4: fix tests # This is the commit message #5: added tests # This is the commit message #6: change log breaking change # This is the commit message #7: removed breaking change # This is the commit message #8: fix test # This is the commit message #9: keeping the test behaviour same * # This is a combination of 12 commits. 
# This is the 1st commit message: init without tests # This is the commit message #2: change log # This is the commit message #3: fix tests # This is the commit message #4: fix tests # This is the commit message #5: added tests # This is the commit message #6: change log breaking change # This is the commit message #7: removed breaking change # This is the commit message #8: fix test # This is the commit message #9: keeping the test behaviour same # This is the commit message #10: made enable debug atomic bool # This is the commit message #11: fix lint # This is the commit message #12: fix test true enable debug * parent 10f500e895d92cc3691ade7b74a33db755d22039 author absolutelightning <ashesh.vidyut@hashicorp.com> 1687352587 +0530 committer absolutelightning <ashesh.vidyut@hashicorp.com> 1687352592 +0530 init without tests change log fix tests fix tests added tests change log breaking change removed breaking change fix test keeping the test behaviour same made enable debug atomic bool fix lint fix test true enable debug using enable debug in agent as atomic bool test fixes fix tests fix tests added update on correct locaiton fix tests fix reloadable config enable debug fix tests fix init and acl 403 * revert commit * Fix formatting codeblocks on APIgw docs (#17970) * fix formatting codeblocks * remove unnecessary indents * Remove POC code (#17974) * update doc (#17910) * update doc * update link * Remove duplicate and unused newDecodeConfigEntry func (#17979) * docs: samenessGroup YAML examples (#17984) * configuration entry syntax * Example config * Add changelog entry for 1.16.0 (#17987) * Fix typo (#17198) servcies => services * Expose JWKS cluster config through JWTProviderConfigEntry (#17978) * Expose JWKS cluster config through JWTProviderConfigEntry * fix typos, rename trustedCa to trustedCA * Integration test for ext-authz Envoy extension (#17980) * Fix incorrect protocol for transparent proxy upstreams. (#17894) This PR fixes a bug that was introduced in: https://github.com/hashicorp/consul/pull/16021 A user setting a protocol in proxy-defaults would cause tproxy implicit upstreams to not honor the upstream service's protocol set in its `ServiceDefaults.Protocol` field, and would instead always use the proxy-defaults value. Due to the fact that upstreams configured with "tcp" can successfully contact upstream "http" services, this issue was not recognized until recently (a proxy-defaults with "tcp" and a listening service with "http" would make successful requests, but not the opposite). As a temporary work-around, users experiencing this issue can explicitly set the protocol on the `ServiceDefaults.UpstreamConfig.Overrides`, which should take precedence. The fix in this PR removes the proxy-defaults protocol from the wildcard upstream that tproxy uses to configure implicit upstreams. When the protocol was included, it would always overwrite the value during discovery chain compilation, which was not correct. The discovery chain compiler also consumes proxy defaults to determine the protocol, so simply excluding it from the wildcard upstream config map resolves the issue. 
* feat: include nodes count in operator usage endpoint and cli command (#17939) * feat: update operator usage api endpoint to include nodes count * feat: update operator usange cli command to includes nodes count * [OSS] Improve Gateway Test Coverage of Catalog Health (#18011) * fix(cli): remove failing check from 'connect envoy' registration for api gateway * test(integration): add tests to check catalog statsus of gateways on startup * remove extra sleep comment * Update test/integration/consul-container/libs/assert/service.go * changelog * Fixes Traffic rate limitting docs (#17997) * Fix removed service-to-service peering links (#17221) * docs: fix removed service-to-service peering links * docs: extend peering-via-mesh-gateways intro (thanks @trujillo-adam) --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * docs: Sameness "beta" warning (#18017) * Warning updates * .x * updated typo in tab heading (#18022) * updated typo in tab heading * updated tab group typo, too * Document that DNS lookups can target cluster peers (#17990) Static DNS lookups, in addition to explicitly targeting a datacenter, can target a cluster peer. This was added in 95dc0c7b301b70a6b955a8b7c9737c9b86f03df6 but didn't make the documentation. The driving function for the change is `parseLocality` here: https://github.com/hashicorp/consul/blob/0b1299c28d8127129d61310ee4280055298438e0/agent/dns_oss.go#L25 The biggest change in this is to adjust the standard lookup syntax to tie `.<datacenter>` to `.dc` as required-together, and to append in the similar `.<cluster-peer>.peer` optional argument, both to A record and SRV record lookups. Co-authored-by: David Yu <dyu@hashicorp.com> * Add first integration test for jwt auth with intention (#18005) * fix stand-in text for name field (#18030) * removed sameness conf entry from failover nav (#18033) * docs - add service sync annotations and k8s service weight annotation (#18032) * Docs for https://github.com/hashicorp/consul-k8s/pull/2293 * remove versions for enterprise features since they are old --------- Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com> * docs - add jobs use case for service mesh k8s (#18037) * docs - add jobs use case for service mesh k8s * add code blocks * address feedback (#18045) * Add verify server hostname to tls default (#17155) * [OSS] Fix initial_fetch_timeout to wait for all xDS resources (#18024) * fix(connect): set initial_fetch_time to wait indefinitely * changelog * PR feedback 1 * ui: fix typos for peer service imports (#17999) * test: fix FIPS inline cert test message (#18076) * Fix a couple typos in Agent Telemetry Metrics docs (#18080) * Fix metrics docs * Add changelog Signed-off-by: josh <josh.timmons@hashicorp.com> --------- Signed-off-by: josh <josh.timmons@hashicorp.com> * docs updates - cluster peering and virtual services (#18069) * Update route-to-virtual-services.mdx * Update establish-peering.mdx * Update service-mesh-compare.mdx (#17279) grammar change * Update helm docs on main (#18085) * ci: use gotestsum v1.10.1 [NET-4042] (#18088) * Docs: Update proxy lifecycle annotations and consul-dataplane flags (#18075) * Update proxy lifecycle annotations and consul-dataplane flags * Pass configured role name to Vault for AWS auth in Connect CA (#17885) * Docs for dataplane upgrade on k8s (#18051) * Docs for dataplane upgrade on k8s --------- Co-authored-by: David Yu <dyu@hashicorp.com> Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * docs - 
update upgrade index page to not recommend consul leave. (#18100) * Displays Consul version of each nodes in UI nodes section (#17754) * update UINodes and UINodeInfo response with consul-version info added as NodeMeta, fetched from serf members * update test cases TestUINodes, TestUINodeInfo * added nil check for map * add consul-version in local agent node metadata * get consul version from serf member and add this as node meta in catalog register request * updated ui mock response to include consul versions as node meta * updated ui trans and added version as query param to node list route * updates in ui templates to display consul version with filter and sorts * updates in ui - model class, serializers,comparators,predicates for consul version feature * added change log for Consul Version Feature * updated to get version from consul service, if for some reason not available from serf * updated changelog text * updated dependent testcases * multiselection version filter * Update agent/consul/state/catalog.go comments updated Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com> --------- Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com> * api gw 1.16 updates (#18081) * api gw 1.16 updates * Apply suggestions from code review Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com> * update CodeBlockConfig filename * Apply suggestions from code review Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com> * remove non-standard intentions page * Update website/content/docs/api-gateway/configuration/index.mdx Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> --------- Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com> Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * [NET-4103] ci: build s390x (#18067) * ci: build s390x * ci: test s390x * ci: dev build s390x * no GOOS * ent only * build: publish s390x * fix syntax error * fix syntax error again * fix syntax error again x2 * test branch * Move s390x conditionals to step level * remove test branch --------- Co-authored-by: emilymianeil <eneil@hashicorp.com> * :ermahgerd "Sevice Mesh" -> "Service Mesh" (#18116) Just a typo in the docs. * Split pbmesh.UpstreamsConfiguration as a resource out of pbmesh.Upstreams (#17991) Configuration that previously was inlined into the Upstreams resource applies to both explicit and implicit upstreams and so it makes sense to split it out into its own resource. It also has other minor changes: - Renames `proxy.proto` proxy_configuration.proto` - Changes the type of `Upstream.destination_ref` from `pbresource.ID` to `pbresource.Reference` - Adds comments to fields that didn't have them * [NET-4895] ci - api tests and consul container tests error because of dependency bugs with go 1.20.6. Pin go to 1.20.5. (#18124) ### Description The following jobs started failing when go 1.20.6 was released: - `go-test-api-1-19` - `go-test-api-1-20` - `compatibility-integration-tests` - `upgrade-integration-tests` `compatibility-integration-tests` and `compatibility-integration-tests` to this testcontainers issue: https://github.com/testcontainers/testcontainers-go/issues/1359. This issue calls for testcontainers to release a new version when one of their dependencies is fixed. 
When that is done, we will unpin the go versions in `compatibility-integration-tests` and `compatibility-integration-tests`. ### Testing & Reproduction steps See these jobs broken in CI and then see them work with this PR. --------- Co-authored-by: Chris Thain <32781396+cthain@users.noreply.github.com> * Add ingress gateway deprecation notices to docs (#18102) ### Description This adds notices, that ingress gateway is deprecated, to several places in the product docs where ingress gateway is the topic. ### Testing & Reproduction steps Tested with a local copy of the website. ### Links Deprecation of ingress gateway was announced in the Release Notes for Consul 1.16 and Consul-K8s 1.2. See: [https://developer.hashicorp.com/consul/docs/release-notes/consul/v1_16_x#what-s-deprecated](https://developer.hashicorp.com/consul/docs/release-notes/consul/v1_16_x#what-s-deprecated ) [https://developer.hashicorp.com/consul/docs/release-notes/consul-k8s/v1_2_x#what-s-deprecated](https://developer.hashicorp.com/consul/docs/release-notes/consul-k8s/v1_2_x#what-s-deprecated) ### PR Checklist * [N/A] updated test coverage * [X] external facing docs updated * [X] appropriate backport labels added * [X] not a security concern --------- Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> * Add docs for jwt cluster configuration (#18004) ### Description <!-- Please describe why you're making this change, in plain English. --> - Add jwt-provider docs for jwks cluster configuration. The configuration was added here: https://github.com/hashicorp/consul/pull/17978 * Docs: fix unmatched bracket for health checks page (#18134) * NET-4657/add resource service client (#18053) ### Description <!-- Please describe why you're making this change, in plain English. --> Dan had already started on this [task](https://github.com/hashicorp/consul/pull/17849) which is needed to start building the HTTP APIs. This just needed some cleanup to get it ready for review. Overview: - Rename `internalResourceServiceClient` to `insecureResourceServiceClient` for name consistency - Configure a `secureResourceServiceClient` with auth enabled ### PR Checklist * [ ] ~updated test coverage~ * [ ] ~external facing docs updated~ * [x] appropriate backport labels added * [ ] ~not a security concern~ * Fix bug with Vault CA provider (#18112) Updating RootPKIPath but not IntermediatePKIPath would not update leaf signing certs with the new root. Unsure if this happens in practice but manual testing showed it is a bug that would break mesh and agent connections once the old root is pruned. * [NET-4897] net/http host header is now verified and request.host that contains socked now error (#18129) ### Description This is related to https://github.com/hashicorp/consul/pull/18124 where we pinned the go versions in CI to 1.20.5 and 1.19.10. go 1.20.6 and 1.19.11 now validate request host headers for validity, including the hostname cannot be prefixed with slashes. For local communications (npipe://, unix://), the hostname is not used, but we need valid and meaningful hostname. Prior versions go Go would clean the host header, and strip slashes in the process, but go1.20.6 and go1.19.11 no longer do, and reject the host header. Around the community we are seeing that others are intercepting the req.host and if it starts with a slash or ends with .sock, they changing the host to localhost or another dummy value. 
[client: define a "dummy" hostname to use for local connections by thaJeztah · Pull Request #45942 · moby/moby](https://github.com/moby/moby/pull/45942) ### Testing & Reproduction steps Check CI tests. ### Links * [ ] updated test coverage * [ ] external facing docs updated * [ ] appropriate backport labels added * [ ] not a security concern * add a conditional around setting LANFilter.AllSegments to make sure it is valid (#18139) ### Description This is to correct a code problem because this assumes all segments, but when you get to Enterprise, you can be in partition that is not the default partition, in which case specifying all segments does not validate and fails. This is to correct the setting of this filter with `AllSegments` to `true` to only occur when in the the `default` partition. ### Testing & Reproduction steps <!-- * In the case of bugs, describe how to replicate * If any manual tests were done, document the steps and the conditions to replicate * Call out any important/ relevant unit tests, e2e tests or integration tests you have added or are adding --> ### Links <!-- Include any links here that might be helpful for people reviewing your PR (Tickets, GH issues, API docs, external benchmarks, tools docs, etc). If there are none, feel free to delete this section. Please be mindful not to leak any customer or confidential information. HashiCorp employees may want to use our internal URL shortener to obfuscate links. --> ### PR Checklist * [ ] updated test coverage * [ ] external facing docs updated * [ ] appropriate backport labels added * [ ] not a security concern * chore: bump upgrade integrations tests to 1.15, 116 [NET-4743] (#18130) * re org resource type registry (#18133) * fix: update delegateMock used in ENT (#18149) ### Description <!-- Please describe why you're making this change, in plain English. --> The mock is used in `http_ent_test` file which caused lint failures. For OSS->ENT parity adding the same change here. ### Links <!-- Include any links here that might be helpful for people reviewing your PR (Tickets, GH issues, API docs, external benchmarks, tools docs, etc). If there are none, feel free to delete this section. Please be mindful not to leak any customer or confidential information. HashiCorp employees may want to use our internal URL shortener to obfuscate links. --> Identified in OSS->ENT [merge PR](https://github.com/hashicorp/consul-enterprise/pull/6328) ### PR Checklist * [ ] ~updated test coverage~ * [ ] ~external facing docs updated~ * [x] appropriate backport labels added * [ ] ~not a security concern~ * Use JWT-auth filter in metadata mode & Delegate validation to RBAC filter (#18062) ### Description <!-- Please describe why you're making this change, in plain English. --> - Currently the jwt-auth filter doesn't take into account the service identity when validating jwt-auth, it only takes into account the path and jwt provider during validation. This causes issues when multiple source intentions restrict access to an endpoint with different JWT providers. - To fix these issues, rather than use the JWT auth filter for validation, we use it in metadata mode and allow it to forward the successful validated JWT token payload to the RBAC filter which will make the decisions. This PR ensures requests with and without JWT tokens successfully go through the jwt-authn filter. The filter however only forwards the data for successful/valid tokens. On the RBAC filter level, we check the payload for claims and token issuer + existing rbac rules. 
### Testing & Reproduction steps <!-- * In the case of bugs, describe how to replicate * If any manual tests were done, document the steps and the conditions to replicate * Call out any important/ relevant unit tests, e2e tests or integration tests you have added or are adding --> - This test covers a multi level jwt requirements (requirements at top level and permissions level). It also assumes you have envoy running, you have a redis and a sidecar proxy service registered, and have a way to generate jwks with jwt. I mostly use: https://www.scottbrady91.com/tools/jwt for this. - first write your proxy defaults ``` Kind = "proxy-defaults" name = "global" config { protocol = "http" } ``` - Create two providers ``` Kind = "jwt-provider" Name = "auth0" Issuer = "https://ronald.local" JSONWebKeySet = { Local = { JWKS = "eyJrZXlzIjog....." } } ``` ``` Kind = "jwt-provider" Name = "okta" Issuer = "https://ronald.local" JSONWebKeySet = { Local = { JWKS = "eyJrZXlzIjogW3...." } } ``` - add a service intention ``` Kind = "service-intentions" Name = "redis" JWT = { Providers = [ { Name = "okta" }, ] } Sources = [ { Name = "*" Permissions = [{ Action = "allow" HTTP = { PathPrefix = "/workspace" } JWT = { Providers = [ { Name = "okta" VerifyClaims = [ { Path = ["aud"] Value = "my_client_app" }, { Path = ["sub"] Value = "5be86359073c434bad2da3932222dabe" } ] }, ] } }, { Action = "allow" HTTP = { PathPrefix = "/" } JWT = { Providers = [ { Name = "auth0" }, ] } }] } ] ``` - generate 3 jwt tokens: 1 from auth0 jwks, 1 from okta jwks with different claims than `/workspace` expects and 1 with correct claims - connect to your envoy (change service and address as needed) to view logs and potential errors. You can add: `-- --log-level debug` to see what data is being forwarded ``` consul connect envoy -sidecar-for redis1 -grpc-addr 127.0.0.1:8502 ``` - Make the following requests: ``` curl -s -H "Authorization: Bearer $Auth0_TOKEN" --insecure --cert leaf.cert --key leaf.key --cacert connect-ca.pem https://localhost:20000/workspace -v RBAC filter denied curl -s -H "Authorization: Bearer $Okta_TOKEN_with_wrong_claims" --insecure --cert leaf.cert --key leaf.key --cacert connect-ca.pem https://localhost:20000/workspace -v RBAC filter denied curl -s -H "Authorization: Bearer $Okta_TOKEN_with_correct_claims" --insecure --cert leaf.cert --key leaf.key --cacert connect-ca.pem https://localhost:20000/workspace -v Successful request ``` ### TODO * [x] Update test coverage * [ ] update integration tests (follow-up PR) * [x] appropriate backport labels added * Support Consul Connect Envoy Command on Windows (#17694) ### Description Add support for consul connect envoy command on windows. This PR fixes the comments of PR - https://github.com/hashicorp/consul/pull/15114 ### Testing * Built consul.exe from this branch on windows and hosted here - [AWS S3](https://asheshvidyut-bucket.s3.ap-southeast-2.amazonaws.com/consul.zip) * Updated the [tutorial](https://developer.hashicorp.com/consul/tutorials/developer-mesh/consul-windows-workloads) and changed the `consul_url.default` value to [AWS S3](https://asheshvidyut-bucket.s3.ap-southeast-2.amazonaws.com/consul.zip) * Followed the steps in the tutorial and verified that everything is working as described. 
### PR Checklist * [x] updated test coverage * [ ] external facing docs updated * [x] appropriate backport labels added * [x] not a security concern --------- Co-authored-by: Franco Bruno Lavayen <cocolavayen@gmail.com> Co-authored-by: Jose Ignacio Lorenzo <74208929+joselo85@users.noreply.github.com> Co-authored-by: Jose Ignacio Lorenzo <joseignaciolorenzo85@gmail.com> Co-authored-by: Dhia Ayachi <dhia@hashicorp.com> * Change docs to say 168h instead of 7d for server_rejoin_age_max (#18154) ### Description Addresses https://github.com/hashicorp/consul/pull/17171#issuecomment-1636930705 * [OSS] test: improve xDS listener code coverage (#18138) test: improve xDS listener code coverage * Re-order expected/actual for assertContainerState in consul container tests (#18157) Re-order expected/actual, consul container tests * group and document make file (#17943) * group and document make file * Add `testing/deployer` (neé `consul-topology`) [NET-4610] (#17823) Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com> Co-authored-by: R.B. Boyer <rb@hashicorp.com> Co-authored-by: Freddy <freddygv@users.noreply.github.com> * [NET-4792] Add integrations tests for jwt-auth (#18169) * Add FIPS reference to consul enterprise docs (#18028) * Add FIPS reference to consul enterprise docs * Update website/content/docs/enterprise/index.mdx Co-authored-by: David Yu <dyu@hashicorp.com> * remove support for ecs client (fips) --------- Co-authored-by: David Yu <dyu@hashicorp.com> * add peering_commontopo tests [NET-3700] (#17951) Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com> Co-authored-by: R.B. Boyer <rb@hashicorp.com> Co-authored-by: Freddy <freddygv@users.noreply.github.com> Co-authored-by: NiniOak <anita.akaeze@hashicorp.com> * docs - remove Sentinel from enterprise features list (#18176) * Update index.mdx * Update kv.mdx * Update docs-nav-data.json * delete sentinel.mdx * Update redirects.js --------- Co-authored-by: Tu Nguyen <im2nguyen@users.noreply.github.com> * [NET-4865] Bump golang.org/x/net to 0.12.0 (#18186) Bump golang.org/x/net to 0.12.0 While not necessary to directly address CVE-2023-29406 (which should be handled by using a patched version of Go when building), an accompanying change to HTTP/2 error handling does impact agent code. See https://go-review.googlesource.com/c/net/+/506995 for the HTTP/2 change. Bump this dependency across our submodules as well for the sake of potential indirect consumers of `x/net/http`. * Call resource mutate hook before validate hook (NET-4907) (#18178) * [NET-4865] security: Update Go version to 1.20.6 (#18190) Update Go version to 1.20.6 This resolves [CVE-2023-29406] (https://nvd.nist.gov/vuln/detail/CVE-2023-29406) for uses of the `net/http` standard library. Note that until the follow-up to #18124 is done, the version of Go used in those impacted tests will need to remain on 1.20.5. * Improve XDS test coverage: JWT auth edition (#18183) * Improve XDS test coverage: JWT auth edition more tests * test: xds coverage for jwt listeners --------- Co-authored-by: DanStough <dan.stough@hashicorp.com> * update readme.md (#18191) u[date readme.md * Update submodules to latest following 1.16.0 (#18197) Align all our internal use of submodules on the latest versions. * SEC-090: Automated trusted workflow pinning (2023-07-18) (#18174) Result of tsccr-helper -log-level=info -pin-all-workflows . 
Co-authored-by: hashicorp-tsccr[bot] <hashicorp-tsccr[bot]@users.noreply.github.com> * Fix Backport Assistant PR commenting (#18200) * Fix Backport Assistant failure PR commenting For general comments on a PR, it looks like you have to use the `/issue` endpoint rather than `/pulls`, which requires commit/other review-specific target details. This matches the endpoint used in `backport-reminder.yml`. * Remove Backport Reminder workflow This is noisy (even when adding multiple labels, individual comments per label are generated), and likely no longer needed: we haven't had this work in a long time due to an expired GH token, and we now have better automation for backport PR assignment. * resource: Pass resource to Write ACL hook instead of just resource Id [NET-4908] (#18192) * Explicitly enable WebSocket upgrades (#18150) This PR explicitly enables WebSocket upgrades in Envoy's UpgradeConfig for all proxy types. (API Gateway, Ingress, and Sidecar.) Fixes #8283 * docs: fix the description of client rpc (#18206) * NET-4804: Add dashboard for monitoring consul-k8s (#18208) * [OSS] Improve xDS Code Coverage - Clusters (#18165) test: improve xDS cluster code coverage * NET-4222 take config file consul container (#18218) Net 4222 take config file consul container * Envoy Integration Test Windows (#18007) * [CONSUL-395] Update check_hostport and Usage (#40) * [CONSUL-397] Copy envoy binary from Image (#41) * [CONSUL-382] Support openssl in unique test dockerfile (#43) * [CONSUL-405] Add bats to single container (#44) * [CONSUL-414] Run Prometheus Test Cases and Validate Changes (#46) * [CONSUL-410] Run Jaeger in Single container (#45) * [CONSUL-412] Run test-sds-server in single container (#48) * [CONSUL-408] Clean containers (#47) * [CONSUL-384] Rebase and sync fork (#50) * [CONSUL-415] Create Scenarios Troubleshooting Docs (#49) * [CONSUL-417] Update Docs Single Container (#51) * [CONSUL-428] Add Socat to single container (#54) * [CONSUL-424] Replace pkill in kill_envoy function (#52) * [CONSUL-434] Modify Docker run functions in Helper script (#53) * [CONSUL-435] Replace docker run in set_ttl_check_state & wait_for_agent_service_register functions (#55) * [CONSUL-438] Add netcat (nc) in the Single container Dockerfile (#56) * [CONSUL-429] Replace Docker run with Docker exec (#57) * [CONSUL-436] Curl timeout and run tests (#58) * [CONSUL-443] Create dogstatsd Function (#59) * [CONSUL-431] Update Docs Netcat (#60) * [CONSUL-439] Parse nc Command in function (#61) * [CONSUL-463] Review curl Exec and get_ca_root Func (#63) * [CONSUL-453] Docker hostname in Helper functions (#64) * [CONSUL-461] Test wipe volumes without extra cont (#66) * [CONSUL-454] Check ports in the Server and Agent containers (#65) * [CONSUL-441] Update windows dockerfile with version (#62) * [CONSUL-466] Review case-grpc Failing Test (#67) * [CONSUL-494] Review case-cfg-resolver-svc-failover (#68) * [CONSUL-496] Replace docker_wget & docker_curl (#69) * [CONSUL-499] Cleanup Scripts - Remove nanoserver (#70) * [CONSUL-500] Update Troubleshooting Docs (#72) * [CONSUL-502] Pull & Tag Envoy Windows Image (#73) * [CONSUL-504] Replace docker run in docker_consul (#76) * [CONSUL-505] Change admin_bind * [CONSUL-399] Update envoy to 1.23.1 (#78) * [CONSUL-510] Support case-wanfed-gw on Windows (#79) * [CONSUL-506] Update troubleshooting Documentation (#80) * [CONSUL-512] Review debug_dump_volumes Function (#81) * [CONSUL-514] Add zipkin to Docker Image (#82) * [CONSUL-515] Update Documentation (#83) * [CONSUL-529] Support 
case-consul-exec (#86) * [CONSUL-530] Update Documentation (#87) * [CONSUL-530] Update default consul version 1.13.3 * [CONSUL-539] Cleanup (#91) * [CONSUL-546] Scripts Clean-up (#92) * [CONSUL-491] Support admin_access_log_path value for Windows (#71) * [CONSUL-519] Implement mkfifo Alternative (#84) * [CONSUL-542] Create OS Specific Files for Envoy Package (#88) * [CONSUL-543] Create exec_supported.go (#89) * [CONSUL-544] Test and Build Changes (#90) * Implement os.DevNull * using mmap instead of disk files * fix import in exec-unix * fix nmap open too many arguemtn * go fmt on file * changelog file * fix go mod * Update .changelog/17694.txt Co-authored-by: Dhia Ayachi <dhia@hashicorp.com> * different mmap library * fix bootstrap json * some fixes * chocolatey version fix and image fix * using different library * fix Map funciton call * fix mmap call * fix tcp dump * fix tcp dump * windows tcp dump * Fix docker run * fix tests * fix go mod * fix version 16.0 * fix version * fix version dev * sleep to debug * fix sleep * fix permission issue * fix permission issue * fix permission issue * fix command * fix command * fix funciton * fix assert config entry status command not found * fix command not found assert_cert_has_cn * fix command not found assert_upstream_missing * fix command not found assert_upstream_missing_once * fix command not found get_upstream_endpoint * fix command not found get_envoy_public_listener_once * fix command not found * fix test cases * windows integration test workflow github * made code similar to unix using npipe * fix go.mod * fix dialing of npipe * dont wait * check size of written json * fix undefined n * running * fix dep * fix syntax error * fix workflow file * windows runner * fix runner * fix from json * fix runs on * merge connect envoy * fix cin path * build * fix file name * fix file name * fix dev build * remove unwanted code * fix upload * fix bin name * fix path * checkout current branch * fix path * fix tests * fix shell bash for windows sh files * fix permission of run-test.sh * removed docker dev * added shell bash for tests * fix tag * fix win=true * fix cd * added dev * fix variable undefined * removed failing tests * fix tcp dump image * fix curl * fix curl * tcp dump path * fix tcpdump path * fix curl * fix curl install * stop removing intermediate containers * fix tcpdump docker image * revert -rm * --rm=false * makeing docker image before * fix tcpdump * removed case consul exec * removed terminating gateway simple * comment case wasm * removed data dog * comment out upload coverage * uncomment case-consul-exec * comment case consul exec * if always * logs * using consul 1.17.0 * fix quotes * revert quotes * redirect to dev null * Revert version * revert consul connect * fix version * removed envoy connect * not using function * change log * docker logs * fix logs * restructure bad authz * rmeoved dev null * output * fix file descriptor * fix cacert * fix cacert * fix ca cert * cacert does not work in windows curl * fix func * removed docker logs * added sleep * fix tls * commented case-consul-exec * removed echo * retry docker consul * fix upload bin * uncomment consul exec * copying consul.exe to docker image * copy fix * fix paths * fix path * github workspace path * latest version * Revert "latest version" This reverts commit 5a7d7b82d9e7553bcb01b02557ec8969f9deba1d. 
* commented consul exec * added ssl revoke best effort * revert best effort * removed unused files * rename var name and change dir * windows runner * permission * needs setup fix * swtich to github runner * fix file path * fix path * fix path * fix path * fix path * fix path * fix build paths * fix tag * nightly runs * added matrix in github workflow, renamed files * fix job * fix matrix * removed brackes * from json * without using job matrix * fix quotes * revert job matrix * fix workflow * fix comment * added comment * nightly runs * removed datadog ci as it is already measured in linux one * running test * Revert "running test" This reverts commit 7013d15a23732179d18ec5d17336e16b26fab5d4. * pr comment fixes * running test now * running subset of test * running subset of test * job matrix * shell bash * removed bash shell * linux machine for job matrix * fix output * added cat to debug * using ubuntu latest * fix job matrix * fix win true * fix go test * revert job matrix --------- Co-authored-by: Jose Ignacio Lorenzo <74208929+joselo85@users.noreply.github.com> Co-authored-by: Franco Bruno Lavayen <cocolavayen@gmail.com> Co-authored-by: Ivan K Berlot <ivanberlot@gmail.com> Co-authored-by: Ezequiel Fernández Ponce <20102608+ezfepo@users.noreply.github.com> Co-authored-by: joselo85 <joseignaciolorenzo85@gmail.com> Co-authored-by: Ezequiel Fernández Ponce <ezequiel.fernandez@southworks.com> Co-authored-by: Dhia Ayachi <dhia@hashicorp.com> * fix typos and update ecs compat table (#18215) * fix typos and update ecs compat table * real info for the ecs compat matrix table * Update website/content/docs/ecs/compatibility.mdx Co-authored-by: Chris Thain <32781396+cthain@users.noreply.github.com> --------- Co-authored-by: Chris Thain <32781396+cthain@users.noreply.github.com> * [OSS] proxystate: add proxystate protos (#18216) * proxystate: add proxystate protos to pbmesh and resolve imports and conflicts between message names * ci: don't verify s390x (#18224) * [CC-5718] Remove HCP token requirement during bootstrap (#18140) * [CC-5718] Remove HCP token requirement during bootstrap * Re-add error for loading HCP management token * Remove old comment * Add changelog entry * Remove extra validation line * Apply suggestions from code review Co-authored-by: lornasong <lornasong@users.noreply.github.com> --------- Co-authored-by: lornasong <lornasong@users.noreply.github.com> * [NET-4122] Doc guidance for federation with externalServers (#18207) Doc guidance for federation with externalServers Add guidance for proper configuration when joining to a secondary cluster using WAN fed with external servers also enabled. Also clarify federation requirements and fix formatting for an unrelated value. Update both the Helm chart reference (synced from `consul-k8s`, see hashicorp/consul-k8s#2583) and the docs on using `externalServers`. 
// writeBuiltinACLPolicy writes the given built-in policy to Raft if the policy
// is not found or if the policy rules have been changed. The name and
// description of a built-in policy are user-editable and must be preserved
// during updates. This function must only be called in a primary datacenter.
func (s *Server) writeBuiltinACLPolicy(newPolicy structs.ACLPolicy) error {
_, policy, err := s.fsm.State().ACLPolicyGetByID(nil, newPolicy.ID, structs.DefaultEnterpriseMetaInDefaultPartition())
if err != nil {
return fmt.Errorf("failed to get the builtin %s policy", newPolicy.Name)
}
if policy == nil || policy.Rules != newPolicy.Rules {
if policy != nil {
newPolicy.Name = policy.Name
newPolicy.Description = policy.Description
}
newPolicy.EnterpriseMeta = *structs.DefaultEnterpriseMetaInDefaultPartition()
newPolicy.SetHash(true)
req := structs.ACLPolicyBatchSetRequest{
Policies: structs.ACLPolicies{&newPolicy},
}
_, err := s.raftApply(structs.ACLPolicySetRequestType, &req)
if err != nil {
return fmt.Errorf("failed to create %s policy: %v", newPolicy.Name, err)
}
s.logger.Info(fmt.Sprintf("Created ACL '%s' policy", newPolicy.Name))
}
return nil
}
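// initializeManagementToken ensures that the management token with the given
// name and secret ID exists in state. If it is missing, the token is created
// with the global-management policy attached, preferring an ACL bootstrap
// request and falling back to a regular token set when bootstrapping is not
// possible.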
func (s *Server) initializeManagementToken(name, secretID string) error {
state := s.fsm.State()
if _, err := uuid.ParseUUID(secretID); err != nil {
s.logger.Warn("Configuring a non-UUID management token is deprecated")
}
_, token, err := state.ACLTokenGetBySecret(nil, secretID, nil)
if err != nil {
return fmt.Errorf("failed to get %s: %v", name, err)
}
// Ignoring expiration times to avoid an insertion collision.
if token == nil {
accessor, err := lib.GenerateUUID(s.checkTokenUUID)
if err != nil {
return fmt.Errorf("failed to generate the accessor ID for %s: %v", name, err)
}
token := structs.ACLToken{
AccessorID: accessor,
SecretID: secretID,
Description: name,
Policies: []structs.ACLTokenPolicyLink{
{
ID: structs.ACLPolicyGlobalManagementID,
},
},
CreateTime: time.Now(),
Local: false,
EnterpriseMeta: *structs.DefaultEnterpriseMetaInDefaultPartition(),
}
token.SetHash(true)
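// First try to install the token through an ACL bootstrap request; if
// bootstrapping is not allowed or the request is rejected, fall back to a
// regular token set below.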
done := false
if canBootstrap, _, err := state.CanBootstrapACLToken(); err == nil && canBootstrap {
req := structs.ACLTokenBootstrapRequest{
Token: token,
ResetIndex: 0,
}
if _, err := s.raftApply(structs.ACLBootstrapRequestType, &req); err == nil {
s.logger.Info("Bootstrapped ACL token from configuration", "description", name)
done = true
} else {
if err.Error() != structs.ACLBootstrapNotAllowedErr.Error() &&
err.Error() != structs.ACLBootstrapInvalidResetIndexErr.Error() {
return fmt.Errorf("failed to bootstrap with %s: %v", name, err)
}
}
}
if !done {
// Either we didn't attempt to bootstrap, or setting the token with a bootstrap request failed.
req := structs.ACLTokenBatchSetRequest{
Tokens: structs.ACLTokens{&token},
CAS: false,
}
if _, err := s.raftApply(structs.ACLTokenSetRequestType, &req); err != nil {
return fmt.Errorf("failed to create %s: %v", name, err)
}
s.logger.Info("Created ACL token from configuration", "description", name)
}
}
return nil
}
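// insertAnonymousToken creates the well-known anonymous ACL token in the
// state store if it does not already exist.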
func (s *Server) insertAnonymousToken() error {
state := s.fsm.State()
_, token, err := state.ACLTokenGetBySecret(nil, anonymousToken, nil)
if err != nil {
return fmt.Errorf("failed to get anonymous token: %v", err)
}
// Ignoring expiration times to avoid an insertion collision.
if token == nil {
token = &structs.ACLToken{
AccessorID: acl.AnonymousTokenID,
SecretID: anonymousToken,
Description: "Anonymous Token",
CreateTime: time.Now(),
EnterpriseMeta: *structs.DefaultEnterpriseMetaInDefaultPartition(),
}
token.SetHash(true)
req := structs.ACLTokenBatchSetRequest{
Tokens: structs.ACLTokens{token},
CAS: false,
}
_, err := s.raftApply(structs.ACLTokenSetRequestType, &req)
if err != nil {
return fmt.Errorf("failed to create anonymous token: %v", err)
}
s.logger.Info("Created ACL anonymous token from configuration")
}
return nil
}
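// startACLReplication starts the ACL policy and role replication routines in
// secondary datacenters, plus token replication when it is enabled. It is a
// no-op in the primary datacenter or when replication is already running.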
func (s *Server) startACLReplication(ctx context.Context) {
if s.InPrimaryDatacenter() {
return
}
// Unlike some other leader routines, this one initializes extra state and
// therefore we want to prevent re-initialization if replication is already
// running.
if s.leaderRoutineManager.IsRunning(aclPolicyReplicationRoutineName) {
return
}
s.initReplicationStatus()
s.leaderRoutineManager.Start(ctx, aclPolicyReplicationRoutineName, s.runACLPolicyReplicator)
s.leaderRoutineManager.Start(ctx, aclRoleReplicationRoutineName, s.runACLRoleReplicator)
if s.config.ACLTokenReplication {
s.leaderRoutineManager.Start(ctx, aclTokenReplicationRoutineName, s.runACLTokenReplicator)
s.updateACLReplicationStatusRunning(structs.ACLReplicateTokens)
} else {
s.updateACLReplicationStatusRunning(structs.ACLReplicatePolicies)
}
}
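// replicateFunc is the signature shared by the ACL policy, role, and token
// replication rounds driven by runACLReplicator. It returns the remote index
// replication has caught up to and whether the routine should exit.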
type replicateFunc func(ctx context.Context, logger hclog.Logger, lastRemoteIndex uint64) (uint64, bool, error)
// This function is only intended to be run as a managed goroutine; it will
// block until the context passed in indicates that it should exit.
func (s *Server) runACLPolicyReplicator(ctx context.Context) error {
policyLogger := s.aclReplicationLogger(structs.ACLReplicatePolicies.SingularNoun())
policyLogger.Info("started ACL Policy replication")
return s.runACLReplicator(ctx, policyLogger, structs.ACLReplicatePolicies, s.replicateACLPolicies, "acl-policies")
}
// This function is only intended to be run as a managed goroutine; it will
// block until the context passed in indicates that it should exit.
func (s *Server) runACLRoleReplicator(ctx context.Context) error {
roleLogger := s.aclReplicationLogger(structs.ACLReplicateRoles.SingularNoun())
roleLogger.Info("started ACL Role replication")
return s.runACLReplicator(ctx, roleLogger, structs.ACLReplicateRoles, s.replicateACLRoles, "acl-roles")
}
// This function is only intended to be run as a managed goroutine; it will
// block until the context passed in indicates that it should exit.
func (s *Server) runACLTokenReplicator(ctx context.Context) error {
tokenLogger := s.aclReplicationLogger(structs.ACLReplicateTokens.SingularNoun())
tokenLogger.Info("started ACL Token replication")
return s.runACLReplicator(ctx, tokenLogger, structs.ACLReplicateTokens, s.replicateACLTokens, "acl-tokens")
}
// This function is only intended to be run as a managed goroutine; it will
// block until the context passed in indicates that it should exit.
func (s *Server) runACLReplicator(
ctx context.Context,
logger hclog.Logger,
replicationType structs.ACLReplicationType,
replicateFunc replicateFunc,
metricName string,
) error {
var failedAttempts uint
limiter := rate.NewLimiter(rate.Limit(s.config.ACLReplicationRate), s.config.ACLReplicationBurst)
var lastRemoteIndex uint64
for {
if err := limiter.Wait(ctx); err != nil {
return err
}
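// Wait for a replication token to be configured before attempting a round;
// the rate limiter above keeps this check from spinning.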
if s.tokens.ReplicationToken() == "" {
continue
}
index, exit, err := replicateFunc(ctx, logger, lastRemoteIndex)
if exit {
return nil
}
if err != nil {
metrics.SetGauge([]string{"leader", "replication", metricName, "status"},
0,
)
lastRemoteIndex = 0
s.updateACLReplicationStatusError(err.Error())
logger.Warn("ACL replication error (will retry if still leader)",
"error", err,
)
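// Back off exponentially between failed rounds, no longer growing the wait
// once 2^failedAttempts reaches aclReplicationMaxRetryBackoff seconds.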
if (1 << failedAttempts) < aclReplicationMaxRetryBackoff {
failedAttempts++
}
select {
case <-ctx.Done():
return nil
case <-time.After((1 << failedAttempts) * time.Second):
// do nothing
}
} else {
metrics.SetGauge([]string{"leader", "replication", metricName, "status"},
1,
)
metrics.SetGauge([]string{"leader", "replication", metricName, "index"},
float32(index),
)
lastRemoteIndex = index
s.updateACLReplicationStatusIndex(replicationType, index)
logger.Debug("ACL replication completed through remote index",
"index", index,
)
failedAttempts = 0
}
}
}
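// aclReplicationLogger returns a logger scoped to the replication and ACL
// subsystems and the given replication type noun.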
func (s *Server) aclReplicationLogger(singularNoun string) hclog.Logger {
return s.loggers.
Named(logging.Replication).
Named(logging.ACL).
Named(singularNoun)
}
func (s *Server) stopACLReplication() {
// these will be no-ops when not started
s.leaderRoutineManager.Stop(aclPolicyReplicationRoutineName)
s.leaderRoutineManager.Stop(aclRoleReplicationRoutineName)
s.leaderRoutineManager.Stop(aclTokenReplicationRoutineName)
}
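// startDeferredDeletion starts the deferred deletion routines for peerings
// (when peering is enabled) and for tenancy resources.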
func (s *Server) startDeferredDeletion(ctx context.Context) {
if s.config.PeeringEnabled {
s.startPeeringDeferredDeletion(ctx)
}
s.startTenancyDeferredDeletion(ctx)
}
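// stopDeferredDeletion stops the peering and tenancy deferred deletion
// routines.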
func (s *Server) stopDeferredDeletion() {
s.leaderRoutineManager.Stop(peeringDeletionRoutineName)
s.stopTenancyDeferredDeletion()
}
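// startConfigReplication starts config entry replication in secondary
// datacenters; it does not run in the primary datacenter.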
func (s *Server) startConfigReplication(ctx context.Context) {
if s.config.PrimaryDatacenter == "" || s.config.PrimaryDatacenter == s.config.Datacenter {
// replication shouldn't run in the primary DC
return
}
s.leaderRoutineManager.Start(ctx, configReplicationRoutineName, s.configReplicator.Run)
}
func (s *Server) stopConfigReplication() {
// will be a no-op when not started
s.leaderRoutineManager.Stop(configReplicationRoutineName)
}
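// startFederationStateReplication starts federation state replication in
// secondary datacenters and, when a gateway locator is present, switches it
// to use the replication signal. It does not run in the primary datacenter.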
func (s *Server) startFederationStateReplication(ctx context.Context) {
if s.config.PrimaryDatacenter == "" || s.config.PrimaryDatacenter == s.config.Datacenter {
// replication shouldn't run in the primary DC
return
}
if s.gatewayLocator != nil {
s.gatewayLocator.SetUseReplicationSignal(true)
s.gatewayLocator.SetLastFederationStateReplicationError(nil, false)
}
s.leaderRoutineManager.Start(ctx, federationStateReplicationRoutineName, s.federationStateReplicator.Run)
}
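// stopFederationStateReplication stops the federation state replication
// routine and clears the replication signal on the gateway locator.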
func (s *Server) stopFederationStateReplication() {
// will be a no-op when not started
s.leaderRoutineManager.Stop(federationStateReplicationRoutineName)
if s.gatewayLocator != nil {
s.gatewayLocator.SetUseReplicationSignal(false)
s.gatewayLocator.SetLastFederationStateReplicationError(nil, false)
}
}
// getOrCreateAutopilotConfig is used to get the autopilot config, initializing it if necessary
func (s *Server) getOrCreateAutopilotConfig() *structs.AutopilotConfig {
logger := s.loggers.Named(logging.Autopilot)
state := s.fsm.State()
_, config, err := state.AutopilotConfig()
if err != nil {
logger.Error("failed to get config", "error", err)
return nil
}
if config != nil {
return config
}
config = s.config.AutopilotConfig
req := structs.AutopilotSetConfigRequest{Config: *config}
if _, err = s.leaderRaftApply("AutopilotRequest.Apply", structs.AutopilotRequestType, req); err != nil {
logger.Error("failed to initialize config", "error", err)
return nil
}
return config
}
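// bootstrapConfigEntries applies the provided bootstrap config entries in the
// primary datacenter, skipping any entry that already exists in the state store.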
func (s *Server) bootstrapConfigEntries(entries []structs.ConfigEntry) error {
if s.config.PrimaryDatacenter != "" && s.config.PrimaryDatacenter != s.config.Datacenter {
// only bootstrap in the primary datacenter
return nil
}
if len(entries) < 1 {
// nothing to initialize
return nil
}
if ok, _ := ServersInDCMeetMinimumVersion(s, s.config.Datacenter, minCentralizedConfigVersion); !ok {
s.loggers.
Named(logging.CentralConfig).
Warn("config: can't initialize until all servers >=" + minCentralizedConfigVersion.String())
return nil
}
state := s.fsm.State()
// Do some quick preflight checks to see if someone is doing something
// that's not allowed at this time:
//
// - Trying to upgrade from an older pre-1.9.0 version of consul with
// intentions AND are trying to bootstrap a service-intentions config entry
// at the same time.
//
// - Trying to insert service-intentions config entries when connect is
// disabled.
usingConfigEntries, err := s.fsm.State().AreIntentionsInConfigEntries()
if err != nil {
return fmt.Errorf("Failed to determine if we are migrating intentions yet: %v", err)
}
if !usingConfigEntries || !s.config.ConnectEnabled {
for _, entry := range entries {
if entry.GetKind() == structs.ServiceIntentions {
if !s.config.ConnectEnabled {
return fmt.Errorf("Refusing to apply configuration entry %q / %q because Connect must be enabled to bootstrap intentions",
entry.GetKind(), entry.GetName())
}
if !usingConfigEntries {
return fmt.Errorf("Refusing to apply configuration entry %q / %q because intentions are still being migrated to config entries",
entry.GetKind(), entry.GetName())
}
}
}
}
for _, entry := range entries {
// avoid a round trip through Raft if we know the CAS is going to fail
_, existing, err := state.ConfigEntry(nil, entry.GetKind(), entry.GetName(), entry.GetEnterpriseMeta())
if err != nil {
return fmt.Errorf("Failed to determine whether the configuration for %q / %q already exists: %v", entry.GetKind(), entry.GetName(), err)
}
if existing == nil {
// ensure the ModifyIndex is set to 0 for the CAS request
entry.GetRaftIndex().ModifyIndex = 0
req := structs.ConfigEntryRequest{
Op: structs.ConfigEntryUpsertCAS,
Datacenter: s.config.Datacenter,
Entry: entry,
}
_, err := s.leaderRaftApply("ConfigEntry.Apply", structs.ConfigEntryRequestType, &req)
if err != nil {
return fmt.Errorf("Failed to apply configuration entry %q / %q: %v", entry.GetKind(), entry.GetName(), err)
}
}
}
return nil
}
// reconcileReaped is used to reconcile nodes that have failed and been reaped
// from Serf but remain in the catalog. This is done by looking for unknown nodes with serfHealth checks registered.
// We generate a "reap" event to cause the node to be cleaned up.
func (s *Server) reconcileReaped(known map[string]struct{}, nodeEntMeta *acl.EnterpriseMeta) error {
if nodeEntMeta == nil {
nodeEntMeta = structs.NodeEnterpriseMetaInDefaultPartition()
}
state := s.fsm.State()
_, checks, err := state.ChecksInState(nil, api.HealthAny, nodeEntMeta, structs.DefaultPeerKeyword)
if err != nil {
return err
}
for _, check := range checks {
// Ignore any non-serf checks
if check.CheckID != structs.SerfCheckID {
continue
}
// Check if this node is "known" by serf
if _, ok := known[strings.ToLower(check.Node)]; ok {
continue
}
// Get the node services, look for ConsulServiceID
_, services, err := state.NodeServices(nil, check.Node, nodeEntMeta, structs.DefaultPeerKeyword)
if err != nil {
return err
}
serverPort := 0
serverAddr := ""
serverID := ""
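// If the reaped node was a server, capture its address, port, and ID so
// the fake member created below carries the matching server tags.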
CHECKS:
for _, service := range services.Services {
if service.ID == structs.ConsulServiceID {
_, node, err := state.GetNode(check.Node, nodeEntMeta, check.PeerName)
if err != nil {
s.logger.Error("Unable to look up node with name", "name", check.Node, "error", err)
continue CHECKS
}
serverAddr = node.Address
serverPort = service.Port
lookupAddr := net.JoinHostPort(serverAddr, strconv.Itoa(serverPort))
svr := s.serverLookup.Server(raft.ServerAddress(lookupAddr))
if svr != nil {
serverID = svr.ID
}
break
}
}
// Create a fake member
member := serf.Member{
Name: check.Node,
Tags: map[string]string{
"dc": s.config.Datacenter,
"role": "node",
},
}
addEnterpriseSerfTags(member.Tags, nodeEntMeta)
// Create the appropriate tags if this was a server node
if serverPort > 0 {
member.Tags["role"] = "consul"
member.Tags["port"] = strconv.FormatUint(uint64(serverPort), 10)
member.Tags["id"] = serverID
member.Addr = net.ParseIP(serverAddr)
}
// Attempt to reap this member
if err := s.handleReapMember(member, nodeEntMeta); err != nil {
return err
}
}
return nil
}
// reconcileMember is used to do an async reconcile of a single
// serf member
func (s *Server) reconcileMember(member serf.Member) error {
// Check if this is a member we should handle
if !s.shouldHandleMember(member) {
s.logger.Warn("skipping reconcile of node",
"member", member,
"partition", getSerfMemberEnterpriseMeta(member).PartitionOrDefault(),
)
return nil
}
defer metrics.MeasureSince([]string{"leader", "reconcileMember"}, time.Now())
nodeEntMeta := getSerfMemberEnterpriseMeta(member)
var err error
switch member.Status {
case serf.StatusAlive:
err = s.handleAliveMember(member, nodeEntMeta)
case serf.StatusFailed:
err = s.handleFailedMember(member, nodeEntMeta)
case serf.StatusLeft:
err = s.handleLeftMember(member, nodeEntMeta)
case StatusReap:
err = s.handleReapMember(member, nodeEntMeta)
}
if err != nil {
s.logger.Error("failed to reconcile member",
"member", member,
"partition", getSerfMemberEnterpriseMeta(member).PartitionOrDefault(),
"error", err,
)
// Permission denied should not bubble up
if acl.IsErrPermissionDenied(err) {
return nil
}
}
return nil
}
// shouldHandleMember checks if this is a Consul pool member
func (s *Server) shouldHandleMember(member serf.Member) bool {
if valid, dc := isConsulNode(member); valid && dc == s.config.Datacenter {
return true
}
if valid, parts := metadata.IsConsulServer(member); valid &&
parts.Segment == "" &&
parts.Datacenter == s.config.Datacenter {
return true
}
return false
}
// handleAliveMember is used to ensure the node
// is registered, with a passing health check.
func (s *Server) handleAliveMember(member serf.Member, nodeEntMeta *acl.EnterpriseMeta) error {
if nodeEntMeta == nil {
nodeEntMeta = structs.NodeEnterpriseMetaInDefaultPartition()
}
// Register consul service if a server
var service *structs.NodeService
if valid, parts := metadata.IsConsulServer(member); valid {
service = &structs.NodeService{
ID: structs.ConsulServiceID,
Service: structs.ConsulServiceName,
Port: parts.Port,
Weights: &structs.Weights{
Passing: 1,
Warning: 1,
},
EnterpriseMeta: *nodeEntMeta,
Meta: map[string]string{
// DEPRECATED - remove nonvoter in favor of read_replica in a future version of consul
"non_voter": strconv.FormatBool(member.Tags["nonvoter"] == "1"),
"read_replica": strconv.FormatBool(member.Tags["read_replica"] == "1"),
"raft_version": strconv.Itoa(parts.RaftVersion),
"serf_protocol_current": strconv.FormatUint(uint64(member.ProtocolCur), 10),
"serf_protocol_min": strconv.FormatUint(uint64(member.ProtocolMin), 10),
"serf_protocol_max": strconv.FormatUint(uint64(member.ProtocolMax), 10),
"version": parts.Build.String(),
},
}
if parts.ExternalGRPCPort > 0 {
service.Meta["grpc_port"] = strconv.Itoa(parts.ExternalGRPCPort)
}
if parts.ExternalGRPCTLSPort > 0 {
service.Meta["grpc_tls_port"] = strconv.Itoa(parts.ExternalGRPCTLSPort)
}
// Attempt to join the consul server
if err := s.joinConsulServer(member, parts); err != nil {
return err
}
}
// Check if the node exists
state := s.fsm.State()
_, node, err := state.GetNode(member.Name, nodeEntMeta, structs.DefaultPeerKeyword)
if err != nil {
return err
}
if node != nil && node.Address == member.Addr.String() {
// Check if the associated service is available
if service != nil {
match := false
_, services, err := state.NodeServices(nil, member.Name, nodeEntMeta, structs.DefaultPeerKeyword)
if err != nil {
return err
}
if services != nil {
for id, serv := range services.Services {
if id == service.ID {
// If metadata are different, be sure to update it
match = reflect.DeepEqual(serv.Meta, service.Meta)
}
}
}
if !match {
goto AFTER_CHECK
}
}
// Check if the serfCheck is in the passing state
_, checks, err := state.NodeChecks(nil, member.Name, nodeEntMeta, structs.DefaultPeerKeyword)
if err != nil {
return err
}
for _, check := range checks {
if check.CheckID == structs.SerfCheckID && check.Status == api.HealthPassing {
return nil
}
}
}
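// The node is unknown, its address or service metadata changed, or its serf
// health check is not passing, so fall through and (re-)register it below.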
AFTER_CHECK:
s.logger.Info("member joined, marking health alive",
"member", member.Name,
"partition", getSerfMemberEnterpriseMeta(member).PartitionOrDefault(),
)
// Get the consul version from the serf member and add it as node meta
// in the catalog register request.
buildVersion, err := metadata.Build(&member)
if err != nil {
return err
}
// Register with the catalog.
req := structs.RegisterRequest{
Datacenter: s.config.Datacenter,
Node: member.Name,
ID: types.NodeID(member.Tags["id"]),
Address: member.Addr.String(),
Service: service,
Check: &structs.HealthCheck{
Node: member.Name,
CheckID: structs.SerfCheckID,
Name: structs.SerfCheckName,
Status: api.HealthPassing,
Output: structs.SerfCheckAliveOutput,
},
EnterpriseMeta: *nodeEntMeta,
NodeMeta: map[string]string{
structs.MetaConsulVersion: buildVersion.String(),
},
}
if node != nil {
req.TaggedAddresses = node.TaggedAddresses
req.NodeMeta = node.Meta
}
_, err = s.raftApply(structs.RegisterRequestType, &req)
return err
}
// handleFailedMember is used to mark the node's serf health check
// as critical in the catalog.
func (s *Server) handleFailedMember(member serf.Member, nodeEntMeta *acl.EnterpriseMeta) error {
if nodeEntMeta == nil {
nodeEntMeta = structs.NodeEnterpriseMetaInDefaultPartition()
}
// Check if the node exists
state := s.fsm.State()
_, node, err := state.GetNode(member.Name, nodeEntMeta, structs.DefaultPeerKeyword)
if err != nil {
return err
}
if node == nil {
s.logger.Info("ignoring failed event for member because it does not exist in the catalog",
"member", member.Name,
"partition", getSerfMemberEnterpriseMeta(member).PartitionOrDefault(),
)
return nil
}
if node.Address == member.Addr.String() {
// Check if the serfCheck is in the critical state
_, checks, err := state.NodeChecks(nil, member.Name, nodeEntMeta, structs.DefaultPeerKeyword)
if err != nil {
return err
}
for _, check := range checks {
if check.CheckID == structs.SerfCheckID && check.Status == api.HealthCritical {
return nil
}
}
}
s.logger.Info("member failed, marking health critical",
"member", member.Name,
"partition", getSerfMemberEnterpriseMeta(member).PartitionOrDefault(),
)
// Register with the catalog
req := structs.RegisterRequest{
Datacenter: s.config.Datacenter,
Node: member.Name,
EnterpriseMeta: *nodeEntMeta,
ID: types.NodeID(member.Tags["id"]),
Address: member.Addr.String(),
Check: &structs.HealthCheck{
Node: member.Name,
CheckID: structs.SerfCheckID,
Name: structs.SerfCheckName,
Status: api.HealthCritical,
Output: structs.SerfCheckFailedOutput,
},
// If there's existing information about the node, do not
// clobber it.
SkipNodeUpdate: true,
}
_, err = s.raftApply(structs.RegisterRequestType, &req)
return err
}
// handleLeftMember is used to handle members that gracefully
// left. They are deregistered if necessary.
func (s *Server) handleLeftMember(member serf.Member, nodeEntMeta *acl.EnterpriseMeta) error {
return s.handleDeregisterMember("left", member, nodeEntMeta)
}
// handleReapMember is used to handle members that have been
// reaped after a prolonged failure. They are deregistered.
func (s *Server) handleReapMember(member serf.Member, nodeEntMeta *acl.EnterpriseMeta) error {
return s.handleDeregisterMember("reaped", member, nodeEntMeta)
}
// handleDeregisterMember is used to deregister a member for a given reason
func (s *Server) handleDeregisterMember(reason string, member serf.Member, nodeEntMeta *acl.EnterpriseMeta) error {
if nodeEntMeta == nil {
nodeEntMeta = structs.NodeEnterpriseMetaInDefaultPartition()
}
// Do not deregister ourselves. This can only happen if the current leader
// is leaving. Instead, we should allow a follower to take over and
// deregister us later.
//
// TODO(partitions): check partitions here too? server names should be unique in general though
if strings.EqualFold(member.Name, s.config.NodeName) {
s.logger.Warn("deregistering self should be done by follower",
"name", s.config.NodeName,
"partition", getSerfMemberEnterpriseMeta(member).PartitionOrDefault(),
)
return nil
}
// Remove from Raft peers if this was a server
if valid, _ := metadata.IsConsulServer(member); valid {
if err := s.removeConsulServer(member); err != nil {
return err
}
}
// Check if the node does not exist
state := s.fsm.State()
_, node, err := state.GetNode(member.Name, nodeEntMeta, structs.DefaultPeerKeyword)
if err != nil {
return err
}
if node == nil {
return nil
}
// Deregister the node
s.logger.Info("deregistering member",
"member", member.Name,
"partition", getSerfMemberEnterpriseMeta(member).PartitionOrDefault(),
"reason", reason,
)
req := structs.DeregisterRequest{
Datacenter: s.config.Datacenter,
Node: member.Name,
EnterpriseMeta: *nodeEntMeta,
}
_, err = s.raftApply(structs.DeregisterRequestType, &req)
return err
}
// joinConsulServer is used to try to join another consul server
func (s *Server) joinConsulServer(m serf.Member, parts *metadata.Server) error {
// Check for possibility of multiple bootstrap nodes
if parts.Bootstrap {
members := s.serfLAN.Members()
for _, member := range members {
valid, p := metadata.IsConsulServer(member)
if valid && member.Name != m.Name && p.Bootstrap {
s.logger.Error("Two nodes are in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.",
"node_to_add", m.Name,
"other", member.Name,
)
return nil
}
}
}
// We used to do a check here and prevent adding the server if the cluster size was too small (1 or 2 servers) as a means
// of preventing the case where we may remove ourselves and cause a loss of leadership. The Autopilot AddServer function
// will now handle simple address updates better and so long as the address doesn't conflict with another node
// it will not require a removal but will instead just update the address. If it would require a removal of other nodes
// due to conflicts then the logic regarding cluster sizes will kick in and prevent doing anything dangerous that could
// cause loss of leadership.
// get the autopilot library representation of the server from the serf member
apServer, err := s.autopilotServer(m)
if err != nil {
return err
}
// now ask autopilot to add it
return s.autopilot.AddServer(apServer)
}
// removeConsulServer is used to try to remove a consul server that has left
func (s *Server) removeConsulServer(m serf.Member) error {
server, err := s.autopilotServer(m)
if err != nil || server == nil {
return err
}
return s.autopilot.RemoveServer(server.ID)
}
// reapTombstones is invoked by the current leader to manage garbage
// collection of tombstones. When a key is deleted, we trigger a tombstone
// GC clock. Once the expiration is reached, this routine is invoked
// to clear all tombstones before this index. This must be replicated
// through Raft to ensure consistency. We do this outside the leader loop
// to avoid blocking.
func (s *Server) reapTombstones(index uint64) {
defer metrics.MeasureSince([]string{"leader", "reapTombstones"}, time.Now())
req := structs.TombstoneRequest{
Datacenter: s.config.Datacenter,
Op: structs.TombstoneReap,
ReapIndex: index,
}
_, err := s.raftApply(structs.TombstoneRequestType, &req)
if err != nil {
s.logger.Error("failed to reap tombstones up to index",
"index", index,
"error", err,
)
}
}
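// setDatacenterSupportsFederationStates marks the datacenter as supporting
// federation states so later checks can skip re-evaluating the servers.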
func (s *Server) setDatacenterSupportsFederationStates() {
atomic.StoreInt32(&s.dcSupportsFederationStates, 1)
}
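// DatacenterSupportsFederationStates returns true once every alive or failed
// server in this datacenter (and in the primary, when this is a secondary)
// advertises the federation states serf feature flag.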
func (s *Server) DatacenterSupportsFederationStates() bool {
if atomic.LoadInt32(&s.dcSupportsFederationStates) != 0 {
return true
}
state := serversFederationStatesInfo{
supported: true,
found: false,
}
// if we are in a secondary, check if they are supported in the primary dc
if s.config.PrimaryDatacenter != s.config.Datacenter {
s.router.CheckServers(s.config.PrimaryDatacenter, state.update)
if !state.supported || !state.found {
s.logger.Debug("federation states are not enabled in the primary dc")
return false
}
}
// check the servers in the local DC
s.router.CheckServers(s.config.Datacenter, state.update)
if state.supported && state.found {
s.setDatacenterSupportsFederationStates()
return true
}
s.logger.Debug("federation states are not enabled in this datacenter", "datacenter", s.config.Datacenter)
return false
}
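// serversFederationStatesInfo accumulates the results of checking each
// server's federation state support via the router's CheckServers callback.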
type serversFederationStatesInfo struct {
// supported indicates whether every processed server supports federation states
supported bool
// found indicates that at least one server was processed
found bool
}
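// update is the CheckServers callback; it records whether each alive or
// failed server advertises the "fs" serf feature flag.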
func (s *serversFederationStatesInfo) update(srv *metadata.Server) bool {
if srv.Status != serf.StatusAlive && srv.Status != serf.StatusFailed {
// servers that have left (or are otherwise neither alive nor failed) are
// treated as meeting the version requirement regardless
return true
}
// mark that we processed at least one server
s.found = true
if supported, ok := srv.FeatureFlags["fs"]; ok && supported == 1 {
return true
}
// mark that at least one server does not support federation states
s.supported = false
// prevent continuing server evaluation
return false
}
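// setDatacenterSupportsIntentionsAsConfigEntries marks the datacenter as
// supporting intentions stored as config entries so later checks can skip
// re-evaluating the servers.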
func (s *Server) setDatacenterSupportsIntentionsAsConfigEntries() {
atomic.StoreInt32(&s.dcSupportsIntentionsAsConfigEntries, 1)
}
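// DatacenterSupportsIntentionsAsConfigEntries returns true once every alive
// or failed server in this datacenter (and in the primary, when this is a
// secondary) advertises the intentions-as-config-entries serf feature flag.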
func (s *Server) DatacenterSupportsIntentionsAsConfigEntries() bool {
if atomic.LoadInt32(&s.dcSupportsIntentionsAsConfigEntries) != 0 {
return true
}
state := serversIntentionsAsConfigEntriesInfo{
supported: true,
found: false,
}
// if we are in a secondary, check if they are supported in the primary dc
if s.config.PrimaryDatacenter != s.config.Datacenter {
s.router.CheckServers(s.config.PrimaryDatacenter, state.update)
if !state.supported || !state.found {
s.logger.Debug("intentions have not been migrated to config entries in the primary dc yet")
return false
}
}
// check the servers in the local DC
s.router.CheckServers(s.config.Datacenter, state.update)
if state.supported && state.found {
s.setDatacenterSupportsIntentionsAsConfigEntries()
return true
}
s.logger.Debug("intentions cannot be migrated to config entries in this datacenter", "datacenter", s.config.Datacenter)
return false
}
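// serversIntentionsAsConfigEntriesInfo accumulates the results of checking
// each server's intentions-as-config-entries support via the router's
// CheckServers callback.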
type serversIntentionsAsConfigEntriesInfo struct {
// supported indicates whether every processed server supports intentions as config entries
supported bool
// found indicates that at least one server was processed
found bool
}
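// update is the CheckServers callback; it records whether each alive or
// failed server advertises the "si" serf feature flag.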
func (s *serversIntentionsAsConfigEntriesInfo) update(srv *metadata.Server) bool {
if srv.Status != serf.StatusAlive && srv.Status != serf.StatusFailed {
// servers that have left (or are otherwise neither alive nor failed) are
// treated as meeting the version requirement regardless
return true
}
// mark that we processed at least one server
s.found = true
if supported, ok := srv.FeatureFlags["si"]; ok && supported == 1 {
return true
}
// mark that at least one server does not support service-intentions
s.supported = false
// prevent continuing server evaluation
return false
}