From 171e95d2655c3bf8cbdb53af71ab61f930e2cae4 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 26 Feb 2018 19:53:13 -0800 Subject: [PATCH 001/539] Readme for Fork Instructions --- README.md | 105 +++++++++++++++++++++++++----------------------------- 1 file changed, 48 insertions(+), 57 deletions(-) diff --git a/README.md b/README.md index 1e29e765f..1d7c55f37 100644 --- a/README.md +++ b/README.md @@ -1,75 +1,66 @@ -# Consul [![Build Status](https://travis-ci.org/hashicorp/consul.svg?branch=master)](https://travis-ci.org/hashicorp/consul) [![Join the chat at https://gitter.im/hashicorp-consul/Lobby](https://badges.gitter.im/hashicorp-consul/Lobby.svg)](https://gitter.im/hashicorp-consul/Lobby?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +**This is a temporary README. We'll restore the old README prior to PR upstream.** -* Website: https://www.consul.io -* Chat: [Gitter](https://gitter.im/hashicorp-consul/Lobby) -* Mailing list: [Google Groups](https://groups.google.com/group/consul-tool/) +# Consul Connect -Consul is a tool for service discovery and configuration. Consul is -distributed, highly available, and extremely scalable. +This repository is the forked repository for Consul Connect work to happen +in private prior to public release. This README will explain how to safely +use this fork, how to bring in upstream changes, etc. -Consul provides several key features: +## Cloning -* **Service Discovery** - Consul makes it simple for services to register - themselves and to discover other services via a DNS or HTTP interface. - External services such as SaaS providers can be registered as well. +To use this repository, clone it into your GOPATH as usual but you must +**rename `consul-connect` to `consul`** so that Go imports continue working +as usual. -* **Health Checking** - Health Checking enables Consul to quickly alert - operators about any issues in a cluster. The integration with service - discovery prevents routing traffic to unhealthy hosts and enables service - level circuit breakers. +## Important: Never Modify Master -* **Key/Value Storage** - A flexible key/value store enables storing - dynamic configuration, feature flagging, coordination, leader election and - more. The simple HTTP API makes it easy to use anywhere. +**NEVER MODIFY MASTER! NEVER MODIFY MASTER!** -* **Multi-Datacenter** - Consul is built to be datacenter aware, and can - support any number of regions without complex configuration. +We want to keep the "master" branch equivalent to OSS master. This will make +rebasing easy for master. Instead, we'll use the branch `f-connect`. All +feature branches should branch from `f-connect` and make PRs against +`f-connect`. -Consul runs on Linux, Mac OS X, FreeBSD, Solaris, and Windows. A commercial -version called [Consul Enterprise](https://www.hashicorp.com/products/consul) -is also available. +When we're ready to merge back to upstream, we can make a single mega PR +merging `f-connect` into OSS master. This way we don't have a sudden mega +push to master on OSS. 
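+As a rough sketch of that last step (the exact mechanics are up to whoever
+performs the merge, so treat this as an assumption rather than policy):
+
+```sh
+# Hypothetical merge-back once Connect is ready for upstream:
+git checkout f-connect
+git push origin f-connect
+# ...then open a single PR in the GitHub UI from `f-connect` against
+# OSS (hashicorp/consul) master.
+```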
-## Quick Start +## Creating a Feature Branch -An extensive quick start is viewable on the Consul website: +To create a feature branch, branch from `f-connect`: -https://www.consul.io/intro/getting-started/install.html - -## Documentation - -Full, comprehensive documentation is viewable on the Consul website: - -https://www.consul.io/docs - -## Developing Consul - -If you wish to work on Consul itself, you'll first need [Go](https://golang.org) -installed (version 1.9+ is _required_). Make sure you have Go properly installed, -including setting up your [GOPATH](https://golang.org/doc/code.html#GOPATH). - -Next, clone this repository into `$GOPATH/src/github.com/hashicorp/consul` and -then just type `make`. In a few moments, you'll have a working `consul` executable: - -``` -$ make -... -$ bin/consul -... +```sh +git checkout f-connect +git checkout -b my-new-branch ``` -*Note: `make` will build all os/architecture combinations. Set the environment variable `CONSUL_DEV=1` to build it just for your local machine's os/architecture, or use `make dev`.* +All merged Connect features will be in `f-connect`, so you want to work +from that branch. When making a PR for your feature branch, target the +`f-connect` branch as the merge target. You can do this by using the dropdowns +in the GitHub UI when creating a PR. -*Note: `make` will also place a copy of the binary in the first part of your `$GOPATH`.* +## Syncing Upstream -You can run tests by typing `make test`. The test suite may fail if -over-parallelized, so if you are seeing stochastic failures try -`GOTEST_FLAGS="-p 2 -parallel 2" make test`. +First update our local master: -If you make any changes to the code, run `make format` in order to automatically -format the code according to Go standards. +```sh +# This has to happen on forked master +git checkout master -## Vendoring +# Add upstream to OSS Consul +git remote add upstream https://github.com/hashicorp/consul.git -Consul currently uses [govendor](https://github.com/kardianos/govendor) for -vendoring and [vendorfmt](https://github.com/magiconair/vendorfmt) for formatting -`vendor.json` to a more merge-friendly "one line per package" format. +# Fetch it +git fetch upstream + +# Rebase forked master onto upstream. This should have no changes since +# we're never modifying master. 
+git rebase upstream master +``` + +Next, update the `f-connect` branch: + +```sh +git checkout f-connect +git rebase origin master +``` From c05bed86e171bee7ec9bd7492d2a034287811613 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 27 Feb 2018 22:25:05 -0800 Subject: [PATCH 002/539] agent/consul/state: initial work on intentions memdb table --- agent/consul/state/intention.go | 136 +++++++++++++++++++++++++++ agent/consul/state/intention_test.go | 122 ++++++++++++++++++++++++ agent/consul/state/state_store.go | 4 + agent/structs/intention.go | 62 ++++++++++++ 4 files changed, 324 insertions(+) create mode 100644 agent/consul/state/intention.go create mode 100644 agent/consul/state/intention_test.go create mode 100644 agent/structs/intention.go diff --git a/agent/consul/state/intention.go b/agent/consul/state/intention.go new file mode 100644 index 000000000..040761e2c --- /dev/null +++ b/agent/consul/state/intention.go @@ -0,0 +1,136 @@ +package state + +import ( + "fmt" + + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/go-memdb" +) + +const ( + intentionsTableName = "connect-intentions" +) + +// intentionsTableSchema returns a new table schema used for storing +// intentions for Connect. +func intentionsTableSchema() *memdb.TableSchema { + return &memdb.TableSchema{ + Name: intentionsTableName, + Indexes: map[string]*memdb.IndexSchema{ + "id": &memdb.IndexSchema{ + Name: "id", + AllowMissing: false, + Unique: true, + Indexer: &memdb.UUIDFieldIndex{ + Field: "ID", + }, + }, + "destination": &memdb.IndexSchema{ + Name: "destination", + AllowMissing: true, + Unique: true, + Indexer: &memdb.CompoundIndex{ + Indexes: []memdb.Indexer{ + &memdb.StringFieldIndex{ + Field: "DestinationNS", + Lowercase: true, + }, + &memdb.StringFieldIndex{ + Field: "DestinationName", + Lowercase: true, + }, + }, + }, + }, + "source": &memdb.IndexSchema{ + Name: "source", + AllowMissing: true, + Unique: true, + Indexer: &memdb.CompoundIndex{ + Indexes: []memdb.Indexer{ + &memdb.StringFieldIndex{ + Field: "SourceNS", + Lowercase: true, + }, + &memdb.StringFieldIndex{ + Field: "SourceName", + Lowercase: true, + }, + }, + }, + }, + }, + } +} + +func init() { + registerSchema(intentionsTableSchema) +} + +// IntentionSet creates or updates an intention. +func (s *Store) IntentionSet(idx uint64, ixn *structs.Intention) error { + tx := s.db.Txn(true) + defer tx.Abort() + + if err := s.intentionSetTxn(tx, idx, ixn); err != nil { + return err + } + + tx.Commit() + return nil +} + +// intentionSetTxn is the inner method used to insert an intention with +// the proper indexes into the state store. +func (s *Store) intentionSetTxn(tx *memdb.Txn, idx uint64, ixn *structs.Intention) error { + // ID is required + if ixn.ID == "" { + return ErrMissingIntentionID + } + + // Check for an existing intention + existing, err := tx.First(intentionsTableName, "id", ixn.ID) + if err != nil { + return fmt.Errorf("failed intention looup: %s", err) + } + if existing != nil { + ixn.CreateIndex = existing.(*structs.Intention).CreateIndex + } else { + ixn.CreateIndex = idx + } + ixn.ModifyIndex = idx + + // Insert + if err := tx.Insert(intentionsTableName, ixn); err != nil { + return err + } + if err := tx.Insert("index", &IndexEntry{intentionsTableName, idx}); err != nil { + return fmt.Errorf("failed updating index: %s", err) + } + + return nil +} + +// IntentionGet returns the given intention by ID. 
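+//
+// A minimal usage sketch (hypothetical caller; ws may be nil when no watch
+// is needed, as some callers in this package do):
+//
+//	ws := memdb.NewWatchSet()
+//	idx, ixn, err := s.IntentionGet(ws, id)
+//	// ixn == nil means no intention with that ID exists; a blocking query
+//	// would wait on ws and re-read once it fires.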
+func (s *Store) IntentionGet(ws memdb.WatchSet, id string) (uint64, *structs.Intention, error) { + tx := s.db.Txn(false) + defer tx.Abort() + + // Get the table index. + idx := maxIndexTxn(tx, intentionsTableName) + + // Look up by its ID. + watchCh, intention, err := tx.FirstWatch(intentionsTableName, "id", id) + if err != nil { + return 0, nil, fmt.Errorf("failed intention lookup: %s", err) + } + ws.Add(watchCh) + + // Convert the interface{} if it is non-nil + var result *structs.Intention + if intention != nil { + result = intention.(*structs.Intention) + } + + return idx, result, nil +} diff --git a/agent/consul/state/intention_test.go b/agent/consul/state/intention_test.go new file mode 100644 index 000000000..1c168c3bc --- /dev/null +++ b/agent/consul/state/intention_test.go @@ -0,0 +1,122 @@ +package state + +import ( + "reflect" + "testing" + + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/go-memdb" +) + +func TestStore_IntentionGet_none(t *testing.T) { + s := testStateStore(t) + + // Querying with no results returns nil. + ws := memdb.NewWatchSet() + idx, res, err := s.IntentionGet(ws, testUUID()) + if idx != 0 || res != nil || err != nil { + t.Fatalf("expected (0, nil, nil), got: (%d, %#v, %#v)", idx, res, err) + } +} + +func TestStore_IntentionSetGet_basic(t *testing.T) { + s := testStateStore(t) + + // Call Get to populate the watch set + ws := memdb.NewWatchSet() + _, _, err := s.IntentionGet(ws, testUUID()) + if err != nil { + t.Fatalf("err: %s", err) + } + + // Build a valid intention + ixn := &structs.Intention{ + ID: testUUID(), + } + + // Inserting a with empty ID is disallowed. + if err := s.IntentionSet(1, ixn); err != nil { + t.Fatalf("err: %s", err) + } + + // Make sure the index got updated. + if idx := s.maxIndex(intentionsTableName); idx != 1 { + t.Fatalf("bad index: %d", idx) + } + if !watchFired(ws) { + t.Fatalf("bad") + } + + // Read it back out and verify it. + expected := &structs.Intention{ + ID: ixn.ID, + RaftIndex: structs.RaftIndex{ + CreateIndex: 1, + ModifyIndex: 1, + }, + } + + ws = memdb.NewWatchSet() + idx, actual, err := s.IntentionGet(ws, ixn.ID) + if err != nil { + t.Fatalf("err: %s", err) + } + if idx != expected.CreateIndex { + t.Fatalf("bad index: %d", idx) + } + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad: %v", actual) + } + + // Change a value and test updating + ixn.SourceNS = "foo" + if err := s.IntentionSet(2, ixn); err != nil { + t.Fatalf("err: %s", err) + } + + // Make sure the index got updated. + if idx := s.maxIndex(intentionsTableName); idx != 2 { + t.Fatalf("bad index: %d", idx) + } + if !watchFired(ws) { + t.Fatalf("bad") + } + + // Read it back and verify the data was updated + expected.SourceNS = ixn.SourceNS + expected.ModifyIndex = 2 + ws = memdb.NewWatchSet() + idx, actual, err = s.IntentionGet(ws, ixn.ID) + if err != nil { + t.Fatalf("err: %s", err) + } + if idx != expected.ModifyIndex { + t.Fatalf("bad index: %d", idx) + } + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad: %#v", actual) + } +} + +func TestStore_IntentionSet_emptyId(t *testing.T) { + s := testStateStore(t) + + ws := memdb.NewWatchSet() + _, _, err := s.IntentionGet(ws, testUUID()) + if err != nil { + t.Fatalf("err: %s", err) + } + + // Inserting a with empty ID is disallowed. + if err := s.IntentionSet(1, &structs.Intention{}); err == nil { + t.Fatalf("expected %#v, got: %#v", ErrMissingIntentionID, err) + } + + // Index is not updated if nothing is saved. 
+ if idx := s.maxIndex(intentionsTableName); idx != 0 { + t.Fatalf("bad index: %d", idx) + } + if watchFired(ws) { + t.Fatalf("bad") + } +} diff --git a/agent/consul/state/state_store.go b/agent/consul/state/state_store.go index 94947f366..62b6a8bff 100644 --- a/agent/consul/state/state_store.go +++ b/agent/consul/state/state_store.go @@ -28,6 +28,10 @@ var ( // ErrMissingQueryID is returned when a Query set is called on // a Query with an empty ID. ErrMissingQueryID = errors.New("Missing Query ID") + + // ErrMissingIntentionID is returned when an Intention set is called + // with an Intention with an empty ID. + ErrMissingIntentionID = errors.New("Missing Intention ID") ) const ( diff --git a/agent/structs/intention.go b/agent/structs/intention.go new file mode 100644 index 000000000..646fb3f64 --- /dev/null +++ b/agent/structs/intention.go @@ -0,0 +1,62 @@ +package structs + +import ( + "time" +) + +// Intention defines an intention for the Connect Service Graph. This defines +// the allowed or denied behavior of a connection between two services using +// Connect. +type Intention struct { + // ID is the UUID-based ID for the intention, always generated by Consul. + ID string + + // SourceNS, SourceName are the namespace and name, respectively, of + // the source service. Either of these may be the wildcard "*", but only + // the full value can be a wildcard. Partial wildcards are not allowed. + // The source may also be a non-Consul service, as specified by SourceType. + // + // DestinationNS, DestinationName is the same, but for the destination + // service. The same rules apply. The destination is always a Consul + // service. + SourceNS, SourceName string + DestinationNS, DestinationName string + + // SourceType is the type of the value for the source. + SourceType IntentionSourceType + + // Action is whether this is a whitelist or blacklist intention. + Action IntentionAction + + // DefaultAddr, DefaultPort of the local listening proxy (if any) to + // make this connection. + DefaultAddr string + DefaultPort int + + // Meta is arbitrary metadata associated with the intention. This is + // opaque to Consul but is served in API responses. + Meta map[string]string + + // CreatedAt and UpdatedAt keep track of when this record was created + // or modified. + CreatedAt, UpdatedAt time.Time + + RaftIndex +} + +// IntentionAction is the action that the intention represents. This +// can be "allow" or "deny" to whitelist or blacklist intentions. +type IntentionAction string + +const ( + IntentionActionAllow IntentionAction = "allow" + IntentionActionDeny IntentionAction = "deny" +) + +// IntentionSourceType is the type of the source within an intention. +type IntentionSourceType string + +const ( + // IntentionSourceConsul is a service within the Consul catalog. + IntentionSourceConsul IntentionSourceType = "consul" +) From 8b0ac7d9c5891c4a2671eda365738670038bf902 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Feb 2018 09:53:21 -0800 Subject: [PATCH 003/539] agent/consul/state: list intentions --- agent/consul/state/intention.go | 22 ++++++++++ agent/consul/state/intention_test.go | 63 ++++++++++++++++++++++++++++ 2 files changed, 85 insertions(+) diff --git a/agent/consul/state/intention.go b/agent/consul/state/intention.go index 040761e2c..844a1a509 100644 --- a/agent/consul/state/intention.go +++ b/agent/consul/state/intention.go @@ -67,6 +67,28 @@ func init() { registerSchema(intentionsTableSchema) } +// Intentions returns the list of all intentions. 
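+//
+// Note: rows come back in "id" index order (i.e. ordered by UUID), not by
+// creation time, which is why TestStore_IntentionsList forces a
+// deterministic order by overwriting the leading UUID characters.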
+func (s *Store) Intentions(ws memdb.WatchSet) (uint64, structs.Intentions, error) { + tx := s.db.Txn(false) + defer tx.Abort() + + // Get the index + idx := maxIndexTxn(tx, intentionsTableName) + + // Get all intentions + iter, err := tx.Get(intentionsTableName, "id") + if err != nil { + return 0, nil, fmt.Errorf("failed intention lookup: %s", err) + } + ws.Add(iter.WatchCh()) + + var results structs.Intentions + for ixn := iter.Next(); ixn != nil; ixn = iter.Next() { + results = append(results, ixn.(*structs.Intention)) + } + return idx, results, nil +} + // IntentionSet creates or updates an intention. func (s *Store) IntentionSet(idx uint64, ixn *structs.Intention) error { tx := s.db.Txn(true) diff --git a/agent/consul/state/intention_test.go b/agent/consul/state/intention_test.go index 1c168c3bc..e4794bba7 100644 --- a/agent/consul/state/intention_test.go +++ b/agent/consul/state/intention_test.go @@ -120,3 +120,66 @@ func TestStore_IntentionSet_emptyId(t *testing.T) { t.Fatalf("bad") } } + +func TestStore_IntentionsList(t *testing.T) { + s := testStateStore(t) + + // Querying with no results returns nil. + ws := memdb.NewWatchSet() + idx, res, err := s.Intentions(ws) + if idx != 0 || res != nil || err != nil { + t.Fatalf("expected (0, nil, nil), got: (%d, %#v, %#v)", idx, res, err) + } + + // Create some intentions + ixns := structs.Intentions{ + &structs.Intention{ + ID: testUUID(), + }, + &structs.Intention{ + ID: testUUID(), + }, + } + + // Force deterministic sort order + ixns[0].ID = "a" + ixns[0].ID[1:] + ixns[1].ID = "b" + ixns[1].ID[1:] + + // Create + for i, ixn := range ixns { + if err := s.IntentionSet(uint64(1+i), ixn); err != nil { + t.Fatalf("err: %s", err) + } + } + if !watchFired(ws) { + t.Fatalf("bad") + } + + // Read it back and verify. + expected := structs.Intentions{ + &structs.Intention{ + ID: ixns[0].ID, + RaftIndex: structs.RaftIndex{ + CreateIndex: 1, + ModifyIndex: 1, + }, + }, + &structs.Intention{ + ID: ixns[1].ID, + RaftIndex: structs.RaftIndex{ + CreateIndex: 2, + ModifyIndex: 2, + }, + }, + } + idx, actual, err := s.Intentions(nil) + if err != nil { + t.Fatalf("err: %s", err) + } + if idx != 2 { + t.Fatalf("bad index: %d", idx) + } + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad: %v", actual) + } +} From b19a289596fdd94afabb188c6448f46adb7521f7 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Feb 2018 10:04:27 -0800 Subject: [PATCH 004/539] agent/consul: start Intention RPC endpoints, starting with List --- agent/consul/intention_endpoint.go | 36 +++++++++++++++++++++++++ agent/consul/intention_endpoint_test.go | 36 +++++++++++++++++++++++++ agent/consul/server_oss.go | 1 + agent/structs/intention.go | 9 +++++++ 4 files changed, 82 insertions(+) create mode 100644 agent/consul/intention_endpoint.go create mode 100644 agent/consul/intention_endpoint_test.go diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go new file mode 100644 index 000000000..7737d06dd --- /dev/null +++ b/agent/consul/intention_endpoint.go @@ -0,0 +1,36 @@ +package consul + +import ( + "github.com/hashicorp/consul/agent/consul/state" + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/go-memdb" +) + +// Intention manages the Connect intentions. +type Intention struct { + // srv is a pointer back to the server. 
+ srv *Server +} + +func (s *Intention) List( + args *structs.DCSpecificRequest, + reply *structs.IndexedIntentions) error { + // Forward if necessary + if done, err := s.srv.forward("Intention.List", args, args, reply); done { + return err + } + + return s.srv.blockingQuery( + &args.QueryOptions, &reply.QueryMeta, + func(ws memdb.WatchSet, state *state.Store) error { + index, ixns, err := state.Intentions(ws) + if err != nil { + return err + } + + reply.Index, reply.Intentions = index, ixns + // filterACL + return nil + }, + ) +} diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go new file mode 100644 index 000000000..13242374c --- /dev/null +++ b/agent/consul/intention_endpoint_test.go @@ -0,0 +1,36 @@ +package consul + +import ( + "os" + "testing" + + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/consul/testrpc" + "github.com/hashicorp/net-rpc-msgpackrpc" +) + +func TestIntentionList(t *testing.T) { + t.Parallel() + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + + codec := rpcClient(t, s1) + defer codec.Close() + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Test with no intentions inserted yet + { + req := &structs.DCSpecificRequest{ + Datacenter: "dc1", + } + var resp structs.IndexedIntentions + if err := msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + + if len(resp.Intentions) != 0 { + t.Fatalf("bad: %v", resp) + } + } +} diff --git a/agent/consul/server_oss.go b/agent/consul/server_oss.go index 05c02e46c..e633c2699 100644 --- a/agent/consul/server_oss.go +++ b/agent/consul/server_oss.go @@ -5,6 +5,7 @@ func init() { registerEndpoint(func(s *Server) interface{} { return &Catalog{s} }) registerEndpoint(func(s *Server) interface{} { return NewCoordinate(s) }) registerEndpoint(func(s *Server) interface{} { return &Health{s} }) + registerEndpoint(func(s *Server) interface{} { return &Intention{s} }) registerEndpoint(func(s *Server) interface{} { return &Internal{s} }) registerEndpoint(func(s *Server) interface{} { return &KVS{s} }) registerEndpoint(func(s *Server) interface{} { return &Operator{s} }) diff --git a/agent/structs/intention.go b/agent/structs/intention.go index 646fb3f64..7837ad431 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -60,3 +60,12 @@ const ( // IntentionSourceConsul is a service within the Consul catalog. IntentionSourceConsul IntentionSourceType = "consul" ) + +// Intentions is a list of intentions. +type Intentions []*Intention + +// IndexedIntentions represents a list of intentions for RPC responses. 
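+//
+// For illustration only (hypothetical values; the remaining QueryMeta fields
+// are filled in by the blocking-query machinery), a populated response might
+// look like:
+//
+//	IndexedIntentions{
+//		Intentions: Intentions{&Intention{ID: "...", SourceName: "web"}},
+//		QueryMeta:  QueryMeta{Index: 7},
+//	}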
+type IndexedIntentions struct { + Intentions Intentions + QueryMeta +} From 48b9a43f1d2cf32b4694e37ea810e0b3a5726165 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Feb 2018 10:28:07 -0800 Subject: [PATCH 005/539] agent/consul: Intention.Apply, FSM methods, very little validation --- agent/consul/fsm/commands_oss.go | 24 +++++++++++ agent/consul/intention_endpoint.go | 54 +++++++++++++++++++++++++ agent/consul/intention_endpoint_test.go | 31 ++++++++++++++ agent/structs/intention.go | 30 ++++++++++++++ agent/structs/structs.go | 1 + 5 files changed, 140 insertions(+) diff --git a/agent/consul/fsm/commands_oss.go b/agent/consul/fsm/commands_oss.go index ede04eef6..c90f185e0 100644 --- a/agent/consul/fsm/commands_oss.go +++ b/agent/consul/fsm/commands_oss.go @@ -20,6 +20,7 @@ func init() { registerCommand(structs.PreparedQueryRequestType, (*FSM).applyPreparedQueryOperation) registerCommand(structs.TxnRequestType, (*FSM).applyTxn) registerCommand(structs.AutopilotRequestType, (*FSM).applyAutopilotUpdate) + registerCommand(structs.IntentionRequestType, (*FSM).applyIntentionOperation) } func (c *FSM) applyRegister(buf []byte, index uint64) interface{} { @@ -246,3 +247,26 @@ func (c *FSM) applyAutopilotUpdate(buf []byte, index uint64) interface{} { } return c.state.AutopilotSetConfig(index, &req.Config) } + +// applyIntentionOperation applies the given intention operation to the state store. +func (c *FSM) applyIntentionOperation(buf []byte, index uint64) interface{} { + var req structs.IntentionRequest + if err := structs.Decode(buf, &req); err != nil { + panic(fmt.Errorf("failed to decode request: %v", err)) + } + + defer metrics.MeasureSinceWithLabels([]string{"consul", "fsm", "intention"}, time.Now(), + []metrics.Label{{Name: "op", Value: string(req.Op)}}) + defer metrics.MeasureSinceWithLabels([]string{"fsm", "intention"}, time.Now(), + []metrics.Label{{Name: "op", Value: string(req.Op)}}) + switch req.Op { + case structs.IntentionOpCreate, structs.IntentionOpUpdate: + return c.state.IntentionSet(index, req.Intention) + case structs.IntentionOpDelete: + panic("TODO") + //return c.state.PreparedQueryDelete(index, req.Query.ID) + default: + c.logger.Printf("[WARN] consul.fsm: Invalid Intention operation '%s'", req.Op) + return fmt.Errorf("Invalid Intention operation '%s'", req.Op) + } +} diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 7737d06dd..8d07b4e7b 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -1,9 +1,13 @@ package consul import ( + "time" + + "github.com/armon/go-metrics" "github.com/hashicorp/consul/agent/consul/state" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-memdb" + "github.com/hashicorp/go-uuid" ) // Intention manages the Connect intentions. @@ -12,6 +16,56 @@ type Intention struct { srv *Server } +// Apply creates or updates an intention in the data store. +func (s *Intention) Apply( + args *structs.IntentionRequest, + reply *string) error { + if done, err := s.srv.forward("Intention.Apply", args, args, reply); done { + return err + } + defer metrics.MeasureSince([]string{"consul", "intention", "apply"}, time.Now()) + defer metrics.MeasureSince([]string{"intention", "apply"}, time.Now()) + + // If no ID is provided, generate a new ID. This must be done prior to + // appending to the Raft log, because the ID is not deterministic. Once + // the entry is in the log, the state update MUST be deterministic or + // the followers will not converge. 
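+	// (Concretely: if each server generated the UUID while applying the log
+	// entry, every replica would mint a different ID for the same intention
+	// and their state stores would diverge.)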
+ if args.Op == structs.IntentionOpCreate && args.Intention.ID == "" { + state := s.srv.fsm.State() + for { + var err error + args.Intention.ID, err = uuid.GenerateUUID() + if err != nil { + s.srv.logger.Printf("[ERR] consul.intention: UUID generation failed: %v", err) + return err + } + + _, ixn, err := state.IntentionGet(nil, args.Intention.ID) + if err != nil { + s.srv.logger.Printf("[ERR] consul.intention: intention lookup failed: %v", err) + return err + } + if ixn == nil { + break + } + } + } + *reply = args.Intention.ID + + // Commit + resp, err := s.srv.raftApply(structs.IntentionRequestType, args) + if err != nil { + s.srv.logger.Printf("[ERR] consul.intention: Apply failed %v", err) + return err + } + if respErr, ok := resp.(error); ok { + return respErr + } + + return nil +} + +// List returns all the intentions. func (s *Intention) List( args *structs.DCSpecificRequest, reply *structs.IndexedIntentions) error { diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 13242374c..51fa635e3 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -9,6 +9,37 @@ import ( "github.com/hashicorp/net-rpc-msgpackrpc" ) +func TestIntentionApply_new(t *testing.T) { + t.Parallel() + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: &structs.Intention{ + SourceName: "test", + }, + } + var reply string + + // Create + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + if reply == "" { + t.Fatal("reply should be non-empty") + } + + // TODO test read +} + func TestIntentionList(t *testing.T) { t.Parallel() dir1, s1 := testServer(t) diff --git a/agent/structs/intention.go b/agent/structs/intention.go index 7837ad431..81f07080c 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -69,3 +69,33 @@ type IndexedIntentions struct { Intentions Intentions QueryMeta } + +// IntentionOp is the operation for a request related to intentions. +type IntentionOp string + +const ( + IntentionOpCreate IntentionOp = "create" + IntentionOpUpdate IntentionOp = "update" + IntentionOpDelete IntentionOp = "delete" +) + +// IntentionRequest is used to create, update, and delete intentions. +type IntentionRequest struct { + // Datacenter is the target for this request. + Datacenter string + + // Op is the type of operation being requested. + Op IntentionOp + + // Intention is the intention. + Intention *Intention + + // WriteRequest is a common struct containing ACL tokens and other + // write-related common elements for requests. + WriteRequest +} + +// RequestDatacenter returns the datacenter for a given request. +func (q *IntentionRequest) RequestDatacenter() string { + return q.Datacenter +} diff --git a/agent/structs/structs.go b/agent/structs/structs.go index 77075b3e3..8a1860912 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -39,6 +39,7 @@ const ( AutopilotRequestType = 9 AreaRequestType = 10 ACLBootstrapRequestType = 11 // FSM snapshots only. 
+ IntentionRequestType = 12 ) const ( From 2a8a2f8167caef17f019e24ba18553f4b69d5a15 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Feb 2018 10:44:49 -0800 Subject: [PATCH 006/539] agent/consul: Intention.Get endpoint --- agent/consul/intention_endpoint.go | 31 +++++++++++++++++++++++++ agent/consul/intention_endpoint_test.go | 25 +++++++++++++++++++- agent/structs/intention.go | 17 ++++++++++++++ 3 files changed, 72 insertions(+), 1 deletion(-) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 8d07b4e7b..3313c03f7 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -65,6 +65,37 @@ func (s *Intention) Apply( return nil } +// Get returns a single intention by ID. +func (s *Intention) Get( + args *structs.IntentionQueryRequest, + reply *structs.IndexedIntentions) error { + // Forward if necessary + if done, err := s.srv.forward("Intention.Get", args, args, reply); done { + return err + } + + return s.srv.blockingQuery( + &args.QueryOptions, + &reply.QueryMeta, + func(ws memdb.WatchSet, state *state.Store) error { + index, ixn, err := state.IntentionGet(ws, args.IntentionID) + if err != nil { + return err + } + if ixn == nil { + return ErrQueryNotFound + } + + reply.Index = index + reply.Intentions = structs.Intentions{ixn} + + // TODO: acl filtering + + return nil + }, + ) +} + // List returns all the intentions. func (s *Intention) List( args *structs.DCSpecificRequest, diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 51fa635e3..5a2405d89 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -2,6 +2,7 @@ package consul import ( "os" + "reflect" "testing" "github.com/hashicorp/consul/agent/structs" @@ -37,7 +38,29 @@ func TestIntentionApply_new(t *testing.T) { t.Fatal("reply should be non-empty") } - // TODO test read + // Read + ixn.Intention.ID = reply + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: ixn.Intention.ID, + } + var resp structs.IndexedIntentions + if err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + if len(resp.Intentions) != 1 { + t.Fatalf("bad: %v", resp) + } + actual := resp.Intentions[0] + if resp.Index != actual.ModifyIndex { + t.Fatalf("bad index: %d", resp.Index) + } + actual.CreateIndex, actual.ModifyIndex = 0, 0 + if !reflect.DeepEqual(actual, ixn.Intention) { + t.Fatalf("bad: %v", actual) + } + } } func TestIntentionList(t *testing.T) { diff --git a/agent/structs/intention.go b/agent/structs/intention.go index 81f07080c..cce6e3e0f 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -99,3 +99,20 @@ type IntentionRequest struct { func (q *IntentionRequest) RequestDatacenter() string { return q.Datacenter } + +// IntentionQueryRequest is used to query intentions. +type IntentionQueryRequest struct { + // Datacenter is the target this request is intended for. + Datacenter string + + // IntentionID is the ID of a specific intention. + IntentionID string + + // Options for queries + QueryOptions +} + +// RequestDatacenter returns the datacenter for a given request. 
+func (q *IntentionQueryRequest) RequestDatacenter() string { + return q.Datacenter +} From 4003bca543c00e2d9519912fb0e765d02ae558d6 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Feb 2018 11:36:54 -0800 Subject: [PATCH 007/539] agent: GET /v1/connect/intentions endpoint --- agent/http_oss.go | 1 + agent/intentions_endpoint.go | 30 +++++++++++++ agent/intentions_endpoint_test.go | 71 +++++++++++++++++++++++++++++++ 3 files changed, 102 insertions(+) create mode 100644 agent/intentions_endpoint.go create mode 100644 agent/intentions_endpoint_test.go diff --git a/agent/http_oss.go b/agent/http_oss.go index 4a2017d28..28ece14ae 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -39,6 +39,7 @@ func init() { registerEndpoint("/v1/catalog/services", []string{"GET"}, (*HTTPServer).CatalogServices) registerEndpoint("/v1/catalog/service/", []string{"GET"}, (*HTTPServer).CatalogServiceNodes) registerEndpoint("/v1/catalog/node/", []string{"GET"}, (*HTTPServer).CatalogNodeServices) + registerEndpoint("/v1/connect/intentions", []string{"GET"}, (*HTTPServer).IntentionList) registerEndpoint("/v1/coordinate/datacenters", []string{"GET"}, (*HTTPServer).CoordinateDatacenters) registerEndpoint("/v1/coordinate/nodes", []string{"GET"}, (*HTTPServer).CoordinateNodes) registerEndpoint("/v1/coordinate/node/", []string{"GET"}, (*HTTPServer).CoordinateNode) diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go new file mode 100644 index 000000000..0cdd0dc43 --- /dev/null +++ b/agent/intentions_endpoint.go @@ -0,0 +1,30 @@ +package agent + +import ( + "net/http" + + "github.com/hashicorp/consul/agent/structs" +) + +// /v1/connect/intentions +func (s *HTTPServer) IntentionList(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + if req.Method != "GET" { + return nil, MethodNotAllowedError{req.Method, []string{"GET"}} + } + + var args structs.DCSpecificRequest + if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { + return nil, nil + } + + var reply structs.IndexedIntentions + if err := s.agent.RPC("Intention.List", &args, &reply); err != nil { + return nil, err + } + + // Use empty list instead of nil. + if reply.Intentions == nil { + reply.Intentions = make(structs.Intentions, 0) + } + return reply.Intentions, nil +} diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go new file mode 100644 index 000000000..2e56eabf7 --- /dev/null +++ b/agent/intentions_endpoint_test.go @@ -0,0 +1,71 @@ +package agent + +import ( + "net/http" + "net/http/httptest" + "reflect" + "sort" + "testing" + + "github.com/hashicorp/consul/agent/structs" +) + +func TestIntentionsList_empty(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Make sure an empty list is non-nil. 
+ req, _ := http.NewRequest("GET", "/v1/connect/intentions", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionList(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + + value := obj.(structs.Intentions) + if value == nil || len(value) != 0 { + t.Fatalf("bad: %v", value) + } +} + +func TestIntentionsList_values(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Create some intentions + for _, v := range []string{"foo", "bar"} { + req := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: &structs.Intention{SourceName: v}, + } + var reply string + if err := a.RPC("Intention.Apply", &req, &reply); err != nil { + t.Fatalf("err: %s", err) + } + } + + // Request + req, _ := http.NewRequest("GET", "/v1/connect/intentions", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionList(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + + value := obj.(structs.Intentions) + if len(value) != 2 { + t.Fatalf("bad: %v", value) + } + + expected := []string{"bar", "foo"} + actual := []string{value[0].SourceName, value[1].SourceName} + sort.Strings(actual) + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad: %#v", actual) + } +} From c78b82f43b5573c51c9cb6c902e39beb770087a1 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Feb 2018 14:02:00 -0800 Subject: [PATCH 008/539] agent: POST /v1/connect/intentions --- agent/http_oss.go | 2 +- agent/intentions_endpoint.go | 47 ++++++++++++++++++++++++++++--- agent/intentions_endpoint_test.go | 40 ++++++++++++++++++++++++++ agent/structs/intention.go | 2 +- 4 files changed, 85 insertions(+), 6 deletions(-) diff --git a/agent/http_oss.go b/agent/http_oss.go index 28ece14ae..61bef8d2a 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -39,7 +39,7 @@ func init() { registerEndpoint("/v1/catalog/services", []string{"GET"}, (*HTTPServer).CatalogServices) registerEndpoint("/v1/catalog/service/", []string{"GET"}, (*HTTPServer).CatalogServiceNodes) registerEndpoint("/v1/catalog/node/", []string{"GET"}, (*HTTPServer).CatalogNodeServices) - registerEndpoint("/v1/connect/intentions", []string{"GET"}, (*HTTPServer).IntentionList) + registerEndpoint("/v1/connect/intentions", []string{"GET", "POST"}, (*HTTPServer).IntentionList) registerEndpoint("/v1/coordinate/datacenters", []string{"GET"}, (*HTTPServer).CoordinateDatacenters) registerEndpoint("/v1/coordinate/nodes", []string{"GET"}, (*HTTPServer).CoordinateNodes) registerEndpoint("/v1/coordinate/node/", []string{"GET"}, (*HTTPServer).CoordinateNode) diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go index 0cdd0dc43..62340e7e7 100644 --- a/agent/intentions_endpoint.go +++ b/agent/intentions_endpoint.go @@ -1,16 +1,29 @@ package agent import ( + "fmt" "net/http" "github.com/hashicorp/consul/agent/structs" ) -// /v1/connect/intentions -func (s *HTTPServer) IntentionList(resp http.ResponseWriter, req *http.Request) (interface{}, error) { - if req.Method != "GET" { - return nil, MethodNotAllowedError{req.Method, []string{"GET"}} +// /v1/connection/intentions +func (s *HTTPServer) IntentionEndpoint(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + switch req.Method { + case "GET": + return s.IntentionList(resp, req) + + case "POST": + return s.IntentionCreate(resp, req) + + default: + return nil, MethodNotAllowedError{req.Method, []string{"GET", "POST"}} } +} + +// GET /v1/connect/intentions +func (s *HTTPServer) IntentionList(resp 
http.ResponseWriter, req *http.Request) (interface{}, error) { + // Method is tested in IntentionEndpoint var args structs.DCSpecificRequest if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { @@ -28,3 +41,29 @@ func (s *HTTPServer) IntentionList(resp http.ResponseWriter, req *http.Request) } return reply.Intentions, nil } + +// POST /v1/connect/intentions +func (s *HTTPServer) IntentionCreate(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Method is tested in IntentionEndpoint + + args := structs.IntentionRequest{ + Op: structs.IntentionOpCreate, + } + s.parseDC(req, &args.Datacenter) + s.parseToken(req, &args.Token) + if err := decodeBody(req, &args.Intention, nil); err != nil { + resp.WriteHeader(http.StatusBadRequest) + fmt.Fprintf(resp, "Request decode failed: %v", err) + return nil, nil + } + + var reply string + if err := s.agent.RPC("Intention.Apply", &args, &reply); err != nil { + return nil, err + } + + return intentionCreateResponse{reply}, nil +} + +// intentionCreateResponse is the response structure for creating an intention. +type intentionCreateResponse struct{ ID string } diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go index 2e56eabf7..db6a16580 100644 --- a/agent/intentions_endpoint_test.go +++ b/agent/intentions_endpoint_test.go @@ -69,3 +69,43 @@ func TestIntentionsList_values(t *testing.T) { t.Fatalf("bad: %#v", actual) } } + +func TestIntentionsCreate_good(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Make sure an empty list is non-nil. + args := &structs.Intention{SourceName: "foo"} + req, _ := http.NewRequest("POST", "/v1/connect/intentions", jsonReader(args)) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionCreate(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + + value := obj.(intentionCreateResponse) + if value.ID == "" { + t.Fatalf("bad: %v", value) + } + + // Read the value + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: value.ID, + } + var resp structs.IndexedIntentions + if err := a.RPC("Intention.Get", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + if len(resp.Intentions) != 1 { + t.Fatalf("bad: %v", resp) + } + actual := resp.Intentions[0] + if actual.SourceName != "foo" { + t.Fatalf("bad: %#v", actual) + } + } +} diff --git a/agent/structs/intention.go b/agent/structs/intention.go index cce6e3e0f..7255bc8f1 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -39,7 +39,7 @@ type Intention struct { // CreatedAt and UpdatedAt keep track of when this record was created // or modified. 
- CreatedAt, UpdatedAt time.Time + CreatedAt, UpdatedAt time.Time `mapstructure:"-"` RaftIndex } From 37572829abac05f6680dfe4bef44dbccbe677747 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Feb 2018 15:54:48 -0800 Subject: [PATCH 009/539] agent: GET /v1/connect/intentions/:id --- agent/consul/intention_endpoint.go | 8 ++++- agent/http_oss.go | 3 +- agent/intentions_endpoint.go | 48 ++++++++++++++++++++++++++++++ agent/intentions_endpoint_test.go | 43 ++++++++++++++++++++++++++ 4 files changed, 100 insertions(+), 2 deletions(-) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 3313c03f7..030c9922d 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -1,6 +1,7 @@ package consul import ( + "errors" "time" "github.com/armon/go-metrics" @@ -10,6 +11,11 @@ import ( "github.com/hashicorp/go-uuid" ) +var ( + // ErrIntentionNotFound is returned if the intention lookup failed. + ErrIntentionNotFound = errors.New("Intention not found") +) + // Intention manages the Connect intentions. type Intention struct { // srv is a pointer back to the server. @@ -83,7 +89,7 @@ func (s *Intention) Get( return err } if ixn == nil { - return ErrQueryNotFound + return ErrIntentionNotFound } reply.Index = index diff --git a/agent/http_oss.go b/agent/http_oss.go index 61bef8d2a..0170a0075 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -39,7 +39,8 @@ func init() { registerEndpoint("/v1/catalog/services", []string{"GET"}, (*HTTPServer).CatalogServices) registerEndpoint("/v1/catalog/service/", []string{"GET"}, (*HTTPServer).CatalogServiceNodes) registerEndpoint("/v1/catalog/node/", []string{"GET"}, (*HTTPServer).CatalogNodeServices) - registerEndpoint("/v1/connect/intentions", []string{"GET", "POST"}, (*HTTPServer).IntentionList) + registerEndpoint("/v1/connect/intentions", []string{"GET", "POST"}, (*HTTPServer).IntentionEndpoint) + registerEndpoint("/v1/connect/intentions/", []string{"GET"}, (*HTTPServer).IntentionSpecific) registerEndpoint("/v1/coordinate/datacenters", []string{"GET"}, (*HTTPServer).CoordinateDatacenters) registerEndpoint("/v1/coordinate/nodes", []string{"GET"}, (*HTTPServer).CoordinateNodes) registerEndpoint("/v1/coordinate/node/", []string{"GET"}, (*HTTPServer).CoordinateNode) diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go index 62340e7e7..d5d6b6495 100644 --- a/agent/intentions_endpoint.go +++ b/agent/intentions_endpoint.go @@ -3,7 +3,9 @@ package agent import ( "fmt" "net/http" + "strings" + "github.com/hashicorp/consul/agent/consul" "github.com/hashicorp/consul/agent/structs" ) @@ -65,5 +67,51 @@ func (s *HTTPServer) IntentionCreate(resp http.ResponseWriter, req *http.Request return intentionCreateResponse{reply}, nil } +// IntentionSpecific handles the endpoint for /v1/connection/intentions/:id +func (s *HTTPServer) IntentionSpecific(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + id := strings.TrimPrefix(req.URL.Path, "/v1/connect/intentions/") + + switch req.Method { + case "GET": + return s.IntentionSpecificGet(id, resp, req) + + case "PUT": + panic("TODO") + + case "DELETE": + panic("TODO") + + default: + return nil, MethodNotAllowedError{req.Method, []string{"GET", "PUT", "DELETE"}} + } +} + +// GET /v1/connect/intentions/:id +func (s *HTTPServer) IntentionSpecificGet(id string, resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Method is tested in IntentionEndpoint + + args := structs.IntentionQueryRequest{ + 
IntentionID: id, + } + if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { + return nil, nil + } + + var reply structs.IndexedIntentions + if err := s.agent.RPC("Intention.Get", &args, &reply); err != nil { + // We have to check the string since the RPC sheds the error type + if err.Error() == consul.ErrIntentionNotFound.Error() { + resp.WriteHeader(http.StatusNotFound) + fmt.Fprint(resp, err.Error()) + return nil, nil + } + + return nil, err + } + + // TODO: validate length + return reply.Intentions[0], nil +} + // intentionCreateResponse is the response structure for creating an intention. type intentionCreateResponse struct{ ID string } diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go index db6a16580..0bd956842 100644 --- a/agent/intentions_endpoint_test.go +++ b/agent/intentions_endpoint_test.go @@ -1,6 +1,7 @@ package agent import ( + "fmt" "net/http" "net/http/httptest" "reflect" @@ -109,3 +110,45 @@ func TestIntentionsCreate_good(t *testing.T) { } } } + +func TestIntentionsSpecificGet_good(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // The intention + ixn := &structs.Intention{SourceName: "foo"} + + // Create an intention directly + var reply string + { + req := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: ixn, + } + if err := a.RPC("Intention.Apply", &req, &reply); err != nil { + t.Fatalf("err: %s", err) + } + } + + // Get the value + req, _ := http.NewRequest("GET", fmt.Sprintf("/v1/connect/intentions/%s", reply), nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionSpecific(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + + value := obj.(*structs.Intention) + if value.ID != reply { + t.Fatalf("bad: %v", value) + } + + ixn.ID = value.ID + ixn.RaftIndex = value.RaftIndex + if !reflect.DeepEqual(value, ixn) { + t.Fatalf("bad (got, want):\n\n%#v\n\n%#v", value, ixn) + } +} From f219c766cbf42c4b7b71e7f7f7e2cccfc1c23b7c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Feb 2018 21:11:35 -0800 Subject: [PATCH 010/539] agent/consul: support updating intentions --- agent/consul/intention_endpoint.go | 13 ++++ agent/consul/intention_endpoint_test.go | 90 +++++++++++++++++++++++++ 2 files changed, 103 insertions(+) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 030c9922d..118dfb5b9 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -2,6 +2,7 @@ package consul import ( "errors" + "fmt" "time" "github.com/armon/go-metrics" @@ -58,6 +59,18 @@ func (s *Intention) Apply( } *reply = args.Intention.ID + // If this is not a create, then we have to verify the ID. 
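+	// (Both update and delete operations land here, so each requires that
+	// the target intention already exists.)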
+ if args.Op != structs.IntentionOpCreate { + state := s.srv.fsm.State() + _, ixn, err := state.IntentionGet(nil, args.Intention.ID) + if err != nil { + return fmt.Errorf("Intention lookup failed: %v", err) + } + if ixn == nil { + return fmt.Errorf("Cannot modify non-existent intention: '%s'", args.Intention.ID) + } + } + // Commit resp, err := s.srv.raftApply(structs.IntentionRequestType, args) if err != nil { diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 5a2405d89..6049c5f35 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -3,6 +3,7 @@ package consul import ( "os" "reflect" + "strings" "testing" "github.com/hashicorp/consul/agent/structs" @@ -10,6 +11,7 @@ import ( "github.com/hashicorp/net-rpc-msgpackrpc" ) +// Test basic creation func TestIntentionApply_new(t *testing.T) { t.Parallel() dir1, s1 := testServer(t) @@ -63,6 +65,94 @@ func TestIntentionApply_new(t *testing.T) { } } +// Test basic updating +func TestIntentionApply_updateGood(t *testing.T) { + t.Parallel() + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: &structs.Intention{ + SourceName: "test", + }, + } + var reply string + + // Create + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + if reply == "" { + t.Fatal("reply should be non-empty") + } + + // Update + ixn.Op = structs.IntentionOpUpdate + ixn.Intention.ID = reply + ixn.Intention.SourceName = "bar" + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + + // Read + ixn.Intention.ID = reply + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: ixn.Intention.ID, + } + var resp structs.IndexedIntentions + if err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + if len(resp.Intentions) != 1 { + t.Fatalf("bad: %v", resp) + } + actual := resp.Intentions[0] + actual.CreateIndex, actual.ModifyIndex = 0, 0 + if !reflect.DeepEqual(actual, ixn.Intention) { + t.Fatalf("bad: %v", actual) + } + } +} + +// Shouldn't be able to update a non-existent intention +func TestIntentionApply_updateNonExist(t *testing.T) { + t.Parallel() + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpUpdate, + Intention: &structs.Intention{ + ID: generateUUID(), + SourceName: "test", + }, + } + var reply string + + // Create + err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) + if err == nil || !strings.Contains(err.Error(), "Cannot modify non-existent intention") { + t.Fatalf("bad: %v", err) + } +} + func TestIntentionList(t *testing.T) { t.Parallel() dir1, s1 := testServer(t) From 32ad54369c05a8274d140ccb7ba4b150e61e23a0 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Feb 2018 21:16:45 -0800 Subject: [PATCH 011/539] agent/consul: creating intention must not have ID set --- agent/consul/intention_endpoint.go | 6 ++++- 
agent/consul/intention_endpoint_test.go | 29 +++++++++++++++++++++++++ 2 files changed, 34 insertions(+), 1 deletion(-) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 118dfb5b9..fc552afd9 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -37,7 +37,11 @@ func (s *Intention) Apply( // appending to the Raft log, because the ID is not deterministic. Once // the entry is in the log, the state update MUST be deterministic or // the followers will not converge. - if args.Op == structs.IntentionOpCreate && args.Intention.ID == "" { + if args.Op == structs.IntentionOpCreate { + if args.Intention.ID != "" { + return fmt.Errorf("ID must be empty when creating a new intention") + } + state := s.srv.fsm.State() for { var err error diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 6049c5f35..e0b4762de 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -65,6 +65,35 @@ func TestIntentionApply_new(t *testing.T) { } } +// Shouldn't be able to create with an ID set +func TestIntentionApply_createWithID(t *testing.T) { + t.Parallel() + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: &structs.Intention{ + ID: generateUUID(), + SourceName: "test", + }, + } + var reply string + + // Create + err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) + if err == nil || !strings.Contains(err.Error(), "ID must be empty") { + t.Fatalf("bad: %v", err) + } +} + // Test basic updating func TestIntentionApply_updateGood(t *testing.T) { t.Parallel() From 95e1c92edf098cd8b051ec3ced6344e93b732fce Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Feb 2018 21:21:59 -0800 Subject: [PATCH 012/539] agent/consul/state,fsm: support for deleting intentions --- agent/consul/fsm/commands_oss.go | 3 +- agent/consul/state/intention.go | 36 ++++++++++++++++++++ agent/consul/state/intention_test.go | 50 ++++++++++++++++++++++++++++ 3 files changed, 87 insertions(+), 2 deletions(-) diff --git a/agent/consul/fsm/commands_oss.go b/agent/consul/fsm/commands_oss.go index c90f185e0..51f127899 100644 --- a/agent/consul/fsm/commands_oss.go +++ b/agent/consul/fsm/commands_oss.go @@ -263,8 +263,7 @@ func (c *FSM) applyIntentionOperation(buf []byte, index uint64) interface{} { case structs.IntentionOpCreate, structs.IntentionOpUpdate: return c.state.IntentionSet(index, req.Intention) case structs.IntentionOpDelete: - panic("TODO") - //return c.state.PreparedQueryDelete(index, req.Query.ID) + return c.state.IntentionDelete(index, req.Intention.ID) default: c.logger.Printf("[WARN] consul.fsm: Invalid Intention operation '%s'", req.Op) return fmt.Errorf("Invalid Intention operation '%s'", req.Op) diff --git a/agent/consul/state/intention.go b/agent/consul/state/intention.go index 844a1a509..ea2ee3fd5 100644 --- a/agent/consul/state/intention.go +++ b/agent/consul/state/intention.go @@ -156,3 +156,39 @@ func (s *Store) IntentionGet(ws memdb.WatchSet, id string) (uint64, *structs.Int return idx, result, nil } + +// IntentionDelete deletes the given intention by ID. 
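+//
+// Deleting an ID that is not present is a no-op (see intentionDeleteTxn
+// below). A rough sketch of a caller (the index value is hypothetical):
+//
+//	if err := s.IntentionDelete(raftIndex, id); err != nil {
+//		// handle storage error
+//	}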
+func (s *Store) IntentionDelete(idx uint64, id string) error { + tx := s.db.Txn(true) + defer tx.Abort() + + if err := s.intentionDeleteTxn(tx, idx, id); err != nil { + return fmt.Errorf("failed intention delete: %s", err) + } + + tx.Commit() + return nil +} + +// intentionDeleteTxn is the inner method used to delete a intention +// with the proper indexes into the state store. +func (s *Store) intentionDeleteTxn(tx *memdb.Txn, idx uint64, queryID string) error { + // Pull the query. + wrapped, err := tx.First(intentionsTableName, "id", queryID) + if err != nil { + return fmt.Errorf("failed intention lookup: %s", err) + } + if wrapped == nil { + return nil + } + + // Delete the query and update the index. + if err := tx.Delete(intentionsTableName, wrapped); err != nil { + return fmt.Errorf("failed intention delete: %s", err) + } + if err := tx.Insert("index", &IndexEntry{intentionsTableName, idx}); err != nil { + return fmt.Errorf("failed updating index: %s", err) + } + + return nil +} diff --git a/agent/consul/state/intention_test.go b/agent/consul/state/intention_test.go index e4794bba7..d1494d5e0 100644 --- a/agent/consul/state/intention_test.go +++ b/agent/consul/state/intention_test.go @@ -121,6 +121,56 @@ func TestStore_IntentionSet_emptyId(t *testing.T) { } } +func TestStore_IntentionDelete(t *testing.T) { + s := testStateStore(t) + + // Call Get to populate the watch set + ws := memdb.NewWatchSet() + _, _, err := s.IntentionGet(ws, testUUID()) + if err != nil { + t.Fatalf("err: %s", err) + } + + // Create + ixn := &structs.Intention{ID: testUUID()} + if err := s.IntentionSet(1, ixn); err != nil { + t.Fatalf("err: %s", err) + } + + // Make sure the index got updated. + if idx := s.maxIndex(intentionsTableName); idx != 1 { + t.Fatalf("bad index: %d", idx) + } + if !watchFired(ws) { + t.Fatalf("bad") + } + + // Delete + if err := s.IntentionDelete(2, ixn.ID); err != nil { + t.Fatalf("err: %s", err) + } + + // Make sure the index got updated. + if idx := s.maxIndex(intentionsTableName); idx != 2 { + t.Fatalf("bad index: %d", idx) + } + if !watchFired(ws) { + t.Fatalf("bad") + } + + // Sanity check to make sure it's not there. 
+ idx, actual, err := s.IntentionGet(nil, ixn.ID) + if err != nil { + t.Fatalf("err: %s", err) + } + if idx != 2 { + t.Fatalf("bad index: %d", idx) + } + if actual != nil { + t.Fatalf("bad: %v", actual) + } +} + func TestStore_IntentionsList(t *testing.T) { s := testStateStore(t) From bebe6870ffa088dedc788a0dd1f88cb50b90495c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 1 Mar 2018 15:48:48 -0800 Subject: [PATCH 013/539] agent/consul: test that Apply works to delete an intention --- agent/consul/intention_endpoint_test.go | 51 +++++++++++++++++++++++++ 1 file changed, 51 insertions(+) diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index e0b4762de..53aef35cd 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -182,6 +182,57 @@ func TestIntentionApply_updateNonExist(t *testing.T) { } } +// Test basic deleting +func TestIntentionApply_deleteGood(t *testing.T) { + t.Parallel() + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: &structs.Intention{ + SourceName: "test", + }, + } + var reply string + + // Create + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + if reply == "" { + t.Fatal("reply should be non-empty") + } + + // Delete + ixn.Op = structs.IntentionOpDelete + ixn.Intention.ID = reply + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + + // Read + ixn.Intention.ID = reply + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: ixn.Intention.ID, + } + var resp structs.IndexedIntentions + err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp) + if err == nil || !strings.Contains(err.Error(), ErrIntentionNotFound.Error()) { + t.Fatalf("err: %v", err) + } + } +} + func TestIntentionList(t *testing.T) { t.Parallel() dir1, s1 := testServer(t) From cae7bca448af147625a736b44790b49f87afa5cc Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 1 Mar 2018 15:54:03 -0800 Subject: [PATCH 014/539] agent: DELETE /v1/connect/intentions/:id --- agent/intentions_endpoint.go | 21 +++++++++- agent/intentions_endpoint_test.go | 68 +++++++++++++++++++++++++++++++ 2 files changed, 88 insertions(+), 1 deletion(-) diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go index d5d6b6495..9f974309e 100644 --- a/agent/intentions_endpoint.go +++ b/agent/intentions_endpoint.go @@ -79,7 +79,7 @@ func (s *HTTPServer) IntentionSpecific(resp http.ResponseWriter, req *http.Reque panic("TODO") case "DELETE": - panic("TODO") + return s.IntentionSpecificDelete(id, resp, req) default: return nil, MethodNotAllowedError{req.Method, []string{"GET", "PUT", "DELETE"}} @@ -113,5 +113,24 @@ func (s *HTTPServer) IntentionSpecificGet(id string, resp http.ResponseWriter, r return reply.Intentions[0], nil } +// DELETE /v1/connect/intentions/:id +func (s *HTTPServer) IntentionSpecificDelete(id string, resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Method is tested in IntentionEndpoint + + args := structs.IntentionRequest{ + Op: structs.IntentionOpDelete, + Intention: &structs.Intention{ID: id}, + } + s.parseDC(req, &args.Datacenter) + 
s.parseToken(req, &args.Token) + + var reply string + if err := s.agent.RPC("Intention.Apply", &args, &reply); err != nil { + return nil, err + } + + return nil, nil +} + // intentionCreateResponse is the response structure for creating an intention. type intentionCreateResponse struct{ ID string } diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go index 0bd956842..d38fc6c43 100644 --- a/agent/intentions_endpoint_test.go +++ b/agent/intentions_endpoint_test.go @@ -6,6 +6,7 @@ import ( "net/http/httptest" "reflect" "sort" + "strings" "testing" "github.com/hashicorp/consul/agent/structs" @@ -152,3 +153,70 @@ func TestIntentionsSpecificGet_good(t *testing.T) { t.Fatalf("bad (got, want):\n\n%#v\n\n%#v", value, ixn) } } + +func TestIntentionsSpecificDelete_good(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // The intention + ixn := &structs.Intention{SourceName: "foo"} + + // Create an intention directly + var reply string + { + req := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: ixn, + } + if err := a.RPC("Intention.Apply", &req, &reply); err != nil { + t.Fatalf("err: %s", err) + } + } + + // Sanity check that the intention exists + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: reply, + } + var resp structs.IndexedIntentions + if err := a.RPC("Intention.Get", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + if len(resp.Intentions) != 1 { + t.Fatalf("bad: %v", resp) + } + actual := resp.Intentions[0] + if actual.SourceName != "foo" { + t.Fatalf("bad: %#v", actual) + } + } + + // Delete the intention + req, _ := http.NewRequest("DELETE", fmt.Sprintf("/v1/connect/intentions/%s", reply), nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionSpecific(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + if obj != nil { + t.Fatalf("obj should be nil: %v", err) + } + + // Verify the intention is gone + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: reply, + } + var resp structs.IndexedIntentions + err := a.RPC("Intention.Get", req, &resp) + if err == nil || !strings.Contains(err.Error(), "not found") { + t.Fatalf("err: %v", err) + } + } + +} From a91fadb9710789b2bf5d885ca792bee0123bc6fa Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 1 Mar 2018 16:53:31 -0800 Subject: [PATCH 015/539] agent: PUT /v1/connect/intentions/:id --- agent/intentions_endpoint.go | 29 +++++++++++++++- agent/intentions_endpoint_test.go | 55 +++++++++++++++++++++++++++++++ 2 files changed, 83 insertions(+), 1 deletion(-) diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go index 9f974309e..40a9f2282 100644 --- a/agent/intentions_endpoint.go +++ b/agent/intentions_endpoint.go @@ -76,7 +76,7 @@ func (s *HTTPServer) IntentionSpecific(resp http.ResponseWriter, req *http.Reque return s.IntentionSpecificGet(id, resp, req) case "PUT": - panic("TODO") + return s.IntentionSpecificUpdate(id, resp, req) case "DELETE": return s.IntentionSpecificDelete(id, resp, req) @@ -113,6 +113,33 @@ func (s *HTTPServer) IntentionSpecificGet(id string, resp http.ResponseWriter, r return reply.Intentions[0], nil } +// PUT /v1/connect/intentions/:id +func (s *HTTPServer) IntentionSpecificUpdate(id string, resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Method is tested in IntentionEndpoint + + args := structs.IntentionRequest{ + Op: structs.IntentionOpUpdate, + } + s.parseDC(req, 
&args.Datacenter) + s.parseToken(req, &args.Token) + if err := decodeBody(req, &args.Intention, nil); err != nil { + resp.WriteHeader(http.StatusBadRequest) + fmt.Fprintf(resp, "Request decode failed: %v", err) + return nil, nil + } + + // Use the ID from the URL + args.Intention.ID = id + + var reply string + if err := s.agent.RPC("Intention.Apply", &args, &reply); err != nil { + return nil, err + } + + return nil, nil + +} + // DELETE /v1/connect/intentions/:id func (s *HTTPServer) IntentionSpecificDelete(id string, resp http.ResponseWriter, req *http.Request) (interface{}, error) { // Method is tested in IntentionEndpoint diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go index d38fc6c43..c3753ea97 100644 --- a/agent/intentions_endpoint_test.go +++ b/agent/intentions_endpoint_test.go @@ -154,6 +154,61 @@ func TestIntentionsSpecificGet_good(t *testing.T) { } } +func TestIntentionsSpecificUpdate_good(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // The intention + ixn := &structs.Intention{SourceName: "foo"} + + // Create an intention directly + var reply string + { + req := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: ixn, + } + if err := a.RPC("Intention.Apply", &req, &reply); err != nil { + t.Fatalf("err: %s", err) + } + } + + // Update the intention + ixn.ID = "bogus" + ixn.SourceName = "bar" + req, _ := http.NewRequest("PUT", fmt.Sprintf("/v1/connect/intentions/%s", reply), jsonReader(ixn)) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionSpecific(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + if obj != nil { + t.Fatalf("obj should be nil: %v", err) + } + + // Read the value + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: reply, + } + var resp structs.IndexedIntentions + if err := a.RPC("Intention.Get", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + if len(resp.Intentions) != 1 { + t.Fatalf("bad: %v", resp) + } + actual := resp.Intentions[0] + if actual.SourceName != "bar" { + t.Fatalf("bad: %#v", actual) + } + } +} + func TestIntentionsSpecificDelete_good(t *testing.T) { t.Parallel() From 231f7328bd6782bc47cef614d2026ce4bf5206f6 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 2 Mar 2018 11:53:40 -0800 Subject: [PATCH 016/539] agent/structs: IntentionPrecedenceSorter for sorting based on precedence --- agent/structs/intention.go | 80 +++++++++++++++++++++++++++++++++ agent/structs/intention_test.go | 72 +++++++++++++++++++++++++++++ 2 files changed, 152 insertions(+) create mode 100644 agent/structs/intention_test.go diff --git a/agent/structs/intention.go b/agent/structs/intention.go index 7255bc8f1..14b5e0b8e 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -4,6 +4,11 @@ import ( "time" ) +const ( + // IntentionWildcard is the wildcard value. + IntentionWildcard = "*" +) + // Intention defines an intention for the Connect Service Graph. This defines // the allowed or denied behavior of a connection between two services using // Connect. @@ -100,6 +105,16 @@ func (q *IntentionRequest) RequestDatacenter() string { return q.Datacenter } +// IntentionMatchType is the target for a match request. For example, +// matching by source will look for all intentions that match the given +// source value. 
+type IntentionMatchType string + +const ( + IntentionMatchSource IntentionMatchType = "source" + IntentionMatchDestination IntentionMatchType = "destination" +) + // IntentionQueryRequest is used to query intentions. type IntentionQueryRequest struct { // Datacenter is the target this request is intended for. @@ -108,6 +123,12 @@ type IntentionQueryRequest struct { // IntentionID is the ID of a specific intention. IntentionID string + // MatchBy and MatchNames are used to match a namespace/name pair + // to a set of intentions. The list of MatchNames is an OR list, + // all matching intentions are returned together. + MatchBy IntentionMatchType + MatchNames []string + // Options for queries QueryOptions } @@ -116,3 +137,62 @@ type IntentionQueryRequest struct { func (q *IntentionQueryRequest) RequestDatacenter() string { return q.Datacenter } + +// IntentionQueryMatch are the parameters for performing a match request +// against the state store. +type IntentionQueryMatch struct { + Type IntentionMatchType + Entries []IntentionMatchEntry +} + +// IntentionMatchEntry is a single entry for matching an intention. +type IntentionMatchEntry struct { + Namespace string + Name string +} + +// IntentionPrecedenceSorter takes a list of intentions and sorts them +// based on the match precedence rules for intentions. The intentions +// closer to the head of the list have higher precedence. i.e. index 0 has +// the highest precedence. +type IntentionPrecedenceSorter Intentions + +func (s IntentionPrecedenceSorter) Len() int { return len(s) } +func (s IntentionPrecedenceSorter) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} + +func (s IntentionPrecedenceSorter) Less(i, j int) bool { + a, b := s[i], s[j] + + // First test the # of exact values in destination, since precedence + // is destination-oriented. + aExact := s.countExact(a.DestinationNS, a.DestinationName) + bExact := s.countExact(b.DestinationNS, b.DestinationName) + if aExact != bExact { + return aExact > bExact + } + + // Next test the # of exact values in source + aExact = s.countExact(a.SourceNS, a.SourceName) + bExact = s.countExact(b.SourceNS, b.SourceName) + return aExact > bExact +} + +// countExact counts the number of exact values (not wildcards) in +// the given namespace and name. +func (s IntentionPrecedenceSorter) countExact(ns, n string) int { + // If NS is wildcard, it must be zero since wildcards only follow exact + if ns == IntentionWildcard { + return 0 + } + + // Same reasoning as above, a wildcard can only follow an exact value + // and an exact value cannot follow a wildcard, so if name is a wildcard + // we must have exactly one. 
+ if n == IntentionWildcard { + return 1 + } + + return 2 +} diff --git a/agent/structs/intention_test.go b/agent/structs/intention_test.go new file mode 100644 index 000000000..19ac5811a --- /dev/null +++ b/agent/structs/intention_test.go @@ -0,0 +1,72 @@ +package structs + +import ( + "reflect" + "sort" + "testing" +) + +func TestIntentionPrecedenceSorter(t *testing.T) { + cases := []struct { + Name string + Input [][]string // SrcNS, SrcN, DstNS, DstN + Expected [][]string // Same structure as Input + }{ + { + "exhaustive list", + [][]string{ + {"*", "*", "exact", "*"}, + {"*", "*", "*", "*"}, + {"exact", "*", "exact", "exact"}, + {"*", "*", "exact", "exact"}, + {"exact", "exact", "*", "*"}, + {"exact", "exact", "exact", "exact"}, + {"exact", "exact", "exact", "*"}, + {"exact", "*", "exact", "*"}, + {"exact", "*", "*", "*"}, + }, + [][]string{ + {"exact", "exact", "exact", "exact"}, + {"exact", "*", "exact", "exact"}, + {"*", "*", "exact", "exact"}, + {"exact", "exact", "exact", "*"}, + {"exact", "*", "exact", "*"}, + {"*", "*", "exact", "*"}, + {"exact", "exact", "*", "*"}, + {"exact", "*", "*", "*"}, + {"*", "*", "*", "*"}, + }, + }, + } + + for _, tc := range cases { + t.Run(tc.Name, func(t *testing.T) { + var input Intentions + for _, v := range tc.Input { + input = append(input, &Intention{ + SourceNS: v[0], + SourceName: v[1], + DestinationNS: v[2], + DestinationName: v[3], + }) + } + + // Sort + sort.Sort(IntentionPrecedenceSorter(input)) + + // Get back into a comparable form + var actual [][]string + for _, v := range input { + actual = append(actual, []string{ + v.SourceNS, + v.SourceName, + v.DestinationNS, + v.DestinationName, + }) + } + if !reflect.DeepEqual(actual, tc.Expected) { + t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", actual, tc.Expected) + } + }) + } +} From 987b7ce0a291a810b3934cb1bc1e9474cc5b8f2a Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 2 Mar 2018 12:56:39 -0800 Subject: [PATCH 017/539] agent/consul/state: IntentionMatch for performing match resolution --- agent/consul/state/intention.go | 78 +++++++++++++++ agent/consul/state/intention_test.go | 136 +++++++++++++++++++++++++++ 2 files changed, 214 insertions(+) diff --git a/agent/consul/state/intention.go b/agent/consul/state/intention.go index ea2ee3fd5..51f4e1e3b 100644 --- a/agent/consul/state/intention.go +++ b/agent/consul/state/intention.go @@ -2,6 +2,7 @@ package state import ( "fmt" + "sort" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-memdb" @@ -192,3 +193,80 @@ func (s *Store) intentionDeleteTxn(tx *memdb.Txn, idx uint64, queryID string) er return nil } + +// IntentionMatch returns the list of intentions that match the namespace and +// name for either a source or destination. This applies the resolution rules +// so wildcards will match any value. +// +// The returned value is the list of intentions in the same order as the +// entries in args. The intentions themselves are sorted based on the +// intention precedence rules. i.e. result[0][0] is the highest precedent +// rule to match for the first entry. +func (s *Store) IntentionMatch(ws memdb.WatchSet, args *structs.IntentionQueryMatch) (uint64, []structs.Intentions, error) { + tx := s.db.Txn(false) + defer tx.Abort() + + // Get the table index. 
+ idx := maxIndexTxn(tx, intentionsTableName) + + // Make all the calls and accumulate the results + results := make([]structs.Intentions, len(args.Entries)) + for i, entry := range args.Entries { + // Each search entry may require multiple queries to memdb, so this + // returns the arguments for each necessary Get. Note on performance: + // this is not the most optimal set of queries since we repeat some + // many times (such as */*). We can work on improving that in the + // future, the test cases shouldn't have to change for that. + getParams, err := s.intentionMatchGetParams(entry) + if err != nil { + return 0, nil, err + } + + // Perform each call and accumulate the result. + var ixns structs.Intentions + for _, params := range getParams { + iter, err := tx.Get(intentionsTableName, string(args.Type), params...) + if err != nil { + return 0, nil, fmt.Errorf("failed intention lookup: %s", err) + } + + ws.Add(iter.WatchCh()) + + for ixn := iter.Next(); ixn != nil; ixn = iter.Next() { + ixns = append(ixns, ixn.(*structs.Intention)) + } + } + + // TODO: filter for uniques + + // Sort the results by precedence + sort.Sort(structs.IntentionPrecedenceSorter(ixns)) + + // Store the result + results[i] = ixns + } + + return idx, results, nil +} + +// intentionMatchGetParams returns the tx.Get parameters to find all the +// intentions for a certain entry. +func (s *Store) intentionMatchGetParams(entry structs.IntentionMatchEntry) ([][]interface{}, error) { + // We always query for "*/*" so include that. If the namespace is a + // wildcard, then we're actually done. + result := make([][]interface{}, 0, 3) + result = append(result, []interface{}{"*", "*"}) + if entry.Namespace == structs.IntentionWildcard { + return result, nil + } + + // Search for NS/* intentions. If we have a wildcard name, then we're done. + result = append(result, []interface{}{entry.Namespace, "*"}) + if entry.Name == structs.IntentionWildcard { + return result, nil + } + + // Search for the exact NS/N value. + result = append(result, []interface{}{entry.Namespace, entry.Name}) + return result, nil +} diff --git a/agent/consul/state/intention_test.go b/agent/consul/state/intention_test.go index d1494d5e0..2f4fee26b 100644 --- a/agent/consul/state/intention_test.go +++ b/agent/consul/state/intention_test.go @@ -233,3 +233,139 @@ func TestStore_IntentionsList(t *testing.T) { t.Fatalf("bad: %v", actual) } } + +// Test the matrix of match logic. +// +// Note that this doesn't need to test the intention sort logic exhaustively +// since this is tested in their sort implementation in the structs. 
+func TestStore_IntentionMatch_table(t *testing.T) { + type testCase struct { + Name string + Insert [][]string // List of intentions to insert + Query [][]string // List of intentions to match + Expected [][][]string // List of matches, where each match is a list of intentions + } + + cases := []testCase{ + { + "single exact namespace/name", + [][]string{ + {"foo", "*"}, + {"foo", "bar"}, + {"foo", "baz"}, // shouldn't match + {"bar", "bar"}, // shouldn't match + {"bar", "*"}, // shouldn't match + {"*", "*"}, + }, + [][]string{ + {"foo", "bar"}, + }, + [][][]string{ + { + {"foo", "bar"}, + {"foo", "*"}, + {"*", "*"}, + }, + }, + }, + + { + "multiple exact namespace/name", + [][]string{ + {"foo", "*"}, + {"foo", "bar"}, + {"foo", "baz"}, // shouldn't match + {"bar", "bar"}, + {"bar", "*"}, + }, + [][]string{ + {"foo", "bar"}, + {"bar", "bar"}, + }, + [][][]string{ + { + {"foo", "bar"}, + {"foo", "*"}, + }, + { + {"bar", "bar"}, + {"bar", "*"}, + }, + }, + }, + } + + // testRunner implements the test for a single case, but can be + // parameterized to run for both source and destination so we can + // test both cases. + testRunner := func(t *testing.T, tc testCase, typ structs.IntentionMatchType) { + // Insert the set + s := testStateStore(t) + var idx uint64 = 1 + for _, v := range tc.Insert { + ixn := &structs.Intention{ID: testUUID()} + switch typ { + case structs.IntentionMatchDestination: + ixn.DestinationNS = v[0] + ixn.DestinationName = v[1] + case structs.IntentionMatchSource: + ixn.SourceNS = v[0] + ixn.SourceName = v[1] + } + + err := s.IntentionSet(idx, ixn) + if err != nil { + t.Fatalf("error inserting: %s", err) + } + + idx++ + } + + // Build the arguments + args := &structs.IntentionQueryMatch{Type: typ} + for _, q := range tc.Query { + args.Entries = append(args.Entries, structs.IntentionMatchEntry{ + Namespace: q[0], + Name: q[1], + }) + } + + // Match + _, matches, err := s.IntentionMatch(nil, args) + if err != nil { + t.Fatalf("error matching: %s", err) + } + + // Should have equal lengths + if len(matches) != len(tc.Expected) { + t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", tc.Expected, matches) + } + + // Verify matches + for i, expected := range tc.Expected { + var actual [][]string + for _, ixn := range matches[i] { + switch typ { + case structs.IntentionMatchDestination: + actual = append(actual, []string{ixn.DestinationNS, ixn.DestinationName}) + case structs.IntentionMatchSource: + actual = append(actual, []string{ixn.SourceNS, ixn.SourceName}) + } + } + + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", actual, expected) + } + } + } + + for _, tc := range cases { + t.Run(tc.Name+" (destination)", func(t *testing.T) { + testRunner(t, tc, structs.IntentionMatchDestination) + }) + + t.Run(tc.Name+" (source)", func(t *testing.T) { + testRunner(t, tc, structs.IntentionMatchSource) + }) + } +} From e9d208bcb61f9b46924602ad29096093a15e2597 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 2 Mar 2018 13:40:03 -0800 Subject: [PATCH 018/539] agent/consul: RPC endpoint for Intention.Match --- agent/consul/intention_endpoint.go | 30 ++++++++++ agent/consul/intention_endpoint_test.go | 76 +++++++++++++++++++++++++ agent/structs/intention.go | 15 +++-- 3 files changed, 116 insertions(+), 5 deletions(-) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index fc552afd9..d13002722 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -142,3 +142,33 @@ func (s 
*Intention) List( }, ) } + +// Match returns the set of intentions that match the given source/destination. +func (s *Intention) Match( + args *structs.IntentionQueryRequest, + reply *structs.IndexedIntentionMatches) error { + // Forward if necessary + if done, err := s.srv.forward("Intention.Match", args, args, reply); done { + return err + } + + // TODO(mitchellh): validate + + return s.srv.blockingQuery( + &args.QueryOptions, + &reply.QueryMeta, + func(ws memdb.WatchSet, state *state.Store) error { + index, matches, err := state.IntentionMatch(ws, args.Match) + if err != nil { + return err + } + + reply.Index = index + reply.Matches = matches + + // TODO(mitchellh): acl filtering + + return nil + }, + ) +} diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 53aef35cd..65170ff7b 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -258,3 +258,79 @@ func TestIntentionList(t *testing.T) { } } } + +// Test basic matching. We don't need to exhaustively test inputs since this +// is tested in the agent/consul/state package. +func TestIntentionMatch_good(t *testing.T) { + t.Parallel() + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create some records + { + insert := [][]string{ + {"foo", "*"}, + {"foo", "bar"}, + {"foo", "baz"}, // shouldn't match + {"bar", "bar"}, // shouldn't match + {"bar", "*"}, // shouldn't match + {"*", "*"}, + } + + for _, v := range insert { + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: &structs.Intention{ + SourceNS: "default", + SourceName: "test", + DestinationNS: v[0], + DestinationName: v[1], + }, + } + + // Create + var reply string + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + } + } + + // Match + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + Match: &structs.IntentionQueryMatch{ + Type: structs.IntentionMatchDestination, + Entries: []structs.IntentionMatchEntry{ + { + Namespace: "foo", + Name: "bar", + }, + }, + }, + } + var resp structs.IndexedIntentionMatches + if err := msgpackrpc.CallWithCodec(codec, "Intention.Match", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + + if len(resp.Matches) != 1 { + t.Fatalf("bad: %#v", resp.Matches) + } + + expected := [][]string{{"foo", "bar"}, {"foo", "*"}, {"*", "*"}} + var actual [][]string + for _, ixn := range resp.Matches[0] { + actual = append(actual, []string{ixn.DestinationNS, ixn.DestinationName}) + } + + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", actual, expected) + } +} diff --git a/agent/structs/intention.go b/agent/structs/intention.go index 14b5e0b8e..e2ad2fb92 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -75,6 +75,12 @@ type IndexedIntentions struct { QueryMeta } +// IndexedIntentionMatches represents the list of matches for a match query. +type IndexedIntentionMatches struct { + Matches []Intentions + QueryMeta +} + // IntentionOp is the operation for a request related to intentions. type IntentionOp string @@ -123,11 +129,10 @@ type IntentionQueryRequest struct { // IntentionID is the ID of a specific intention. IntentionID string - // MatchBy and MatchNames are used to match a namespace/name pair - // to a set of intentions. 
The list of MatchNames is an OR list, - // all matching intentions are returned together. - MatchBy IntentionMatchType - MatchNames []string + // Match is non-nil if we're performing a match query. A match will + // find intentions that "match" the given parameters. A match includes + // resolving wildcards. + Match *IntentionQueryMatch // Options for queries QueryOptions From 237da67da524f82e3fc578441a036a7dd110df8f Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 2 Mar 2018 14:17:21 -0800 Subject: [PATCH 019/539] agent: GET /v1/connect/intentions/match --- agent/http_oss.go | 1 + agent/intentions_endpoint.go | 89 ++++++++++++++++ agent/intentions_endpoint_test.go | 167 +++++++++++++++++++++++++++++- 3 files changed, 256 insertions(+), 1 deletion(-) diff --git a/agent/http_oss.go b/agent/http_oss.go index 0170a0075..d3bb7adc4 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -40,6 +40,7 @@ func init() { registerEndpoint("/v1/catalog/service/", []string{"GET"}, (*HTTPServer).CatalogServiceNodes) registerEndpoint("/v1/catalog/node/", []string{"GET"}, (*HTTPServer).CatalogNodeServices) registerEndpoint("/v1/connect/intentions", []string{"GET", "POST"}, (*HTTPServer).IntentionEndpoint) + registerEndpoint("/v1/connect/intentions/match", []string{"GET"}, (*HTTPServer).IntentionMatch) registerEndpoint("/v1/connect/intentions/", []string{"GET"}, (*HTTPServer).IntentionSpecific) registerEndpoint("/v1/coordinate/datacenters", []string{"GET"}, (*HTTPServer).CoordinateDatacenters) registerEndpoint("/v1/coordinate/nodes", []string{"GET"}, (*HTTPServer).CoordinateNodes) diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go index 40a9f2282..a28488c3d 100644 --- a/agent/intentions_endpoint.go +++ b/agent/intentions_endpoint.go @@ -67,6 +67,70 @@ func (s *HTTPServer) IntentionCreate(resp http.ResponseWriter, req *http.Request return intentionCreateResponse{reply}, nil } +// GET /v1/connect/intentions/match +func (s *HTTPServer) IntentionMatch(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Test the method + if req.Method != "GET" { + return nil, MethodNotAllowedError{req.Method, []string{"GET"}} + } + + // Prepare args + args := &structs.IntentionQueryRequest{Match: &structs.IntentionQueryMatch{}} + if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { + return nil, nil + } + + q := req.URL.Query() + + // Extract the "by" query parameter + if by, ok := q["by"]; !ok || len(by) != 1 { + return nil, fmt.Errorf("required query parameter 'by' not set") + } else { + switch v := structs.IntentionMatchType(by[0]); v { + case structs.IntentionMatchSource, structs.IntentionMatchDestination: + args.Match.Type = v + default: + return nil, fmt.Errorf("'by' parameter must be one of 'source' or 'destination'") + } + } + + // Extract all the match names + names, ok := q["name"] + if !ok || len(names) == 0 { + return nil, fmt.Errorf("required query parameter 'name' not set") + } + + // Build the entries in order. The order matters since that is the + // order of the returned responses. 
+ args.Match.Entries = make([]structs.IntentionMatchEntry, len(names)) + for i, n := range names { + entry, err := parseIntentionMatchEntry(n) + if err != nil { + return nil, fmt.Errorf("name %q is invalid: %s", n, err) + } + + args.Match.Entries[i] = entry + } + + var reply structs.IndexedIntentionMatches + if err := s.agent.RPC("Intention.Match", args, &reply); err != nil { + return nil, err + } + + // We must have an identical count of matches + if len(reply.Matches) != len(names) { + return nil, fmt.Errorf("internal error: match response count didn't match input count") + } + + // Use empty list instead of nil. + response := make(map[string]structs.Intentions) + for i, ixns := range reply.Matches { + response[names[i]] = ixns + } + + return response, nil +} + // IntentionSpecific handles the endpoint for /v1/connection/intentions/:id func (s *HTTPServer) IntentionSpecific(resp http.ResponseWriter, req *http.Request) (interface{}, error) { id := strings.TrimPrefix(req.URL.Path, "/v1/connect/intentions/") @@ -161,3 +225,28 @@ func (s *HTTPServer) IntentionSpecificDelete(id string, resp http.ResponseWriter // intentionCreateResponse is the response structure for creating an intention. type intentionCreateResponse struct{ ID string } + +// parseIntentionMatchEntry parses the query parameter for an intention +// match query entry. +func parseIntentionMatchEntry(input string) (structs.IntentionMatchEntry, error) { + var result structs.IntentionMatchEntry + + // TODO(mitchellh): when namespaces are introduced, set the default + // namespace to be the namespace of the requestor. + + // Get the index to the '/'. If it doesn't exist, we have just a name + // so just set that and return. + idx := strings.IndexByte(input, '/') + if idx == -1 { + result.Name = input + return result, nil + } + + result.Namespace = input[:idx] + result.Name = input[idx+1:] + if strings.IndexByte(result.Name, '/') != -1 { + return result, fmt.Errorf("input can contain at most one '/'") + } + + return result, nil +} diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go index c3753ea97..7e83846aa 100644 --- a/agent/intentions_endpoint_test.go +++ b/agent/intentions_endpoint_test.go @@ -72,6 +72,125 @@ func TestIntentionsList_values(t *testing.T) { } } +func TestIntentionsMatch_basic(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Create some intentions + { + insert := [][]string{ + {"foo", "*"}, + {"foo", "bar"}, + {"foo", "baz"}, // shouldn't match + {"bar", "bar"}, // shouldn't match + {"bar", "*"}, // shouldn't match + {"*", "*"}, + } + + for _, v := range insert { + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: &structs.Intention{ + SourceNS: "default", + SourceName: "test", + DestinationNS: v[0], + DestinationName: v[1], + }, + } + + // Create + var reply string + if err := a.RPC("Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + } + } + + // Request + req, _ := http.NewRequest("GET", + "/v1/connect/intentions/match?by=destination&name=foo/bar", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionMatch(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + + value := obj.(map[string]structs.Intentions) + if len(value) != 1 { + t.Fatalf("bad: %v", value) + } + + var actual [][]string + expected := [][]string{{"foo", "bar"}, {"foo", "*"}, {"*", "*"}} + for _, ixn := range value["foo/bar"] { + actual = append(actual, 
[]string{ixn.DestinationNS, ixn.DestinationName}) + } + + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", actual, expected) + } +} + +func TestIntentionsMatch_noBy(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Request + req, _ := http.NewRequest("GET", + "/v1/connect/intentions/match?name=foo/bar", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionMatch(resp, req) + if err == nil || !strings.Contains(err.Error(), "by") { + t.Fatalf("err: %v", err) + } + if obj != nil { + t.Fatal("should have no response") + } +} + +func TestIntentionsMatch_byInvalid(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Request + req, _ := http.NewRequest("GET", + "/v1/connect/intentions/match?by=datacenter", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionMatch(resp, req) + if err == nil || !strings.Contains(err.Error(), "'by' parameter") { + t.Fatalf("err: %v", err) + } + if obj != nil { + t.Fatal("should have no response") + } +} + +func TestIntentionsMatch_noName(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Request + req, _ := http.NewRequest("GET", + "/v1/connect/intentions/match?by=source", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionMatch(resp, req) + if err == nil || !strings.Contains(err.Error(), "'name' not set") { + t.Fatalf("err: %v", err) + } + if obj != nil { + t.Fatal("should have no response") + } +} + func TestIntentionsCreate_good(t *testing.T) { t.Parallel() @@ -273,5 +392,51 @@ func TestIntentionsSpecificDelete_good(t *testing.T) { t.Fatalf("err: %v", err) } } - +} + +func TestParseIntentionMatchEntry(t *testing.T) { + cases := []struct { + Input string + Expected structs.IntentionMatchEntry + Err bool + }{ + { + "foo", + structs.IntentionMatchEntry{ + Name: "foo", + }, + false, + }, + + { + "foo/bar", + structs.IntentionMatchEntry{ + Namespace: "foo", + Name: "bar", + }, + false, + }, + + { + "foo/bar/baz", + structs.IntentionMatchEntry{}, + true, + }, + } + + for _, tc := range cases { + t.Run(tc.Input, func(t *testing.T) { + actual, err := parseIntentionMatchEntry(tc.Input) + if (err != nil) != tc.Err { + t.Fatalf("err: %s", err) + } + if err != nil { + return + } + + if !reflect.DeepEqual(actual, tc.Expected) { + t.Fatalf("bad: %#v", actual) + } + }) + } } From e630d65d9dadce73a591476da4ad06dac3e7195f Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 3 Mar 2018 08:43:19 -0800 Subject: [PATCH 020/539] agent/consul: set CreatedAt, UpdatedAt on intentions --- agent/consul/intention_endpoint.go | 11 +++++ agent/consul/intention_endpoint_test.go | 61 +++++++++++++++++++++++++ agent/consul/state/intention.go | 4 +- agent/consul/state/intention_test.go | 33 +++++++++++++ 4 files changed, 108 insertions(+), 1 deletion(-) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index d13002722..70ad25183 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -33,6 +33,11 @@ func (s *Intention) Apply( defer metrics.MeasureSince([]string{"consul", "intention", "apply"}, time.Now()) defer metrics.MeasureSince([]string{"intention", "apply"}, time.Now()) + // Always set a non-nil intention to avoid nil-access below + if args.Intention == nil { + args.Intention = &structs.Intention{} + } + // If no ID is provided, generate a new ID. 
This must be done prior to // appending to the Raft log, because the ID is not deterministic. Once // the entry is in the log, the state update MUST be deterministic or @@ -60,6 +65,9 @@ func (s *Intention) Apply( break } } + + // Set the created at + args.Intention.CreatedAt = time.Now() } *reply = args.Intention.ID @@ -75,6 +83,9 @@ func (s *Intention) Apply( } } + // We always update the updatedat field. This has no effect for deletion. + args.Intention.UpdatedAt = time.Now() + // Commit resp, err := s.srv.raftApply(structs.IntentionRequestType, args) if err != nil { diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 65170ff7b..8ff584d75 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -5,6 +5,7 @@ import ( "reflect" "strings" "testing" + "time" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/testrpc" @@ -32,6 +33,9 @@ func TestIntentionApply_new(t *testing.T) { } var reply string + // Record now to check created at time + now := time.Now() + // Create if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { t.Fatalf("err: %v", err) @@ -58,7 +62,26 @@ func TestIntentionApply_new(t *testing.T) { if resp.Index != actual.ModifyIndex { t.Fatalf("bad index: %d", resp.Index) } + + // Test CreatedAt + { + timeDiff := actual.CreatedAt.Sub(now) + if timeDiff < 0 || timeDiff > 5*time.Second { + t.Fatalf("should set created at: %s", actual.CreatedAt) + } + } + + // Test UpdatedAt + { + timeDiff := actual.UpdatedAt.Sub(now) + if timeDiff < 0 || timeDiff > 5*time.Second { + t.Fatalf("should set updated at: %s", actual.CreatedAt) + } + } + actual.CreateIndex, actual.ModifyIndex = 0, 0 + actual.CreatedAt = ixn.Intention.CreatedAt + actual.UpdatedAt = ixn.Intention.UpdatedAt if !reflect.DeepEqual(actual, ixn.Intention) { t.Fatalf("bad: %v", actual) } @@ -123,6 +146,28 @@ func TestIntentionApply_updateGood(t *testing.T) { t.Fatal("reply should be non-empty") } + // Read CreatedAt + var createdAt time.Time + ixn.Intention.ID = reply + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: ixn.Intention.ID, + } + var resp structs.IndexedIntentions + if err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + if len(resp.Intentions) != 1 { + t.Fatalf("bad: %v", resp) + } + actual := resp.Intentions[0] + createdAt = actual.CreatedAt + } + + // Sleep a bit so that the updated at will definitely be different, not much + time.Sleep(1 * time.Millisecond) + // Update ixn.Op = structs.IntentionOpUpdate ixn.Intention.ID = reply @@ -146,7 +191,23 @@ func TestIntentionApply_updateGood(t *testing.T) { t.Fatalf("bad: %v", resp) } actual := resp.Intentions[0] + + // Test CreatedAt + if !actual.CreatedAt.Equal(createdAt) { + t.Fatalf("should not modify created at: %s", actual.CreatedAt) + } + + // Test UpdatedAt + { + timeDiff := actual.UpdatedAt.Sub(createdAt) + if timeDiff <= 0 || timeDiff > 5*time.Second { + t.Fatalf("should set updated at: %s", actual.CreatedAt) + } + } + actual.CreateIndex, actual.ModifyIndex = 0, 0 + actual.CreatedAt = ixn.Intention.CreatedAt + actual.UpdatedAt = ixn.Intention.UpdatedAt if !reflect.DeepEqual(actual, ixn.Intention) { t.Fatalf("bad: %v", actual) } diff --git a/agent/consul/state/intention.go b/agent/consul/state/intention.go index 51f4e1e3b..ba3871ea5 100644 --- a/agent/consul/state/intention.go +++ b/agent/consul/state/intention.go @@ -117,7 
+117,9 @@ func (s *Store) intentionSetTxn(tx *memdb.Txn, idx uint64, ixn *structs.Intentio return fmt.Errorf("failed intention looup: %s", err) } if existing != nil { - ixn.CreateIndex = existing.(*structs.Intention).CreateIndex + oldIxn := existing.(*structs.Intention) + ixn.CreateIndex = oldIxn.CreateIndex + ixn.CreatedAt = oldIxn.CreatedAt } else { ixn.CreateIndex = idx } diff --git a/agent/consul/state/intention_test.go b/agent/consul/state/intention_test.go index 2f4fee26b..1bfb6d248 100644 --- a/agent/consul/state/intention_test.go +++ b/agent/consul/state/intention_test.go @@ -3,6 +3,7 @@ package state import ( "reflect" "testing" + "time" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-memdb" @@ -121,6 +122,38 @@ func TestStore_IntentionSet_emptyId(t *testing.T) { } } +func TestStore_IntentionSet_updateCreatedAt(t *testing.T) { + s := testStateStore(t) + + // Build a valid intention + now := time.Now() + ixn := structs.Intention{ + ID: testUUID(), + CreatedAt: now, + } + + // Insert + if err := s.IntentionSet(1, &ixn); err != nil { + t.Fatalf("err: %s", err) + } + + // Change a value and test updating + ixnUpdate := ixn + ixnUpdate.CreatedAt = now.Add(10 * time.Second) + if err := s.IntentionSet(2, &ixnUpdate); err != nil { + t.Fatalf("err: %s", err) + } + + // Read it back and verify + _, actual, err := s.IntentionGet(nil, ixn.ID) + if err != nil { + t.Fatalf("err: %s", err) + } + if !actual.CreatedAt.Equal(now) { + t.Fatalf("bad: %#v", actual) + } +} + func TestStore_IntentionDelete(t *testing.T) { s := testStateStore(t) From 2b047fb09be75788fe6085d5226d9cef0cf8f5cb Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 3 Mar 2018 08:51:40 -0800 Subject: [PATCH 021/539] agent,agent/consul: set default namespaces --- agent/consul/intention_endpoint.go | 8 ++++++++ agent/consul/intention_endpoint_test.go | 10 ++++++++-- agent/intentions_endpoint.go | 1 + agent/intentions_endpoint_test.go | 11 +++++++++-- agent/structs/intention.go | 7 +++++++ 5 files changed, 33 insertions(+), 4 deletions(-) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 70ad25183..434c90628 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -86,6 +86,14 @@ func (s *Intention) Apply( // We always update the updatedat field. This has no effect for deletion. 
args.Intention.UpdatedAt = time.Now() + // Until we support namespaces, we force all namespaces to be default + if args.Intention.SourceNS == "" { + args.Intention.SourceNS = structs.IntentionDefaultNamespace + } + if args.Intention.DestinationNS == "" { + args.Intention.DestinationNS = structs.IntentionDefaultNamespace + } + // Commit resp, err := s.srv.raftApply(structs.IntentionRequestType, args) if err != nil { diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 8ff584d75..21f112f48 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -28,7 +28,10 @@ func TestIntentionApply_new(t *testing.T) { Datacenter: "dc1", Op: structs.IntentionOpCreate, Intention: &structs.Intention{ - SourceName: "test", + SourceNS: structs.IntentionDefaultNamespace, + SourceName: "test", + DestinationNS: structs.IntentionDefaultNamespace, + DestinationName: "test", }, } var reply string @@ -133,7 +136,10 @@ func TestIntentionApply_updateGood(t *testing.T) { Datacenter: "dc1", Op: structs.IntentionOpCreate, Intention: &structs.Intention{ - SourceName: "test", + SourceNS: structs.IntentionDefaultNamespace, + SourceName: "test", + DestinationNS: structs.IntentionDefaultNamespace, + DestinationName: "test", }, } var reply string diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go index a28488c3d..39cf3e50b 100644 --- a/agent/intentions_endpoint.go +++ b/agent/intentions_endpoint.go @@ -230,6 +230,7 @@ type intentionCreateResponse struct{ ID string } // match query entry. func parseIntentionMatchEntry(input string) (structs.IntentionMatchEntry, error) { var result structs.IntentionMatchEntry + result.Namespace = structs.IntentionDefaultNamespace // TODO(mitchellh): when namespaces are introduced, set the default // namespace to be the namespace of the requestor. diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go index 7e83846aa..a1c0413ed 100644 --- a/agent/intentions_endpoint_test.go +++ b/agent/intentions_endpoint_test.go @@ -238,7 +238,12 @@ func TestIntentionsSpecificGet_good(t *testing.T) { defer a.Shutdown() // The intention - ixn := &structs.Intention{SourceName: "foo"} + ixn := &structs.Intention{ + SourceNS: structs.IntentionDefaultNamespace, + SourceName: "foo", + DestinationNS: structs.IntentionDefaultNamespace, + DestinationName: "bar", + } // Create an intention directly var reply string @@ -268,6 +273,7 @@ func TestIntentionsSpecificGet_good(t *testing.T) { ixn.ID = value.ID ixn.RaftIndex = value.RaftIndex + ixn.CreatedAt, ixn.UpdatedAt = value.CreatedAt, value.UpdatedAt if !reflect.DeepEqual(value, ixn) { t.Fatalf("bad (got, want):\n\n%#v\n\n%#v", value, ixn) } @@ -403,7 +409,8 @@ func TestParseIntentionMatchEntry(t *testing.T) { { "foo", structs.IntentionMatchEntry{ - Name: "foo", + Namespace: structs.IntentionDefaultNamespace, + Name: "foo", }, false, }, diff --git a/agent/structs/intention.go b/agent/structs/intention.go index e2ad2fb92..b60c36625 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -7,6 +7,13 @@ import ( const ( // IntentionWildcard is the wildcard value. IntentionWildcard = "*" + + // IntentionDefaultNamespace is the default namespace value. + // NOTE(mitchellh): This is only meant to be a temporary constant. + // When namespaces are introduced, we should delete this constant and + // fix up all the places where this was used with the proper namespace + // value. 
+ IntentionDefaultNamespace = "default" ) // Intention defines an intention for the Connect Service Graph. This defines From e81d1c88b7dc41439353de4ad5045e36f01a8435 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 3 Mar 2018 08:59:17 -0800 Subject: [PATCH 022/539] agent/consul/fsm: add tests for intention requests --- agent/consul/fsm/commands_oss_test.go | 102 ++++++++++++++++++++++++++ 1 file changed, 102 insertions(+) diff --git a/agent/consul/fsm/commands_oss_test.go b/agent/consul/fsm/commands_oss_test.go index ccf58a47f..b679cb313 100644 --- a/agent/consul/fsm/commands_oss_test.go +++ b/agent/consul/fsm/commands_oss_test.go @@ -1148,3 +1148,105 @@ func TestFSM_Autopilot(t *testing.T) { t.Fatalf("bad: %v", config.CleanupDeadServers) } } + +func TestFSM_Intention_CRUD(t *testing.T) { + t.Parallel() + + fsm, err := New(nil, os.Stderr) + if err != nil { + t.Fatalf("err: %v", err) + } + + // Create a new intention. + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: &structs.Intention{ + ID: generateUUID(), + SourceNS: "default", + SourceName: "web", + DestinationNS: "default", + DestinationName: "db", + }, + } + + { + buf, err := structs.Encode(structs.IntentionRequestType, ixn) + if err != nil { + t.Fatalf("err: %v", err) + } + resp := fsm.Apply(makeLog(buf)) + if resp != nil { + t.Fatalf("resp: %v", resp) + } + } + + // Verify it's in the state store. + { + _, actual, err := fsm.state.IntentionGet(nil, ixn.Intention.ID) + if err != nil { + t.Fatalf("err: %s", err) + } + + actual.CreateIndex, actual.ModifyIndex = 0, 0 + actual.CreatedAt = ixn.Intention.CreatedAt + actual.UpdatedAt = ixn.Intention.UpdatedAt + if !reflect.DeepEqual(actual, ixn.Intention) { + t.Fatalf("bad: %v", actual) + } + } + + // Make an update + ixn.Op = structs.IntentionOpUpdate + ixn.Intention.SourceName = "api" + { + buf, err := structs.Encode(structs.IntentionRequestType, ixn) + if err != nil { + t.Fatalf("err: %v", err) + } + resp := fsm.Apply(makeLog(buf)) + if resp != nil { + t.Fatalf("resp: %v", resp) + } + } + + // Verify the update. + { + _, actual, err := fsm.state.IntentionGet(nil, ixn.Intention.ID) + if err != nil { + t.Fatalf("err: %s", err) + } + + actual.CreateIndex, actual.ModifyIndex = 0, 0 + actual.CreatedAt = ixn.Intention.CreatedAt + actual.UpdatedAt = ixn.Intention.UpdatedAt + if !reflect.DeepEqual(actual, ixn.Intention) { + t.Fatalf("bad: %v", actual) + } + } + + // Delete + ixn.Op = structs.IntentionOpDelete + { + buf, err := structs.Encode(structs.IntentionRequestType, ixn) + if err != nil { + t.Fatalf("err: %v", err) + } + resp := fsm.Apply(makeLog(buf)) + if resp != nil { + t.Fatalf("resp: %v", resp) + } + } + + // Make sure it's gone. 
+ { + _, actual, err := fsm.state.IntentionGet(nil, ixn.Intention.ID) + if err != nil { + t.Fatalf("err: %s", err) + } + + if actual != nil { + t.Fatalf("bad: %v", actual) + } + } +} From d34ee200de1aee7e3c600e1104545b0275b27313 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 3 Mar 2018 09:16:26 -0800 Subject: [PATCH 023/539] agent/consul: support intention description, meta is non-nil --- agent/consul/state/intention.go | 6 ++++ agent/consul/state/intention_test.go | 47 ++++++++++++++++++++++++++++ agent/structs/intention.go | 5 +++ 3 files changed, 58 insertions(+) diff --git a/agent/consul/state/intention.go b/agent/consul/state/intention.go index ba3871ea5..b737318ee 100644 --- a/agent/consul/state/intention.go +++ b/agent/consul/state/intention.go @@ -125,6 +125,12 @@ func (s *Store) intentionSetTxn(tx *memdb.Txn, idx uint64, ixn *structs.Intentio } ixn.ModifyIndex = idx + // We always force meta to be non-nil so that we its an empty map. + // This makes it easy for API responses to not nil-check this everywhere. + if ixn.Meta == nil { + ixn.Meta = make(map[string]string) + } + // Insert if err := tx.Insert(intentionsTableName, ixn); err != nil { return err diff --git a/agent/consul/state/intention_test.go b/agent/consul/state/intention_test.go index 1bfb6d248..dd7d9fcdf 100644 --- a/agent/consul/state/intention_test.go +++ b/agent/consul/state/intention_test.go @@ -154,6 +154,53 @@ func TestStore_IntentionSet_updateCreatedAt(t *testing.T) { } } +func TestStore_IntentionSet_metaNil(t *testing.T) { + s := testStateStore(t) + + // Build a valid intention + ixn := structs.Intention{ + ID: testUUID(), + } + + // Insert + if err := s.IntentionSet(1, &ixn); err != nil { + t.Fatalf("err: %s", err) + } + + // Read it back and verify + _, actual, err := s.IntentionGet(nil, ixn.ID) + if err != nil { + t.Fatalf("err: %s", err) + } + if actual.Meta == nil { + t.Fatal("meta should be non-nil") + } +} + +func TestStore_IntentionSet_metaSet(t *testing.T) { + s := testStateStore(t) + + // Build a valid intention + ixn := structs.Intention{ + ID: testUUID(), + Meta: map[string]string{"foo": "bar"}, + } + + // Insert + if err := s.IntentionSet(1, &ixn); err != nil { + t.Fatalf("err: %s", err) + } + + // Read it back and verify + _, actual, err := s.IntentionGet(nil, ixn.ID) + if err != nil { + t.Fatalf("err: %s", err) + } + if !reflect.DeepEqual(actual.Meta, ixn.Meta) { + t.Fatalf("bad: %#v", actual) + } +} + func TestStore_IntentionDelete(t *testing.T) { s := testStateStore(t) diff --git a/agent/structs/intention.go b/agent/structs/intention.go index b60c36625..0a7d8c5d4 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -23,6 +23,11 @@ type Intention struct { // ID is the UUID-based ID for the intention, always generated by Consul. ID string + // Description is a human-friendly description of this intention. + // It is opaque to Consul and is only stored and transferred in API + // requests. + Description string + // SourceNS, SourceName are the namespace and name, respectively, of // the source service. Either of these may be the wildcard "*", but only // the full value can be a wildcard. Partial wildcards are not allowed. 
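The commits up to this point are enough to exercise intentions end to end over the agent's HTTP API: create an intention (now carrying a Description and a non-nil Meta map), then ask which intentions match a destination. The sketch below is illustrative only and is not part of the patch series; it assumes a local agent listening on the default HTTP address (127.0.0.1:8500), and the service names, description, and meta values are made up.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Create an intention. The JSON field names mirror structs.Intention;
	// namespaces left empty are defaulted to "default" by the server.
	payload, _ := json.Marshal(map[string]interface{}{
		"SourceName":      "web",
		"DestinationName": "db",
		"Action":          "allow",
		"Description":     "web may talk to db",
		"Meta":            map[string]string{"owner": "team-web"},
	})
	resp, err := http.Post("http://127.0.0.1:8500/v1/connect/intentions",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The create handler replies with the generated intention ID.
	var created struct{ ID string }
	if err := json.NewDecoder(resp.Body).Decode(&created); err != nil {
		panic(err)
	}
	fmt.Println("created intention:", created.ID)

	// Ask which intentions apply to the "db" destination. The response is
	// keyed by the requested name, with matches ordered by precedence
	// (most specific first).
	matchResp, err := http.Get(
		"http://127.0.0.1:8500/v1/connect/intentions/match?by=destination&name=db")
	if err != nil {
		panic(err)
	}
	defer matchResp.Body.Close()

	var matches map[string][]map[string]interface{}
	if err := json.NewDecoder(matchResp.Body).Decode(&matches); err != nil {
		panic(err)
	}
	fmt.Printf("%d intention(s) match db\n", len(matches["db"]))
}
```

The ordering of the match results comes from the IntentionPrecedenceSorter added earlier in the series: exact destination values sort ahead of wildcards, with source exactness as the tie-breaker.
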
From 8e2462e3019477b7b80521d31a37fc24e6230a3e Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 3 Mar 2018 09:43:37 -0800 Subject: [PATCH 024/539] agent/structs: Intention validation --- agent/consul/intention_endpoint.go | 5 ++ agent/consul/intention_endpoint_test.go | 13 ++- agent/structs/intention.go | 89 ++++++++++++++++++++ agent/structs/intention_test.go | 107 ++++++++++++++++++++++++ agent/structs/testing_intention.go | 15 ++++ 5 files changed, 227 insertions(+), 2 deletions(-) create mode 100644 agent/structs/testing_intention.go diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 434c90628..56f310e18 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -94,6 +94,11 @@ func (s *Intention) Apply( args.Intention.DestinationNS = structs.IntentionDefaultNamespace } + // Validate + if err := args.Intention.Validate(); err != nil { + return err + } + // Commit resp, err := s.srv.raftApply(structs.IntentionRequestType, args) if err != nil { diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 21f112f48..2ee49f207 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -32,6 +32,8 @@ func TestIntentionApply_new(t *testing.T) { SourceName: "test", DestinationNS: structs.IntentionDefaultNamespace, DestinationName: "test", + Action: structs.IntentionActionAllow, + Meta: map[string]string{}, }, } var reply string @@ -86,7 +88,7 @@ func TestIntentionApply_new(t *testing.T) { actual.CreatedAt = ixn.Intention.CreatedAt actual.UpdatedAt = ixn.Intention.UpdatedAt if !reflect.DeepEqual(actual, ixn.Intention) { - t.Fatalf("bad: %v", actual) + t.Fatalf("bad:\n\n%#v\n\n%#v", actual, ixn.Intention) } } } @@ -140,6 +142,8 @@ func TestIntentionApply_updateGood(t *testing.T) { SourceName: "test", DestinationNS: structs.IntentionDefaultNamespace, DestinationName: "test", + Action: structs.IntentionActionAllow, + Meta: map[string]string{}, }, } var reply string @@ -265,7 +269,11 @@ func TestIntentionApply_deleteGood(t *testing.T) { Datacenter: "dc1", Op: structs.IntentionOpCreate, Intention: &structs.Intention{ - SourceName: "test", + SourceNS: "test", + SourceName: "test", + DestinationNS: "test", + DestinationName: "test", + Action: structs.IntentionActionAllow, }, } var reply string @@ -358,6 +366,7 @@ func TestIntentionMatch_good(t *testing.T) { SourceName: "test", DestinationNS: v[0], DestinationName: v[1], + Action: structs.IntentionActionAllow, }, } diff --git a/agent/structs/intention.go b/agent/structs/intention.go index 0a7d8c5d4..a8da939f7 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -1,7 +1,11 @@ package structs import ( + "fmt" + "strings" "time" + + "github.com/hashicorp/go-multierror" ) const ( @@ -61,6 +65,91 @@ type Intention struct { RaftIndex } +// Validate returns an error if the intention is invalid for inserting +// or updating. 
+func (x *Intention) Validate() error { + var result error + + // Empty values + if x.SourceNS == "" { + result = multierror.Append(result, fmt.Errorf("SourceNS must be set")) + } + if x.SourceName == "" { + result = multierror.Append(result, fmt.Errorf("SourceName must be set")) + } + if x.DestinationNS == "" { + result = multierror.Append(result, fmt.Errorf("DestinationNS must be set")) + } + if x.DestinationName == "" { + result = multierror.Append(result, fmt.Errorf("DestinationName must be set")) + } + + // Wildcard usage verification + if x.SourceNS != IntentionWildcard { + if strings.Contains(x.SourceNS, IntentionWildcard) { + result = multierror.Append(result, fmt.Errorf( + "SourceNS: wildcard character '*' cannot be used with partial values")) + } + } + if x.SourceName != IntentionWildcard { + if strings.Contains(x.SourceName, IntentionWildcard) { + result = multierror.Append(result, fmt.Errorf( + "SourceName: wildcard character '*' cannot be used with partial values")) + } + + if x.SourceNS == IntentionWildcard { + result = multierror.Append(result, fmt.Errorf( + "SourceName: exact value cannot follow wildcard namespace")) + } + } + if x.DestinationNS != IntentionWildcard { + if strings.Contains(x.DestinationNS, IntentionWildcard) { + result = multierror.Append(result, fmt.Errorf( + "DestinationNS: wildcard character '*' cannot be used with partial values")) + } + } + if x.DestinationName != IntentionWildcard { + if strings.Contains(x.DestinationName, IntentionWildcard) { + result = multierror.Append(result, fmt.Errorf( + "DestinationName: wildcard character '*' cannot be used with partial values")) + } + + if x.DestinationNS == IntentionWildcard { + result = multierror.Append(result, fmt.Errorf( + "DestinationName: exact value cannot follow wildcard namespace")) + } + } + + // Length of opaque values + if len(x.Description) > metaValueMaxLength { + result = multierror.Append(result, fmt.Errorf( + "Description exceeds maximum length %d", metaValueMaxLength)) + } + if len(x.Meta) > metaMaxKeyPairs { + result = multierror.Append(result, fmt.Errorf( + "Meta exceeds maximum element count %d", metaMaxKeyPairs)) + } + for k, v := range x.Meta { + if len(k) > metaKeyMaxLength { + result = multierror.Append(result, fmt.Errorf( + "Meta key %q exceeds maximum length %d", k, metaKeyMaxLength)) + } + if len(v) > metaValueMaxLength { + result = multierror.Append(result, fmt.Errorf( + "Meta value for key %q exceeds maximum length %d", k, metaValueMaxLength)) + } + } + + switch x.Action { + case IntentionActionAllow, IntentionActionDeny: + default: + result = multierror.Append(result, fmt.Errorf( + "Action must be set to 'allow' or 'deny'")) + } + + return result +} + // IntentionAction is the action that the intention represents. This // can be "allow" or "deny" to whitelist or blacklist intentions. 
type IntentionAction string diff --git a/agent/structs/intention_test.go b/agent/structs/intention_test.go index 19ac5811a..500b24d5a 100644 --- a/agent/structs/intention_test.go +++ b/agent/structs/intention_test.go @@ -3,9 +3,116 @@ package structs import ( "reflect" "sort" + "strings" "testing" ) +func TestIntentionValidate(t *testing.T) { + cases := []struct { + Name string + Modify func(*Intention) + Err string + }{ + { + "long description", + func(x *Intention) { + x.Description = strings.Repeat("x", metaValueMaxLength+1) + }, + "description exceeds", + }, + + { + "no action set", + func(x *Intention) { x.Action = "" }, + "action must be set", + }, + + { + "no SourceNS", + func(x *Intention) { x.SourceNS = "" }, + "SourceNS must be set", + }, + + { + "no SourceName", + func(x *Intention) { x.SourceName = "" }, + "SourceName must be set", + }, + + { + "no DestinationNS", + func(x *Intention) { x.DestinationNS = "" }, + "DestinationNS must be set", + }, + + { + "no DestinationName", + func(x *Intention) { x.DestinationName = "" }, + "DestinationName must be set", + }, + + { + "SourceNS partial wildcard", + func(x *Intention) { x.SourceNS = "foo*" }, + "partial value", + }, + + { + "SourceName partial wildcard", + func(x *Intention) { x.SourceName = "foo*" }, + "partial value", + }, + + { + "SourceName exact following wildcard", + func(x *Intention) { + x.SourceNS = "*" + x.SourceName = "foo" + }, + "follow wildcard", + }, + + { + "DestinationNS partial wildcard", + func(x *Intention) { x.DestinationNS = "foo*" }, + "partial value", + }, + + { + "DestinationName partial wildcard", + func(x *Intention) { x.DestinationName = "foo*" }, + "partial value", + }, + + { + "DestinationName exact following wildcard", + func(x *Intention) { + x.DestinationNS = "*" + x.DestinationName = "foo" + }, + "follow wildcard", + }, + } + + for _, tc := range cases { + t.Run(tc.Name, func(t *testing.T) { + ixn := TestIntention(t) + tc.Modify(ixn) + + err := ixn.Validate() + if (err != nil) != (tc.Err != "") { + t.Fatalf("err: %s", err) + } + if err == nil { + return + } + if !strings.Contains(strings.ToLower(err.Error()), strings.ToLower(tc.Err)) { + t.Fatalf("err: %s", err) + } + }) + } +} + func TestIntentionPrecedenceSorter(t *testing.T) { cases := []struct { Name string diff --git a/agent/structs/testing_intention.go b/agent/structs/testing_intention.go new file mode 100644 index 000000000..b653c1174 --- /dev/null +++ b/agent/structs/testing_intention.go @@ -0,0 +1,15 @@ +package structs + +import ( + "github.com/mitchellh/go-testing-interface" +) + +// TestIntention returns a valid, uninserted (no ID set) intention. 
+func TestIntention(t testing.T) *Intention { + return &Intention{ + SourceNS: "eng", + SourceName: "api", + DestinationNS: "eng", + DestinationName: "db", + } +} From 04bd4af99c92253c9db264cd992caa5f40d44a5f Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 3 Mar 2018 09:55:27 -0800 Subject: [PATCH 025/539] agent/consul: set default intention SourceType, validate it --- agent/consul/intention_endpoint.go | 5 +++ agent/consul/intention_endpoint_test.go | 57 +++++++++++++++++++++++++ agent/structs/intention.go | 7 +++ agent/structs/intention_test.go | 12 ++++++ agent/structs/testing_intention.go | 2 + 5 files changed, 83 insertions(+) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 56f310e18..c3723d708 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -86,6 +86,11 @@ func (s *Intention) Apply( // We always update the updatedat field. This has no effect for deletion. args.Intention.UpdatedAt = time.Now() + // Default source type + if args.Intention.SourceType == "" { + args.Intention.SourceType = structs.IntentionSourceConsul + } + // Until we support namespaces, we force all namespaces to be default if args.Intention.SourceNS == "" { args.Intention.SourceNS = structs.IntentionDefaultNamespace diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 2ee49f207..3c38e695a 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -33,6 +33,7 @@ func TestIntentionApply_new(t *testing.T) { DestinationNS: structs.IntentionDefaultNamespace, DestinationName: "test", Action: structs.IntentionActionAllow, + SourceType: structs.IntentionSourceConsul, Meta: map[string]string{}, }, } @@ -93,6 +94,61 @@ func TestIntentionApply_new(t *testing.T) { } } +// Test the source type defaults +func TestIntentionApply_defaultSourceType(t *testing.T) { + t.Parallel() + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: &structs.Intention{ + SourceNS: structs.IntentionDefaultNamespace, + SourceName: "test", + DestinationNS: structs.IntentionDefaultNamespace, + DestinationName: "test", + Action: structs.IntentionActionAllow, + }, + } + var reply string + + // Create + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + if reply == "" { + t.Fatal("reply should be non-empty") + } + + // Read + ixn.Intention.ID = reply + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: ixn.Intention.ID, + } + var resp structs.IndexedIntentions + if err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + if len(resp.Intentions) != 1 { + t.Fatalf("bad: %v", resp) + } + + actual := resp.Intentions[0] + if actual.SourceType != structs.IntentionSourceConsul { + t.Fatalf("bad:\n\n%#v\n\n%#v", actual, ixn.Intention) + } + } +} + // Shouldn't be able to create with an ID set func TestIntentionApply_createWithID(t *testing.T) { t.Parallel() @@ -143,6 +199,7 @@ func TestIntentionApply_updateGood(t *testing.T) { DestinationNS: structs.IntentionDefaultNamespace, DestinationName: "test", Action: structs.IntentionActionAllow, + SourceType: 
structs.IntentionSourceConsul, Meta: map[string]string{}, }, } diff --git a/agent/structs/intention.go b/agent/structs/intention.go index a8da939f7..579fef6c1 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -147,6 +147,13 @@ func (x *Intention) Validate() error { "Action must be set to 'allow' or 'deny'")) } + switch x.SourceType { + case IntentionSourceConsul: + default: + result = multierror.Append(result, fmt.Errorf( + "SourceType must be set to 'consul'")) + } + return result } diff --git a/agent/structs/intention_test.go b/agent/structs/intention_test.go index 500b24d5a..ec0a2de66 100644 --- a/agent/structs/intention_test.go +++ b/agent/structs/intention_test.go @@ -92,6 +92,18 @@ func TestIntentionValidate(t *testing.T) { }, "follow wildcard", }, + + { + "SourceType is not set", + func(x *Intention) { x.SourceType = "" }, + "SourceType must", + }, + + { + "SourceType is other", + func(x *Intention) { x.SourceType = IntentionSourceType("other") }, + "SourceType must", + }, } for _, tc := range cases { diff --git a/agent/structs/testing_intention.go b/agent/structs/testing_intention.go index b653c1174..930e27869 100644 --- a/agent/structs/testing_intention.go +++ b/agent/structs/testing_intention.go @@ -11,5 +11,7 @@ func TestIntention(t testing.T) *Intention { SourceName: "api", DestinationNS: "eng", DestinationName: "db", + Action: IntentionActionAllow, + SourceType: IntentionSourceConsul, } } From 37f66e47ed2824331e48a9fd423ee2541bda80cc Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 3 Mar 2018 10:12:05 -0800 Subject: [PATCH 026/539] agent: use testing intention to get valid intentions --- agent/consul/intention_endpoint.go | 9 ++++++--- agent/intentions_endpoint.go | 8 +++++++- agent/intentions_endpoint_test.go | 28 ++++++++++++---------------- agent/structs/testing_intention.go | 1 + 4 files changed, 26 insertions(+), 20 deletions(-) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index c3723d708..825c89c59 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -99,9 +99,12 @@ func (s *Intention) Apply( args.Intention.DestinationNS = structs.IntentionDefaultNamespace } - // Validate - if err := args.Intention.Validate(); err != nil { - return err + // Validate. We do not validate on delete since it is valid to only + // send an ID in that case. + if args.Op != structs.IntentionOpDelete { + if err := args.Intention.Validate(); err != nil { + return err + } } // Commit diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go index 39cf3e50b..72a1cba67 100644 --- a/agent/intentions_endpoint.go +++ b/agent/intentions_endpoint.go @@ -173,7 +173,13 @@ func (s *HTTPServer) IntentionSpecificGet(id string, resp http.ResponseWriter, r return nil, err } - // TODO: validate length + // This shouldn't happen since the called API documents it shouldn't, + // but we check since the alternative if it happens is a panic. 
+ if len(reply.Intentions) == 0 { + resp.WriteHeader(http.StatusNotFound) + return nil, nil + } + return reply.Intentions[0], nil } diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go index a1c0413ed..c236491e8 100644 --- a/agent/intentions_endpoint_test.go +++ b/agent/intentions_endpoint_test.go @@ -43,8 +43,10 @@ func TestIntentionsList_values(t *testing.T) { req := structs.IntentionRequest{ Datacenter: "dc1", Op: structs.IntentionOpCreate, - Intention: &structs.Intention{SourceName: v}, + Intention: structs.TestIntention(t), } + req.Intention.SourceName = v + var reply string if err := a.RPC("Intention.Apply", &req, &reply); err != nil { t.Fatalf("err: %s", err) @@ -93,13 +95,10 @@ func TestIntentionsMatch_basic(t *testing.T) { ixn := structs.IntentionRequest{ Datacenter: "dc1", Op: structs.IntentionOpCreate, - Intention: &structs.Intention{ - SourceNS: "default", - SourceName: "test", - DestinationNS: v[0], - DestinationName: v[1], - }, + Intention: structs.TestIntention(t), } + ixn.Intention.DestinationNS = v[0] + ixn.Intention.DestinationName = v[1] // Create var reply string @@ -198,7 +197,8 @@ func TestIntentionsCreate_good(t *testing.T) { defer a.Shutdown() // Make sure an empty list is non-nil. - args := &structs.Intention{SourceName: "foo"} + args := structs.TestIntention(t) + args.SourceName = "foo" req, _ := http.NewRequest("POST", "/v1/connect/intentions", jsonReader(args)) resp := httptest.NewRecorder() obj, err := a.srv.IntentionCreate(resp, req) @@ -238,12 +238,7 @@ func TestIntentionsSpecificGet_good(t *testing.T) { defer a.Shutdown() // The intention - ixn := &structs.Intention{ - SourceNS: structs.IntentionDefaultNamespace, - SourceName: "foo", - DestinationNS: structs.IntentionDefaultNamespace, - DestinationName: "bar", - } + ixn := structs.TestIntention(t) // Create an intention directly var reply string @@ -286,7 +281,7 @@ func TestIntentionsSpecificUpdate_good(t *testing.T) { defer a.Shutdown() // The intention - ixn := &structs.Intention{SourceName: "foo"} + ixn := structs.TestIntention(t) // Create an intention directly var reply string @@ -341,7 +336,8 @@ func TestIntentionsSpecificDelete_good(t *testing.T) { defer a.Shutdown() // The intention - ixn := &structs.Intention{SourceName: "foo"} + ixn := structs.TestIntention(t) + ixn.SourceName = "foo" // Create an intention directly var reply string diff --git a/agent/structs/testing_intention.go b/agent/structs/testing_intention.go index 930e27869..a946a3243 100644 --- a/agent/structs/testing_intention.go +++ b/agent/structs/testing_intention.go @@ -13,5 +13,6 @@ func TestIntention(t testing.T) *Intention { DestinationName: "db", Action: IntentionActionAllow, SourceType: IntentionSourceConsul, + Meta: map[string]string{}, } } From 027dad867288efe7c449342d33bf2640e9f29188 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 3 Mar 2018 10:13:16 -0800 Subject: [PATCH 027/539] agent/consul/state: remove TODO --- agent/consul/state/intention.go | 2 -- 1 file changed, 2 deletions(-) diff --git a/agent/consul/state/intention.go b/agent/consul/state/intention.go index b737318ee..3e83af4d1 100644 --- a/agent/consul/state/intention.go +++ b/agent/consul/state/intention.go @@ -245,8 +245,6 @@ func (s *Store) IntentionMatch(ws memdb.WatchSet, args *structs.IntentionQueryMa } } - // TODO: filter for uniques - // Sort the results by precedence sort.Sort(structs.IntentionPrecedenceSorter(ixns)) From 3a00564411478a0ce4e3af05c0dcdae990c70392 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto 
Date: Sun, 4 Mar 2018 19:10:19 -0800 Subject: [PATCH 028/539] agent/consul/state: need to set Meta for intentions for tests --- agent/consul/state/intention_test.go | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/agent/consul/state/intention_test.go b/agent/consul/state/intention_test.go index dd7d9fcdf..7f5376509 100644 --- a/agent/consul/state/intention_test.go +++ b/agent/consul/state/intention_test.go @@ -32,7 +32,8 @@ func TestStore_IntentionSetGet_basic(t *testing.T) { // Build a valid intention ixn := &structs.Intention{ - ID: testUUID(), + ID: testUUID(), + Meta: map[string]string{}, } // Inserting a with empty ID is disallowed. @@ -50,7 +51,8 @@ func TestStore_IntentionSetGet_basic(t *testing.T) { // Read it back out and verify it. expected := &structs.Intention{ - ID: ixn.ID, + ID: ixn.ID, + Meta: map[string]string{}, RaftIndex: structs.RaftIndex{ CreateIndex: 1, ModifyIndex: 1, @@ -264,10 +266,12 @@ func TestStore_IntentionsList(t *testing.T) { // Create some intentions ixns := structs.Intentions{ &structs.Intention{ - ID: testUUID(), + ID: testUUID(), + Meta: map[string]string{}, }, &structs.Intention{ - ID: testUUID(), + ID: testUUID(), + Meta: map[string]string{}, }, } @@ -288,14 +292,16 @@ func TestStore_IntentionsList(t *testing.T) { // Read it back and verify. expected := structs.Intentions{ &structs.Intention{ - ID: ixns[0].ID, + ID: ixns[0].ID, + Meta: map[string]string{}, RaftIndex: structs.RaftIndex{ CreateIndex: 1, ModifyIndex: 1, }, }, &structs.Intention{ - ID: ixns[1].ID, + ID: ixns[1].ID, + Meta: map[string]string{}, RaftIndex: structs.RaftIndex{ CreateIndex: 2, ModifyIndex: 2, From 67b017c95c0b1eb246ed26cc60777d381a234c45 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 4 Mar 2018 19:14:19 -0800 Subject: [PATCH 029/539] agent/consul/fsm: switch tests to use structs.TestIntention --- agent/consul/fsm/commands_oss_test.go | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-) diff --git a/agent/consul/fsm/commands_oss_test.go b/agent/consul/fsm/commands_oss_test.go index b679cb313..d18d7651e 100644 --- a/agent/consul/fsm/commands_oss_test.go +++ b/agent/consul/fsm/commands_oss_test.go @@ -1161,14 +1161,9 @@ func TestFSM_Intention_CRUD(t *testing.T) { ixn := structs.IntentionRequest{ Datacenter: "dc1", Op: structs.IntentionOpCreate, - Intention: &structs.Intention{ - ID: generateUUID(), - SourceNS: "default", - SourceName: "web", - DestinationNS: "default", - DestinationName: "db", - }, + Intention: structs.TestIntention(t), } + ixn.Intention.ID = generateUUID() { buf, err := structs.Encode(structs.IntentionRequestType, ixn) From 6f33b2d0703d92dc465a6257fe946c72b3b64117 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 6 Mar 2018 09:04:44 -0800 Subject: [PATCH 030/539] agent: use UTC time for intention times, move empty list check to agent/consul --- agent/consul/intention_endpoint.go | 8 ++++++-- agent/consul/intention_endpoint_test.go | 3 +++ agent/intentions_endpoint.go | 4 ---- 3 files changed, 9 insertions(+), 6 deletions(-) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 825c89c59..6653d5502 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -67,7 +67,7 @@ func (s *Intention) Apply( } // Set the created at - args.Intention.CreatedAt = time.Now() + args.Intention.CreatedAt = time.Now().UTC() } *reply = args.Intention.ID @@ -84,7 +84,7 @@ func (s *Intention) Apply( } // We always update the updatedat field. 
This has no effect for deletion. - args.Intention.UpdatedAt = time.Now() + args.Intention.UpdatedAt = time.Now().UTC() // Default source type if args.Intention.SourceType == "" { @@ -169,6 +169,10 @@ func (s *Intention) List( } reply.Index, reply.Intentions = index, ixns + if reply.Intentions == nil { + reply.Intentions = make(structs.Intentions, 0) + } + // filterACL return nil }, diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 3c38e695a..751b7894c 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -384,6 +384,9 @@ func TestIntentionList(t *testing.T) { if err := msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp); err != nil { t.Fatalf("err: %v", err) } + if resp.Intentions == nil { + t.Fatal("should not be nil") + } if len(resp.Intentions) != 0 { t.Fatalf("bad: %v", resp) diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go index 72a1cba67..5196f06c5 100644 --- a/agent/intentions_endpoint.go +++ b/agent/intentions_endpoint.go @@ -37,10 +37,6 @@ func (s *HTTPServer) IntentionList(resp http.ResponseWriter, req *http.Request) return nil, err } - // Use empty list instead of nil. - if reply.Intentions == nil { - reply.Intentions = make(structs.Intentions, 0) - } return reply.Intentions, nil } From f07340e94f4b686d682e9b6519b4c04216323ae4 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 6 Mar 2018 09:31:21 -0800 Subject: [PATCH 031/539] agent/consul/fsm,state: snapshot/restore for intentions --- agent/consul/fsm/snapshot_oss.go | 33 ++++++++ agent/consul/fsm/snapshot_oss_test.go | 23 ++++++ agent/consul/state/intention.go | 28 +++++++ agent/consul/state/intention_test.go | 105 ++++++++++++++++++++++++++ 4 files changed, 189 insertions(+) diff --git a/agent/consul/fsm/snapshot_oss.go b/agent/consul/fsm/snapshot_oss.go index be7bfc5a8..1dde3ab0b 100644 --- a/agent/consul/fsm/snapshot_oss.go +++ b/agent/consul/fsm/snapshot_oss.go @@ -20,6 +20,7 @@ func init() { registerRestorer(structs.CoordinateBatchUpdateType, restoreCoordinates) registerRestorer(structs.PreparedQueryRequestType, restorePreparedQuery) registerRestorer(structs.AutopilotRequestType, restoreAutopilot) + registerRestorer(structs.IntentionRequestType, restoreIntention) } func persistOSS(s *snapshot, sink raft.SnapshotSink, encoder *codec.Encoder) error { @@ -44,6 +45,9 @@ func persistOSS(s *snapshot, sink raft.SnapshotSink, encoder *codec.Encoder) err if err := s.persistAutopilot(sink, encoder); err != nil { return err } + if err := s.persistIntentions(sink, encoder); err != nil { + return err + } return nil } @@ -258,6 +262,24 @@ func (s *snapshot) persistAutopilot(sink raft.SnapshotSink, return nil } +func (s *snapshot) persistIntentions(sink raft.SnapshotSink, + encoder *codec.Encoder) error { + ixns, err := s.state.Intentions() + if err != nil { + return err + } + + for _, ixn := range ixns { + if _, err := sink.Write([]byte{byte(structs.IntentionRequestType)}); err != nil { + return err + } + if err := encoder.Encode(ixn); err != nil { + return err + } + } + return nil +} + func restoreRegistration(header *snapshotHeader, restore *state.Restore, decoder *codec.Decoder) error { var req structs.RegisterRequest if err := decoder.Decode(&req); err != nil { @@ -364,3 +386,14 @@ func restoreAutopilot(header *snapshotHeader, restore *state.Restore, decoder *c } return nil } + +func restoreIntention(header *snapshotHeader, restore *state.Restore, decoder *codec.Decoder) error { + var req 
structs.Intention + if err := decoder.Decode(&req); err != nil { + return err + } + if err := restore.Intention(&req); err != nil { + return err + } + return nil +} diff --git a/agent/consul/fsm/snapshot_oss_test.go b/agent/consul/fsm/snapshot_oss_test.go index 8b8544420..759f825b1 100644 --- a/agent/consul/fsm/snapshot_oss_test.go +++ b/agent/consul/fsm/snapshot_oss_test.go @@ -98,6 +98,17 @@ func TestFSM_SnapshotRestore_OSS(t *testing.T) { t.Fatalf("err: %s", err) } + // Intentions + ixn := structs.TestIntention(t) + ixn.ID = generateUUID() + ixn.RaftIndex = structs.RaftIndex{ + CreateIndex: 14, + ModifyIndex: 14, + } + if err := fsm.state.IntentionSet(14, ixn); err != nil { + t.Fatalf("err: %s", err) + } + // Snapshot snap, err := fsm.Snapshot() if err != nil { @@ -260,6 +271,18 @@ func TestFSM_SnapshotRestore_OSS(t *testing.T) { t.Fatalf("bad: %#v, %#v", restoredConf, autopilotConf) } + // Verify intentions are restored. + _, ixns, err := fsm2.state.Intentions(nil) + if err != nil { + t.Fatalf("err: %s", err) + } + if len(ixns) != 1 { + t.Fatalf("bad: %#v", ixns) + } + if !reflect.DeepEqual(ixns[0], ixn) { + t.Fatalf("bad: %#v", ixns[0]) + } + // Snapshot snap, err = fsm2.Snapshot() if err != nil { diff --git a/agent/consul/state/intention.go b/agent/consul/state/intention.go index 3e83af4d1..bc8bb0213 100644 --- a/agent/consul/state/intention.go +++ b/agent/consul/state/intention.go @@ -68,6 +68,34 @@ func init() { registerSchema(intentionsTableSchema) } +// Intentions is used to pull all the intentions from the snapshot. +func (s *Snapshot) Intentions() (structs.Intentions, error) { + ixns, err := s.tx.Get(intentionsTableName, "id") + if err != nil { + return nil, err + } + + var ret structs.Intentions + for wrapped := ixns.Next(); wrapped != nil; wrapped = ixns.Next() { + ret = append(ret, wrapped.(*structs.Intention)) + } + + return ret, nil +} + +// Intention is used when restoring from a snapshot. +func (s *Restore) Intention(ixn *structs.Intention) error { + // Insert the intention + if err := s.tx.Insert(intentionsTableName, ixn); err != nil { + return fmt.Errorf("failed restoring intention: %s", err) + } + if err := indexUpdateMaxTxn(s.tx, ixn.ModifyIndex, intentionsTableName); err != nil { + return fmt.Errorf("failed updating index: %s", err) + } + + return nil +} + // Intentions returns the list of all intentions. func (s *Store) Intentions(ws memdb.WatchSet) (uint64, structs.Intentions, error) { tx := s.db.Txn(false) diff --git a/agent/consul/state/intention_test.go b/agent/consul/state/intention_test.go index 7f5376509..eb56ff04b 100644 --- a/agent/consul/state/intention_test.go +++ b/agent/consul/state/intention_test.go @@ -455,3 +455,108 @@ func TestStore_IntentionMatch_table(t *testing.T) { }) } } + +func TestStore_Intention_Snapshot_Restore(t *testing.T) { + s := testStateStore(t) + + // Create some intentions. + ixns := structs.Intentions{ + &structs.Intention{ + DestinationName: "foo", + }, + &structs.Intention{ + DestinationName: "bar", + }, + &structs.Intention{ + DestinationName: "baz", + }, + } + + // Force the sort order of the UUIDs before we create them so the + // order is deterministic. + id := testUUID() + ixns[0].ID = "a" + id[1:] + ixns[1].ID = "b" + id[1:] + ixns[2].ID = "c" + id[1:] + + // Now create + for i, ixn := range ixns { + if err := s.IntentionSet(uint64(4+i), ixn); err != nil { + t.Fatalf("err: %s", err) + } + } + + // Snapshot the queries. + snap := s.Snapshot() + defer snap.Close() + + // Alter the real state store. 
+ if err := s.IntentionDelete(7, ixns[0].ID); err != nil { + t.Fatalf("err: %s", err) + } + + // Verify the snapshot. + if idx := snap.LastIndex(); idx != 6 { + t.Fatalf("bad index: %d", idx) + } + expected := structs.Intentions{ + &structs.Intention{ + ID: ixns[0].ID, + DestinationName: "foo", + Meta: map[string]string{}, + RaftIndex: structs.RaftIndex{ + CreateIndex: 4, + ModifyIndex: 4, + }, + }, + &structs.Intention{ + ID: ixns[1].ID, + DestinationName: "bar", + Meta: map[string]string{}, + RaftIndex: structs.RaftIndex{ + CreateIndex: 5, + ModifyIndex: 5, + }, + }, + &structs.Intention{ + ID: ixns[2].ID, + DestinationName: "baz", + Meta: map[string]string{}, + RaftIndex: structs.RaftIndex{ + CreateIndex: 6, + ModifyIndex: 6, + }, + }, + } + dump, err := snap.Intentions() + if err != nil { + t.Fatalf("err: %s", err) + } + if !reflect.DeepEqual(dump, expected) { + t.Fatalf("bad: %#v", dump[0]) + } + + // Restore the values into a new state store. + func() { + s := testStateStore(t) + restore := s.Restore() + for _, ixn := range dump { + if err := restore.Intention(ixn); err != nil { + t.Fatalf("err: %s", err) + } + } + restore.Commit() + + // Read the restored values back out and verify that they match. + idx, actual, err := s.Intentions(nil) + if err != nil { + t.Fatalf("err: %s", err) + } + if idx != 6 { + t.Fatalf("bad index: %d", idx) + } + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad: %v", actual) + } + }() +} From 1d0b4ceedbd5cf397d3439fa98e265ff790f26b8 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 6 Mar 2018 10:35:20 -0800 Subject: [PATCH 032/539] agent: convert all intention tests to testify/assert --- agent/consul/fsm/commands_oss_test.go | 58 ++---- agent/consul/fsm/snapshot_oss_test.go | 19 +- agent/consul/intention_endpoint_test.go | 187 ++++++------------ agent/consul/state/intention_test.go | 247 +++++++----------------- agent/intentions_endpoint_test.go | 179 ++++++----------- agent/structs/intention_test.go | 19 +- 6 files changed, 218 insertions(+), 491 deletions(-) diff --git a/agent/consul/fsm/commands_oss_test.go b/agent/consul/fsm/commands_oss_test.go index d18d7651e..acf67c5fb 100644 --- a/agent/consul/fsm/commands_oss_test.go +++ b/agent/consul/fsm/commands_oss_test.go @@ -15,6 +15,7 @@ import ( "github.com/hashicorp/go-uuid" "github.com/hashicorp/serf/coordinate" "github.com/pascaldekloe/goe/verify" + "github.com/stretchr/testify/assert" ) func generateUUID() (ret string) { @@ -1152,10 +1153,9 @@ func TestFSM_Autopilot(t *testing.T) { func TestFSM_Intention_CRUD(t *testing.T) { t.Parallel() + assert := assert.New(t) fsm, err := New(nil, os.Stderr) - if err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(err) // Create a new intention. ixn := structs.IntentionRequest{ @@ -1167,28 +1167,19 @@ func TestFSM_Intention_CRUD(t *testing.T) { { buf, err := structs.Encode(structs.IntentionRequestType, ixn) - if err != nil { - t.Fatalf("err: %v", err) - } - resp := fsm.Apply(makeLog(buf)) - if resp != nil { - t.Fatalf("resp: %v", resp) - } + assert.Nil(err) + assert.Nil(fsm.Apply(makeLog(buf))) } // Verify it's in the state store. 
{ _, actual, err := fsm.state.IntentionGet(nil, ixn.Intention.ID) - if err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(err) actual.CreateIndex, actual.ModifyIndex = 0, 0 actual.CreatedAt = ixn.Intention.CreatedAt actual.UpdatedAt = ixn.Intention.UpdatedAt - if !reflect.DeepEqual(actual, ixn.Intention) { - t.Fatalf("bad: %v", actual) - } + assert.Equal(ixn.Intention, actual) } // Make an update @@ -1196,52 +1187,33 @@ func TestFSM_Intention_CRUD(t *testing.T) { ixn.Intention.SourceName = "api" { buf, err := structs.Encode(structs.IntentionRequestType, ixn) - if err != nil { - t.Fatalf("err: %v", err) - } - resp := fsm.Apply(makeLog(buf)) - if resp != nil { - t.Fatalf("resp: %v", resp) - } + assert.Nil(err) + assert.Nil(fsm.Apply(makeLog(buf))) } // Verify the update. { _, actual, err := fsm.state.IntentionGet(nil, ixn.Intention.ID) - if err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(err) actual.CreateIndex, actual.ModifyIndex = 0, 0 actual.CreatedAt = ixn.Intention.CreatedAt actual.UpdatedAt = ixn.Intention.UpdatedAt - if !reflect.DeepEqual(actual, ixn.Intention) { - t.Fatalf("bad: %v", actual) - } + assert.Equal(ixn.Intention, actual) } // Delete ixn.Op = structs.IntentionOpDelete { buf, err := structs.Encode(structs.IntentionRequestType, ixn) - if err != nil { - t.Fatalf("err: %v", err) - } - resp := fsm.Apply(makeLog(buf)) - if resp != nil { - t.Fatalf("resp: %v", resp) - } + assert.Nil(err) + assert.Nil(fsm.Apply(makeLog(buf))) } // Make sure it's gone. { _, actual, err := fsm.state.IntentionGet(nil, ixn.Intention.ID) - if err != nil { - t.Fatalf("err: %s", err) - } - - if actual != nil { - t.Fatalf("bad: %v", actual) - } + assert.Nil(err) + assert.Nil(actual) } } diff --git a/agent/consul/fsm/snapshot_oss_test.go b/agent/consul/fsm/snapshot_oss_test.go index 759f825b1..63f1ab1d3 100644 --- a/agent/consul/fsm/snapshot_oss_test.go +++ b/agent/consul/fsm/snapshot_oss_test.go @@ -13,10 +13,13 @@ import ( "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/lib" "github.com/pascaldekloe/goe/verify" + "github.com/stretchr/testify/assert" ) func TestFSM_SnapshotRestore_OSS(t *testing.T) { t.Parallel() + + assert := assert.New(t) fsm, err := New(nil, os.Stderr) if err != nil { t.Fatalf("err: %v", err) @@ -105,9 +108,7 @@ func TestFSM_SnapshotRestore_OSS(t *testing.T) { CreateIndex: 14, ModifyIndex: 14, } - if err := fsm.state.IntentionSet(14, ixn); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(fsm.state.IntentionSet(14, ixn)) // Snapshot snap, err := fsm.Snapshot() @@ -273,15 +274,9 @@ func TestFSM_SnapshotRestore_OSS(t *testing.T) { // Verify intentions are restored. 
_, ixns, err := fsm2.state.Intentions(nil) - if err != nil { - t.Fatalf("err: %s", err) - } - if len(ixns) != 1 { - t.Fatalf("bad: %#v", ixns) - } - if !reflect.DeepEqual(ixns[0], ixn) { - t.Fatalf("bad: %#v", ixns[0]) - } + assert.Nil(err) + assert.Len(ixns, 1) + assert.Equal(ixn, ixns[0]) // Snapshot snap, err = fsm2.Snapshot() diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 751b7894c..2ba5b04c3 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -2,19 +2,20 @@ package consul import ( "os" - "reflect" - "strings" "testing" "time" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/testrpc" "github.com/hashicorp/net-rpc-msgpackrpc" + "github.com/stretchr/testify/assert" ) // Test basic creation func TestIntentionApply_new(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServer(t) defer os.RemoveAll(dir1) defer s1.Shutdown() @@ -43,12 +44,8 @@ func TestIntentionApply_new(t *testing.T) { now := time.Now() // Create - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } - if reply == "" { - t.Fatal("reply should be non-empty") - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) + assert.NotEmpty(reply) // Read ixn.Intention.ID = reply @@ -58,45 +55,25 @@ func TestIntentionApply_new(t *testing.T) { IntentionID: ixn.Intention.ID, } var resp structs.IndexedIntentions - if err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - if len(resp.Intentions) != 1 { - t.Fatalf("bad: %v", resp) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp)) + assert.Len(resp.Intentions, 1) actual := resp.Intentions[0] - if resp.Index != actual.ModifyIndex { - t.Fatalf("bad index: %d", resp.Index) - } - - // Test CreatedAt - { - timeDiff := actual.CreatedAt.Sub(now) - if timeDiff < 0 || timeDiff > 5*time.Second { - t.Fatalf("should set created at: %s", actual.CreatedAt) - } - } - - // Test UpdatedAt - { - timeDiff := actual.UpdatedAt.Sub(now) - if timeDiff < 0 || timeDiff > 5*time.Second { - t.Fatalf("should set updated at: %s", actual.CreatedAt) - } - } + assert.Equal(resp.Index, actual.ModifyIndex) + assert.WithinDuration(now, actual.CreatedAt, 5*time.Second) + assert.WithinDuration(now, actual.UpdatedAt, 5*time.Second) actual.CreateIndex, actual.ModifyIndex = 0, 0 actual.CreatedAt = ixn.Intention.CreatedAt actual.UpdatedAt = ixn.Intention.UpdatedAt - if !reflect.DeepEqual(actual, ixn.Intention) { - t.Fatalf("bad:\n\n%#v\n\n%#v", actual, ixn.Intention) - } + assert.Equal(ixn.Intention, actual) } } // Test the source type defaults func TestIntentionApply_defaultSourceType(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServer(t) defer os.RemoveAll(dir1) defer s1.Shutdown() @@ -120,12 +97,8 @@ func TestIntentionApply_defaultSourceType(t *testing.T) { var reply string // Create - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } - if reply == "" { - t.Fatal("reply should be non-empty") - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) + assert.NotEmpty(reply) // Read ixn.Intention.ID = reply @@ -135,23 +108,18 @@ func TestIntentionApply_defaultSourceType(t *testing.T) { IntentionID: ixn.Intention.ID, } var resp structs.IndexedIntentions - if err := 
msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - if len(resp.Intentions) != 1 { - t.Fatalf("bad: %v", resp) - } - + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp)) + assert.Len(resp.Intentions, 1) actual := resp.Intentions[0] - if actual.SourceType != structs.IntentionSourceConsul { - t.Fatalf("bad:\n\n%#v\n\n%#v", actual, ixn.Intention) - } + assert.Equal(structs.IntentionSourceConsul, actual.SourceType) } } // Shouldn't be able to create with an ID set func TestIntentionApply_createWithID(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServer(t) defer os.RemoveAll(dir1) defer s1.Shutdown() @@ -173,14 +141,15 @@ func TestIntentionApply_createWithID(t *testing.T) { // Create err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) - if err == nil || !strings.Contains(err.Error(), "ID must be empty") { - t.Fatalf("bad: %v", err) - } + assert.NotNil(err) + assert.Contains(err, "ID must be empty") } // Test basic updating func TestIntentionApply_updateGood(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServer(t) defer os.RemoveAll(dir1) defer s1.Shutdown() @@ -206,12 +175,8 @@ func TestIntentionApply_updateGood(t *testing.T) { var reply string // Create - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } - if reply == "" { - t.Fatal("reply should be non-empty") - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) + assert.NotEmpty(reply) // Read CreatedAt var createdAt time.Time @@ -222,12 +187,8 @@ func TestIntentionApply_updateGood(t *testing.T) { IntentionID: ixn.Intention.ID, } var resp structs.IndexedIntentions - if err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - if len(resp.Intentions) != 1 { - t.Fatalf("bad: %v", resp) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp)) + assert.Len(resp.Intentions, 1) actual := resp.Intentions[0] createdAt = actual.CreatedAt } @@ -239,9 +200,7 @@ func TestIntentionApply_updateGood(t *testing.T) { ixn.Op = structs.IntentionOpUpdate ixn.Intention.ID = reply ixn.Intention.SourceName = "bar" - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) // Read ixn.Intention.ID = reply @@ -251,39 +210,24 @@ func TestIntentionApply_updateGood(t *testing.T) { IntentionID: ixn.Intention.ID, } var resp structs.IndexedIntentions - if err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - if len(resp.Intentions) != 1 { - t.Fatalf("bad: %v", resp) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp)) + assert.Len(resp.Intentions, 1) actual := resp.Intentions[0] - - // Test CreatedAt - if !actual.CreatedAt.Equal(createdAt) { - t.Fatalf("should not modify created at: %s", actual.CreatedAt) - } - - // Test UpdatedAt - { - timeDiff := actual.UpdatedAt.Sub(createdAt) - if timeDiff <= 0 || timeDiff > 5*time.Second { - t.Fatalf("should set updated at: %s", actual.CreatedAt) - } - } + assert.Equal(createdAt, actual.CreatedAt) + assert.WithinDuration(time.Now(), actual.UpdatedAt, 5*time.Second) actual.CreateIndex, actual.ModifyIndex = 0, 0 actual.CreatedAt = ixn.Intention.CreatedAt actual.UpdatedAt = 
ixn.Intention.UpdatedAt - if !reflect.DeepEqual(actual, ixn.Intention) { - t.Fatalf("bad: %v", actual) - } + assert.Equal(ixn.Intention, actual) } } // Shouldn't be able to update a non-existent intention func TestIntentionApply_updateNonExist(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServer(t) defer os.RemoveAll(dir1) defer s1.Shutdown() @@ -305,14 +249,15 @@ func TestIntentionApply_updateNonExist(t *testing.T) { // Create err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) - if err == nil || !strings.Contains(err.Error(), "Cannot modify non-existent intention") { - t.Fatalf("bad: %v", err) - } + assert.NotNil(err) + assert.Contains(err, "Cannot modify non-existent intention") } // Test basic deleting func TestIntentionApply_deleteGood(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServer(t) defer os.RemoveAll(dir1) defer s1.Shutdown() @@ -336,19 +281,13 @@ func TestIntentionApply_deleteGood(t *testing.T) { var reply string // Create - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } - if reply == "" { - t.Fatal("reply should be non-empty") - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) + assert.NotEmpty(reply) // Delete ixn.Op = structs.IntentionOpDelete ixn.Intention.ID = reply - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) // Read ixn.Intention.ID = reply @@ -359,14 +298,15 @@ func TestIntentionApply_deleteGood(t *testing.T) { } var resp structs.IndexedIntentions err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp) - if err == nil || !strings.Contains(err.Error(), ErrIntentionNotFound.Error()) { - t.Fatalf("err: %v", err) - } + assert.NotNil(err) + assert.Contains(err, ErrIntentionNotFound.Error()) } } func TestIntentionList(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServer(t) defer os.RemoveAll(dir1) defer s1.Shutdown() @@ -381,16 +321,9 @@ func TestIntentionList(t *testing.T) { Datacenter: "dc1", } var resp structs.IndexedIntentions - if err := msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - if resp.Intentions == nil { - t.Fatal("should not be nil") - } - - if len(resp.Intentions) != 0 { - t.Fatalf("bad: %v", resp) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp)) + assert.NotNil(resp.Intentions) + assert.Len(resp.Intentions, 0) } } @@ -398,6 +331,8 @@ func TestIntentionList(t *testing.T) { // is tested in the agent/consul/state package. 
func TestIntentionMatch_good(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServer(t) defer os.RemoveAll(dir1) defer s1.Shutdown() @@ -432,9 +367,7 @@ func TestIntentionMatch_good(t *testing.T) { // Create var reply string - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) } } @@ -452,21 +385,13 @@ func TestIntentionMatch_good(t *testing.T) { }, } var resp structs.IndexedIntentionMatches - if err := msgpackrpc.CallWithCodec(codec, "Intention.Match", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - - if len(resp.Matches) != 1 { - t.Fatalf("bad: %#v", resp.Matches) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Match", req, &resp)) + assert.Len(resp.Matches, 1) expected := [][]string{{"foo", "bar"}, {"foo", "*"}, {"*", "*"}} var actual [][]string for _, ixn := range resp.Matches[0] { actual = append(actual, []string{ixn.DestinationNS, ixn.DestinationName}) } - - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", actual, expected) - } + assert.Equal(expected, actual) } diff --git a/agent/consul/state/intention_test.go b/agent/consul/state/intention_test.go index eb56ff04b..d4c63647a 100644 --- a/agent/consul/state/intention_test.go +++ b/agent/consul/state/intention_test.go @@ -1,34 +1,34 @@ package state import ( - "reflect" "testing" "time" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-memdb" + "github.com/stretchr/testify/assert" ) func TestStore_IntentionGet_none(t *testing.T) { + assert := assert.New(t) s := testStateStore(t) // Querying with no results returns nil. ws := memdb.NewWatchSet() idx, res, err := s.IntentionGet(ws, testUUID()) - if idx != 0 || res != nil || err != nil { - t.Fatalf("expected (0, nil, nil), got: (%d, %#v, %#v)", idx, res, err) - } + assert.Equal(idx, uint64(0)) + assert.Nil(res) + assert.Nil(err) } func TestStore_IntentionSetGet_basic(t *testing.T) { + assert := assert.New(t) s := testStateStore(t) // Call Get to populate the watch set ws := memdb.NewWatchSet() _, _, err := s.IntentionGet(ws, testUUID()) - if err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(err) // Build a valid intention ixn := &structs.Intention{ @@ -37,17 +37,11 @@ func TestStore_IntentionSetGet_basic(t *testing.T) { } // Inserting a with empty ID is disallowed. - if err := s.IntentionSet(1, ixn); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(s.IntentionSet(1, ixn)) // Make sure the index got updated. - if idx := s.maxIndex(intentionsTableName); idx != 1 { - t.Fatalf("bad index: %d", idx) - } - if !watchFired(ws) { - t.Fatalf("bad") - } + assert.Equal(s.maxIndex(intentionsTableName), uint64(1)) + assert.True(watchFired(ws), "watch fired") // Read it back out and verify it. 
expected := &structs.Intention{ @@ -61,70 +55,48 @@ func TestStore_IntentionSetGet_basic(t *testing.T) { ws = memdb.NewWatchSet() idx, actual, err := s.IntentionGet(ws, ixn.ID) - if err != nil { - t.Fatalf("err: %s", err) - } - if idx != expected.CreateIndex { - t.Fatalf("bad index: %d", idx) - } - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("bad: %v", actual) - } + assert.Nil(err) + assert.Equal(expected.CreateIndex, idx) + assert.Equal(expected, actual) // Change a value and test updating ixn.SourceNS = "foo" - if err := s.IntentionSet(2, ixn); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(s.IntentionSet(2, ixn)) // Make sure the index got updated. - if idx := s.maxIndex(intentionsTableName); idx != 2 { - t.Fatalf("bad index: %d", idx) - } - if !watchFired(ws) { - t.Fatalf("bad") - } + assert.Equal(s.maxIndex(intentionsTableName), uint64(2)) + assert.True(watchFired(ws), "watch fired") // Read it back and verify the data was updated expected.SourceNS = ixn.SourceNS expected.ModifyIndex = 2 ws = memdb.NewWatchSet() idx, actual, err = s.IntentionGet(ws, ixn.ID) - if err != nil { - t.Fatalf("err: %s", err) - } - if idx != expected.ModifyIndex { - t.Fatalf("bad index: %d", idx) - } - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("bad: %#v", actual) - } + assert.Nil(err) + assert.Equal(expected.ModifyIndex, idx) + assert.Equal(expected, actual) } func TestStore_IntentionSet_emptyId(t *testing.T) { + assert := assert.New(t) s := testStateStore(t) ws := memdb.NewWatchSet() _, _, err := s.IntentionGet(ws, testUUID()) - if err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(err) // Inserting a with empty ID is disallowed. - if err := s.IntentionSet(1, &structs.Intention{}); err == nil { - t.Fatalf("expected %#v, got: %#v", ErrMissingIntentionID, err) - } + err = s.IntentionSet(1, &structs.Intention{}) + assert.NotNil(err) + assert.Contains(err.Error(), ErrMissingIntentionID.Error()) // Index is not updated if nothing is saved. 
- if idx := s.maxIndex(intentionsTableName); idx != 0 { - t.Fatalf("bad index: %d", idx) - } - if watchFired(ws) { - t.Fatalf("bad") - } + assert.Equal(s.maxIndex(intentionsTableName), uint64(0)) + assert.False(watchFired(ws), "watch fired") } func TestStore_IntentionSet_updateCreatedAt(t *testing.T) { + assert := assert.New(t) s := testStateStore(t) // Build a valid intention @@ -135,28 +107,21 @@ func TestStore_IntentionSet_updateCreatedAt(t *testing.T) { } // Insert - if err := s.IntentionSet(1, &ixn); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(s.IntentionSet(1, &ixn)) // Change a value and test updating ixnUpdate := ixn ixnUpdate.CreatedAt = now.Add(10 * time.Second) - if err := s.IntentionSet(2, &ixnUpdate); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(s.IntentionSet(2, &ixnUpdate)) // Read it back and verify _, actual, err := s.IntentionGet(nil, ixn.ID) - if err != nil { - t.Fatalf("err: %s", err) - } - if !actual.CreatedAt.Equal(now) { - t.Fatalf("bad: %#v", actual) - } + assert.Nil(err) + assert.Equal(now, actual.CreatedAt) } func TestStore_IntentionSet_metaNil(t *testing.T) { + assert := assert.New(t) s := testStateStore(t) // Build a valid intention @@ -165,21 +130,16 @@ func TestStore_IntentionSet_metaNil(t *testing.T) { } // Insert - if err := s.IntentionSet(1, &ixn); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(s.IntentionSet(1, &ixn)) // Read it back and verify _, actual, err := s.IntentionGet(nil, ixn.ID) - if err != nil { - t.Fatalf("err: %s", err) - } - if actual.Meta == nil { - t.Fatal("meta should be non-nil") - } + assert.Nil(err) + assert.NotNil(actual.Meta) } func TestStore_IntentionSet_metaSet(t *testing.T) { + assert := assert.New(t) s := testStateStore(t) // Build a valid intention @@ -189,79 +149,55 @@ func TestStore_IntentionSet_metaSet(t *testing.T) { } // Insert - if err := s.IntentionSet(1, &ixn); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(s.IntentionSet(1, &ixn)) // Read it back and verify _, actual, err := s.IntentionGet(nil, ixn.ID) - if err != nil { - t.Fatalf("err: %s", err) - } - if !reflect.DeepEqual(actual.Meta, ixn.Meta) { - t.Fatalf("bad: %#v", actual) - } + assert.Nil(err) + assert.Equal(ixn.Meta, actual.Meta) } func TestStore_IntentionDelete(t *testing.T) { + assert := assert.New(t) s := testStateStore(t) // Call Get to populate the watch set ws := memdb.NewWatchSet() _, _, err := s.IntentionGet(ws, testUUID()) - if err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(err) // Create ixn := &structs.Intention{ID: testUUID()} - if err := s.IntentionSet(1, ixn); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(s.IntentionSet(1, ixn)) // Make sure the index got updated. - if idx := s.maxIndex(intentionsTableName); idx != 1 { - t.Fatalf("bad index: %d", idx) - } - if !watchFired(ws) { - t.Fatalf("bad") - } + assert.Equal(s.maxIndex(intentionsTableName), uint64(1)) + assert.True(watchFired(ws), "watch fired") // Delete - if err := s.IntentionDelete(2, ixn.ID); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(s.IntentionDelete(2, ixn.ID)) // Make sure the index got updated. - if idx := s.maxIndex(intentionsTableName); idx != 2 { - t.Fatalf("bad index: %d", idx) - } - if !watchFired(ws) { - t.Fatalf("bad") - } + assert.Equal(s.maxIndex(intentionsTableName), uint64(2)) + assert.True(watchFired(ws), "watch fired") // Sanity check to make sure it's not there. 
idx, actual, err := s.IntentionGet(nil, ixn.ID) - if err != nil { - t.Fatalf("err: %s", err) - } - if idx != 2 { - t.Fatalf("bad index: %d", idx) - } - if actual != nil { - t.Fatalf("bad: %v", actual) - } + assert.Nil(err) + assert.Equal(idx, uint64(2)) + assert.Nil(actual) } func TestStore_IntentionsList(t *testing.T) { + assert := assert.New(t) s := testStateStore(t) // Querying with no results returns nil. ws := memdb.NewWatchSet() idx, res, err := s.Intentions(ws) - if idx != 0 || res != nil || err != nil { - t.Fatalf("expected (0, nil, nil), got: (%d, %#v, %#v)", idx, res, err) - } + assert.Nil(err) + assert.Nil(res) + assert.Equal(idx, uint64(0)) // Create some intentions ixns := structs.Intentions{ @@ -281,13 +217,9 @@ func TestStore_IntentionsList(t *testing.T) { // Create for i, ixn := range ixns { - if err := s.IntentionSet(uint64(1+i), ixn); err != nil { - t.Fatalf("err: %s", err) - } - } - if !watchFired(ws) { - t.Fatalf("bad") + assert.Nil(s.IntentionSet(uint64(1+i), ixn)) } + assert.True(watchFired(ws), "watch fired") // Read it back and verify. expected := structs.Intentions{ @@ -309,15 +241,9 @@ func TestStore_IntentionsList(t *testing.T) { }, } idx, actual, err := s.Intentions(nil) - if err != nil { - t.Fatalf("err: %s", err) - } - if idx != 2 { - t.Fatalf("bad index: %d", idx) - } - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("bad: %v", actual) - } + assert.Nil(err) + assert.Equal(idx, uint64(2)) + assert.Equal(expected, actual) } // Test the matrix of match logic. @@ -386,6 +312,7 @@ func TestStore_IntentionMatch_table(t *testing.T) { // test both cases. testRunner := func(t *testing.T, tc testCase, typ structs.IntentionMatchType) { // Insert the set + assert := assert.New(t) s := testStateStore(t) var idx uint64 = 1 for _, v := range tc.Insert { @@ -399,10 +326,7 @@ func TestStore_IntentionMatch_table(t *testing.T) { ixn.SourceName = v[1] } - err := s.IntentionSet(idx, ixn) - if err != nil { - t.Fatalf("error inserting: %s", err) - } + assert.Nil(s.IntentionSet(idx, ixn)) idx++ } @@ -418,14 +342,10 @@ func TestStore_IntentionMatch_table(t *testing.T) { // Match _, matches, err := s.IntentionMatch(nil, args) - if err != nil { - t.Fatalf("error matching: %s", err) - } + assert.Nil(err) // Should have equal lengths - if len(matches) != len(tc.Expected) { - t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", tc.Expected, matches) - } + assert.Len(matches, len(tc.Expected)) // Verify matches for i, expected := range tc.Expected { @@ -439,9 +359,7 @@ func TestStore_IntentionMatch_table(t *testing.T) { } } - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", actual, expected) - } + assert.Equal(expected, actual) } } @@ -457,6 +375,7 @@ func TestStore_IntentionMatch_table(t *testing.T) { } func TestStore_Intention_Snapshot_Restore(t *testing.T) { + assert := assert.New(t) s := testStateStore(t) // Create some intentions. @@ -481,9 +400,7 @@ func TestStore_Intention_Snapshot_Restore(t *testing.T) { // Now create for i, ixn := range ixns { - if err := s.IntentionSet(uint64(4+i), ixn); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(s.IntentionSet(uint64(4+i), ixn)) } // Snapshot the queries. @@ -491,14 +408,10 @@ func TestStore_Intention_Snapshot_Restore(t *testing.T) { defer snap.Close() // Alter the real state store. - if err := s.IntentionDelete(7, ixns[0].ID); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(s.IntentionDelete(7, ixns[0].ID)) // Verify the snapshot. 
- if idx := snap.LastIndex(); idx != 6 { - t.Fatalf("bad index: %d", idx) - } + assert.Equal(snap.LastIndex(), uint64(6)) expected := structs.Intentions{ &structs.Intention{ ID: ixns[0].ID, @@ -529,34 +442,22 @@ func TestStore_Intention_Snapshot_Restore(t *testing.T) { }, } dump, err := snap.Intentions() - if err != nil { - t.Fatalf("err: %s", err) - } - if !reflect.DeepEqual(dump, expected) { - t.Fatalf("bad: %#v", dump[0]) - } + assert.Nil(err) + assert.Equal(expected, dump) // Restore the values into a new state store. func() { s := testStateStore(t) restore := s.Restore() for _, ixn := range dump { - if err := restore.Intention(ixn); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(restore.Intention(ixn)) } restore.Commit() // Read the restored values back out and verify that they match. idx, actual, err := s.Intentions(nil) - if err != nil { - t.Fatalf("err: %s", err) - } - if idx != 6 { - t.Fatalf("bad index: %d", idx) - } - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("bad: %v", actual) - } + assert.Nil(err) + assert.Equal(idx, uint64(6)) + assert.Equal(expected, actual) }() } diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go index c236491e8..4df0bf312 100644 --- a/agent/intentions_endpoint_test.go +++ b/agent/intentions_endpoint_test.go @@ -4,17 +4,17 @@ import ( "fmt" "net/http" "net/http/httptest" - "reflect" "sort" - "strings" "testing" "github.com/hashicorp/consul/agent/structs" + "github.com/stretchr/testify/assert" ) func TestIntentionsList_empty(t *testing.T) { t.Parallel() + assert := assert.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() @@ -22,19 +22,17 @@ func TestIntentionsList_empty(t *testing.T) { req, _ := http.NewRequest("GET", "/v1/connect/intentions", nil) resp := httptest.NewRecorder() obj, err := a.srv.IntentionList(resp, req) - if err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(err) value := obj.(structs.Intentions) - if value == nil || len(value) != 0 { - t.Fatalf("bad: %v", value) - } + assert.NotNil(value) + assert.Len(value, 0) } func TestIntentionsList_values(t *testing.T) { t.Parallel() + assert := assert.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() @@ -48,35 +46,28 @@ func TestIntentionsList_values(t *testing.T) { req.Intention.SourceName = v var reply string - if err := a.RPC("Intention.Apply", &req, &reply); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(a.RPC("Intention.Apply", &req, &reply)) } // Request req, _ := http.NewRequest("GET", "/v1/connect/intentions", nil) resp := httptest.NewRecorder() obj, err := a.srv.IntentionList(resp, req) - if err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(err) value := obj.(structs.Intentions) - if len(value) != 2 { - t.Fatalf("bad: %v", value) - } + assert.Len(value, 2) expected := []string{"bar", "foo"} actual := []string{value[0].SourceName, value[1].SourceName} sort.Strings(actual) - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("bad: %#v", actual) - } + assert.Equal(expected, actual) } func TestIntentionsMatch_basic(t *testing.T) { t.Parallel() + assert := assert.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() @@ -102,9 +93,7 @@ func TestIntentionsMatch_basic(t *testing.T) { // Create var reply string - if err := a.RPC("Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(a.RPC("Intention.Apply", &ixn, &reply)) } } @@ -113,14 +102,10 @@ func TestIntentionsMatch_basic(t *testing.T) { "/v1/connect/intentions/match?by=destination&name=foo/bar", nil) resp := 
httptest.NewRecorder() obj, err := a.srv.IntentionMatch(resp, req) - if err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(err) value := obj.(map[string]structs.Intentions) - if len(value) != 1 { - t.Fatalf("bad: %v", value) - } + assert.Len(value, 1) var actual [][]string expected := [][]string{{"foo", "bar"}, {"foo", "*"}, {"*", "*"}} @@ -128,14 +113,13 @@ func TestIntentionsMatch_basic(t *testing.T) { actual = append(actual, []string{ixn.DestinationNS, ixn.DestinationName}) } - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", actual, expected) - } + assert.Equal(expected, actual) } func TestIntentionsMatch_noBy(t *testing.T) { t.Parallel() + assert := assert.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() @@ -144,17 +128,15 @@ func TestIntentionsMatch_noBy(t *testing.T) { "/v1/connect/intentions/match?name=foo/bar", nil) resp := httptest.NewRecorder() obj, err := a.srv.IntentionMatch(resp, req) - if err == nil || !strings.Contains(err.Error(), "by") { - t.Fatalf("err: %v", err) - } - if obj != nil { - t.Fatal("should have no response") - } + assert.NotNil(err) + assert.Contains(err.Error(), "by") + assert.Nil(obj) } func TestIntentionsMatch_byInvalid(t *testing.T) { t.Parallel() + assert := assert.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() @@ -163,17 +145,15 @@ func TestIntentionsMatch_byInvalid(t *testing.T) { "/v1/connect/intentions/match?by=datacenter", nil) resp := httptest.NewRecorder() obj, err := a.srv.IntentionMatch(resp, req) - if err == nil || !strings.Contains(err.Error(), "'by' parameter") { - t.Fatalf("err: %v", err) - } - if obj != nil { - t.Fatal("should have no response") - } + assert.NotNil(err) + assert.Contains(err.Error(), "'by' parameter") + assert.Nil(obj) } func TestIntentionsMatch_noName(t *testing.T) { t.Parallel() + assert := assert.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() @@ -182,17 +162,15 @@ func TestIntentionsMatch_noName(t *testing.T) { "/v1/connect/intentions/match?by=source", nil) resp := httptest.NewRecorder() obj, err := a.srv.IntentionMatch(resp, req) - if err == nil || !strings.Contains(err.Error(), "'name' not set") { - t.Fatalf("err: %v", err) - } - if obj != nil { - t.Fatal("should have no response") - } + assert.NotNil(err) + assert.Contains(err.Error(), "'name' not set") + assert.Nil(obj) } func TestIntentionsCreate_good(t *testing.T) { t.Parallel() + assert := assert.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() @@ -202,14 +180,10 @@ func TestIntentionsCreate_good(t *testing.T) { req, _ := http.NewRequest("POST", "/v1/connect/intentions", jsonReader(args)) resp := httptest.NewRecorder() obj, err := a.srv.IntentionCreate(resp, req) - if err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(err) value := obj.(intentionCreateResponse) - if value.ID == "" { - t.Fatalf("bad: %v", value) - } + assert.NotEqual("", value.ID) // Read the value { @@ -218,22 +192,17 @@ func TestIntentionsCreate_good(t *testing.T) { IntentionID: value.ID, } var resp structs.IndexedIntentions - if err := a.RPC("Intention.Get", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - if len(resp.Intentions) != 1 { - t.Fatalf("bad: %v", resp) - } + assert.Nil(a.RPC("Intention.Get", req, &resp)) + assert.Len(resp.Intentions, 1) actual := resp.Intentions[0] - if actual.SourceName != "foo" { - t.Fatalf("bad: %#v", actual) - } + assert.Equal("foo", actual.SourceName) } } func TestIntentionsSpecificGet_good(t *testing.T) { t.Parallel() + assert := assert.New(t) a := 
NewTestAgent(t.Name(), "") defer a.Shutdown() @@ -248,35 +217,28 @@ func TestIntentionsSpecificGet_good(t *testing.T) { Op: structs.IntentionOpCreate, Intention: ixn, } - if err := a.RPC("Intention.Apply", &req, &reply); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(a.RPC("Intention.Apply", &req, &reply)) } // Get the value req, _ := http.NewRequest("GET", fmt.Sprintf("/v1/connect/intentions/%s", reply), nil) resp := httptest.NewRecorder() obj, err := a.srv.IntentionSpecific(resp, req) - if err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(err) value := obj.(*structs.Intention) - if value.ID != reply { - t.Fatalf("bad: %v", value) - } + assert.Equal(reply, value.ID) ixn.ID = value.ID ixn.RaftIndex = value.RaftIndex ixn.CreatedAt, ixn.UpdatedAt = value.CreatedAt, value.UpdatedAt - if !reflect.DeepEqual(value, ixn) { - t.Fatalf("bad (got, want):\n\n%#v\n\n%#v", value, ixn) - } + assert.Equal(ixn, value) } func TestIntentionsSpecificUpdate_good(t *testing.T) { t.Parallel() + assert := assert.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() @@ -291,9 +253,7 @@ func TestIntentionsSpecificUpdate_good(t *testing.T) { Op: structs.IntentionOpCreate, Intention: ixn, } - if err := a.RPC("Intention.Apply", &req, &reply); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(a.RPC("Intention.Apply", &req, &reply)) } // Update the intention @@ -302,12 +262,8 @@ func TestIntentionsSpecificUpdate_good(t *testing.T) { req, _ := http.NewRequest("PUT", fmt.Sprintf("/v1/connect/intentions/%s", reply), jsonReader(ixn)) resp := httptest.NewRecorder() obj, err := a.srv.IntentionSpecific(resp, req) - if err != nil { - t.Fatalf("err: %v", err) - } - if obj != nil { - t.Fatalf("obj should be nil: %v", err) - } + assert.Nil(err) + assert.Nil(obj) // Read the value { @@ -316,22 +272,17 @@ func TestIntentionsSpecificUpdate_good(t *testing.T) { IntentionID: reply, } var resp structs.IndexedIntentions - if err := a.RPC("Intention.Get", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - if len(resp.Intentions) != 1 { - t.Fatalf("bad: %v", resp) - } + assert.Nil(a.RPC("Intention.Get", req, &resp)) + assert.Len(resp.Intentions, 1) actual := resp.Intentions[0] - if actual.SourceName != "bar" { - t.Fatalf("bad: %#v", actual) - } + assert.Equal("bar", actual.SourceName) } } func TestIntentionsSpecificDelete_good(t *testing.T) { t.Parallel() + assert := assert.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() @@ -347,9 +298,7 @@ func TestIntentionsSpecificDelete_good(t *testing.T) { Op: structs.IntentionOpCreate, Intention: ixn, } - if err := a.RPC("Intention.Apply", &req, &reply); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(a.RPC("Intention.Apply", &req, &reply)) } // Sanity check that the intention exists @@ -359,28 +308,18 @@ func TestIntentionsSpecificDelete_good(t *testing.T) { IntentionID: reply, } var resp structs.IndexedIntentions - if err := a.RPC("Intention.Get", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - if len(resp.Intentions) != 1 { - t.Fatalf("bad: %v", resp) - } + assert.Nil(a.RPC("Intention.Get", req, &resp)) + assert.Len(resp.Intentions, 1) actual := resp.Intentions[0] - if actual.SourceName != "foo" { - t.Fatalf("bad: %#v", actual) - } + assert.Equal("foo", actual.SourceName) } // Delete the intention req, _ := http.NewRequest("DELETE", fmt.Sprintf("/v1/connect/intentions/%s", reply), nil) resp := httptest.NewRecorder() obj, err := a.srv.IntentionSpecific(resp, req) - if err != nil { - t.Fatalf("err: %v", err) - } - if obj != nil { - 
t.Fatalf("obj should be nil: %v", err) - } + assert.Nil(err) + assert.Nil(obj) // Verify the intention is gone { @@ -390,9 +329,8 @@ func TestIntentionsSpecificDelete_good(t *testing.T) { } var resp structs.IndexedIntentions err := a.RPC("Intention.Get", req, &resp) - if err == nil || !strings.Contains(err.Error(), "not found") { - t.Fatalf("err: %v", err) - } + assert.NotNil(err) + assert.Contains(err.Error(), "not found") } } @@ -429,17 +367,14 @@ func TestParseIntentionMatchEntry(t *testing.T) { for _, tc := range cases { t.Run(tc.Input, func(t *testing.T) { + assert := assert.New(t) actual, err := parseIntentionMatchEntry(tc.Input) - if (err != nil) != tc.Err { - t.Fatalf("err: %s", err) - } + assert.Equal(err != nil, tc.Err, err) if err != nil { return } - if !reflect.DeepEqual(actual, tc.Expected) { - t.Fatalf("bad: %#v", actual) - } + assert.Equal(tc.Expected, actual) }) } } diff --git a/agent/structs/intention_test.go b/agent/structs/intention_test.go index ec0a2de66..9db4ff255 100644 --- a/agent/structs/intention_test.go +++ b/agent/structs/intention_test.go @@ -1,10 +1,11 @@ package structs import ( - "reflect" "sort" "strings" "testing" + + "github.com/stretchr/testify/assert" ) func TestIntentionValidate(t *testing.T) { @@ -108,19 +109,17 @@ func TestIntentionValidate(t *testing.T) { for _, tc := range cases { t.Run(tc.Name, func(t *testing.T) { + assert := assert.New(t) ixn := TestIntention(t) tc.Modify(ixn) err := ixn.Validate() - if (err != nil) != (tc.Err != "") { - t.Fatalf("err: %s", err) - } + assert.Equal(err != nil, tc.Err != "", err) if err == nil { return } - if !strings.Contains(strings.ToLower(err.Error()), strings.ToLower(tc.Err)) { - t.Fatalf("err: %s", err) - } + + assert.Contains(strings.ToLower(err.Error()), strings.ToLower(tc.Err)) }) } } @@ -160,6 +159,8 @@ func TestIntentionPrecedenceSorter(t *testing.T) { for _, tc := range cases { t.Run(tc.Name, func(t *testing.T) { + assert := assert.New(t) + var input Intentions for _, v := range tc.Input { input = append(input, &Intention{ @@ -183,9 +184,7 @@ func TestIntentionPrecedenceSorter(t *testing.T) { v.DestinationName, }) } - if !reflect.DeepEqual(actual, tc.Expected) { - t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", actual, tc.Expected) - } + assert.Equal(tc.Expected, actual) }) } } From 10ebccba4507204f186f8fe4d2c4a07ddc154413 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 3 Mar 2018 16:14:33 -0800 Subject: [PATCH 033/539] acl: parsing intentions in service block --- acl/policy.go | 8 +++++ acl/policy_test.go | 82 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 90 insertions(+) diff --git a/acl/policy.go b/acl/policy.go index b8fc498b6..0796a1aec 100644 --- a/acl/policy.go +++ b/acl/policy.go @@ -73,6 +73,11 @@ type ServicePolicy struct { Name string `hcl:",key"` Policy string Sentinel Sentinel + + // Intentions is the policy for intentions where this service is the + // destination. This may be empty, in which case the Policy determines + // the intentions policy. 
+ Intentions string } func (s *ServicePolicy) GoString() string { @@ -197,6 +202,9 @@ func Parse(rules string, sentinel sentinel.Evaluator) (*Policy, error) { if !isPolicyValid(sp.Policy) { return nil, fmt.Errorf("Invalid service policy: %#v", sp) } + if sp.Intentions != "" && !isPolicyValid(sp.Intentions) { + return nil, fmt.Errorf("Invalid service intentions policy: %#v", sp) + } if err := isSentinelValid(sentinel, sp.Policy, sp.Sentinel); err != nil { return nil, fmt.Errorf("Invalid service Sentinel policy: %#v, got error:%v", sp, err) } diff --git a/acl/policy_test.go b/acl/policy_test.go index 37b8216f5..9d3ae8f69 100644 --- a/acl/policy_test.go +++ b/acl/policy_test.go @@ -6,6 +6,88 @@ import ( "testing" ) +func TestParse_table(t *testing.T) { + // Note that the table tests are newer than other tests. Many of the + // other aspects of policy parsing are tested in older tests below. New + // parsing tests should be added to this table as its easier to maintain. + cases := []struct { + Name string + Input string + Expected *Policy + Err string + }{ + { + "service no intentions", + ` +service "foo" { + policy = "write" +} + `, + &Policy{ + Services: []*ServicePolicy{ + { + Name: "foo", + Policy: "write", + }, + }, + }, + "", + }, + + { + "service intentions", + ` +service "foo" { + policy = "write" + intentions = "read" +} + `, + &Policy{ + Services: []*ServicePolicy{ + { + Name: "foo", + Policy: "write", + Intentions: "read", + }, + }, + }, + "", + }, + + { + "service intention: invalid value", + ` +service "foo" { + policy = "write" + intentions = "foo" +} + `, + nil, + "service intentions", + }, + } + + for _, tc := range cases { + t.Run(tc.Name, func(t *testing.T) { + actual, err := Parse(tc.Input, nil) + if (err != nil) != (tc.Err != "") { + t.Fatalf("err: %s", err) + } + if err != nil { + if !strings.Contains(err.Error(), tc.Err) { + t.Fatalf("err: %s", err) + } + + return + } + + if !reflect.DeepEqual(actual, tc.Expected) { + t.Fatalf("bad: %#v", actual) + } + }) + } +} + func TestACLPolicy_Parse_HCL(t *testing.T) { inp := ` agent "foo" { From 7b3c6fd8bdb2c82d42655f23a260669cf2732122 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 4 Mar 2018 00:38:04 -0800 Subject: [PATCH 034/539] acl: implement IntentionRead/Write methods on ACL interface --- acl/acl.go | 76 +++++++++++++++++++++++++++++++++++++++++++++++++ acl/acl_test.go | 44 ++++++++++++++++++++++++++-- 2 files changed, 118 insertions(+), 2 deletions(-) diff --git a/acl/acl.go b/acl/acl.go index 73bcc4fc3..4ac88f01c 100644 --- a/acl/acl.go +++ b/acl/acl.go @@ -60,6 +60,13 @@ type ACL interface { // EventWrite determines if a specific event may be fired. EventWrite(string) bool + // IntentionRead determines if a specific intention can be read. + IntentionRead(string) bool + + // IntentionWrite determines if a specific intention can be + // created, modified, or deleted. 
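+	// The string argument is the intention's destination service name.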
+ IntentionWrite(string) bool + // KeyList checks for permission to list keys under a prefix KeyList(string) bool @@ -154,6 +161,14 @@ func (s *StaticACL) EventWrite(string) bool { return s.defaultAllow } +func (s *StaticACL) IntentionRead(string) bool { + return s.defaultAllow +} + +func (s *StaticACL) IntentionWrite(string) bool { + return s.defaultAllow +} + func (s *StaticACL) KeyRead(string) bool { return s.defaultAllow } @@ -275,6 +290,9 @@ type PolicyACL struct { // agentRules contains the agent policies agentRules *radix.Tree + // intentionRules contains the service intention policies + intentionRules *radix.Tree + // keyRules contains the key policies keyRules *radix.Tree @@ -308,6 +326,7 @@ func New(parent ACL, policy *Policy, sentinel sentinel.Evaluator) (*PolicyACL, e p := &PolicyACL{ parent: parent, agentRules: radix.New(), + intentionRules: radix.New(), keyRules: radix.New(), nodeRules: radix.New(), serviceRules: radix.New(), @@ -347,6 +366,25 @@ func New(parent ACL, policy *Policy, sentinel sentinel.Evaluator) (*PolicyACL, e sentinelPolicy: sp.Sentinel, } p.serviceRules.Insert(sp.Name, policyRule) + + // Determine the intention. The intention could be blank (not set). + // If the intention is not set, the value depends on the value of + // the service policy. + intention := sp.Intentions + if intention == "" { + switch sp.Policy { + case PolicyRead, PolicyWrite: + intention = PolicyRead + default: + intention = PolicyDeny + } + } + + policyRule = PolicyRule{ + aclPolicy: intention, + sentinelPolicy: sp.Sentinel, + } + p.intentionRules.Insert(sp.Name, policyRule) } // Load the session policy @@ -455,6 +493,44 @@ func (p *PolicyACL) EventWrite(name string) bool { return p.parent.EventWrite(name) } +// IntentionRead checks if writing (creating, updating, or deleting) of an +// intention is allowed. +func (p *PolicyACL) IntentionRead(prefix string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.intentionRules.LongestPrefix(prefix) + if ok { + pr := rule.(PolicyRule) + switch pr.aclPolicy { + case PolicyRead, PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. + return p.parent.IntentionRead(prefix) +} + +// IntentionWrite checks if writing (creating, updating, or deleting) of an +// intention is allowed. +func (p *PolicyACL) IntentionWrite(prefix string) bool { + // Check for an exact rule or catch-all + _, rule, ok := p.intentionRules.LongestPrefix(prefix) + if ok { + pr := rule.(PolicyRule) + switch pr.aclPolicy { + case PolicyWrite: + return true + default: + return false + } + } + + // No matching rule, use the parent. 
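+	// The parent is typically the ACL derived from the agent's default policy.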
+ return p.parent.IntentionWrite(prefix) +} + // KeyRead returns if a key is allowed to be read func (p *PolicyACL) KeyRead(key string) bool { // Look for a matching rule diff --git a/acl/acl_test.go b/acl/acl_test.go index 02ae0efb4..85f35f606 100644 --- a/acl/acl_test.go +++ b/acl/acl_test.go @@ -53,6 +53,9 @@ func TestStaticACL(t *testing.T) { if !all.EventWrite("foobar") { t.Fatalf("should allow") } + if !all.IntentionWrite("foobar") { + t.Fatalf("should allow") + } if !all.KeyRead("foobar") { t.Fatalf("should allow") } @@ -123,6 +126,9 @@ func TestStaticACL(t *testing.T) { if none.EventWrite("") { t.Fatalf("should not allow") } + if none.IntentionWrite("foo") { + t.Fatalf("should not allow") + } if none.KeyRead("foobar") { t.Fatalf("should not allow") } @@ -187,6 +193,9 @@ func TestStaticACL(t *testing.T) { if !manage.EventWrite("foobar") { t.Fatalf("should allow") } + if !manage.IntentionWrite("foobar") { + t.Fatalf("should allow") + } if !manage.KeyRead("foobar") { t.Fatalf("should allow") } @@ -305,8 +314,14 @@ func TestPolicyACL(t *testing.T) { Policy: PolicyDeny, }, &ServicePolicy{ - Name: "barfoo", - Policy: PolicyWrite, + Name: "barfoo", + Policy: PolicyWrite, + Intentions: PolicyWrite, + }, + &ServicePolicy{ + Name: "intbaz", + Policy: PolicyWrite, + Intentions: PolicyDeny, }, }, } @@ -344,6 +359,31 @@ func TestPolicyACL(t *testing.T) { } } + // Test the intentions + type intentioncase struct { + inp string + read bool + write bool + } + icases := []intentioncase{ + {"other", true, false}, + {"foo", true, false}, + {"bar", false, false}, + {"foobar", true, false}, + {"barfo", false, false}, + {"barfoo", true, true}, + {"barfoo2", true, true}, + {"intbaz", false, false}, + } + for _, c := range icases { + if c.read != acl.IntentionRead(c.inp) { + t.Fatalf("Read fail: %#v", c) + } + if c.write != acl.IntentionWrite(c.inp) { + t.Fatalf("Write fail: %#v", c) + } + } + // Test the services type servicecase struct { inp string From c54be9bc09c90dc95be9f8059f34bec9ed29aa96 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 4 Mar 2018 00:39:56 -0800 Subject: [PATCH 035/539] agent/consul: Basic ACL on Intention.Apply --- agent/consul/intention_endpoint.go | 15 +++++ agent/consul/intention_endpoint_test.go | 89 +++++++++++++++++++++++++ agent/structs/intention.go | 7 ++ 3 files changed, 111 insertions(+) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 6653d5502..0440f17e4 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -6,6 +6,7 @@ import ( "time" "github.com/armon/go-metrics" + "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/consul/state" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-memdb" @@ -71,6 +72,20 @@ func (s *Intention) Apply( } *reply = args.Intention.ID + // Get the ACL token for the request for the checks below. + rule, err := s.srv.resolveToken(args.Token) + if err != nil { + return err + } + + // Perform the ACL check + if prefix, ok := args.Intention.GetACLPrefix(); ok { + if rule != nil && !rule.IntentionWrite(prefix) { + s.srv.logger.Printf("[WARN] consul.intention: Operation on intention '%s' denied due to ACLs", args.Intention.ID) + return acl.ErrPermissionDenied + } + } + // If this is not a create, then we have to verify the ID. 
if args.Op != structs.IntentionOpCreate { state := s.srv.fsm.State() diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 2ba5b04c3..5edf904d7 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -5,6 +5,7 @@ import ( "testing" "time" + "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/testrpc" "github.com/hashicorp/net-rpc-msgpackrpc" @@ -303,6 +304,94 @@ func TestIntentionApply_deleteGood(t *testing.T) { } } +// Test apply with a deny ACL +func TestIntentionApply_aclDeny(t *testing.T) { + t.Parallel() + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create an ACL with write permissions + var token string + { + var rules = ` +service "foo" { + policy = "deny" + intentions = "write" +}` + + req := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: rules, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { + t.Fatalf("err: %v", err) + } + } + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + ixn.Intention.DestinationName = "foobar" + + // Create without a token should error since default deny + var reply string + err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) + if !acl.IsErrPermissionDenied(err) { + t.Fatalf("bad: %v", err) + } + + // Now add the token and try again. + ixn.WriteRequest.Token = token + if err = msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + + // Read + ixn.Intention.ID = reply + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: ixn.Intention.ID, + } + var resp structs.IndexedIntentions + if err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + if len(resp.Intentions) != 1 { + t.Fatalf("bad: %v", resp) + } + actual := resp.Intentions[0] + if resp.Index != actual.ModifyIndex { + t.Fatalf("bad index: %d", resp.Index) + } + + actual.CreateIndex, actual.ModifyIndex = 0, 0 + actual.CreatedAt = ixn.Intention.CreatedAt + actual.UpdatedAt = ixn.Intention.UpdatedAt + if !reflect.DeepEqual(actual, ixn.Intention) { + t.Fatalf("bad:\n\n%#v\n\n%#v", actual, ixn.Intention) + } + } +} + func TestIntentionList(t *testing.T) { t.Parallel() diff --git a/agent/structs/intention.go b/agent/structs/intention.go index 579fef6c1..fb83f85da 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -157,6 +157,13 @@ func (x *Intention) Validate() error { return result } +// GetACLPrefix returns the prefix to look up the ACL policy for this +// intention, and a boolean noting whether the prefix is valid to check +// or not. You must check the ok value before using the prefix. +func (x *Intention) GetACLPrefix() (string, bool) { + return x.DestinationName, x.DestinationName != "" +} + // IntentionAction is the action that the intention represents. 
This // can be "allow" or "deny" to whitelist or blacklist intentions. type IntentionAction string From 14ca93e09c34286e11f014b442c11a7bfafc59f8 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 4 Mar 2018 00:55:23 -0800 Subject: [PATCH 036/539] agent/consul: tests for ACLs on Intention.Apply update/delete --- agent/consul/intention_endpoint_test.go | 153 ++++++++++++++++++++++++ 1 file changed, 153 insertions(+) diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 5edf904d7..946e66f0e 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -392,6 +392,159 @@ service "foo" { } } +// Test apply with delete and a default deny ACL +func TestIntentionApply_aclDelete(t *testing.T) { + t.Parallel() + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create an ACL with write permissions + var token string + { + var rules = ` +service "foo" { + policy = "deny" + intentions = "write" +}` + + req := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: rules, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { + t.Fatalf("err: %v", err) + } + } + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + ixn.Intention.DestinationName = "foobar" + ixn.WriteRequest.Token = token + + // Create + var reply string + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("bad: %v", err) + } + + // Try to do a delete with no token; this should get rejected. + ixn.Op = structs.IntentionOpDelete + ixn.Intention.ID = reply + ixn.WriteRequest.Token = "" + err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) + if !acl.IsErrPermissionDenied(err) { + t.Fatalf("bad: %v", err) + } + + // Try again with the original token. This should go through. 
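+	// The token's intentions = "write" on the "foo" prefix covers the "foobar" destination.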
+ ixn.WriteRequest.Token = token + if err = msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + + // Verify it is gone + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: ixn.Intention.ID, + } + var resp structs.IndexedIntentions + err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp) + if err == nil || err.Error() != ErrIntentionNotFound.Error() { + t.Fatalf("err: %v", err) + } + } +} + +// Test apply with update and a default deny ACL +func TestIntentionApply_aclUpdate(t *testing.T) { + t.Parallel() + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create an ACL with write permissions + var token string + { + var rules = ` +service "foo" { + policy = "deny" + intentions = "write" +}` + + req := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: rules, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { + t.Fatalf("err: %v", err) + } + } + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + ixn.Intention.DestinationName = "foobar" + ixn.WriteRequest.Token = token + + // Create + var reply string + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("bad: %v", err) + } + + // Try to do an update without a token; this should get rejected. + ixn.Op = structs.IntentionOpUpdate + ixn.Intention.ID = reply + ixn.WriteRequest.Token = "" + err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) + if !acl.IsErrPermissionDenied(err) { + t.Fatalf("bad: %v", err) + } + + // Try again with the original token; this should go through. + ixn.WriteRequest.Token = token + if err = msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } +} + func TestIntentionList(t *testing.T) { t.Parallel() From fd840da97a5fdb76544b10183d83677dad097a64 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 4 Mar 2018 11:35:39 -0800 Subject: [PATCH 037/539] agent/consul: Intention.Apply ACL on rename --- agent/consul/intention_endpoint.go | 9 ++ agent/consul/intention_endpoint_test.go | 109 ++++++++++++++++++++++++ 2 files changed, 118 insertions(+) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 0440f17e4..2a409dcbe 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -96,6 +96,15 @@ func (s *Intention) Apply( if ixn == nil { return fmt.Errorf("Cannot modify non-existent intention: '%s'", args.Intention.ID) } + + // Perform the ACL check that we have write to the old prefix too, + // which must be true to perform any rename. + if prefix, ok := ixn.GetACLPrefix(); ok { + if rule != nil && !rule.IntentionWrite(prefix) { + s.srv.logger.Printf("[WARN] consul.intention: Operation on intention '%s' denied due to ACLs", args.Intention.ID) + return acl.ErrPermissionDenied + } + } } // We always update the updatedat field. This has no effect for deletion. 
diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 946e66f0e..fd76bbb78 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -545,6 +545,115 @@ service "foo" { } } +// Test apply with a management token +func TestIntentionApply_aclManagement(t *testing.T) { + t.Parallel() + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + ixn.Intention.DestinationName = "foobar" + ixn.WriteRequest.Token = "root" + + // Create + var reply string + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("bad: %v", err) + } + ixn.Intention.ID = reply + + // Update + ixn.Op = structs.IntentionOpUpdate + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + + // Delete + ixn.Op = structs.IntentionOpDelete + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } +} + +// Test update changing the name where an ACL won't allow it +func TestIntentionApply_aclUpdateChange(t *testing.T) { + t.Parallel() + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create an ACL with write permissions + var token string + { + var rules = ` +service "foo" { + policy = "deny" + intentions = "write" +}` + + req := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: rules, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { + t.Fatalf("err: %v", err) + } + } + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + ixn.Intention.DestinationName = "bar" + ixn.WriteRequest.Token = "root" + + // Create + var reply string + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("bad: %v", err) + } + + // Try to do an update without a token; this should get rejected. 
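+	// Renaming the destination to "foo" still requires write on the original "bar", which this token lacks.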
+ ixn.Op = structs.IntentionOpUpdate + ixn.Intention.ID = reply + ixn.Intention.DestinationName = "foo" + ixn.WriteRequest.Token = token + err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) + if !acl.IsErrPermissionDenied(err) { + t.Fatalf("bad: %v", err) + } +} + func TestIntentionList(t *testing.T) { t.Parallel() From db44a98a2dd33444cdfb43a35259f76a9f9e9587 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 4 Mar 2018 11:53:52 -0800 Subject: [PATCH 038/539] agent/consul: Intention.Get ACLs --- agent/consul/acl.go | 30 +++++++++ agent/consul/intention_endpoint.go | 11 ++- agent/consul/intention_endpoint_test.go | 90 +++++++++++++++++++++++++ 3 files changed, 130 insertions(+), 1 deletion(-) diff --git a/agent/consul/acl.go b/agent/consul/acl.go index 1e95e62e4..ce3282b40 100644 --- a/agent/consul/acl.go +++ b/agent/consul/acl.go @@ -454,6 +454,33 @@ func (f *aclFilter) filterCoordinates(coords *structs.Coordinates) { *coords = c } +// filterIntentions is used to filter intentions based on ACL rules. +// We prune entries the user doesn't have access to, and we redact any tokens +// if the user doesn't have a management token. +func (f *aclFilter) filterIntentions(ixns *structs.Intentions) { + // Management tokens can see everything with no filtering. + if f.acl.ACLList() { + return + } + + // Otherwise, we need to see what the token has access to. + ret := make(structs.Intentions, 0, len(*ixns)) + for _, ixn := range *ixns { + // If no prefix ACL applies to this then filter it, since + // we know at this point the user doesn't have a management + // token, otherwise see what the policy says. + prefix, ok := ixn.GetACLPrefix() + if !ok || !f.acl.IntentionRead(prefix) { + f.logger.Printf("[DEBUG] consul: dropping intention %q from result due to ACLs", ixn.ID) + continue + } + + ret = append(ret, ixn) + } + + *ixns = ret +} + // filterNodeDump is used to filter through all parts of a node dump and // remove elements the provided ACL token cannot access. 
func (f *aclFilter) filterNodeDump(dump *structs.NodeDump) { @@ -598,6 +625,9 @@ func (s *Server) filterACL(token string, subj interface{}) error { case *structs.IndexedHealthChecks: filt.filterHealthChecks(&v.HealthChecks) + case *structs.IndexedIntentions: + filt.filterIntentions(&v.Intentions) + case *structs.IndexedNodeDump: filt.filterNodeDump(&v.Dump) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 2a409dcbe..568446d73 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -168,7 +168,16 @@ func (s *Intention) Get( reply.Index = index reply.Intentions = structs.Intentions{ixn} - // TODO: acl filtering + // Filter + if err := s.srv.filterACL(args.Token, reply); err != nil { + return err + } + + // If ACLs prevented any responses, error + if len(reply.Intentions) == 0 { + s.srv.logger.Printf("[WARN] consul.intention: Request to get intention '%s' denied due to ACLs", args.IntentionID) + return acl.ErrPermissionDenied + } return nil }, diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index fd76bbb78..67c2a07d0 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -654,6 +654,96 @@ service "foo" { } } +// Test reading with ACLs +func TestIntentionGet_acl(t *testing.T) { + t.Parallel() + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create an ACL with service write permissions. This will grant + // intentions read. + var token string + { + var rules = ` +service "foo" { + policy = "write" +}` + + req := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: rules, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { + t.Fatalf("err: %v", err) + } + } + + // Setup a basic record to create + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + ixn.Intention.DestinationName = "foobar" + ixn.WriteRequest.Token = "root" + + // Create + var reply string + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + ixn.Intention.ID = reply + + // Read without token should be error + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: ixn.Intention.ID, + } + + var resp structs.IndexedIntentions + err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp) + if !acl.IsErrPermissionDenied(err) { + t.Fatalf("bad: %v", err) + } + if len(resp.Intentions) != 0 { + t.Fatalf("bad: %v", resp) + } + } + + // Read with token should work + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + IntentionID: ixn.Intention.ID, + QueryOptions: structs.QueryOptions{Token: token}, + } + + var resp structs.IndexedIntentions + if err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + if len(resp.Intentions) != 1 { + t.Fatalf("bad: %v", resp) + } + } +} + func TestIntentionList(t *testing.T) { t.Parallel() From 3e10a1ae7a45fd54d5e7329c51c40d5b5b040dbf Mon Sep 17 00:00:00 2001 From: 
Mitchell Hashimoto Date: Sun, 4 Mar 2018 18:32:28 -0800 Subject: [PATCH 039/539] agent/consul: Intention.Match ACLs --- agent/consul/intention_endpoint.go | 23 ++- agent/consul/intention_endpoint_test.go | 233 ++++++++++++++++++++++++ 2 files changed, 250 insertions(+), 6 deletions(-) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 568446d73..2458a8ee9 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -206,8 +206,7 @@ func (s *Intention) List( reply.Intentions = make(structs.Intentions, 0) } - // filterACL - return nil + return s.srv.filterACL(args.Token, reply) }, ) } @@ -221,7 +220,22 @@ func (s *Intention) Match( return err } - // TODO(mitchellh): validate + // Get the ACL token for the request for the checks below. + rule, err := s.srv.resolveToken(args.Token) + if err != nil { + return err + } + + if rule != nil { + // We go through each entry and test the destination to check if it + // matches. + for _, entry := range args.Match.Entries { + if prefix := entry.Name; prefix != "" && !rule.IntentionRead(prefix) { + s.srv.logger.Printf("[WARN] consul.intention: Operation on intention prefix '%s' denied due to ACLs", prefix) + return acl.ErrPermissionDenied + } + } + } return s.srv.blockingQuery( &args.QueryOptions, @@ -234,9 +248,6 @@ func (s *Intention) Match( reply.Index = index reply.Matches = matches - - // TODO(mitchellh): acl filtering - return nil }, ) diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 67c2a07d0..5a0a8a723 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -768,6 +768,110 @@ func TestIntentionList(t *testing.T) { } } +// Test listing with ACLs +func TestIntentionList_acl(t *testing.T) { + t.Parallel() + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create an ACL with service write permissions. This will grant + // intentions read. 
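+	// (An unset intentions policy defaults to "read" when the service policy is "read" or "write".)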
+ var token string + { + var rules = ` +service "foo" { + policy = "write" +}` + + req := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: rules, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { + t.Fatalf("err: %v", err) + } + } + + // Create a few records + for _, name := range []string{"foobar", "bar", "baz"} { + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + ixn.Intention.DestinationName = name + ixn.WriteRequest.Token = "root" + + // Create + var reply string + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + } + + // Test with no token + { + req := &structs.DCSpecificRequest{ + Datacenter: "dc1", + } + var resp structs.IndexedIntentions + if err := msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + + if len(resp.Intentions) != 0 { + t.Fatalf("bad: %v", resp) + } + } + + // Test with management token + { + req := &structs.DCSpecificRequest{ + Datacenter: "dc1", + QueryOptions: structs.QueryOptions{Token: "root"}, + } + var resp structs.IndexedIntentions + if err := msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + + if len(resp.Intentions) != 3 { + t.Fatalf("bad: %v", resp) + } + } + + // Test with user token + { + req := &structs.DCSpecificRequest{ + Datacenter: "dc1", + QueryOptions: structs.QueryOptions{Token: token}, + } + var resp structs.IndexedIntentions + if err := msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + + if len(resp.Intentions) != 1 { + t.Fatalf("bad: %v", resp) + } + } +} + // Test basic matching. We don't need to exhaustively test inputs since this // is tested in the agent/consul/state package. func TestIntentionMatch_good(t *testing.T) { @@ -836,3 +940,132 @@ func TestIntentionMatch_good(t *testing.T) { } assert.Equal(expected, actual) } + +// Test matching with ACLs +func TestIntentionMatch_acl(t *testing.T) { + t.Parallel() + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create an ACL with service write permissions. This will grant + // intentions read. 
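+	// The Match query below checks intention read against the destination name "bar".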
+ var token string + { + var rules = ` +service "bar" { + policy = "write" +}` + + req := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: rules, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { + t.Fatalf("err: %v", err) + } + } + + // Create some records + { + insert := [][]string{ + {"foo", "*"}, + {"foo", "bar"}, + {"foo", "baz"}, // shouldn't match + {"bar", "bar"}, // shouldn't match + {"bar", "*"}, // shouldn't match + {"*", "*"}, + } + + for _, v := range insert { + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + ixn.Intention.DestinationNS = v[0] + ixn.Intention.DestinationName = v[1] + ixn.WriteRequest.Token = "root" + + // Create + var reply string + if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { + t.Fatalf("err: %v", err) + } + } + } + + // Test with no token + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + Match: &structs.IntentionQueryMatch{ + Type: structs.IntentionMatchDestination, + Entries: []structs.IntentionMatchEntry{ + { + Namespace: "foo", + Name: "bar", + }, + }, + }, + } + var resp structs.IndexedIntentionMatches + err := msgpackrpc.CallWithCodec(codec, "Intention.Match", req, &resp) + if !acl.IsErrPermissionDenied(err) { + t.Fatalf("err: %v", err) + } + + if len(resp.Matches) != 0 { + t.Fatalf("bad: %#v", resp.Matches) + } + } + + // Test with proper token + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + Match: &structs.IntentionQueryMatch{ + Type: structs.IntentionMatchDestination, + Entries: []structs.IntentionMatchEntry{ + { + Namespace: "foo", + Name: "bar", + }, + }, + }, + QueryOptions: structs.QueryOptions{Token: token}, + } + var resp structs.IndexedIntentionMatches + if err := msgpackrpc.CallWithCodec(codec, "Intention.Match", req, &resp); err != nil { + t.Fatalf("err: %v", err) + } + + if len(resp.Matches) != 1 { + t.Fatalf("bad: %#v", resp.Matches) + } + + expected := [][]string{{"foo", "bar"}, {"foo", "*"}, {"*", "*"}} + var actual [][]string + for _, ixn := range resp.Matches[0] { + actual = append(actual, []string{ixn.DestinationNS, ixn.DestinationName}) + } + + if !reflect.DeepEqual(actual, expected) { + t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", actual, expected) + } + } +} From 6a8bba7d487bac6e62a374ac9d601af8e357cd8c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 4 Mar 2018 18:46:33 -0800 Subject: [PATCH 040/539] agent/consul,structs: add tests for ACL filter and prefix for intentions --- agent/consul/acl_test.go | 60 +++++++++++++++++++++++++++++++++ agent/structs/intention_test.go | 37 ++++++++++++++++++++ 2 files changed, 97 insertions(+) diff --git a/agent/consul/acl_test.go b/agent/consul/acl_test.go index 9a1eaba6c..a37bbf101 100644 --- a/agent/consul/acl_test.go +++ b/agent/consul/acl_test.go @@ -847,6 +847,66 @@ node "node1" { } } +func TestACL_filterIntentions(t *testing.T) { + t.Parallel() + fill := func() structs.Intentions { + return structs.Intentions{ + &structs.Intention{ + ID: "f004177f-2c28-83b7-4229-eacc25fe55d1", + DestinationName: "bar", + }, + &structs.Intention{ + ID: "f004177f-2c28-83b7-4229-eacc25fe55d2", + DestinationName: "foo", + }, + } + } + + // Try permissive filtering. 
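+	// An allow-all ACL should keep both intentions.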
+ { + ixns := fill() + filt := newACLFilter(acl.AllowAll(), nil, false) + filt.filterIntentions(&ixns) + if len(ixns) != 2 { + t.Fatalf("bad: %#v", ixns) + } + } + + // Try restrictive filtering. + { + ixns := fill() + filt := newACLFilter(acl.DenyAll(), nil, false) + filt.filterIntentions(&ixns) + if len(ixns) != 0 { + t.Fatalf("bad: %#v", ixns) + } + } + + // Policy to see one + policy, err := acl.Parse(` +service "foo" { + policy = "read" +} +`, nil) + if err != nil { + t.Fatalf("err %v", err) + } + perms, err := acl.New(acl.DenyAll(), policy, nil) + if err != nil { + t.Fatalf("err: %v", err) + } + + // Filter + { + ixns := fill() + filt := newACLFilter(perms, nil, false) + filt.filterIntentions(&ixns) + if len(ixns) != 1 { + t.Fatalf("bad: %#v", ixns) + } + } +} + func TestACL_filterServices(t *testing.T) { t.Parallel() // Create some services diff --git a/agent/structs/intention_test.go b/agent/structs/intention_test.go index 9db4ff255..948ae920e 100644 --- a/agent/structs/intention_test.go +++ b/agent/structs/intention_test.go @@ -8,6 +8,43 @@ import ( "github.com/stretchr/testify/assert" ) +func TestIntentionGetACLPrefix(t *testing.T) { + cases := []struct { + Name string + Input *Intention + Expected string + }{ + { + "unset name", + &Intention{DestinationName: ""}, + "", + }, + + { + "set name", + &Intention{DestinationName: "fo"}, + "fo", + }, + } + + for _, tc := range cases { + t.Run(tc.Name, func(t *testing.T) { + actual, ok := tc.Input.GetACLPrefix() + if tc.Expected == "" { + if !ok { + return + } + + t.Fatal("should not be ok") + } + + if actual != tc.Expected { + t.Fatalf("bad: %q", actual) + } + }) + } +} + func TestIntentionValidate(t *testing.T) { cases := []struct { Name string From 23ee0888ecf83dd0cb8e94e7481ee4df90db103c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 6 Mar 2018 10:51:26 -0800 Subject: [PATCH 041/539] agent/consul: convert intention ACLs to testify/assert --- acl/policy_test.go | 17 +- agent/consul/acl_test.go | 23 +-- agent/consul/intention_endpoint_test.go | 199 ++++++++---------------- 3 files changed, 77 insertions(+), 162 deletions(-) diff --git a/acl/policy_test.go b/acl/policy_test.go index 9d3ae8f69..19468e38b 100644 --- a/acl/policy_test.go +++ b/acl/policy_test.go @@ -4,6 +4,8 @@ import ( "reflect" "strings" "testing" + + "github.com/stretchr/testify/assert" ) func TestParse_table(t *testing.T) { @@ -69,21 +71,14 @@ service "foo" { for _, tc := range cases { t.Run(tc.Name, func(t *testing.T) { + assert := assert.New(t) actual, err := Parse(tc.Input, nil) - if (err != nil) != (tc.Err != "") { - t.Fatalf("err: %s", err) - } + assert.Equal(tc.Err != "", err != nil, err) if err != nil { - if !strings.Contains(err.Error(), tc.Err) { - t.Fatalf("err: %s", err) - } - + assert.Contains(err.Error(), tc.Err) return } - - if !reflect.DeepEqual(actual, tc.Expected) { - t.Fatalf("bad: %#v", actual) - } + assert.Equal(tc.Expected, actual) }) } } diff --git a/agent/consul/acl_test.go b/agent/consul/acl_test.go index a37bbf101..ace1284a8 100644 --- a/agent/consul/acl_test.go +++ b/agent/consul/acl_test.go @@ -11,6 +11,7 @@ import ( "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/testrpc" "github.com/hashicorp/consul/testutil/retry" + "github.com/stretchr/testify/assert" ) var testACLPolicy = ` @@ -849,6 +850,8 @@ node "node1" { func TestACL_filterIntentions(t *testing.T) { t.Parallel() + assert := assert.New(t) + fill := func() structs.Intentions { return structs.Intentions{ &structs.Intention{ @@ -867,9 +870,7 @@ func 
TestACL_filterIntentions(t *testing.T) { ixns := fill() filt := newACLFilter(acl.AllowAll(), nil, false) filt.filterIntentions(&ixns) - if len(ixns) != 2 { - t.Fatalf("bad: %#v", ixns) - } + assert.Len(ixns, 2) } // Try restrictive filtering. @@ -877,9 +878,7 @@ func TestACL_filterIntentions(t *testing.T) { ixns := fill() filt := newACLFilter(acl.DenyAll(), nil, false) filt.filterIntentions(&ixns) - if len(ixns) != 0 { - t.Fatalf("bad: %#v", ixns) - } + assert.Len(ixns, 0) } // Policy to see one @@ -888,22 +887,16 @@ service "foo" { policy = "read" } `, nil) - if err != nil { - t.Fatalf("err %v", err) - } + assert.Nil(err) perms, err := acl.New(acl.DenyAll(), policy, nil) - if err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(err) // Filter { ixns := fill() filt := newACLFilter(perms, nil, false) filt.filterIntentions(&ixns) - if len(ixns) != 1 { - t.Fatalf("bad: %#v", ixns) - } + assert.Len(ixns, 1) } } diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index 5a0a8a723..a1e1ae751 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -307,6 +307,8 @@ func TestIntentionApply_deleteGood(t *testing.T) { // Test apply with a deny ACL func TestIntentionApply_aclDeny(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServerWithConfig(t, func(c *Config) { c.ACLDatacenter = "dc1" c.ACLMasterToken = "root" @@ -338,9 +340,7 @@ service "foo" { }, WriteRequest: structs.WriteRequest{Token: "root"}, } - if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token)) } // Setup a basic record to create @@ -354,47 +354,38 @@ service "foo" { // Create without a token should error since default deny var reply string err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) - if !acl.IsErrPermissionDenied(err) { - t.Fatalf("bad: %v", err) - } + assert.True(acl.IsErrPermissionDenied(err)) // Now add the token and try again. 
ixn.WriteRequest.Token = token - if err = msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) // Read ixn.Intention.ID = reply { req := &structs.IntentionQueryRequest{ - Datacenter: "dc1", - IntentionID: ixn.Intention.ID, + Datacenter: "dc1", + IntentionID: ixn.Intention.ID, + QueryOptions: structs.QueryOptions{Token: "root"}, } var resp structs.IndexedIntentions - if err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - if len(resp.Intentions) != 1 { - t.Fatalf("bad: %v", resp) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp)) + assert.Len(resp.Intentions, 1) actual := resp.Intentions[0] - if resp.Index != actual.ModifyIndex { - t.Fatalf("bad index: %d", resp.Index) - } + assert.Equal(resp.Index, actual.ModifyIndex) actual.CreateIndex, actual.ModifyIndex = 0, 0 actual.CreatedAt = ixn.Intention.CreatedAt actual.UpdatedAt = ixn.Intention.UpdatedAt - if !reflect.DeepEqual(actual, ixn.Intention) { - t.Fatalf("bad:\n\n%#v\n\n%#v", actual, ixn.Intention) - } + assert.Equal(ixn.Intention, actual) } } // Test apply with delete and a default deny ACL func TestIntentionApply_aclDelete(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServerWithConfig(t, func(c *Config) { c.ACLDatacenter = "dc1" c.ACLMasterToken = "root" @@ -426,9 +417,7 @@ service "foo" { }, WriteRequest: structs.WriteRequest{Token: "root"}, } - if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token)) } // Setup a basic record to create @@ -442,24 +431,18 @@ service "foo" { // Create var reply string - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("bad: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) // Try to do a delete with no token; this should get rejected. ixn.Op = structs.IntentionOpDelete ixn.Intention.ID = reply ixn.WriteRequest.Token = "" err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) - if !acl.IsErrPermissionDenied(err) { - t.Fatalf("bad: %v", err) - } + assert.True(acl.IsErrPermissionDenied(err)) // Try again with the original token. This should go through. 
ixn.WriteRequest.Token = token - if err = msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) // Verify it is gone { @@ -469,15 +452,16 @@ service "foo" { } var resp structs.IndexedIntentions err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp) - if err == nil || err.Error() != ErrIntentionNotFound.Error() { - t.Fatalf("err: %v", err) - } + assert.NotNil(err) + assert.Contains(err.Error(), ErrIntentionNotFound.Error()) } } // Test apply with update and a default deny ACL func TestIntentionApply_aclUpdate(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServerWithConfig(t, func(c *Config) { c.ACLDatacenter = "dc1" c.ACLMasterToken = "root" @@ -509,9 +493,7 @@ service "foo" { }, WriteRequest: structs.WriteRequest{Token: "root"}, } - if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token)) } // Setup a basic record to create @@ -525,29 +507,25 @@ service "foo" { // Create var reply string - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("bad: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) // Try to do an update without a token; this should get rejected. ixn.Op = structs.IntentionOpUpdate ixn.Intention.ID = reply ixn.WriteRequest.Token = "" err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) - if !acl.IsErrPermissionDenied(err) { - t.Fatalf("bad: %v", err) - } + assert.True(acl.IsErrPermissionDenied(err)) // Try again with the original token; this should go through. 
ixn.WriteRequest.Token = token - if err = msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) } // Test apply with a management token func TestIntentionApply_aclManagement(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServerWithConfig(t, func(c *Config) { c.ACLDatacenter = "dc1" c.ACLMasterToken = "root" @@ -571,27 +549,23 @@ func TestIntentionApply_aclManagement(t *testing.T) { // Create var reply string - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("bad: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) ixn.Intention.ID = reply // Update ixn.Op = structs.IntentionOpUpdate - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) // Delete ixn.Op = structs.IntentionOpDelete - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) } // Test update changing the name where an ACL won't allow it func TestIntentionApply_aclUpdateChange(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServerWithConfig(t, func(c *Config) { c.ACLDatacenter = "dc1" c.ACLMasterToken = "root" @@ -623,9 +597,7 @@ service "foo" { }, WriteRequest: structs.WriteRequest{Token: "root"}, } - if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token)) } // Setup a basic record to create @@ -639,9 +611,7 @@ service "foo" { // Create var reply string - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("bad: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) // Try to do an update without a token; this should get rejected. 
ixn.Op = structs.IntentionOpUpdate @@ -649,14 +619,14 @@ service "foo" { ixn.Intention.DestinationName = "foo" ixn.WriteRequest.Token = token err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply) - if !acl.IsErrPermissionDenied(err) { - t.Fatalf("bad: %v", err) - } + assert.True(acl.IsErrPermissionDenied(err)) } // Test reading with ACLs func TestIntentionGet_acl(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServerWithConfig(t, func(c *Config) { c.ACLDatacenter = "dc1" c.ACLMasterToken = "root" @@ -688,9 +658,7 @@ service "foo" { }, WriteRequest: structs.WriteRequest{Token: "root"}, } - if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token)) } // Setup a basic record to create @@ -704,9 +672,7 @@ service "foo" { // Create var reply string - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) ixn.Intention.ID = reply // Read without token should be error @@ -718,12 +684,8 @@ service "foo" { var resp structs.IndexedIntentions err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp) - if !acl.IsErrPermissionDenied(err) { - t.Fatalf("bad: %v", err) - } - if len(resp.Intentions) != 0 { - t.Fatalf("bad: %v", resp) - } + assert.True(acl.IsErrPermissionDenied(err)) + assert.Len(resp.Intentions, 0) } // Read with token should work @@ -735,12 +697,8 @@ service "foo" { } var resp structs.IndexedIntentions - if err := msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - if len(resp.Intentions) != 1 { - t.Fatalf("bad: %v", resp) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Get", req, &resp)) + assert.Len(resp.Intentions, 1) } } @@ -771,6 +729,8 @@ func TestIntentionList(t *testing.T) { // Test listing with ACLs func TestIntentionList_acl(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServerWithConfig(t, func(c *Config) { c.ACLDatacenter = "dc1" c.ACLMasterToken = "root" @@ -802,9 +762,7 @@ service "foo" { }, WriteRequest: structs.WriteRequest{Token: "root"}, } - if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token)) } // Create a few records @@ -819,9 +777,7 @@ service "foo" { // Create var reply string - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) } // Test with no token @@ -830,13 +786,8 @@ service "foo" { Datacenter: "dc1", } var resp structs.IndexedIntentions - if err := msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - - if len(resp.Intentions) != 0 { - t.Fatalf("bad: %v", resp) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp)) + assert.Len(resp.Intentions, 0) } // Test with management token @@ -846,13 +797,8 @@ service "foo" { QueryOptions: structs.QueryOptions{Token: "root"}, } var resp structs.IndexedIntentions - if err := msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - - if len(resp.Intentions) != 3 { - t.Fatalf("bad: %v", resp) - } + 
assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp)) + assert.Len(resp.Intentions, 3) } // Test with user token @@ -862,13 +808,8 @@ service "foo" { QueryOptions: structs.QueryOptions{Token: token}, } var resp structs.IndexedIntentions - if err := msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - - if len(resp.Intentions) != 1 { - t.Fatalf("bad: %v", resp) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.List", req, &resp)) + assert.Len(resp.Intentions, 1) } } @@ -944,6 +885,8 @@ func TestIntentionMatch_good(t *testing.T) { // Test matching with ACLs func TestIntentionMatch_acl(t *testing.T) { t.Parallel() + + assert := assert.New(t) dir1, s1 := testServerWithConfig(t, func(c *Config) { c.ACLDatacenter = "dc1" c.ACLMasterToken = "root" @@ -975,9 +918,7 @@ service "bar" { }, WriteRequest: structs.WriteRequest{Token: "root"}, } - if err := msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token)) } // Create some records @@ -1003,9 +944,7 @@ service "bar" { // Create var reply string - if err := msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply); err != nil { - t.Fatalf("err: %v", err) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) } } @@ -1025,13 +964,8 @@ service "bar" { } var resp structs.IndexedIntentionMatches err := msgpackrpc.CallWithCodec(codec, "Intention.Match", req, &resp) - if !acl.IsErrPermissionDenied(err) { - t.Fatalf("err: %v", err) - } - - if len(resp.Matches) != 0 { - t.Fatalf("bad: %#v", resp.Matches) - } + assert.True(acl.IsErrPermissionDenied(err)) + assert.Len(resp.Matches, 0) } // Test with proper token @@ -1050,13 +984,8 @@ service "bar" { QueryOptions: structs.QueryOptions{Token: token}, } var resp structs.IndexedIntentionMatches - if err := msgpackrpc.CallWithCodec(codec, "Intention.Match", req, &resp); err != nil { - t.Fatalf("err: %v", err) - } - - if len(resp.Matches) != 1 { - t.Fatalf("bad: %#v", resp.Matches) - } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Match", req, &resp)) + assert.Len(resp.Matches, 1) expected := [][]string{{"foo", "bar"}, {"foo", "*"}, {"*", "*"}} var actual [][]string @@ -1064,8 +993,6 @@ service "bar" { actual = append(actual, []string{ixn.DestinationNS, ixn.DestinationName}) } - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("bad (got, wanted):\n\n%#v\n\n%#v", actual, expected) - } + assert.Equal(expected, actual) } } From 09568ce7b5d5436e8cf504bb2af506821aeff1c9 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 5 Mar 2018 19:56:52 -0800 Subject: [PATCH 042/539] agent/consul/state: service registration with proxy works --- agent/consul/state/catalog_test.go | 34 ++++++++++++++++++++++++++++++ agent/structs/structs.go | 28 ++++++++++++++++++++++++ 2 files changed, 62 insertions(+) diff --git a/agent/consul/state/catalog_test.go b/agent/consul/state/catalog_test.go index a2b56cad8..dcd1d505a 100644 --- a/agent/consul/state/catalog_test.go +++ b/agent/consul/state/catalog_test.go @@ -981,6 +981,40 @@ func TestStateStore_EnsureService(t *testing.T) { } } +func TestStateStore_EnsureService_connectProxy(t *testing.T) { + s := testStateStore(t) + + // Create the service registration. 
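+	// A connect-proxy service sets ProxyDestination to the name of the service it proxies.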
+ ns1 := &structs.NodeService{ + Kind: structs.ServiceKindConnectProxy, + ID: "connect-proxy", + Service: "connect-proxy", + Address: "1.1.1.1", + Port: 1111, + ProxyDestination: "foo", + } + + // Service successfully registers into the state store. + testRegisterNode(t, s, 0, "node1") + if err := s.EnsureService(10, "node1", ns1); err != nil { + t.Fatalf("err: %s", err) + } + + // Retrieve and verify + _, out, err := s.NodeServices(nil, "node1") + if err != nil { + t.Fatalf("err: %s", err) + } + if out == nil || len(out.Services) != 1 { + t.Fatalf("bad: %#v", out) + } + expect1 := *ns1 + expect1.CreateIndex, expect1.ModifyIndex = 10, 10 + if svc := out.Services["connect-proxy"]; !reflect.DeepEqual(&expect1, svc) { + t.Fatalf("bad: %#v", svc) + } +} + func TestStateStore_Services(t *testing.T) { s := testStateStore(t) diff --git a/agent/structs/structs.go b/agent/structs/structs.go index 8a1860912..23cd41acc 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -388,6 +388,7 @@ type ServiceNode struct { Datacenter string TaggedAddresses map[string]string NodeMeta map[string]string + ServiceKind ServiceKind ServiceID string ServiceName string ServiceTags []string @@ -395,6 +396,7 @@ type ServiceNode struct { ServiceMeta map[string]string ServicePort int ServiceEnableTagOverride bool + ServiceProxyDestination string RaftIndex } @@ -431,6 +433,7 @@ func (s *ServiceNode) PartialClone() *ServiceNode { // ToNodeService converts the given service node to a node service. func (s *ServiceNode) ToNodeService() *NodeService { return &NodeService{ + Kind: s.ServiceKind, ID: s.ServiceID, Service: s.ServiceName, Tags: s.ServiceTags, @@ -438,6 +441,7 @@ func (s *ServiceNode) ToNodeService() *NodeService { Port: s.ServicePort, Meta: s.ServiceMeta, EnableTagOverride: s.ServiceEnableTagOverride, + ProxyDestination: s.ServiceProxyDestination, RaftIndex: RaftIndex{ CreateIndex: s.CreateIndex, ModifyIndex: s.ModifyIndex, @@ -447,8 +451,26 @@ func (s *ServiceNode) ToNodeService() *NodeService { type ServiceNodes []*ServiceNode +// ServiceKind is the kind of service being registered. +type ServiceKind string + +const ( + // ServiceKindTypical is a typical, classic Consul service. + ServiceKindTypical ServiceKind = "typical" + + // ServiceKindConnectProxy is a proxy for the Connect feature. This + // service proxies another service within Consul and speaks the connect + // protocol. + ServiceKindConnectProxy ServiceKind = "connect-proxy" +) + // NodeService is a service provided by a node type NodeService struct { + // Kind is the kind of service this is. Different kinds of services may + // have differing validation, DNS behavior, etc. An empty kind will default + // to the Default kind. See ServiceKind for the full list of kinds. + Kind ServiceKind + ID string Service string Tags []string @@ -457,6 +479,10 @@ type NodeService struct { Port int EnableTagOverride bool + // ProxyDestination is the name of the service that this service is + // a Connect proxy for. This is only valid if Kind is "connect-proxy". + ProxyDestination string + RaftIndex } @@ -485,6 +511,7 @@ func (s *NodeService) ToServiceNode(node string) *ServiceNode { Node: node, // Skip Address, see ServiceNode definition. // Skip TaggedAddresses, see ServiceNode definition. 
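+		// Carry the Connect proxy fields (Kind, ProxyDestination) over to the service node.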
+ ServiceKind: s.Kind, ServiceID: s.ID, ServiceName: s.Service, ServiceTags: s.Tags, @@ -492,6 +519,7 @@ func (s *NodeService) ToServiceNode(node string) *ServiceNode { ServicePort: s.Port, ServiceMeta: s.Meta, ServiceEnableTagOverride: s.EnableTagOverride, + ServiceProxyDestination: s.ProxyDestination, RaftIndex: RaftIndex{ CreateIndex: s.CreateIndex, ModifyIndex: s.ModifyIndex, From 58bff8dd05c7c25b9bb339ee2727c817ae1b5636 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 6 Mar 2018 17:13:52 -0800 Subject: [PATCH 043/539] agent/consul/state: convert proxy test to testify/assert --- agent/consul/state/catalog_test.go | 20 ++++++++------------ 1 file changed, 8 insertions(+), 12 deletions(-) diff --git a/agent/consul/state/catalog_test.go b/agent/consul/state/catalog_test.go index dcd1d505a..c057ebea6 100644 --- a/agent/consul/state/catalog_test.go +++ b/agent/consul/state/catalog_test.go @@ -14,6 +14,7 @@ import ( "github.com/hashicorp/go-memdb" uuid "github.com/hashicorp/go-uuid" "github.com/pascaldekloe/goe/verify" + "github.com/stretchr/testify/assert" ) func makeRandomNodeID(t *testing.T) types.NodeID { @@ -982,6 +983,7 @@ func TestStateStore_EnsureService(t *testing.T) { } func TestStateStore_EnsureService_connectProxy(t *testing.T) { + assert := assert.New(t) s := testStateStore(t) // Create the service registration. @@ -996,23 +998,17 @@ func TestStateStore_EnsureService_connectProxy(t *testing.T) { // Service successfully registers into the state store. testRegisterNode(t, s, 0, "node1") - if err := s.EnsureService(10, "node1", ns1); err != nil { - t.Fatalf("err: %s", err) - } + assert.Nil(s.EnsureService(10, "node1", ns1)) // Retrieve and verify _, out, err := s.NodeServices(nil, "node1") - if err != nil { - t.Fatalf("err: %s", err) - } - if out == nil || len(out.Services) != 1 { - t.Fatalf("bad: %#v", out) - } + assert.Nil(err) + assert.NotNil(out) + assert.Len(out.Services, 1) + expect1 := *ns1 expect1.CreateIndex, expect1.ModifyIndex = 10, 10 - if svc := out.Services["connect-proxy"]; !reflect.DeepEqual(&expect1, svc) { - t.Fatalf("bad: %#v", svc) - } + assert.Equal(&expect1, out.Services["connect-proxy"]) } func TestStateStore_Services(t *testing.T) { From 761b561946201ac40c2e8d1b69ef85f69f9fccaf Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 6 Mar 2018 17:32:41 -0800 Subject: [PATCH 044/539] agent: /v1/catalog/service/:service works with proxies --- agent/catalog_endpoint_test.go | 25 +++++++++++++++++++++ agent/consul/catalog_endpoint_test.go | 32 +++++++++++++++++++++++++++ agent/structs/catalog.go | 7 ++++-- agent/structs/structs.go | 2 ++ agent/structs/testing_catalog.go | 22 ++++++++++++++++++ 5 files changed, 86 insertions(+), 2 deletions(-) create mode 100644 agent/structs/testing_catalog.go diff --git a/agent/catalog_endpoint_test.go b/agent/catalog_endpoint_test.go index 845929117..d3af9bf6d 100644 --- a/agent/catalog_endpoint_test.go +++ b/agent/catalog_endpoint_test.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/testutil/retry" "github.com/hashicorp/serf/coordinate" + "github.com/stretchr/testify/assert" ) func TestCatalogRegister_Service_InvalidAddress(t *testing.T) { @@ -750,6 +751,30 @@ func TestCatalogServiceNodes_DistanceSort(t *testing.T) { } } +func TestCatalogServiceNodes_ConnectProxy(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Register + args := structs.TestRegisterRequestProxy(t) + var out struct{} 
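+	// args comes from TestRegisterRequestProxy and describes a connect-proxy
+	// service with ProxyDestination "web".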
+ assert.Nil(a.RPC("Catalog.Register", args, &out)) + + req, _ := http.NewRequest("GET", fmt.Sprintf( + "/v1/catalog/service/%s", args.Service.Service), nil) + resp := httptest.NewRecorder() + obj, err := a.srv.CatalogServiceNodes(resp, req) + assert.Nil(err) + assertIndex(t, resp) + + nodes := obj.(structs.ServiceNodes) + assert.Len(nodes, 1) + assert.Equal(structs.ServiceKindConnectProxy, nodes[0].ServiceKind) +} + func TestCatalogNodeServices(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), "") diff --git a/agent/consul/catalog_endpoint_test.go b/agent/consul/catalog_endpoint_test.go index f6825f990..db49875cb 100644 --- a/agent/consul/catalog_endpoint_test.go +++ b/agent/consul/catalog_endpoint_test.go @@ -16,6 +16,7 @@ import ( "github.com/hashicorp/consul/testutil/retry" "github.com/hashicorp/consul/types" "github.com/hashicorp/net-rpc-msgpackrpc" + "github.com/stretchr/testify/assert" ) func TestCatalog_Register(t *testing.T) { @@ -1599,6 +1600,37 @@ func TestCatalog_ListServiceNodes_DistanceSort(t *testing.T) { } } +func TestCatalog_ListServiceNodes_ConnectProxy(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Register the service + args := structs.TestRegisterRequestProxy(t) + var out struct{} + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", args, &out)) + + // List + req := structs.ServiceSpecificRequest{ + Datacenter: "dc1", + ServiceName: args.Service.Service, + TagFilter: false, + } + var resp structs.IndexedServiceNodes + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.ServiceNodes", &req, &resp)) + assert.Len(resp.ServiceNodes, 1) + v := resp.ServiceNodes[0] + assert.Equal(structs.ServiceKindConnectProxy, v.ServiceKind) + assert.Equal(args.Service.ProxyDestination, v.ServiceProxyDestination) +} + func TestCatalog_NodeServices(t *testing.T) { t.Parallel() dir1, s1 := testServer(t) diff --git a/agent/structs/catalog.go b/agent/structs/catalog.go index b6b443f6f..3f68f43a1 100644 --- a/agent/structs/catalog.go +++ b/agent/structs/catalog.go @@ -13,9 +13,12 @@ const ( SerfCheckFailedOutput = "Agent not live or unreachable" ) -// These are used to manage the "consul" service that's attached to every Consul -// server node in the catalog. const ( + // These are used to manage the "consul" service that's attached to every + // Consul server node in the catalog. ConsulServiceID = "consul" ConsulServiceName = "consul" + + // ConnectProxyServiceName is the name of the proxy services. + ConnectProxyServiceName = "connect-proxy" ) diff --git a/agent/structs/structs.go b/agent/structs/structs.go index 23cd41acc..65ec87024 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -416,6 +416,7 @@ func (s *ServiceNode) PartialClone() *ServiceNode { Node: s.Node, // Skip Address, see above. // Skip TaggedAddresses, see above. 
+ ServiceKind: s.ServiceKind, ServiceID: s.ServiceID, ServiceName: s.ServiceName, ServiceTags: tags, @@ -423,6 +424,7 @@ func (s *ServiceNode) PartialClone() *ServiceNode { ServicePort: s.ServicePort, ServiceMeta: nsmeta, ServiceEnableTagOverride: s.ServiceEnableTagOverride, + ServiceProxyDestination: s.ServiceProxyDestination, RaftIndex: RaftIndex{ CreateIndex: s.CreateIndex, ModifyIndex: s.ModifyIndex, diff --git a/agent/structs/testing_catalog.go b/agent/structs/testing_catalog.go new file mode 100644 index 000000000..8a002d380 --- /dev/null +++ b/agent/structs/testing_catalog.go @@ -0,0 +1,22 @@ +package structs + +import ( + "github.com/mitchellh/go-testing-interface" +) + +// TestRegisterRequestProxy returns a RegisterRequest for registering a +// Connect proxy. +func TestRegisterRequestProxy(t testing.T) *RegisterRequest { + return &RegisterRequest{ + Datacenter: "dc1", + Node: "foo", + Address: "127.0.0.1", + Service: &NodeService{ + Kind: ServiceKindConnectProxy, + Service: ConnectProxyServiceName, + Address: "127.0.0.2", + Port: 2222, + ProxyDestination: "web", + }, + } +} From 8777ff139c4340e9961259975d1a3f4cc347977e Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 6 Mar 2018 17:41:39 -0800 Subject: [PATCH 045/539] agent: test /v1/catalog/node/:node to list connect proxies --- agent/catalog_endpoint_test.go | 25 +++++++++++++++++++++ agent/consul/catalog_endpoint_test.go | 31 +++++++++++++++++++++++++++ 2 files changed, 56 insertions(+) diff --git a/agent/catalog_endpoint_test.go b/agent/catalog_endpoint_test.go index d3af9bf6d..4df9d4275 100644 --- a/agent/catalog_endpoint_test.go +++ b/agent/catalog_endpoint_test.go @@ -810,6 +810,31 @@ func TestCatalogNodeServices(t *testing.T) { } } +func TestCatalogNodeServices_ConnectProxy(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Register + args := structs.TestRegisterRequestProxy(t) + var out struct{} + assert.Nil(a.RPC("Catalog.Register", args, &out)) + + req, _ := http.NewRequest("GET", fmt.Sprintf( + "/v1/catalog/node/%s", args.Node), nil) + resp := httptest.NewRecorder() + obj, err := a.srv.CatalogNodeServices(resp, req) + assert.Nil(err) + assertIndex(t, resp) + + ns := obj.(*structs.NodeServices) + assert.Len(ns.Services, 1) + v := ns.Services[args.Service.Service] + assert.Equal(structs.ServiceKindConnectProxy, v.Kind) +} + func TestCatalogNodeServices_WanTranslation(t *testing.T) { t.Parallel() a1 := NewTestAgent(t.Name(), ` diff --git a/agent/consul/catalog_endpoint_test.go b/agent/consul/catalog_endpoint_test.go index db49875cb..572ff86bb 100644 --- a/agent/consul/catalog_endpoint_test.go +++ b/agent/consul/catalog_endpoint_test.go @@ -1681,6 +1681,37 @@ func TestCatalog_NodeServices(t *testing.T) { } } +func TestCatalog_NodeServices_ConnectProxy(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Register the service + args := structs.TestRegisterRequestProxy(t) + var out struct{} + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", args, &out)) + + // List + req := structs.NodeSpecificRequest{ + Datacenter: "dc1", + Node: args.Node, + } + var resp structs.IndexedNodeServices + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.NodeServices", &req, &resp)) + + assert.Len(resp.NodeServices.Services, 1) + v := 
resp.NodeServices.Services[args.Service.Service] + assert.Equal(structs.ServiceKindConnectProxy, v.Kind) + assert.Equal(args.Service.ProxyDestination, v.ProxyDestination) +} + // Used to check for a regression against a known bug func TestCatalog_Register_FailedCase1(t *testing.T) { t.Parallel() From 6cd9e0e37c450f6451d62ea28826cad3da0f6d7a Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 8 Mar 2018 10:54:05 -0800 Subject: [PATCH 046/539] agent: /v1/agent/services test with connect proxies (works w/ no change) --- agent/agent_endpoint_test.go | 27 +++++++++++++++++++++++++++ 1 file changed, 27 insertions(+) diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 1a62a8427..126994196 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -23,6 +23,7 @@ import ( "github.com/hashicorp/consul/types" "github.com/hashicorp/serf/serf" "github.com/pascaldekloe/goe/verify" + "github.com/stretchr/testify/assert" ) func makeReadOnlyAgentACL(t *testing.T, srv *HTTPServer) string { @@ -68,6 +69,32 @@ func TestAgent_Services(t *testing.T) { } } +func TestAgent_Services_ConnectProxy(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + srv1 := &structs.NodeService{ + Kind: structs.ServiceKindConnectProxy, + ID: structs.ConnectProxyServiceName, + Service: structs.ConnectProxyServiceName, + Port: 5000, + ProxyDestination: "db", + } + a.State.AddService(srv1, "") + + req, _ := http.NewRequest("GET", "/v1/agent/services", nil) + obj, err := a.srv.AgentServices(nil, req) + assert.Nil(err) + val := obj.(map[string]*structs.NodeService) + assert.Len(val, 1) + actual := val[structs.ConnectProxyServiceName] + assert.Equal(structs.ServiceKindConnectProxy, actual.Kind) + assert.Equal("db", actual.ProxyDestination) +} + func TestAgent_Services_ACLFilter(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), TestACLConfig()) From 8a728264830a06a7f31d6f1da316ac316fc1c4dc Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 8 Mar 2018 22:13:35 -0800 Subject: [PATCH 047/539] agent/consul: proxy registration and tests --- agent/consul/catalog_endpoint.go | 18 ++++++ agent/consul/catalog_endpoint_test.go | 81 +++++++++++++++++++++++++++ agent/structs/structs.go | 26 +++++++++ agent/structs/structs_test.go | 55 ++++++++++++++++++ agent/structs/testing_catalog.go | 20 ++++--- 5 files changed, 193 insertions(+), 7 deletions(-) diff --git a/agent/consul/catalog_endpoint.go b/agent/consul/catalog_endpoint.go index 0c1cbe3de..5cb30b9c3 100644 --- a/agent/consul/catalog_endpoint.go +++ b/agent/consul/catalog_endpoint.go @@ -47,6 +47,24 @@ func (c *Catalog) Register(args *structs.RegisterRequest, reply *struct{}) error // Handle a service registration. if args.Service != nil { + // Connect proxy specific logic + if args.Service.Kind == structs.ServiceKindConnectProxy { + // Name is optional, if it isn't set, we default to the + // proxy name. It actually MUST be this, but the validation + // below this will verify. + if args.Service.Service == "" { + args.Service.Service = fmt.Sprintf( + "%s-connect-proxy", args.Service.ProxyDestination) + } + } + + // Validate the service. This is in addition to the below since + // the above just hasn't been moved over yet. We should move it over + // in time. 
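+	// For connect-proxy services, Validate rejects an empty ProxyDestination
+	// and a zero Port (see NodeService.Validate in structs.go).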
+ if err := args.Service.Validate(); err != nil { + return err + } + // If no service id, but service name, use default if args.Service.ID == "" && args.Service.Service != "" { args.Service.ID = args.Service.Service diff --git a/agent/consul/catalog_endpoint_test.go b/agent/consul/catalog_endpoint_test.go index 572ff86bb..2399e9b2f 100644 --- a/agent/consul/catalog_endpoint_test.go +++ b/agent/consul/catalog_endpoint_test.go @@ -333,6 +333,87 @@ func TestCatalog_Register_ForwardDC(t *testing.T) { } } +func TestCatalog_Register_ConnectProxy(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + args := structs.TestRegisterRequestProxy(t) + + // Register + var out struct{} + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out)) + + // List + req := structs.ServiceSpecificRequest{ + Datacenter: "dc1", + ServiceName: args.Service.Service, + } + var resp structs.IndexedServiceNodes + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.ServiceNodes", &req, &resp)) + assert.Len(resp.ServiceNodes, 1) + v := resp.ServiceNodes[0] + assert.Equal(structs.ServiceKindConnectProxy, v.ServiceKind) + assert.Equal(args.Service.ProxyDestination, v.ServiceProxyDestination) +} + +// Test an invalid ConnectProxy. We don't need to exhaustively test because +// this is all tested in structs on the Validate method. +func TestCatalog_Register_ConnectProxy_invalid(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + args := structs.TestRegisterRequestProxy(t) + args.Service.ProxyDestination = "" + + // Register + var out struct{} + err := msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out) + assert.NotNil(err) + assert.Contains(err.Error(), "ProxyDestination") +} + +// Test registering a proxy with no name set, which should work. +func TestCatalog_Register_ConnectProxy_noName(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + args := structs.TestRegisterRequestProxy(t) + args.Service.Service = "" + + // Register + var out struct{} + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out)) + + // List + req := structs.ServiceSpecificRequest{ + Datacenter: "dc1", + ServiceName: fmt.Sprintf("%s-connect-proxy", args.Service.ProxyDestination), + } + var resp structs.IndexedServiceNodes + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.ServiceNodes", &req, &resp)) + assert.Len(resp.ServiceNodes, 1) + v := resp.ServiceNodes[0] + assert.Equal(structs.ServiceKindConnectProxy, v.ServiceKind) +} + func TestCatalog_Deregister(t *testing.T) { t.Parallel() dir1, s1 := testServer(t) diff --git a/agent/structs/structs.go b/agent/structs/structs.go index 65ec87024..e1ab91ab5 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -12,6 +12,7 @@ import ( "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/types" "github.com/hashicorp/go-msgpack/codec" + "github.com/hashicorp/go-multierror" "github.com/hashicorp/serf/coordinate" ) @@ -488,6 +489,31 @@ type NodeService struct { RaftIndex } +// Validate validates the node service configuration. +// +// NOTE(mitchellh): This currently only validates fields for a ConnectProxy. 
+// Historically validation has been directly in the Catalog.Register RPC. +// ConnectProxy validation was moved here for easier table testing, but +// other validation still exists in Catalog.Register. +func (s *NodeService) Validate() error { + var result error + + // ConnectProxy validation + if s.Kind == ServiceKindConnectProxy { + if strings.TrimSpace(s.ProxyDestination) == "" { + result = multierror.Append(result, fmt.Errorf( + "ProxyDestination must be non-empty for Connect proxy services")) + } + + if s.Port == 0 { + result = multierror.Append(result, fmt.Errorf( + "Port must be set for a Connect proxy")) + } + } + + return result +} + // IsSame checks if one NodeService is the same as another, without looking // at the Raft information (that's why we didn't call it IsEqual). This is // useful for seeing if an update would be idempotent for all the functional diff --git a/agent/structs/structs_test.go b/agent/structs/structs_test.go index dcb8e0c4e..972146d93 100644 --- a/agent/structs/structs_test.go +++ b/agent/structs/structs_test.go @@ -8,6 +8,7 @@ import ( "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/types" + "github.com/stretchr/testify/assert" ) func TestEncodeDecode(t *testing.T) { @@ -208,6 +209,60 @@ func TestStructs_ServiceNode_Conversions(t *testing.T) { } } +func TestStructs_NodeService_ValidateConnectProxy(t *testing.T) { + cases := []struct { + Name string + Modify func(*NodeService) + Err string + }{ + { + "valid", + func(x *NodeService) {}, + "", + }, + + { + "connect-proxy: no ProxyDestination", + func(x *NodeService) { x.ProxyDestination = "" }, + "ProxyDestination must be", + }, + + { + "connect-proxy: whitespace ProxyDestination", + func(x *NodeService) { x.ProxyDestination = " " }, + "ProxyDestination must be", + }, + + { + "connect-proxy: valid ProxyDestination", + func(x *NodeService) { x.ProxyDestination = "hello" }, + "", + }, + + { + "connect-proxy: no port set", + func(x *NodeService) { x.Port = 0 }, + "Port must", + }, + } + + for _, tc := range cases { + t.Run(tc.Name, func(t *testing.T) { + assert := assert.New(t) + ns := TestNodeServiceProxy(t) + tc.Modify(ns) + + err := ns.Validate() + assert.Equal(err != nil, tc.Err != "", err) + if err == nil { + return + } + + assert.Contains(strings.ToLower(err.Error()), strings.ToLower(tc.Err)) + }) + } +} + func TestStructs_NodeService_IsSame(t *testing.T) { ns := &NodeService{ ID: "node1", diff --git a/agent/structs/testing_catalog.go b/agent/structs/testing_catalog.go index 8a002d380..4a55f1e3d 100644 --- a/agent/structs/testing_catalog.go +++ b/agent/structs/testing_catalog.go @@ -11,12 +11,18 @@ func TestRegisterRequestProxy(t testing.T) *RegisterRequest { Datacenter: "dc1", Node: "foo", Address: "127.0.0.1", - Service: &NodeService{ - Kind: ServiceKindConnectProxy, - Service: ConnectProxyServiceName, - Address: "127.0.0.2", - Port: 2222, - ProxyDestination: "web", - }, + Service: TestNodeServiceProxy(t), + } +} + +// TestNodeServiceProxy returns a *NodeService representing a valid +// Connect proxy. 
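+// Tests typically start from this fixture and override individual fields
+// (for example ProxyDestination or Port) to exercise specific cases.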
+func TestNodeServiceProxy(t testing.T) *NodeService { + return &NodeService{ + Kind: ServiceKindConnectProxy, + Service: ConnectProxyServiceName, + Address: "127.0.0.2", + Port: 2222, + ProxyDestination: "web", } } From 200100d3f401d1a94954f8b1127e605a0c5680f5 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 8 Mar 2018 22:31:44 -0800 Subject: [PATCH 048/539] agent/consul: enforce ACL on ProxyDestination --- agent/consul/catalog_endpoint.go | 7 +++ agent/consul/catalog_endpoint_test.go | 61 +++++++++++++++++++++++++++ 2 files changed, 68 insertions(+) diff --git a/agent/consul/catalog_endpoint.go b/agent/consul/catalog_endpoint.go index 5cb30b9c3..f6fb9a91d 100644 --- a/agent/consul/catalog_endpoint.go +++ b/agent/consul/catalog_endpoint.go @@ -91,6 +91,13 @@ func (c *Catalog) Register(args *structs.RegisterRequest, reply *struct{}) error return acl.ErrPermissionDenied } } + + // Proxies must have write permission on their destination + if args.Service.Kind == structs.ServiceKindConnectProxy { + if rule != nil && !rule.ServiceWrite(args.Service.ProxyDestination, nil) { + return acl.ErrPermissionDenied + } + } } // Move the old format single check into the slice, and fixup IDs. diff --git a/agent/consul/catalog_endpoint_test.go b/agent/consul/catalog_endpoint_test.go index 2399e9b2f..e810f3f63 100644 --- a/agent/consul/catalog_endpoint_test.go +++ b/agent/consul/catalog_endpoint_test.go @@ -414,6 +414,67 @@ func TestCatalog_Register_ConnectProxy_noName(t *testing.T) { assert.Equal(structs.ServiceKindConnectProxy, v.ServiceKind) } +// Test that write is required for the proxy destination to register a proxy. +func TestCatalog_Register_ConnectProxy_ACLProxyDestination(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + c.ACLEnforceVersion8 = false + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create the ACL. 
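+	// The token below grants write only on service "foo"; a proxy
+	// registration must pass the ACL check for both its own service name
+	// and its ProxyDestination.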
+ arg := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: ` +service "foo" { + policy = "write" +} +`, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + var token string + assert.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &arg, &token)) + + // Register should fail because we don't have permission on the destination + args := structs.TestRegisterRequestProxy(t) + args.Service.Service = "foo" + args.Service.ProxyDestination = "bar" + args.WriteRequest.Token = token + var out struct{} + err := msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out) + assert.True(acl.IsErrPermissionDenied(err)) + + // Register should fail with the right destination but wrong name + args = structs.TestRegisterRequestProxy(t) + args.Service.Service = "bar" + args.Service.ProxyDestination = "foo" + args.WriteRequest.Token = token + err = msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out) + assert.True(acl.IsErrPermissionDenied(err)) + + // Register should work with the right destination + args = structs.TestRegisterRequestProxy(t) + args.Service.Service = "foo" + args.Service.ProxyDestination = "foo" + args.WriteRequest.Token = token + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out)) +} + func TestCatalog_Deregister(t *testing.T) { t.Parallel() dir1, s1 := testServer(t) From 06957f6d7ff67890cc8749bdaf4f7135651bc6c8 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 9 Mar 2018 08:11:39 -0800 Subject: [PATCH 049/539] agent/consul/state: ConnectServiceNodes --- agent/consul/state/catalog.go | 46 ++++++++++++++++++++++++++++++ agent/consul/state/catalog_test.go | 42 +++++++++++++++++++++++++++ 2 files changed, 88 insertions(+) diff --git a/agent/consul/state/catalog.go b/agent/consul/state/catalog.go index 2a81c1071..3eb733bbe 100644 --- a/agent/consul/state/catalog.go +++ b/agent/consul/state/catalog.go @@ -10,6 +10,10 @@ import ( "github.com/hashicorp/go-memdb" ) +const ( + servicesTableName = "services" +) + // nodesTableSchema returns a new table schema used for storing node // information. func nodesTableSchema() *memdb.TableSchema { @@ -87,6 +91,15 @@ func servicesTableSchema() *memdb.TableSchema { Lowercase: true, }, }, + "proxy_destination": &memdb.IndexSchema{ + Name: "proxy_destination", + AllowMissing: true, + Unique: false, + Indexer: &memdb.StringFieldIndex{ + Field: "ServiceProxyDestination", + Lowercase: true, + }, + }, }, } } @@ -839,6 +852,39 @@ func (s *Store) ServiceTagNodes(ws memdb.WatchSet, service string, tag string) ( return idx, results, nil } +// ConnectServiceNodes returns the nodes associated with a Connect +// compatible destination for the given service name. This will include +// both proxies and native integrations. +func (s *Store) ConnectServiceNodes(ws memdb.WatchSet, serviceName string) (uint64, structs.ServiceNodes, error) { + tx := s.db.Txn(false) + defer tx.Abort() + + // Get the table index. + idx := maxIndexForService(tx, serviceName, false) + + // Find all the proxies. When we support native integrations we'll have + // to perform another table lookup here. 
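+	// The proxy_destination index declared in servicesTableSchema above
+	// indexes ServiceProxyDestination, so this returns every proxy
+	// registered for serviceName.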
+ services, err := tx.Get(servicesTableName, "proxy_destination", serviceName) + if err != nil { + return 0, nil, fmt.Errorf("failed service lookup: %s", err) + } + ws.Add(services.WatchCh()) + + // Store them + var results structs.ServiceNodes + for service := services.Next(); service != nil; service = services.Next() { + results = append(results, service.(*structs.ServiceNode)) + } + + // Fill in the node details. + results, err = s.parseServiceNodes(tx, ws, results) + if err != nil { + return 0, nil, fmt.Errorf("failed parsing service nodes: %s", err) + } + + return idx, results, nil +} + // serviceTagFilter returns true (should filter) if the given service node // doesn't contain the given tag. func serviceTagFilter(sn *structs.ServiceNode, tag string) bool { diff --git a/agent/consul/state/catalog_test.go b/agent/consul/state/catalog_test.go index c057ebea6..1f20fb9b8 100644 --- a/agent/consul/state/catalog_test.go +++ b/agent/consul/state/catalog_test.go @@ -1572,6 +1572,48 @@ func TestStateStore_DeleteService(t *testing.T) { } } +func TestStateStore_ConnectServiceNodes(t *testing.T) { + assert := assert.New(t) + s := testStateStore(t) + + // Listing with no results returns an empty list. + ws := memdb.NewWatchSet() + idx, nodes, err := s.ConnectServiceNodes(ws, "db") + assert.Nil(err) + assert.Equal(idx, uint64(0)) + assert.Len(nodes, 0) + + // Create some nodes and services. + assert.Nil(s.EnsureNode(10, &structs.Node{Node: "foo", Address: "127.0.0.1"})) + assert.Nil(s.EnsureNode(11, &structs.Node{Node: "bar", Address: "127.0.0.2"})) + assert.Nil(s.EnsureService(12, "foo", &structs.NodeService{ID: "db", Service: "db", Tags: nil, Address: "", Port: 5000})) + assert.Nil(s.EnsureService(13, "bar", &structs.NodeService{ID: "api", Service: "api", Tags: nil, Address: "", Port: 5000})) + assert.Nil(s.EnsureService(14, "foo", &structs.NodeService{Kind: structs.ServiceKindConnectProxy, ID: "proxy", Service: "proxy", ProxyDestination: "db", Port: 8000})) + assert.Nil(s.EnsureService(15, "bar", &structs.NodeService{Kind: structs.ServiceKindConnectProxy, ID: "proxy", Service: "proxy", ProxyDestination: "db", Port: 8000})) + assert.Nil(s.EnsureService(16, "bar", &structs.NodeService{ID: "db2", Service: "db", Tags: []string{"slave"}, Address: "", Port: 8001})) + assert.True(watchFired(ws)) + + // Read everything back. + ws = memdb.NewWatchSet() + idx, nodes, err = s.ConnectServiceNodes(ws, "db") + assert.Nil(err) + assert.Equal(idx, uint64(idx)) + assert.Len(nodes, 2) + + for _, n := range nodes { + assert.Equal(structs.ServiceKindConnectProxy, n.ServiceKind) + assert.Equal("db", n.ServiceProxyDestination) + } + + // Registering some unrelated node should not fire the watch. + testRegisterNode(t, s, 17, "nope") + assert.False(watchFired(ws)) + + // But removing a node with the "db" service should fire the watch. 
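+	// Node "bar" holds one of the proxies whose destination is "db".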
+ assert.Nil(s.DeleteNode(18, "bar")) + assert.True(watchFired(ws)) +} + func TestStateStore_Service_Snapshot(t *testing.T) { s := testStateStore(t) From 253256352cc99b825726de6096d3a116949f059c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 9 Mar 2018 08:34:55 -0800 Subject: [PATCH 050/539] agent/consul: Catalog.ServiceNodes supports Connect filtering --- agent/consul/catalog_endpoint.go | 48 +++++++++++++++++-------- agent/consul/catalog_endpoint_test.go | 51 +++++++++++++++++++++++++++ agent/structs/structs.go | 4 +++ agent/structs/testing_catalog.go | 14 ++++++++ 4 files changed, 103 insertions(+), 14 deletions(-) diff --git a/agent/consul/catalog_endpoint.go b/agent/consul/catalog_endpoint.go index f6fb9a91d..840b97fa6 100644 --- a/agent/consul/catalog_endpoint.go +++ b/agent/consul/catalog_endpoint.go @@ -269,24 +269,37 @@ func (c *Catalog) ServiceNodes(args *structs.ServiceSpecificRequest, reply *stru return fmt.Errorf("Must provide service name") } + // Determine the function we'll call + var f func(memdb.WatchSet, *state.Store) (uint64, structs.ServiceNodes, error) + switch { + case args.Connect: + f = func(ws memdb.WatchSet, s *state.Store) (uint64, structs.ServiceNodes, error) { + return s.ConnectServiceNodes(ws, args.ServiceName) + } + + default: + f = func(ws memdb.WatchSet, s *state.Store) (uint64, structs.ServiceNodes, error) { + if args.ServiceAddress != "" { + return s.ServiceAddressNodes(ws, args.ServiceAddress) + } + + if args.TagFilter { + return s.ServiceTagNodes(ws, args.ServiceName, args.ServiceTag) + } + + return s.ServiceNodes(ws, args.ServiceName) + } + } + err := c.srv.blockingQuery( &args.QueryOptions, &reply.QueryMeta, func(ws memdb.WatchSet, state *state.Store) error { - var index uint64 - var services structs.ServiceNodes - var err error - if args.TagFilter { - index, services, err = state.ServiceTagNodes(ws, args.ServiceName, args.ServiceTag) - } else { - index, services, err = state.ServiceNodes(ws, args.ServiceName) - } - if args.ServiceAddress != "" { - index, services, err = state.ServiceAddressNodes(ws, args.ServiceAddress) - } + index, services, err := f(ws, state) if err != nil { return err } + reply.Index, reply.ServiceNodes = index, services if len(args.NodeMetaFilters) > 0 { var filtered structs.ServiceNodes @@ -305,17 +318,24 @@ func (c *Catalog) ServiceNodes(args *structs.ServiceSpecificRequest, reply *stru // Provide some metrics if err == nil { - metrics.IncrCounterWithLabels([]string{"catalog", "service", "query"}, 1, + // For metrics, we separate Connect-based lookups from non-Connect + key := "service" + if args.Connect { + key = "connect" + } + + metrics.IncrCounterWithLabels([]string{"catalog", key, "query"}, 1, []metrics.Label{{Name: "service", Value: args.ServiceName}}) if args.ServiceTag != "" { - metrics.IncrCounterWithLabels([]string{"catalog", "service", "query-tag"}, 1, + metrics.IncrCounterWithLabels([]string{"catalog", key, "query-tag"}, 1, []metrics.Label{{Name: "service", Value: args.ServiceName}, {Name: "tag", Value: args.ServiceTag}}) } if len(reply.ServiceNodes) == 0 { - metrics.IncrCounterWithLabels([]string{"catalog", "service", "not-found"}, 1, + metrics.IncrCounterWithLabels([]string{"catalog", key, "not-found"}, 1, []metrics.Label{{Name: "service", Value: args.ServiceName}}) } } + return err } diff --git a/agent/consul/catalog_endpoint_test.go b/agent/consul/catalog_endpoint_test.go index e810f3f63..b095c3f3a 100644 --- a/agent/consul/catalog_endpoint_test.go +++ b/agent/consul/catalog_endpoint_test.go @@ 
-1773,6 +1773,57 @@ func TestCatalog_ListServiceNodes_ConnectProxy(t *testing.T) { assert.Equal(args.Service.ProxyDestination, v.ServiceProxyDestination) } +func TestCatalog_ListServiceNodes_ConnectDestination(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Register the proxy service + args := structs.TestRegisterRequestProxy(t) + var out struct{} + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", args, &out)) + + // Register the service + { + dst := args.Service.ProxyDestination + args := structs.TestRegisterRequest(t) + args.Service.Service = dst + var out struct{} + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", args, &out)) + } + + // List + req := structs.ServiceSpecificRequest{ + Connect: true, + Datacenter: "dc1", + ServiceName: args.Service.ProxyDestination, + } + var resp structs.IndexedServiceNodes + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.ServiceNodes", &req, &resp)) + assert.Len(resp.ServiceNodes, 1) + v := resp.ServiceNodes[0] + assert.Equal(structs.ServiceKindConnectProxy, v.ServiceKind) + assert.Equal(args.Service.ProxyDestination, v.ServiceProxyDestination) + + // List by non-Connect + req = structs.ServiceSpecificRequest{ + Datacenter: "dc1", + ServiceName: args.Service.ProxyDestination, + } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.ServiceNodes", &req, &resp)) + assert.Len(resp.ServiceNodes, 1) + v = resp.ServiceNodes[0] + assert.Equal(args.Service.ProxyDestination, v.ServiceName) + assert.Equal("", v.ServiceProxyDestination) +} + func TestCatalog_NodeServices(t *testing.T) { t.Parallel() dir1, s1 := testServer(t) diff --git a/agent/structs/structs.go b/agent/structs/structs.go index e1ab91ab5..4301c7e93 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -284,6 +284,10 @@ type ServiceSpecificRequest struct { ServiceAddress string TagFilter bool // Controls tag filtering Source QuerySource + + // Connect if true will only search for Connect-compatible services. + Connect bool + QueryOptions } diff --git a/agent/structs/testing_catalog.go b/agent/structs/testing_catalog.go index 4a55f1e3d..d61266ad5 100644 --- a/agent/structs/testing_catalog.go +++ b/agent/structs/testing_catalog.go @@ -4,6 +4,20 @@ import ( "github.com/mitchellh/go-testing-interface" ) +// TestRegisterRequest returns a RegisterRequest for registering a typical service. +func TestRegisterRequest(t testing.T) *RegisterRequest { + return &RegisterRequest{ + Datacenter: "dc1", + Node: "foo", + Address: "127.0.0.1", + Service: &NodeService{ + Service: "web", + Address: "", + Port: 80, + }, + } +} + // TestRegisterRequestProxy returns a RegisterRequest for registering a // Connect proxy. 
func TestRegisterRequestProxy(t testing.T) *RegisterRequest { From fa4f0d353b6b229d76b1636b6fe8dbf976450076 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 9 Mar 2018 08:43:17 -0800 Subject: [PATCH 051/539] agent: /v1/catalog/connect/:service --- agent/catalog_endpoint.go | 49 ++++++++++++++++++++++++++++++++++ agent/catalog_endpoint_test.go | 24 +++++++++++++++++ agent/http_oss.go | 1 + 3 files changed, 74 insertions(+) diff --git a/agent/catalog_endpoint.go b/agent/catalog_endpoint.go index 0088741e1..86e4e95ee 100644 --- a/agent/catalog_endpoint.go +++ b/agent/catalog_endpoint.go @@ -217,6 +217,55 @@ RETRY_ONCE: return out.ServiceNodes, nil } +func (s *HTTPServer) CatalogConnectServiceNodes(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + metrics.IncrCounterWithLabels([]string{"client", "api", "catalog_connect_service_nodes"}, 1, + []metrics.Label{{Name: "node", Value: s.nodeName()}}) + if req.Method != "GET" { + return nil, MethodNotAllowedError{req.Method, []string{"GET"}} + } + + // Set default DC + args := structs.ServiceSpecificRequest{Connect: true} + s.parseSource(req, &args.Source) + args.NodeMetaFilters = s.parseMetaFilter(req) + if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { + return nil, nil + } + + // Pull out the service name + args.ServiceName = strings.TrimPrefix(req.URL.Path, "/v1/catalog/connect/") + if args.ServiceName == "" { + resp.WriteHeader(http.StatusBadRequest) + fmt.Fprint(resp, "Missing service name") + return nil, nil + } + + // Make the RPC request + var out structs.IndexedServiceNodes + defer setMeta(resp, &out.QueryMeta) + if err := s.agent.RPC("Catalog.ServiceNodes", &args, &out); err != nil { + metrics.IncrCounterWithLabels([]string{"client", "rpc", "error", "catalog_connect_service_nodes"}, 1, + []metrics.Label{{Name: "node", Value: s.nodeName()}}) + return nil, err + } + s.agent.TranslateAddresses(args.Datacenter, out.ServiceNodes) + + // Use empty list instead of nil + if out.ServiceNodes == nil { + out.ServiceNodes = make(structs.ServiceNodes, 0) + } + for i, s := range out.ServiceNodes { + if s.ServiceTags == nil { + clone := *s + clone.ServiceTags = make([]string, 0) + out.ServiceNodes[i] = &clone + } + } + metrics.IncrCounterWithLabels([]string{"client", "api", "success", "catalog_connect_service_nodes"}, 1, + []metrics.Label{{Name: "node", Value: s.nodeName()}}) + return out.ServiceNodes, nil +} + func (s *HTTPServer) CatalogNodeServices(resp http.ResponseWriter, req *http.Request) (interface{}, error) { metrics.IncrCounterWithLabels([]string{"client", "api", "catalog_node_services"}, 1, []metrics.Label{{Name: "node", Value: s.nodeName()}}) diff --git a/agent/catalog_endpoint_test.go b/agent/catalog_endpoint_test.go index 4df9d4275..71c848ede 100644 --- a/agent/catalog_endpoint_test.go +++ b/agent/catalog_endpoint_test.go @@ -775,6 +775,30 @@ func TestCatalogServiceNodes_ConnectProxy(t *testing.T) { assert.Equal(structs.ServiceKindConnectProxy, nodes[0].ServiceKind) } +func TestCatalogConnectServiceNodes_good(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Register + args := structs.TestRegisterRequestProxy(t) + var out struct{} + assert.Nil(a.RPC("Catalog.Register", args, &out)) + + req, _ := http.NewRequest("GET", fmt.Sprintf( + "/v1/catalog/connect/%s", args.Service.ProxyDestination), nil) + resp := httptest.NewRecorder() + obj, err := a.srv.CatalogConnectServiceNodes(resp, req) + assert.Nil(err) + 
assertIndex(t, resp) + + nodes := obj.(structs.ServiceNodes) + assert.Len(nodes, 1) + assert.Equal(structs.ServiceKindConnectProxy, nodes[0].ServiceKind) +} + func TestCatalogNodeServices(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), "") diff --git a/agent/http_oss.go b/agent/http_oss.go index d3bb7adc4..185c8c1e0 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -33,6 +33,7 @@ func init() { registerEndpoint("/v1/agent/service/deregister/", []string{"PUT"}, (*HTTPServer).AgentDeregisterService) registerEndpoint("/v1/agent/service/maintenance/", []string{"PUT"}, (*HTTPServer).AgentServiceMaintenance) registerEndpoint("/v1/catalog/register", []string{"PUT"}, (*HTTPServer).CatalogRegister) + registerEndpoint("/v1/catalog/connect/", []string{"GET"}, (*HTTPServer).CatalogConnectServiceNodes) registerEndpoint("/v1/catalog/deregister", []string{"PUT"}, (*HTTPServer).CatalogDeregister) registerEndpoint("/v1/catalog/datacenters", []string{"GET"}, (*HTTPServer).CatalogDatacenters) registerEndpoint("/v1/catalog/nodes", []string{"GET"}, (*HTTPServer).CatalogNodes) From a5fe6204d5b986d3ba35d61eb55087fc2dc6518c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 9 Mar 2018 09:09:21 -0800 Subject: [PATCH 052/539] agent: working DNS for Connect queries, I think, but have to implement Health endpoints to be sure --- agent/dns.go | 23 ++++++++++++++++------- agent/dns_test.go | 43 +++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 59 insertions(+), 7 deletions(-) diff --git a/agent/dns.go b/agent/dns.go index 1d3c46d97..e014c1330 100644 --- a/agent/dns.go +++ b/agent/dns.go @@ -337,7 +337,7 @@ func (d *DNSServer) addSOA(msg *dns.Msg) { // nameservers returns the names and ip addresses of up to three random servers // in the current cluster which serve as authoritative name servers for zone. func (d *DNSServer) nameservers(edns bool) (ns []dns.RR, extra []dns.RR) { - out, err := d.lookupServiceNodes(d.agent.config.Datacenter, structs.ConsulServiceName, "") + out, err := d.lookupServiceNodes(d.agent.config.Datacenter, structs.ConsulServiceName, "", false) if err != nil { d.logger.Printf("[WARN] dns: Unable to get list of servers: %s", err) return nil, nil @@ -415,7 +415,7 @@ PARSE: n = n + 1 } - switch labels[n-1] { + switch kind := labels[n-1]; kind { case "service": if n == 1 { goto INVALID @@ -433,7 +433,7 @@ PARSE: } // _name._tag.service.consul - d.serviceLookup(network, datacenter, labels[n-3][1:], tag, req, resp) + d.serviceLookup(network, datacenter, labels[n-3][1:], tag, false, req, resp) // Consul 0.3 and prior format for SRV queries } else { @@ -445,9 +445,17 @@ PARSE: } // tag[.tag].name.service.consul - d.serviceLookup(network, datacenter, labels[n-2], tag, req, resp) + d.serviceLookup(network, datacenter, labels[n-2], tag, false, req, resp) } + case "connect": + if n == 1 { + goto INVALID + } + + // name.connect.consul + d.serviceLookup(network, datacenter, labels[n-2], "", true, req, resp) + case "node": if n == 1 { goto INVALID @@ -898,8 +906,9 @@ func (d *DNSServer) trimDNSResponse(network string, req, resp *dns.Msg) (trimmed } // lookupServiceNodes returns nodes with a given service. 
-func (d *DNSServer) lookupServiceNodes(datacenter, service, tag string) (structs.IndexedCheckServiceNodes, error) { +func (d *DNSServer) lookupServiceNodes(datacenter, service, tag string, connect bool) (structs.IndexedCheckServiceNodes, error) { args := structs.ServiceSpecificRequest{ + Connect: connect, Datacenter: datacenter, ServiceName: service, ServiceTag: tag, @@ -935,8 +944,8 @@ func (d *DNSServer) lookupServiceNodes(datacenter, service, tag string) (structs } // serviceLookup is used to handle a service query -func (d *DNSServer) serviceLookup(network, datacenter, service, tag string, req, resp *dns.Msg) { - out, err := d.lookupServiceNodes(datacenter, service, tag) +func (d *DNSServer) serviceLookup(network, datacenter, service, tag string, connect bool, req, resp *dns.Msg) { + out, err := d.lookupServiceNodes(datacenter, service, tag, connect) if err != nil { d.logger.Printf("[ERR] dns: rpc error: %v", err) resp.SetRcode(req, dns.RcodeServerFailure) diff --git a/agent/dns_test.go b/agent/dns_test.go index 41aca8e0e..5d1082888 100644 --- a/agent/dns_test.go +++ b/agent/dns_test.go @@ -17,6 +17,7 @@ import ( "github.com/hashicorp/serf/coordinate" "github.com/miekg/dns" "github.com/pascaldekloe/goe/verify" + "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) @@ -1041,6 +1042,48 @@ func TestDNS_ServiceLookupWithInternalServiceAddress(t *testing.T) { verify.Values(t, "extra", in.Extra, wantExtra) } +func TestDNS_ConnectServiceLookup(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Register a node with an external service. + { + args := structs.TestRegisterRequestProxy(t) + args.Service.ProxyDestination = "db" + args.Service.Port = 12345 + var out struct{} + assert.Nil(a.RPC("Catalog.Register", args, &out)) + } + + // Look up the service + questions := []string{ + "db.connect.consul.", + } + for _, question := range questions { + m := new(dns.Msg) + m.SetQuestion(question, dns.TypeSRV) + + c := new(dns.Client) + in, _, err := c.Exchange(m, a.DNSAddr()) + assert.Nil(err) + assert.Len(in.Answer, 1) + + srvRec, ok := in.Answer[0].(*dns.SRV) + assert.True(ok) + assert.Equal(12345, srvRec.Port) + assert.Equal("foo.node.dc1.consul.", srvRec.Target) + assert.Equal(0, srvRec.Hdr.Ttl) + + cnameRec, ok := in.Extra[0].(*dns.CNAME) + assert.True(ok) + assert.Equal("foo.node.dc1.consul.", cnameRec.Hdr.Name) + assert.Equal(0, srvRec.Hdr.Ttl) + } +} + func TestDNS_ExternalServiceLookup(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), "") From 119ffe3ed91c748bd3dd317eb9f76516835521d4 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 9 Mar 2018 09:32:22 -0800 Subject: [PATCH 053/539] agent/consul: implement Health.ServiceNodes for Connect, DNS works --- agent/consul/health_endpoint.go | 40 ++++++++++++++++++++-------- agent/consul/state/catalog.go | 24 ++++++++++++++++- agent/consul/state/catalog_test.go | 42 ++++++++++++++++++++++++++++++ agent/dns_test.go | 9 ++++--- 4 files changed, 99 insertions(+), 16 deletions(-) diff --git a/agent/consul/health_endpoint.go b/agent/consul/health_endpoint.go index db59356c8..70cc2e37d 100644 --- a/agent/consul/health_endpoint.go +++ b/agent/consul/health_endpoint.go @@ -111,18 +111,30 @@ func (h *Health) ServiceNodes(args *structs.ServiceSpecificRequest, reply *struc return fmt.Errorf("Must provide service name") } + // Determine the function we'll call + var f func(memdb.WatchSet, *state.Store) (uint64, structs.CheckServiceNodes, error) + switch 
{ + case args.Connect: + f = func(ws memdb.WatchSet, s *state.Store) (uint64, structs.CheckServiceNodes, error) { + return s.CheckConnectServiceNodes(ws, args.ServiceName) + } + + case args.TagFilter: + f = func(ws memdb.WatchSet, s *state.Store) (uint64, structs.CheckServiceNodes, error) { + return s.CheckServiceTagNodes(ws, args.ServiceName, args.ServiceTag) + } + + default: + f = func(ws memdb.WatchSet, s *state.Store) (uint64, structs.CheckServiceNodes, error) { + return s.CheckServiceNodes(ws, args.ServiceName) + } + } + err := h.srv.blockingQuery( &args.QueryOptions, &reply.QueryMeta, func(ws memdb.WatchSet, state *state.Store) error { - var index uint64 - var nodes structs.CheckServiceNodes - var err error - if args.TagFilter { - index, nodes, err = state.CheckServiceTagNodes(ws, args.ServiceName, args.ServiceTag) - } else { - index, nodes, err = state.CheckServiceNodes(ws, args.ServiceName) - } + index, nodes, err := f(ws, state) if err != nil { return err } @@ -139,14 +151,20 @@ func (h *Health) ServiceNodes(args *structs.ServiceSpecificRequest, reply *struc // Provide some metrics if err == nil { - metrics.IncrCounterWithLabels([]string{"health", "service", "query"}, 1, + // For metrics, we separate Connect-based lookups from non-Connect + key := "service" + if args.Connect { + key = "connect" + } + + metrics.IncrCounterWithLabels([]string{"health", key, "query"}, 1, []metrics.Label{{Name: "service", Value: args.ServiceName}}) if args.ServiceTag != "" { - metrics.IncrCounterWithLabels([]string{"health", "service", "query-tag"}, 1, + metrics.IncrCounterWithLabels([]string{"health", key, "query-tag"}, 1, []metrics.Label{{Name: "service", Value: args.ServiceName}, {Name: "tag", Value: args.ServiceTag}}) } if len(reply.Nodes) == 0 { - metrics.IncrCounterWithLabels([]string{"health", "service", "not-found"}, 1, + metrics.IncrCounterWithLabels([]string{"health", key, "not-found"}, 1, []metrics.Label{{Name: "service", Value: args.ServiceName}}) } } diff --git a/agent/consul/state/catalog.go b/agent/consul/state/catalog.go index 3eb733bbe..2ce2da36b 100644 --- a/agent/consul/state/catalog.go +++ b/agent/consul/state/catalog.go @@ -1525,14 +1525,36 @@ func (s *Store) deleteCheckTxn(tx *memdb.Txn, idx uint64, node string, checkID t // CheckServiceNodes is used to query all nodes and checks for a given service. func (s *Store) CheckServiceNodes(ws memdb.WatchSet, serviceName string) (uint64, structs.CheckServiceNodes, error) { + return s.checkServiceNodes(ws, serviceName, false) +} + +// CheckConnectServiceNodes is used to query all nodes and checks for Connect +// compatible endpoints for a given service. +func (s *Store) CheckConnectServiceNodes(ws memdb.WatchSet, serviceName string) (uint64, structs.CheckServiceNodes, error) { + return s.checkServiceNodes(ws, serviceName, true) +} + +func (s *Store) checkServiceNodes(ws memdb.WatchSet, serviceName string, connect bool) (uint64, structs.CheckServiceNodes, error) { tx := s.db.Txn(false) defer tx.Abort() // Get the table index. idx := maxIndexForService(tx, serviceName, true) + // Function for lookup + var f func() (memdb.ResultIterator, error) + if !connect { + f = func() (memdb.ResultIterator, error) { + return tx.Get("services", "service", serviceName) + } + } else { + f = func() (memdb.ResultIterator, error) { + return tx.Get("services", "proxy_destination", serviceName) + } + } + // Query the state store for the service. 
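+	// Connect lookups go through the proxy_destination index via f above;
+	// plain service lookups keep using the service index.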
- iter, err := tx.Get("services", "service", serviceName) + iter, err := f() if err != nil { return 0, nil, fmt.Errorf("failed service lookup: %s", err) } diff --git a/agent/consul/state/catalog_test.go b/agent/consul/state/catalog_test.go index 1f20fb9b8..9d771ca48 100644 --- a/agent/consul/state/catalog_test.go +++ b/agent/consul/state/catalog_test.go @@ -2529,6 +2529,48 @@ func TestStateStore_CheckServiceNodes(t *testing.T) { } } +func TestStateStore_CheckConnectServiceNodes(t *testing.T) { + assert := assert.New(t) + s := testStateStore(t) + + // Listing with no results returns an empty list. + ws := memdb.NewWatchSet() + idx, nodes, err := s.CheckConnectServiceNodes(ws, "db") + assert.Nil(err) + assert.Equal(idx, uint64(0)) + assert.Len(nodes, 0) + + // Create some nodes and services. + assert.Nil(s.EnsureNode(10, &structs.Node{Node: "foo", Address: "127.0.0.1"})) + assert.Nil(s.EnsureNode(11, &structs.Node{Node: "bar", Address: "127.0.0.2"})) + assert.Nil(s.EnsureService(12, "foo", &structs.NodeService{ID: "db", Service: "db", Tags: nil, Address: "", Port: 5000})) + assert.Nil(s.EnsureService(13, "bar", &structs.NodeService{ID: "api", Service: "api", Tags: nil, Address: "", Port: 5000})) + assert.Nil(s.EnsureService(14, "foo", &structs.NodeService{Kind: structs.ServiceKindConnectProxy, ID: "proxy", Service: "proxy", ProxyDestination: "db", Port: 8000})) + assert.Nil(s.EnsureService(15, "bar", &structs.NodeService{Kind: structs.ServiceKindConnectProxy, ID: "proxy", Service: "proxy", ProxyDestination: "db", Port: 8000})) + assert.Nil(s.EnsureService(16, "bar", &structs.NodeService{ID: "db2", Service: "db", Tags: []string{"slave"}, Address: "", Port: 8001})) + assert.True(watchFired(ws)) + + // Register node checks + testRegisterCheck(t, s, 17, "foo", "", "check1", api.HealthPassing) + testRegisterCheck(t, s, 18, "bar", "", "check2", api.HealthPassing) + + // Register checks against the services. + testRegisterCheck(t, s, 19, "foo", "db", "check3", api.HealthPassing) + testRegisterCheck(t, s, 20, "bar", "proxy", "check4", api.HealthPassing) + + // Read everything back. 
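+	// Expect the two proxies registered with destination "db" (on nodes
+	// foo and bar), each paired with its health checks.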
+ ws = memdb.NewWatchSet() + idx, nodes, err = s.CheckConnectServiceNodes(ws, "db") + assert.Nil(err) + assert.Equal(idx, uint64(idx)) + assert.Len(nodes, 2) + + for _, n := range nodes { + assert.Equal(structs.ServiceKindConnectProxy, n.Service.Kind) + assert.Equal("db", n.Service.ProxyDestination) + } +} + func BenchmarkCheckServiceNodes(b *testing.B) { s, err := NewStateStore(nil) if err != nil { diff --git a/agent/dns_test.go b/agent/dns_test.go index 5d1082888..a501a9c9f 100644 --- a/agent/dns_test.go +++ b/agent/dns_test.go @@ -1053,6 +1053,7 @@ func TestDNS_ConnectServiceLookup(t *testing.T) { { args := structs.TestRegisterRequestProxy(t) args.Service.ProxyDestination = "db" + args.Service.Address = "" args.Service.Port = 12345 var out struct{} assert.Nil(a.RPC("Catalog.Register", args, &out)) @@ -1073,14 +1074,14 @@ func TestDNS_ConnectServiceLookup(t *testing.T) { srvRec, ok := in.Answer[0].(*dns.SRV) assert.True(ok) - assert.Equal(12345, srvRec.Port) + assert.Equal(uint16(12345), srvRec.Port) assert.Equal("foo.node.dc1.consul.", srvRec.Target) - assert.Equal(0, srvRec.Hdr.Ttl) + assert.Equal(uint32(0), srvRec.Hdr.Ttl) - cnameRec, ok := in.Extra[0].(*dns.CNAME) + cnameRec, ok := in.Extra[0].(*dns.A) assert.True(ok) assert.Equal("foo.node.dc1.consul.", cnameRec.Hdr.Name) - assert.Equal(0, srvRec.Hdr.Ttl) + assert.Equal(uint32(0), srvRec.Hdr.Ttl) } } From 3d82d261bd65e3a21cad3e1be92f9b69f2c53967 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 9 Mar 2018 09:52:32 -0800 Subject: [PATCH 054/539] agent: /v1/health/connect/:service --- agent/health_endpoint.go | 22 +++++++- agent/health_endpoint_test.go | 100 ++++++++++++++++++++++++++++++++++ agent/http_oss.go | 1 + 3 files changed, 121 insertions(+), 2 deletions(-) diff --git a/agent/health_endpoint.go b/agent/health_endpoint.go index 9c0aac2b6..e57b5f48b 100644 --- a/agent/health_endpoint.go +++ b/agent/health_endpoint.go @@ -143,9 +143,21 @@ RETRY_ONCE: return out.HealthChecks, nil } +func (s *HTTPServer) HealthConnectServiceNodes(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + return s.healthServiceNodes(resp, req, true) +} + func (s *HTTPServer) HealthServiceNodes(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + return s.healthServiceNodes(resp, req, false) +} + +func (s *HTTPServer) healthServiceNodes(resp http.ResponseWriter, req *http.Request, connect bool) (interface{}, error) { + if req.Method != "GET" { + return nil, MethodNotAllowedError{req.Method, []string{"GET"}} + } + // Set default DC - args := structs.ServiceSpecificRequest{} + args := structs.ServiceSpecificRequest{Connect: connect} s.parseSource(req, &args.Source) args.NodeMetaFilters = s.parseMetaFilter(req) if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { @@ -159,8 +171,14 @@ func (s *HTTPServer) HealthServiceNodes(resp http.ResponseWriter, req *http.Requ args.TagFilter = true } + // Determine the prefix + prefix := "/v1/health/service/" + if connect { + prefix = "/v1/health/connect/" + } + // Pull out the service name - args.ServiceName = strings.TrimPrefix(req.URL.Path, "/v1/health/service/") + args.ServiceName = strings.TrimPrefix(req.URL.Path, prefix) if args.ServiceName == "" { resp.WriteHeader(http.StatusBadRequest) fmt.Fprint(resp, "Missing service name") diff --git a/agent/health_endpoint_test.go b/agent/health_endpoint_test.go index 5d2ae1445..688924df1 100644 --- a/agent/health_endpoint_test.go +++ b/agent/health_endpoint_test.go @@ -13,6 +13,7 @@ import ( 
"github.com/hashicorp/consul/api" "github.com/hashicorp/consul/testutil/retry" "github.com/hashicorp/serf/coordinate" + "github.com/stretchr/testify/assert" ) func TestHealthChecksInState(t *testing.T) { @@ -770,6 +771,105 @@ func TestHealthServiceNodes_WanTranslation(t *testing.T) { } } +func TestHealthConnectServiceNodes(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Register + args := structs.TestRegisterRequestProxy(t) + var out struct{} + assert.Nil(a.RPC("Catalog.Register", args, &out)) + + // Request + req, _ := http.NewRequest("GET", fmt.Sprintf( + "/v1/health/connect/%s?dc=dc1", args.Service.ProxyDestination), nil) + resp := httptest.NewRecorder() + obj, err := a.srv.HealthConnectServiceNodes(resp, req) + assert.Nil(err) + assertIndex(t, resp) + + // Should be a non-nil empty list for checks + nodes := obj.(structs.CheckServiceNodes) + assert.Len(nodes, 1) + assert.Len(nodes[0].Checks, 0) +} + +func TestHealthConnectServiceNodes_PassingFilter(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Register + args := structs.TestRegisterRequestProxy(t) + args.Check = &structs.HealthCheck{ + Node: args.Node, + Name: "check", + ServiceID: args.Service.Service, + Status: api.HealthCritical, + } + var out struct{} + assert.Nil(t, a.RPC("Catalog.Register", args, &out)) + + t.Run("bc_no_query_value", func(t *testing.T) { + assert := assert.New(t) + req, _ := http.NewRequest("GET", fmt.Sprintf( + "/v1/health/connect/%s?passing", args.Service.ProxyDestination), nil) + resp := httptest.NewRecorder() + obj, err := a.srv.HealthConnectServiceNodes(resp, req) + assert.Nil(err) + assertIndex(t, resp) + + // Should be 0 health check for consul + nodes := obj.(structs.CheckServiceNodes) + assert.Len(nodes, 0) + }) + + t.Run("passing_true", func(t *testing.T) { + assert := assert.New(t) + req, _ := http.NewRequest("GET", fmt.Sprintf( + "/v1/health/connect/%s?passing=true", args.Service.ProxyDestination), nil) + resp := httptest.NewRecorder() + obj, err := a.srv.HealthConnectServiceNodes(resp, req) + assert.Nil(err) + assertIndex(t, resp) + + // Should be 0 health check for consul + nodes := obj.(structs.CheckServiceNodes) + assert.Len(nodes, 0) + }) + + t.Run("passing_false", func(t *testing.T) { + assert := assert.New(t) + req, _ := http.NewRequest("GET", fmt.Sprintf( + "/v1/health/connect/%s?passing=false", args.Service.ProxyDestination), nil) + resp := httptest.NewRecorder() + obj, err := a.srv.HealthConnectServiceNodes(resp, req) + assert.Nil(err) + assertIndex(t, resp) + + // Should be 0 health check for consul + nodes := obj.(structs.CheckServiceNodes) + assert.Len(nodes, 1) + }) + + t.Run("passing_bad", func(t *testing.T) { + assert := assert.New(t) + req, _ := http.NewRequest("GET", fmt.Sprintf( + "/v1/health/connect/%s?passing=nope-nope", args.Service.ProxyDestination), nil) + resp := httptest.NewRecorder() + a.srv.HealthConnectServiceNodes(resp, req) + assert.Equal(400, resp.Code) + + body, err := ioutil.ReadAll(resp.Body) + assert.Nil(err) + assert.True(bytes.Contains(body, []byte("Invalid value for ?passing"))) + }) +} + func TestFilterNonPassing(t *testing.T) { t.Parallel() nodes := structs.CheckServiceNodes{ diff --git a/agent/http_oss.go b/agent/http_oss.go index 185c8c1e0..2e2c9751a 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -53,6 +53,7 @@ func init() { registerEndpoint("/v1/health/checks/", []string{"GET"}, (*HTTPServer).HealthServiceChecks) 
registerEndpoint("/v1/health/state/", []string{"GET"}, (*HTTPServer).HealthChecksInState) registerEndpoint("/v1/health/service/", []string{"GET"}, (*HTTPServer).HealthServiceNodes) + registerEndpoint("/v1/health/connect/", []string{"GET"}, (*HTTPServer).HealthConnectServiceNodes) registerEndpoint("/v1/internal/ui/nodes", []string{"GET"}, (*HTTPServer).UINodes) registerEndpoint("/v1/internal/ui/node/", []string{"GET"}, (*HTTPServer).UINodeInfo) registerEndpoint("/v1/internal/ui/services", []string{"GET"}, (*HTTPServer).UIServices) From daaa6e2403da3f475520d8dfe65c35e1c31d88b7 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 9 Mar 2018 10:01:42 -0800 Subject: [PATCH 055/539] agent: clean up connect/non-connect duplication by using shared methods --- agent/catalog_endpoint.go | 70 +++++++++-------------------------- agent/consul/state/catalog.go | 59 +++++++++++++---------------- 2 files changed, 43 insertions(+), 86 deletions(-) diff --git a/agent/catalog_endpoint.go b/agent/catalog_endpoint.go index 86e4e95ee..4c0fd8f52 100644 --- a/agent/catalog_endpoint.go +++ b/agent/catalog_endpoint.go @@ -157,12 +157,27 @@ RETRY_ONCE: return out.Services, nil } +func (s *HTTPServer) CatalogConnectServiceNodes(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + return s.catalogServiceNodes(resp, req, true) +} + func (s *HTTPServer) CatalogServiceNodes(resp http.ResponseWriter, req *http.Request) (interface{}, error) { - metrics.IncrCounterWithLabels([]string{"client", "api", "catalog_service_nodes"}, 1, + return s.catalogServiceNodes(resp, req, false) +} + +func (s *HTTPServer) catalogServiceNodes(resp http.ResponseWriter, req *http.Request, connect bool) (interface{}, error) { + metricsKey := "catalog_service_nodes" + pathPrefix := "/v1/catalog/service/" + if connect { + metricsKey = "catalog_connect_service_nodes" + pathPrefix = "/v1/catalog/connect/" + } + + metrics.IncrCounterWithLabels([]string{"client", "api", metricsKey}, 1, []metrics.Label{{Name: "node", Value: s.nodeName()}}) // Set default DC - args := structs.ServiceSpecificRequest{} + args := structs.ServiceSpecificRequest{Connect: connect} s.parseSource(req, &args.Source) args.NodeMetaFilters = s.parseMetaFilter(req) if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { @@ -177,7 +192,7 @@ func (s *HTTPServer) CatalogServiceNodes(resp http.ResponseWriter, req *http.Req } // Pull out the service name - args.ServiceName = strings.TrimPrefix(req.URL.Path, "/v1/catalog/service/") + args.ServiceName = strings.TrimPrefix(req.URL.Path, pathPrefix) if args.ServiceName == "" { resp.WriteHeader(http.StatusBadRequest) fmt.Fprint(resp, "Missing service name") @@ -217,55 +232,6 @@ RETRY_ONCE: return out.ServiceNodes, nil } -func (s *HTTPServer) CatalogConnectServiceNodes(resp http.ResponseWriter, req *http.Request) (interface{}, error) { - metrics.IncrCounterWithLabels([]string{"client", "api", "catalog_connect_service_nodes"}, 1, - []metrics.Label{{Name: "node", Value: s.nodeName()}}) - if req.Method != "GET" { - return nil, MethodNotAllowedError{req.Method, []string{"GET"}} - } - - // Set default DC - args := structs.ServiceSpecificRequest{Connect: true} - s.parseSource(req, &args.Source) - args.NodeMetaFilters = s.parseMetaFilter(req) - if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { - return nil, nil - } - - // Pull out the service name - args.ServiceName = strings.TrimPrefix(req.URL.Path, "/v1/catalog/connect/") - if args.ServiceName == "" { - 
resp.WriteHeader(http.StatusBadRequest) - fmt.Fprint(resp, "Missing service name") - return nil, nil - } - - // Make the RPC request - var out structs.IndexedServiceNodes - defer setMeta(resp, &out.QueryMeta) - if err := s.agent.RPC("Catalog.ServiceNodes", &args, &out); err != nil { - metrics.IncrCounterWithLabels([]string{"client", "rpc", "error", "catalog_connect_service_nodes"}, 1, - []metrics.Label{{Name: "node", Value: s.nodeName()}}) - return nil, err - } - s.agent.TranslateAddresses(args.Datacenter, out.ServiceNodes) - - // Use empty list instead of nil - if out.ServiceNodes == nil { - out.ServiceNodes = make(structs.ServiceNodes, 0) - } - for i, s := range out.ServiceNodes { - if s.ServiceTags == nil { - clone := *s - clone.ServiceTags = make([]string, 0) - out.ServiceNodes[i] = &clone - } - } - metrics.IncrCounterWithLabels([]string{"client", "api", "success", "catalog_connect_service_nodes"}, 1, - []metrics.Label{{Name: "node", Value: s.nodeName()}}) - return out.ServiceNodes, nil -} - func (s *HTTPServer) CatalogNodeServices(resp http.ResponseWriter, req *http.Request) (interface{}, error) { metrics.IncrCounterWithLabels([]string{"client", "api", "catalog_node_services"}, 1, []metrics.Label{{Name: "node", Value: s.nodeName()}}) diff --git a/agent/consul/state/catalog.go b/agent/consul/state/catalog.go index 2ce2da36b..90a3dc5eb 100644 --- a/agent/consul/state/catalog.go +++ b/agent/consul/state/catalog.go @@ -792,15 +792,39 @@ func maxIndexForService(tx *memdb.Txn, serviceName string, checks bool) uint64 { return maxIndexTxn(tx, "nodes", "services") } +// ConnectServiceNodes returns the nodes associated with a Connect +// compatible destination for the given service name. This will include +// both proxies and native integrations. +func (s *Store) ConnectServiceNodes(ws memdb.WatchSet, serviceName string) (uint64, structs.ServiceNodes, error) { + return s.serviceNodes(ws, serviceName, true) +} + // ServiceNodes returns the nodes associated with a given service name. func (s *Store) ServiceNodes(ws memdb.WatchSet, serviceName string) (uint64, structs.ServiceNodes, error) { + return s.serviceNodes(ws, serviceName, false) +} + +func (s *Store) serviceNodes(ws memdb.WatchSet, serviceName string, connect bool) (uint64, structs.ServiceNodes, error) { tx := s.db.Txn(false) defer tx.Abort() // Get the table index. idx := maxIndexForService(tx, serviceName, false) + + // Function for lookup + var f func() (memdb.ResultIterator, error) + if !connect { + f = func() (memdb.ResultIterator, error) { + return tx.Get("services", "service", serviceName) + } + } else { + f = func() (memdb.ResultIterator, error) { + return tx.Get("services", "proxy_destination", serviceName) + } + } + // List all the services. - services, err := tx.Get("services", "service", serviceName) + services, err := f() if err != nil { return 0, nil, fmt.Errorf("failed service lookup: %s", err) } @@ -852,39 +876,6 @@ func (s *Store) ServiceTagNodes(ws memdb.WatchSet, service string, tag string) ( return idx, results, nil } -// ConnectServiceNodes returns the nodes associated with a Connect -// compatible destination for the given service name. This will include -// both proxies and native integrations. -func (s *Store) ConnectServiceNodes(ws memdb.WatchSet, serviceName string) (uint64, structs.ServiceNodes, error) { - tx := s.db.Txn(false) - defer tx.Abort() - - // Get the table index. - idx := maxIndexForService(tx, serviceName, false) - - // Find all the proxies. 
When we support native integrations we'll have - // to perform another table lookup here. - services, err := tx.Get(servicesTableName, "proxy_destination", serviceName) - if err != nil { - return 0, nil, fmt.Errorf("failed service lookup: %s", err) - } - ws.Add(services.WatchCh()) - - // Store them - var results structs.ServiceNodes - for service := services.Next(); service != nil; service = services.Next() { - results = append(results, service.(*structs.ServiceNode)) - } - - // Fill in the node details. - results, err = s.parseServiceNodes(tx, ws, results) - if err != nil { - return 0, nil, fmt.Errorf("failed parsing service nodes: %s", err) - } - - return idx, results, nil -} - // serviceTagFilter returns true (should filter) if the given service node // doesn't contain the given tag. func serviceTagFilter(sn *structs.ServiceNode, tag string) bool { From c43ccd024ad3662f6791a1ac6e4b62deb5e857b2 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 9 Mar 2018 17:16:12 -0800 Subject: [PATCH 056/539] agent/local: anti-entropy for connect proxy services --- agent/agent_endpoint_test.go | 33 +++++ agent/local/state_test.go | 140 ++++++++++++++++++++ agent/structs/service_definition.go | 4 + agent/structs/testing_service_definition.go | 13 ++ 4 files changed, 190 insertions(+) create mode 100644 agent/structs/testing_service_definition.go diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 126994196..e30323007 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -1365,6 +1365,39 @@ func TestAgent_RegisterService_InvalidAddress(t *testing.T) { } } +func TestAgent_RegisterService_ConnectProxy(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + args := &structs.ServiceDefinition{ + Kind: structs.ServiceKindConnectProxy, + Name: "connect-proxy", + Port: 8000, + ProxyDestination: "db", + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=abc123", jsonReader(args)) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentRegisterService(resp, req) + assert.Nil(err) + assert.Nil(obj) + + // Ensure the servie + svc, ok := a.State.Services()["connect-proxy"] + assert.True(ok, "has service") + assert.Equal(structs.ServiceKindConnectProxy, svc.Kind) + assert.Equal("db", svc.ProxyDestination) + + // Ensure the token was configured + assert.Equal("abc123", a.State.ServiceToken("connect-proxy")) +} + func TestAgent_DeregisterService(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), "") diff --git a/agent/local/state_test.go b/agent/local/state_test.go index a6e9e1738..d0c006a95 100644 --- a/agent/local/state_test.go +++ b/agent/local/state_test.go @@ -16,6 +16,7 @@ import ( "github.com/hashicorp/consul/testutil/retry" "github.com/hashicorp/consul/types" "github.com/pascaldekloe/goe/verify" + "github.com/stretchr/testify/assert" ) func TestAgentAntiEntropy_Services(t *testing.T) { @@ -224,6 +225,145 @@ func TestAgentAntiEntropy_Services(t *testing.T) { } } +func TestAgentAntiEntropy_Services_ConnectProxy(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := &agent.TestAgent{Name: t.Name()} + a.Start() + defer a.Shutdown() + + // Register node info + var out struct{} + args := &structs.RegisterRequest{ + Datacenter: "dc1", + Node: a.Config.NodeName, + Address: "127.0.0.1", + } + + // Exists both same (noop) + srv1 := &structs.NodeService{ + Kind: structs.ServiceKindConnectProxy, + ID: 
"mysql-proxy", + Service: "mysql-proxy", + Port: 5000, + ProxyDestination: "db", + } + a.State.AddService(srv1, "") + args.Service = srv1 + assert.Nil(a.RPC("Catalog.Register", args, &out)) + + // Exists both, different (update) + srv2 := &structs.NodeService{ + ID: "redis-proxy", + Service: "redis-proxy", + Port: 8000, + Kind: structs.ServiceKindConnectProxy, + ProxyDestination: "redis", + } + a.State.AddService(srv2, "") + + srv2_mod := new(structs.NodeService) + *srv2_mod = *srv2 + srv2_mod.Port = 9000 + args.Service = srv2_mod + assert.Nil(a.RPC("Catalog.Register", args, &out)) + + // Exists local (create) + srv3 := &structs.NodeService{ + ID: "web-proxy", + Service: "web-proxy", + Port: 80, + Kind: structs.ServiceKindConnectProxy, + ProxyDestination: "web", + } + a.State.AddService(srv3, "") + + // Exists remote (delete) + srv4 := &structs.NodeService{ + ID: "lb-proxy", + Service: "lb-proxy", + Port: 443, + Kind: structs.ServiceKindConnectProxy, + ProxyDestination: "lb", + } + args.Service = srv4 + assert.Nil(a.RPC("Catalog.Register", args, &out)) + + // Exists local, in sync, remote missing (create) + srv5 := &structs.NodeService{ + ID: "cache-proxy", + Service: "cache-proxy", + Port: 11211, + Kind: structs.ServiceKindConnectProxy, + ProxyDestination: "cache-proxy", + } + a.State.SetServiceState(&local.ServiceState{ + Service: srv5, + InSync: true, + }) + + assert.Nil(a.State.SyncFull()) + + var services structs.IndexedNodeServices + req := structs.NodeSpecificRequest{ + Datacenter: "dc1", + Node: a.Config.NodeName, + } + assert.Nil(a.RPC("Catalog.NodeServices", &req, &services)) + + // We should have 5 services (consul included) + assert.Len(services.NodeServices.Services, 5) + + // All the services should match + for id, serv := range services.NodeServices.Services { + serv.CreateIndex, serv.ModifyIndex = 0, 0 + switch id { + case "mysql-proxy": + assert.Equal(srv1, serv) + case "redis-proxy": + assert.Equal(srv2, serv) + case "web-proxy": + assert.Equal(srv3, serv) + case "cache-proxy": + assert.Equal(srv5, serv) + case structs.ConsulServiceID: + // ignore + default: + t.Fatalf("unexpected service: %v", id) + } + } + + assert.Nil(servicesInSync(a.State, 4)) + + // Remove one of the services + a.State.RemoveService("cache-proxy") + assert.Nil(a.State.SyncFull()) + assert.Nil(a.RPC("Catalog.NodeServices", &req, &services)) + + // We should have 4 services (consul included) + assert.Len(services.NodeServices.Services, 4) + + // All the services should match + for id, serv := range services.NodeServices.Services { + serv.CreateIndex, serv.ModifyIndex = 0, 0 + switch id { + case "mysql-proxy": + assert.Equal(srv1, serv) + case "redis-proxy": + assert.Equal(srv2, serv) + case "web-proxy": + assert.Equal(srv3, serv) + case structs.ConsulServiceID: + // ignore + default: + t.Fatalf("unexpected service: %v", id) + } + } + + assert.Nil(servicesInSync(a.State, 3)) +} + func TestAgentAntiEntropy_EnableTagOverride(t *testing.T) { t.Parallel() a := &agent.TestAgent{Name: t.Name()} diff --git a/agent/structs/service_definition.go b/agent/structs/service_definition.go index 4dc8ccfca..d469ed5d7 100644 --- a/agent/structs/service_definition.go +++ b/agent/structs/service_definition.go @@ -2,6 +2,7 @@ package structs // ServiceDefinition is used to JSON decode the Service definitions type ServiceDefinition struct { + Kind ServiceKind ID string Name string Tags []string @@ -12,10 +13,12 @@ type ServiceDefinition struct { Checks CheckTypes Token string EnableTagOverride bool + ProxyDestination string 
} func (s *ServiceDefinition) NodeService() *NodeService { ns := &NodeService{ + Kind: s.Kind, ID: s.ID, Service: s.Name, Tags: s.Tags, @@ -23,6 +26,7 @@ func (s *ServiceDefinition) NodeService() *NodeService { Meta: s.Meta, Port: s.Port, EnableTagOverride: s.EnableTagOverride, + ProxyDestination: s.ProxyDestination, } if ns.ID == "" && ns.Service != "" { ns.ID = ns.Service diff --git a/agent/structs/testing_service_definition.go b/agent/structs/testing_service_definition.go new file mode 100644 index 000000000..b14e1e2ff --- /dev/null +++ b/agent/structs/testing_service_definition.go @@ -0,0 +1,13 @@ +package structs + +import ( + "github.com/mitchellh/go-testing-interface" +) + +// TestServiceDefinitionProxy returns a ServiceDefinition for a proxy. +func TestServiceDefinitionProxy(t testing.T) *ServiceDefinition { + return &ServiceDefinition{ + Kind: ServiceKindConnectProxy, + ProxyDestination: "db", + } +} From b5fd3017bb1947d90cdf4911910ac9d394179545 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 9 Mar 2018 17:21:26 -0800 Subject: [PATCH 057/539] agent/structs: tests for PartialClone and IsSame for proxy fields --- agent/structs/structs.go | 4 +++- agent/structs/structs_test.go | 6 ++++++ 2 files changed, 9 insertions(+), 1 deletion(-) diff --git a/agent/structs/structs.go b/agent/structs/structs.go index 4301c7e93..f8f339a5b 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -529,7 +529,9 @@ func (s *NodeService) IsSame(other *NodeService) bool { s.Address != other.Address || s.Port != other.Port || !reflect.DeepEqual(s.Meta, other.Meta) || - s.EnableTagOverride != other.EnableTagOverride { + s.EnableTagOverride != other.EnableTagOverride || + s.Kind != other.Kind || + s.ProxyDestination != other.ProxyDestination { return false } diff --git a/agent/structs/structs_test.go b/agent/structs/structs_test.go index 972146d93..077636be0 100644 --- a/agent/structs/structs_test.go +++ b/agent/structs/structs_test.go @@ -134,6 +134,7 @@ func testServiceNode() *ServiceNode { NodeMeta: map[string]string{ "tag": "value", }, + ServiceKind: ServiceKindTypical, ServiceID: "service1", ServiceName: "dogs", ServiceTags: []string{"prod", "v1"}, @@ -143,6 +144,7 @@ func testServiceNode() *ServiceNode { "service": "metadata", }, ServiceEnableTagOverride: true, + ServiceProxyDestination: "cats", RaftIndex: RaftIndex{ CreateIndex: 1, ModifyIndex: 2, @@ -275,6 +277,7 @@ func TestStructs_NodeService_IsSame(t *testing.T) { }, Port: 1234, EnableTagOverride: true, + ProxyDestination: "db", } if !ns.IsSame(ns) { t.Fatalf("should be equal to itself") @@ -292,6 +295,7 @@ func TestStructs_NodeService_IsSame(t *testing.T) { "meta2": "value2", "meta1": "value1", }, + ProxyDestination: "db", RaftIndex: RaftIndex{ CreateIndex: 1, ModifyIndex: 2, @@ -325,6 +329,8 @@ func TestStructs_NodeService_IsSame(t *testing.T) { check(func() { other.Port = 9999 }, func() { other.Port = 1234 }) check(func() { other.Meta["meta2"] = "wrongValue" }, func() { other.Meta["meta2"] = "value2" }) check(func() { other.EnableTagOverride = false }, func() { other.EnableTagOverride = true }) + check(func() { other.Kind = ServiceKindConnectProxy }, func() { other.Kind = "" }) + check(func() { other.ProxyDestination = "" }, func() { other.ProxyDestination = "db" }) } func TestStructs_HealthCheck_IsSame(t *testing.T) { From 4207bb42c077ba83db2aa21aa36f2adfc99ab9ea Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 10 Mar 2018 17:42:30 -0800 Subject: [PATCH 058/539] agent: validate service entry on 
register --- agent/agent_endpoint.go | 8 ++++++++ agent/agent_endpoint_test.go | 29 +++++++++++++++++++++++++++++ 2 files changed, 37 insertions(+) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 8d7728f1c..75d5807c0 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -554,6 +554,14 @@ func (s *HTTPServer) AgentRegisterService(resp http.ResponseWriter, req *http.Re return nil, nil } + // Run validation. This is the same validation that would happen on + // the catalog endpoint so it helps ensure the sync will work properly. + if err := ns.Validate(); err != nil { + resp.WriteHeader(http.StatusBadRequest) + fmt.Fprintf(resp, err.Error()) + return nil, nil + } + // Verify the check type. chkTypes, err := args.CheckTypes() if err != nil { diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index e30323007..05e8b6ca3 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -1398,6 +1398,35 @@ func TestAgent_RegisterService_ConnectProxy(t *testing.T) { assert.Equal("abc123", a.State.ServiceToken("connect-proxy")) } +func TestAgent_RegisterService_ConnectProxyInvalid(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + args := &structs.ServiceDefinition{ + Kind: structs.ServiceKindConnectProxy, + Name: "connect-proxy", + ProxyDestination: "db", + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=abc123", jsonReader(args)) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentRegisterService(resp, req) + assert.Nil(err) + assert.Nil(obj) + assert.Equal(http.StatusBadRequest, resp.Code) + assert.Contains(resp.Body.String(), "Port") + + // Ensure the service doesn't exist + _, ok := a.State.Services()["connect-proxy"] + assert.False(ok) +} + func TestAgent_DeregisterService(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), "") From 566c98b2fcb7bc0f6877e5078785cb57e5e5d30f Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 11 Mar 2018 09:11:10 -0700 Subject: [PATCH 059/539] agent/consul: require name for proxies --- agent/consul/catalog_endpoint.go | 11 ---------- agent/consul/catalog_endpoint_test.go | 30 --------------------------- agent/structs/structs.go | 7 +++++-- 3 files changed, 5 insertions(+), 43 deletions(-) diff --git a/agent/consul/catalog_endpoint.go b/agent/consul/catalog_endpoint.go index 840b97fa6..adde8e52e 100644 --- a/agent/consul/catalog_endpoint.go +++ b/agent/consul/catalog_endpoint.go @@ -47,17 +47,6 @@ func (c *Catalog) Register(args *structs.RegisterRequest, reply *struct{}) error // Handle a service registration. if args.Service != nil { - // Connect proxy specific logic - if args.Service.Kind == structs.ServiceKindConnectProxy { - // Name is optional, if it isn't set, we default to the - // proxy name. It actually MUST be this, but the validation - // below this will verify. - if args.Service.Service == "" { - args.Service.Service = fmt.Sprintf( - "%s-connect-proxy", args.Service.ProxyDestination) - } - } - // Validate the service. This is in addition to the below since // the above just hasn't been moved over yet. We should move it over // in time. 
diff --git a/agent/consul/catalog_endpoint_test.go b/agent/consul/catalog_endpoint_test.go index b095c3f3a..fd437c978 100644 --- a/agent/consul/catalog_endpoint_test.go +++ b/agent/consul/catalog_endpoint_test.go @@ -384,36 +384,6 @@ func TestCatalog_Register_ConnectProxy_invalid(t *testing.T) { assert.Contains(err.Error(), "ProxyDestination") } -// Test registering a proxy with no name set, which should work. -func TestCatalog_Register_ConnectProxy_noName(t *testing.T) { - t.Parallel() - - assert := assert.New(t) - dir1, s1 := testServer(t) - defer os.RemoveAll(dir1) - defer s1.Shutdown() - codec := rpcClient(t, s1) - defer codec.Close() - - args := structs.TestRegisterRequestProxy(t) - args.Service.Service = "" - - // Register - var out struct{} - assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out)) - - // List - req := structs.ServiceSpecificRequest{ - Datacenter: "dc1", - ServiceName: fmt.Sprintf("%s-connect-proxy", args.Service.ProxyDestination), - } - var resp structs.IndexedServiceNodes - assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.ServiceNodes", &req, &resp)) - assert.Len(resp.ServiceNodes, 1) - v := resp.ServiceNodes[0] - assert.Equal(structs.ServiceKindConnectProxy, v.ServiceKind) -} - // Test that write is required for the proxy destination to register a proxy. func TestCatalog_Register_ConnectProxy_ACLProxyDestination(t *testing.T) { t.Parallel() diff --git a/agent/structs/structs.go b/agent/structs/structs.go index f8f339a5b..40b606d17 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -462,8 +462,11 @@ type ServiceNodes []*ServiceNode type ServiceKind string const ( - // ServiceKindTypical is a typical, classic Consul service. - ServiceKindTypical ServiceKind = "typical" + // ServiceKindTypical is a typical, classic Consul service. This is + // represented by the absense of a value. This was chosen for ease of + // backwards compatibility: existing services in the catalog would + // default to the typical service. + ServiceKindTypical ServiceKind = "" // ServiceKindConnectProxy is a proxy for the Connect feature. 
This // service proxies another service within Consul and speaks the connect From 4cc4de1ff68d18ecbff45370743d9adb2630e929 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 11 Mar 2018 09:31:31 -0700 Subject: [PATCH 060/539] agent: remove ConnectProxyServiceName --- agent/agent_endpoint_test.go | 6 +++--- agent/structs/catalog.go | 3 --- agent/structs/testing_catalog.go | 2 +- 3 files changed, 4 insertions(+), 7 deletions(-) diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 05e8b6ca3..167a23377 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -78,8 +78,8 @@ func TestAgent_Services_ConnectProxy(t *testing.T) { srv1 := &structs.NodeService{ Kind: structs.ServiceKindConnectProxy, - ID: structs.ConnectProxyServiceName, - Service: structs.ConnectProxyServiceName, + ID: "db-proxy", + Service: "db-proxy", Port: 5000, ProxyDestination: "db", } @@ -90,7 +90,7 @@ func TestAgent_Services_ConnectProxy(t *testing.T) { assert.Nil(err) val := obj.(map[string]*structs.NodeService) assert.Len(val, 1) - actual := val[structs.ConnectProxyServiceName] + actual := val["db-proxy"] assert.Equal(structs.ServiceKindConnectProxy, actual.Kind) assert.Equal("db", actual.ProxyDestination) } diff --git a/agent/structs/catalog.go b/agent/structs/catalog.go index 3f68f43a1..b118b9935 100644 --- a/agent/structs/catalog.go +++ b/agent/structs/catalog.go @@ -18,7 +18,4 @@ const ( // Consul server node in the catalog. ConsulServiceID = "consul" ConsulServiceName = "consul" - - // ConnectProxyServiceName is the name of the proxy services. - ConnectProxyServiceName = "connect-proxy" ) diff --git a/agent/structs/testing_catalog.go b/agent/structs/testing_catalog.go index d61266ad5..1394b7081 100644 --- a/agent/structs/testing_catalog.go +++ b/agent/structs/testing_catalog.go @@ -34,7 +34,7 @@ func TestRegisterRequestProxy(t testing.T) *RegisterRequest { func TestNodeServiceProxy(t testing.T) *NodeService { return &NodeService{ Kind: ServiceKindConnectProxy, - Service: ConnectProxyServiceName, + Service: "connect-proxy", Address: "127.0.0.2", Port: 2222, ProxyDestination: "web", From 641c982480d8673e968a1126bdd813756a65edd7 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 11 Mar 2018 09:31:39 -0700 Subject: [PATCH 061/539] agent/consul: Catalog endpoint ACL requirements for Connect proxies --- agent/consul/catalog_endpoint.go | 15 +++++ agent/consul/catalog_endpoint_test.go | 85 +++++++++++++++++++++++++++ 2 files changed, 100 insertions(+) diff --git a/agent/consul/catalog_endpoint.go b/agent/consul/catalog_endpoint.go index adde8e52e..a31ca59eb 100644 --- a/agent/consul/catalog_endpoint.go +++ b/agent/consul/catalog_endpoint.go @@ -280,6 +280,21 @@ func (c *Catalog) ServiceNodes(args *structs.ServiceSpecificRequest, reply *stru } } + // If we're doing a connect query, we need read access to the service + // we're trying to find proxies for, so check that. + if args.Connect { + // Fetch the ACL token, if any. 
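+		// A nil rule below means ACLs are disabled, in which case the
+		// read check is skipped and the query proceeds as normal.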
+ rule, err := c.srv.resolveToken(args.Token) + if err != nil { + return err + } + + if rule != nil && !rule.ServiceRead(args.ServiceName) { + // Just return nil, which will return an empty response (tested) + return nil + } + } + err := c.srv.blockingQuery( &args.QueryOptions, &reply.QueryMeta, diff --git a/agent/consul/catalog_endpoint_test.go b/agent/consul/catalog_endpoint_test.go index fd437c978..d08438f9d 100644 --- a/agent/consul/catalog_endpoint_test.go +++ b/agent/consul/catalog_endpoint_test.go @@ -1794,6 +1794,91 @@ func TestCatalog_ListServiceNodes_ConnectDestination(t *testing.T) { assert.Equal("", v.ServiceProxyDestination) } +func TestCatalog_ListServiceNodes_ConnectProxy_ACL(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + c.ACLEnforceVersion8 = false + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create the ACL. + arg := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: ` +service "foo" { + policy = "write" +} +`, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + var token string + assert.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &arg, &token)) + + { + // Register a proxy + args := structs.TestRegisterRequestProxy(t) + args.Service.Service = "foo-proxy" + args.Service.ProxyDestination = "bar" + args.WriteRequest.Token = "root" + var out struct{} + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out)) + + // Register a proxy + args = structs.TestRegisterRequestProxy(t) + args.Service.Service = "foo-proxy" + args.Service.ProxyDestination = "foo" + args.WriteRequest.Token = "root" + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out)) + + // Register a proxy + args = structs.TestRegisterRequestProxy(t) + args.Service.Service = "another-proxy" + args.Service.ProxyDestination = "foo" + args.WriteRequest.Token = "root" + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out)) + } + + // List w/ token. This should disallow because we don't have permission + // to read "bar" + req := structs.ServiceSpecificRequest{ + Connect: true, + Datacenter: "dc1", + ServiceName: "bar", + QueryOptions: structs.QueryOptions{Token: token}, + } + var resp structs.IndexedServiceNodes + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.ServiceNodes", &req, &resp)) + assert.Len(resp.ServiceNodes, 0) + + // List w/ token. This should work since we're requesting "foo", but should + // also only contain the proxies with names that adhere to our ACL. 
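+	// Only "foo-proxy" should come back: the service "foo" rule is a prefix
+	// match, so the token can read "foo-proxy" but not "another-proxy".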
+ req = structs.ServiceSpecificRequest{ + Connect: true, + Datacenter: "dc1", + ServiceName: "foo", + QueryOptions: structs.QueryOptions{Token: token}, + } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.ServiceNodes", &req, &resp)) + assert.Len(resp.ServiceNodes, 1) + v := resp.ServiceNodes[0] + assert.Equal("foo-proxy", v.ServiceName) +} + func TestCatalog_NodeServices(t *testing.T) { t.Parallel() dir1, s1 := testServer(t) From 62cbb892e38b8cf357cbc23e69274a3636bfc8ac Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 11 Mar 2018 11:49:12 -0700 Subject: [PATCH 062/539] agent/consul: Health.ServiceNodes ACL check for Connect --- agent/consul/health_endpoint.go | 15 ++++ agent/consul/health_endpoint_test.go | 101 +++++++++++++++++++++++++++ 2 files changed, 116 insertions(+) diff --git a/agent/consul/health_endpoint.go b/agent/consul/health_endpoint.go index 70cc2e37d..214de777d 100644 --- a/agent/consul/health_endpoint.go +++ b/agent/consul/health_endpoint.go @@ -130,6 +130,21 @@ func (h *Health) ServiceNodes(args *structs.ServiceSpecificRequest, reply *struc } } + // If we're doing a connect query, we need read access to the service + // we're trying to find proxies for, so check that. + if args.Connect { + // Fetch the ACL token, if any. + rule, err := h.srv.resolveToken(args.Token) + if err != nil { + return err + } + + if rule != nil && !rule.ServiceRead(args.ServiceName) { + // Just return nil, which will return an empty response (tested) + return nil + } + } + err := h.srv.blockingQuery( &args.QueryOptions, &reply.QueryMeta, diff --git a/agent/consul/health_endpoint_test.go b/agent/consul/health_endpoint_test.go index c9581e3a7..117afd646 100644 --- a/agent/consul/health_endpoint_test.go +++ b/agent/consul/health_endpoint_test.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/consul/lib" "github.com/hashicorp/consul/testrpc" "github.com/hashicorp/net-rpc-msgpackrpc" + "github.com/stretchr/testify/assert" ) func TestHealth_ChecksInState(t *testing.T) { @@ -821,6 +822,106 @@ func TestHealth_ServiceNodes_DistanceSort(t *testing.T) { } } +func TestHealth_ServiceNodes_ConnectProxy_ACL(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + c.ACLEnforceVersion8 = false + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create the ACL. 
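+	// As in the catalog test above, the token is limited to services under
+	// the "foo" prefix.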
+ arg := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: ` +service "foo" { + policy = "write" +} +`, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + var token string + assert.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", arg, &token)) + + { + var out struct{} + + // Register a service + args := structs.TestRegisterRequestProxy(t) + args.WriteRequest.Token = "root" + args.Service.ID = "foo-proxy-0" + args.Service.Service = "foo-proxy" + args.Service.ProxyDestination = "bar" + args.Check = &structs.HealthCheck{ + Name: "proxy", + Status: api.HealthPassing, + ServiceID: args.Service.ID, + } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out)) + + // Register a service + args = structs.TestRegisterRequestProxy(t) + args.WriteRequest.Token = "root" + args.Service.Service = "foo-proxy" + args.Service.ProxyDestination = "foo" + args.Check = &structs.HealthCheck{ + Name: "proxy", + Status: api.HealthPassing, + ServiceID: args.Service.Service, + } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out)) + + // Register a service + args = structs.TestRegisterRequestProxy(t) + args.WriteRequest.Token = "root" + args.Service.Service = "another-proxy" + args.Service.ProxyDestination = "foo" + args.Check = &structs.HealthCheck{ + Name: "proxy", + Status: api.HealthPassing, + ServiceID: args.Service.Service, + } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out)) + } + + // List w/ token. This should disallow because we don't have permission + // to read "bar" + req := structs.ServiceSpecificRequest{ + Connect: true, + Datacenter: "dc1", + ServiceName: "bar", + QueryOptions: structs.QueryOptions{Token: token}, + } + var resp structs.IndexedCheckServiceNodes + assert.Nil(msgpackrpc.CallWithCodec(codec, "Health.ServiceNodes", &req, &resp)) + assert.Len(resp.Nodes, 0) + + // List w/ token. This should work since we're requesting "foo", but should + // also only contain the proxies with names that adhere to our ACL. + req = structs.ServiceSpecificRequest{ + Connect: true, + Datacenter: "dc1", + ServiceName: "foo", + QueryOptions: structs.QueryOptions{Token: token}, + } + assert.Nil(msgpackrpc.CallWithCodec(codec, "Health.ServiceNodes", &req, &resp)) + assert.Len(resp.Nodes, 1) +} + func TestHealth_NodeChecks_FilterACL(t *testing.T) { t.Parallel() dir, token, srv, codec := testACLFilterServer(t) From f9a55aa7e093191c96ba45d411a08f622b99f357 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 12 Mar 2018 10:13:44 -0700 Subject: [PATCH 063/539] agent: clarified a number of comments per PR feedback --- agent/agent_endpoint_test.go | 5 ++++- agent/consul/catalog_endpoint_test.go | 2 -- agent/dns_test.go | 2 +- agent/structs/service_definition.go | 3 ++- agent/structs/structs.go | 3 +++ 5 files changed, 10 insertions(+), 5 deletions(-) diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 167a23377..d59c804ea 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -1372,6 +1372,9 @@ func TestAgent_RegisterService_ConnectProxy(t *testing.T) { a := NewTestAgent(t.Name(), "") defer a.Shutdown() + // Register a proxy. Note that the destination doesn't exist here on + // this agent or in the catalog at all. This is intended and part + // of the design. 
args := &structs.ServiceDefinition{ Kind: structs.ServiceKindConnectProxy, Name: "connect-proxy", @@ -1388,7 +1391,7 @@ func TestAgent_RegisterService_ConnectProxy(t *testing.T) { assert.Nil(err) assert.Nil(obj) - // Ensure the servie + // Ensure the service svc, ok := a.State.Services()["connect-proxy"] assert.True(ok, "has service") assert.Equal(structs.ServiceKindConnectProxy, svc.Kind) diff --git a/agent/consul/catalog_endpoint_test.go b/agent/consul/catalog_endpoint_test.go index d08438f9d..7b2247af7 100644 --- a/agent/consul/catalog_endpoint_test.go +++ b/agent/consul/catalog_endpoint_test.go @@ -393,7 +393,6 @@ func TestCatalog_Register_ConnectProxy_ACLProxyDestination(t *testing.T) { c.ACLDatacenter = "dc1" c.ACLMasterToken = "root" c.ACLDefaultPolicy = "deny" - c.ACLEnforceVersion8 = false }) defer os.RemoveAll(dir1) defer s1.Shutdown() @@ -1802,7 +1801,6 @@ func TestCatalog_ListServiceNodes_ConnectProxy_ACL(t *testing.T) { c.ACLDatacenter = "dc1" c.ACLMasterToken = "root" c.ACLDefaultPolicy = "deny" - c.ACLEnforceVersion8 = false }) defer os.RemoveAll(dir1) defer s1.Shutdown() diff --git a/agent/dns_test.go b/agent/dns_test.go index a501a9c9f..d897a921e 100644 --- a/agent/dns_test.go +++ b/agent/dns_test.go @@ -1049,7 +1049,7 @@ func TestDNS_ConnectServiceLookup(t *testing.T) { a := NewTestAgent(t.Name(), "") defer a.Shutdown() - // Register a node with an external service. + // Register { args := structs.TestRegisterRequestProxy(t) args.Service.ProxyDestination = "db" diff --git a/agent/structs/service_definition.go b/agent/structs/service_definition.go index d469ed5d7..a10f1527f 100644 --- a/agent/structs/service_definition.go +++ b/agent/structs/service_definition.go @@ -1,6 +1,7 @@ package structs -// ServiceDefinition is used to JSON decode the Service definitions +// ServiceDefinition is used to JSON decode the Service definitions. For +// documentation on specific fields see NodeService which is better documented. type ServiceDefinition struct { Kind ServiceKind ID string diff --git a/agent/structs/structs.go b/agent/structs/structs.go index 40b606d17..95c0ba069 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -491,6 +491,9 @@ type NodeService struct { // ProxyDestination is the name of the service that this service is // a Connect proxy for. This is only valid if Kind is "connect-proxy". + // The destination may be a service that isn't present in the catalog. + // This is expected and allowed to allow for proxies to come up + // earlier than their target services. ProxyDestination string RaftIndex From 767d2eaef6217a1d713db220d3d6a59e2b582a95 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 12 Mar 2018 13:05:06 -0700 Subject: [PATCH 064/539] agent: commenting some tests --- agent/agent_endpoint_test.go | 8 ++++++++ agent/catalog_endpoint_test.go | 6 ++++++ 2 files changed, 14 insertions(+) diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index d59c804ea..566d397cf 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -69,6 +69,8 @@ func TestAgent_Services(t *testing.T) { } } +// This tests that the agent services endpoint (/v1/agent/services) returns +// Connect proxies. func TestAgent_Services_ConnectProxy(t *testing.T) { t.Parallel() @@ -1365,6 +1367,9 @@ func TestAgent_RegisterService_InvalidAddress(t *testing.T) { } } +// This tests local agent service registration of a connect proxy. This +// verifies that it is put in the local state store properly for syncing +// later. 
func TestAgent_RegisterService_ConnectProxy(t *testing.T) { t.Parallel() @@ -1401,6 +1406,9 @@ func TestAgent_RegisterService_ConnectProxy(t *testing.T) { assert.Equal("abc123", a.State.ServiceToken("connect-proxy")) } +// This tests that connect proxy validation is done for local agent +// registration. This doesn't need to test validation exhaustively since +// that is done via a table test in the structs package. func TestAgent_RegisterService_ConnectProxyInvalid(t *testing.T) { t.Parallel() diff --git a/agent/catalog_endpoint_test.go b/agent/catalog_endpoint_test.go index 71c848ede..64e6c3dbe 100644 --- a/agent/catalog_endpoint_test.go +++ b/agent/catalog_endpoint_test.go @@ -751,6 +751,8 @@ func TestCatalogServiceNodes_DistanceSort(t *testing.T) { } } +// Test that connect proxies can be queried via /v1/catalog/service/:service +// directly and that their results contain the proxy fields. func TestCatalogServiceNodes_ConnectProxy(t *testing.T) { t.Parallel() @@ -775,6 +777,8 @@ func TestCatalogServiceNodes_ConnectProxy(t *testing.T) { assert.Equal(structs.ServiceKindConnectProxy, nodes[0].ServiceKind) } +// Test that the Connect-compatible endpoints can be queried for a +// service via /v1/catalog/connect/:service. func TestCatalogConnectServiceNodes_good(t *testing.T) { t.Parallel() @@ -834,6 +838,8 @@ func TestCatalogNodeServices(t *testing.T) { } } +// Test that the services on a node contain all the Connect proxies on +// the node as well with their fields properly populated. func TestCatalogNodeServices_ConnectProxy(t *testing.T) { t.Parallel() From 7e8d6067178c647df8996f8cb9db95dd23b9ef0a Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 21 Mar 2018 16:54:44 -1000 Subject: [PATCH 065/539] agent: address PR feedback --- agent/catalog_endpoint_test.go | 2 ++ agent/consul/health_endpoint.go | 33 ++++++++++++++++++++------------- agent/dns_test.go | 2 ++ agent/health_endpoint_test.go | 2 +- 4 files changed, 25 insertions(+), 14 deletions(-) diff --git a/agent/catalog_endpoint_test.go b/agent/catalog_endpoint_test.go index 64e6c3dbe..f97b22dbc 100644 --- a/agent/catalog_endpoint_test.go +++ b/agent/catalog_endpoint_test.go @@ -788,6 +788,7 @@ func TestCatalogConnectServiceNodes_good(t *testing.T) { // Register args := structs.TestRegisterRequestProxy(t) + args.Service.Address = "127.0.0.55" var out struct{} assert.Nil(a.RPC("Catalog.Register", args, &out)) @@ -801,6 +802,7 @@ func TestCatalogConnectServiceNodes_good(t *testing.T) { nodes := obj.(structs.ServiceNodes) assert.Len(nodes, 1) assert.Equal(structs.ServiceKindConnectProxy, nodes[0].ServiceKind) + assert.Equal(args.Service.Address, nodes[0].ServiceAddress) } func TestCatalogNodeServices(t *testing.T) { diff --git a/agent/consul/health_endpoint.go b/agent/consul/health_endpoint.go index 214de777d..38b7a9c0a 100644 --- a/agent/consul/health_endpoint.go +++ b/agent/consul/health_endpoint.go @@ -112,22 +112,14 @@ func (h *Health) ServiceNodes(args *structs.ServiceSpecificRequest, reply *struc } // Determine the function we'll call - var f func(memdb.WatchSet, *state.Store) (uint64, structs.CheckServiceNodes, error) + var f func(memdb.WatchSet, *state.Store, *structs.ServiceSpecificRequest) (uint64, structs.CheckServiceNodes, error) switch { case args.Connect: - f = func(ws memdb.WatchSet, s *state.Store) (uint64, structs.CheckServiceNodes, error) { - return s.CheckConnectServiceNodes(ws, args.ServiceName) - } - + f = h.serviceNodesConnect case args.TagFilter: - f = func(ws memdb.WatchSet, s *state.Store) (uint64, 
structs.CheckServiceNodes, error) { - return s.CheckServiceTagNodes(ws, args.ServiceName, args.ServiceTag) - } - + f = h.serviceNodesTagFilter default: - f = func(ws memdb.WatchSet, s *state.Store) (uint64, structs.CheckServiceNodes, error) { - return s.CheckServiceNodes(ws, args.ServiceName) - } + f = h.serviceNodesDefault } // If we're doing a connect query, we need read access to the service @@ -149,7 +141,7 @@ func (h *Health) ServiceNodes(args *structs.ServiceSpecificRequest, reply *struc &args.QueryOptions, &reply.QueryMeta, func(ws memdb.WatchSet, state *state.Store) error { - index, nodes, err := f(ws, state) + index, nodes, err := f(ws, state, args) if err != nil { return err } @@ -185,3 +177,18 @@ func (h *Health) ServiceNodes(args *structs.ServiceSpecificRequest, reply *struc } return err } + +// The serviceNodes* functions below are the various lookup methods that +// can be used by the ServiceNodes endpoint. + +func (h *Health) serviceNodesConnect(ws memdb.WatchSet, s *state.Store, args *structs.ServiceSpecificRequest) (uint64, structs.CheckServiceNodes, error) { + return s.CheckConnectServiceNodes(ws, args.ServiceName) +} + +func (h *Health) serviceNodesTagFilter(ws memdb.WatchSet, s *state.Store, args *structs.ServiceSpecificRequest) (uint64, structs.CheckServiceNodes, error) { + return s.CheckServiceTagNodes(ws, args.ServiceName, args.ServiceTag) +} + +func (h *Health) serviceNodesDefault(ws memdb.WatchSet, s *state.Store, args *structs.ServiceSpecificRequest) (uint64, structs.CheckServiceNodes, error) { + return s.CheckServiceNodes(ws, args.ServiceName) +} diff --git a/agent/dns_test.go b/agent/dns_test.go index d897a921e..d7bb2102d 100644 --- a/agent/dns_test.go +++ b/agent/dns_test.go @@ -1052,6 +1052,7 @@ func TestDNS_ConnectServiceLookup(t *testing.T) { // Register { args := structs.TestRegisterRequestProxy(t) + args.Address = "127.0.0.55" args.Service.ProxyDestination = "db" args.Service.Address = "" args.Service.Port = 12345 @@ -1082,6 +1083,7 @@ func TestDNS_ConnectServiceLookup(t *testing.T) { assert.True(ok) assert.Equal("foo.node.dc1.consul.", cnameRec.Hdr.Name) assert.Equal(uint32(0), srvRec.Hdr.Ttl) + assert.Equal("127.0.0.55", cnameRec.A.String()) } } diff --git a/agent/health_endpoint_test.go b/agent/health_endpoint_test.go index 688924df1..8164be477 100644 --- a/agent/health_endpoint_test.go +++ b/agent/health_endpoint_test.go @@ -851,7 +851,7 @@ func TestHealthConnectServiceNodes_PassingFilter(t *testing.T) { assert.Nil(err) assertIndex(t, resp) - // Should be 0 health check for consul + // Should be 1 nodes := obj.(structs.CheckServiceNodes) assert.Len(nodes, 1) }) From cfb62677c05dffe8ec10ab4d71d89a49d2b6f80b Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 16 Mar 2018 21:20:54 -0700 Subject: [PATCH 066/539] agent/consul/state: CARoot structs and initial state store --- agent/consul/state/connect_ca.go | 106 ++++++++++++++++++++++++++++++ agent/consul/state/state_store.go | 4 ++ agent/structs/connect_ca.go | 37 +++++++++++ 3 files changed, 147 insertions(+) create mode 100644 agent/consul/state/connect_ca.go create mode 100644 agent/structs/connect_ca.go diff --git a/agent/consul/state/connect_ca.go b/agent/consul/state/connect_ca.go new file mode 100644 index 000000000..9e3195918 --- /dev/null +++ b/agent/consul/state/connect_ca.go @@ -0,0 +1,106 @@ +package state + +import ( + "fmt" + + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/go-memdb" +) + +const ( + caRootTableName = "connect-ca-roots" +) + +// caRootTableSchema 
returns a new table schema used for storing +// CA roots for Connect. +func caRootTableSchema() *memdb.TableSchema { + return &memdb.TableSchema{ + Name: caRootTableName, + Indexes: map[string]*memdb.IndexSchema{ + "id": &memdb.IndexSchema{ + Name: "id", + AllowMissing: false, + Unique: true, + Indexer: &memdb.UUIDFieldIndex{ + Field: "ID", + }, + }, + }, + } +} + +func init() { + registerSchema(caRootTableSchema) +} + +// CARoots returns the list of all CA roots. +func (s *Store) CARoots(ws memdb.WatchSet) (uint64, structs.CARoots, error) { + tx := s.db.Txn(false) + defer tx.Abort() + + // Get the index + idx := maxIndexTxn(tx, caRootTableName) + + // Get all + iter, err := tx.Get(caRootTableName, "id") + if err != nil { + return 0, nil, fmt.Errorf("failed CA root lookup: %s", err) + } + ws.Add(iter.WatchCh()) + + var results structs.CARoots + for v := iter.Next(); v != nil; v = iter.Next() { + results = append(results, v.(*structs.CARoot)) + } + return idx, results, nil +} + +// CARootSet creates or updates a CA root. +// +// NOTE(mitchellh): I have a feeling we'll want a CARootMultiSetCAS to +// perform a check-and-set on the entire set of CARoots versus an individual +// set, since we'll want to modify them atomically during events such as +// rotation. +func (s *Store) CARootSet(idx uint64, v *structs.CARoot) error { + tx := s.db.Txn(true) + defer tx.Abort() + + if err := s.caRootSetTxn(tx, idx, v); err != nil { + return err + } + + tx.Commit() + return nil +} + +// caRootSetTxn is the inner method used to insert or update a CA root with +// the proper indexes into the state store. +func (s *Store) caRootSetTxn(tx *memdb.Txn, idx uint64, v *structs.CARoot) error { + // ID is required + if v.ID == "" { + return ErrMissingCARootID + } + + // Check for an existing value + existing, err := tx.First(caRootTableName, "id", v.ID) + if err != nil { + return fmt.Errorf("failed CA root lookup: %s", err) + } + if existing != nil { + old := existing.(*structs.CARoot) + v.CreateIndex = old.CreateIndex + } else { + v.CreateIndex = idx + } + v.ModifyIndex = idx + + // Insert + if err := tx.Insert(caRootTableName, v); err != nil { + return err + } + if err := tx.Insert("index", &IndexEntry{caRootTableName, idx}); err != nil { + return fmt.Errorf("failed updating index: %s", err) + } + + return nil +} diff --git a/agent/consul/state/state_store.go b/agent/consul/state/state_store.go index 62b6a8bff..c59e09e93 100644 --- a/agent/consul/state/state_store.go +++ b/agent/consul/state/state_store.go @@ -29,6 +29,10 @@ var ( // a Query with an empty ID. ErrMissingQueryID = errors.New("Missing Query ID") + // ErrMissingCARootID is returned when an CARoot set is called + // with an CARoot with an empty ID. + ErrMissingCARootID = errors.New("Missing CA Root ID") + // ErrMissingIntentionID is returned when an Intention set is called // with an Intention with an empty ID. ErrMissingIntentionID = errors.New("Missing Intention ID") diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go new file mode 100644 index 000000000..87211e09f --- /dev/null +++ b/agent/structs/connect_ca.go @@ -0,0 +1,37 @@ +package structs + +// IndexedCARoots is the list of currently trusted CA Roots. +type IndexedCARoots struct { + // ActiveRootID is the ID of a root in Roots that is the active CA root. + // Other roots are still valid if they're in the Roots list but are in + // the process of being rotated out. + ActiveRootID string + + // Roots is a list of root CA certs to trust. 
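+	// During a rotation this list may hold more than one root: the new root
+	// plus any older roots that are still trusted while being rotated out
+	// (see ActiveRootID above).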
+ Roots []*CARoot + + QueryMeta +} + +// CARoot represents a root CA certificate that is trusted. +type CARoot struct { + // ID is a globally unique ID (UUID) representing this CA root. + ID string + + // Name is a human-friendly name for this CA root. This value is + // opaque to Consul and is not used for anything internally. + Name string + + // RootCert is the PEM-encoded public certificate. + RootCert string + + // SigningCert is the PEM-encoded signing certificate and SigningKey + // is the PEM-encoded private key for the signing certificate. + SigningCert string + SigningKey string + + RaftIndex +} + +// CARoots is a list of CARoot structures. +type CARoots []*CARoot From 24830f4cfad3c935fce306dcdb6bd65d473af1af Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 16 Mar 2018 21:28:27 -0700 Subject: [PATCH 067/539] agent/consul: RPC endpoints to list roots --- agent/consul/connect_ca_endpoint.go | 55 +++++++++++++++++++++++++++++ agent/consul/server_oss.go | 1 + 2 files changed, 56 insertions(+) create mode 100644 agent/consul/connect_ca_endpoint.go diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go new file mode 100644 index 000000000..3f35ad79f --- /dev/null +++ b/agent/consul/connect_ca_endpoint.go @@ -0,0 +1,55 @@ +package consul + +import ( + "github.com/hashicorp/consul/agent/consul/state" + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/go-memdb" +) + +// ConnectCA manages the Connect CA. +type ConnectCA struct { + // srv is a pointer back to the server. + srv *Server +} + +// Roots returns the currently trusted root certificates. +func (s *ConnectCA) Roots( + args *structs.DCSpecificRequest, + reply *structs.IndexedCARoots) error { + // Forward if necessary + if done, err := s.srv.forward("ConnectCA.Roots", args, args, reply); done { + return err + } + + return s.srv.blockingQuery( + &args.QueryOptions, &reply.QueryMeta, + func(ws memdb.WatchSet, state *state.Store) error { + index, roots, err := state.CARoots(ws) + if err != nil { + return err + } + + reply.Index, reply.Roots = index, roots + if reply.Roots == nil { + reply.Roots = make(structs.CARoots, 0) + } + + // The API response must NEVER contain the secret information + // such as keys and so on. We use a whitelist below to copy the + // specific fields we want to expose. + for i, r := range reply.Roots { + // IMPORTANT: r must NEVER be modified, since it is a pointer + // directly to the structure in the memdb store. 
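+				// Instead a fresh CARoot is built below containing only the
+				// public fields; SigningCert and SigningKey are deliberately
+				// omitted from the response.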
+ + reply.Roots[i] = &structs.CARoot{ + ID: r.ID, + Name: r.Name, + RootCert: r.RootCert, + RaftIndex: r.RaftIndex, + } + } + + return nil + }, + ) +} diff --git a/agent/consul/server_oss.go b/agent/consul/server_oss.go index e633c2699..016420476 100644 --- a/agent/consul/server_oss.go +++ b/agent/consul/server_oss.go @@ -4,6 +4,7 @@ func init() { registerEndpoint(func(s *Server) interface{} { return &ACL{s} }) registerEndpoint(func(s *Server) interface{} { return &Catalog{s} }) registerEndpoint(func(s *Server) interface{} { return NewCoordinate(s) }) + registerEndpoint(func(s *Server) interface{} { return &ConnectCA{s} }) registerEndpoint(func(s *Server) interface{} { return &Health{s} }) registerEndpoint(func(s *Server) interface{} { return &Intention{s} }) registerEndpoint(func(s *Server) interface{} { return &Internal{s} }) From 9ad2a12441a288bf95ed8768f45d8c7089210a4a Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 16 Mar 2018 21:39:26 -0700 Subject: [PATCH 068/539] agent: /v1/connect/ca/roots --- agent/agent_endpoint.go | 9 +++++++++ agent/connect_ca_endpoint.go | 28 ++++++++++++++++++++++++++++ agent/http_oss.go | 2 ++ 3 files changed, 39 insertions(+) create mode 100644 agent/connect_ca_endpoint.go diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 75d5807c0..e3e8fcd51 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -836,3 +836,12 @@ func (s *HTTPServer) AgentToken(resp http.ResponseWriter, req *http.Request) (in s.agent.logger.Printf("[INFO] agent: Updated agent's ACL token %q", target) return nil, nil } + +// AgentConnectCARoots returns the trusted CA roots. +func (s *HTTPServer) AgentConnectCARoots(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + if req.Method != "GET" { + return nil, MethodNotAllowedError{req.Method, []string{"GET"}} + } + + return nil, nil +} diff --git a/agent/connect_ca_endpoint.go b/agent/connect_ca_endpoint.go new file mode 100644 index 000000000..8e92417bc --- /dev/null +++ b/agent/connect_ca_endpoint.go @@ -0,0 +1,28 @@ +package agent + +import ( + "net/http" + + "github.com/hashicorp/consul/agent/structs" +) + +// GET /v1/connect/ca/roots +func (s *HTTPServer) ConnectCARoots(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Test the method + if req.Method != "GET" { + return nil, MethodNotAllowedError{req.Method, []string{"GET"}} + } + + var args structs.DCSpecificRequest + if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { + return nil, nil + } + + var reply structs.IndexedCARoots + defer setMeta(resp, &reply.QueryMeta) + if err := s.agent.RPC("ConnectCA.Roots", &args, &reply); err != nil { + return nil, err + } + + return reply.Roots, nil +} diff --git a/agent/http_oss.go b/agent/http_oss.go index 2e2c9751a..3cb18b2e1 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -29,6 +29,7 @@ func init() { registerEndpoint("/v1/agent/check/warn/", []string{"PUT"}, (*HTTPServer).AgentCheckWarn) registerEndpoint("/v1/agent/check/fail/", []string{"PUT"}, (*HTTPServer).AgentCheckFail) registerEndpoint("/v1/agent/check/update/", []string{"PUT"}, (*HTTPServer).AgentCheckUpdate) + registerEndpoint("/v1/agent/connect/ca/roots", []string{"GET"}, (*HTTPServer).AgentConnectCARoots) registerEndpoint("/v1/agent/service/register", []string{"PUT"}, (*HTTPServer).AgentRegisterService) registerEndpoint("/v1/agent/service/deregister/", []string{"PUT"}, (*HTTPServer).AgentDeregisterService) registerEndpoint("/v1/agent/service/maintenance/", 
[]string{"PUT"}, (*HTTPServer).AgentServiceMaintenance) @@ -40,6 +41,7 @@ func init() { registerEndpoint("/v1/catalog/services", []string{"GET"}, (*HTTPServer).CatalogServices) registerEndpoint("/v1/catalog/service/", []string{"GET"}, (*HTTPServer).CatalogServiceNodes) registerEndpoint("/v1/catalog/node/", []string{"GET"}, (*HTTPServer).CatalogNodeServices) + registerEndpoint("/v1/connect/ca/roots", []string{"GET"}, (*HTTPServer).ConnectCARoots) registerEndpoint("/v1/connect/intentions", []string{"GET", "POST"}, (*HTTPServer).IntentionEndpoint) registerEndpoint("/v1/connect/intentions/match", []string{"GET"}, (*HTTPServer).IntentionMatch) registerEndpoint("/v1/connect/intentions/", []string{"GET"}, (*HTTPServer).IntentionSpecific) From f433f61fdfbc8b7d76334b65bf8d671519e2f9da Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 18 Mar 2018 22:07:52 -0700 Subject: [PATCH 069/539] agent/structs: json omit QueryMeta --- agent/connect_ca_endpoint.go | 2 +- agent/structs/connect_ca.go | 4 +++- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/agent/connect_ca_endpoint.go b/agent/connect_ca_endpoint.go index 8e92417bc..1c7871015 100644 --- a/agent/connect_ca_endpoint.go +++ b/agent/connect_ca_endpoint.go @@ -24,5 +24,5 @@ func (s *HTTPServer) ConnectCARoots(resp http.ResponseWriter, req *http.Request) return nil, err } - return reply.Roots, nil + return reply, nil } diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 87211e09f..46725dcc7 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -10,7 +10,9 @@ type IndexedCARoots struct { // Roots is a list of root CA certs to trust. Roots []*CARoot - QueryMeta + // QueryMeta contains the meta sent via a header. We ignore for JSON + // so this whole structure can be returned. + QueryMeta `json:"-"` } // CARoot represents a root CA certificate that is trusted. From d4e232f69b332105e42caa8c711391b8df9b4ce6 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 19 Mar 2018 10:48:38 -0700 Subject: [PATCH 070/539] connect: create connect package for helpers --- connect/ca.go | 48 +++++++++ connect/connect.go | 3 + connect/testing.go | 230 ++++++++++++++++++++++++++++++++++++++++ connect/testing_test.go | 109 +++++++++++++++++++ 4 files changed, 390 insertions(+) create mode 100644 connect/ca.go create mode 100644 connect/connect.go create mode 100644 connect/testing.go create mode 100644 connect/testing_test.go diff --git a/connect/ca.go b/connect/ca.go new file mode 100644 index 000000000..e9ada4953 --- /dev/null +++ b/connect/ca.go @@ -0,0 +1,48 @@ +package connect + +import ( + "crypto" + "crypto/rand" + "crypto/x509" + "encoding/pem" + "fmt" + "math/big" +) + +// ParseCert parses the x509 certificate from a PEM-encoded value. +func ParseCert(pemValue string) (*x509.Certificate, error) { + block, _ := pem.Decode([]byte(pemValue)) + if block == nil { + return nil, fmt.Errorf("no PEM-encoded data found") + } + + if block.Type != "CERTIFICATE" { + return nil, fmt.Errorf("first PEM-block should be CERTIFICATE type") + } + + return x509.ParseCertificate(block.Bytes) +} + +// ParseSigner parses a crypto.Signer from a PEM-encoded key. The private key +// is expected to be the first block in the PEM value. 
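+// Only EC private keys are currently supported; any other PEM block type
+// results in an error.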
+func ParseSigner(pemValue string) (crypto.Signer, error) { + block, _ := pem.Decode([]byte(pemValue)) + if block == nil { + return nil, fmt.Errorf("no PEM-encoded data found") + } + + switch block.Type { + case "EC PRIVATE KEY": + return x509.ParseECPrivateKey(block.Bytes) + + default: + return nil, fmt.Errorf("unknown PEM block type for signing key: %s", block.Type) + } +} + +// SerialNumber generates a serial number suitable for a certificate. +// +// This function is taken directly from the Vault implementation. +func SerialNumber() (*big.Int, error) { + return rand.Int(rand.Reader, (&big.Int{}).Exp(big.NewInt(2), big.NewInt(159), nil)) +} diff --git a/connect/connect.go b/connect/connect.go new file mode 100644 index 000000000..b2ad85f71 --- /dev/null +++ b/connect/connect.go @@ -0,0 +1,3 @@ +// Package connect contains utilities and helpers for working with the +// Connect feature of Consul. +package connect diff --git a/connect/testing.go b/connect/testing.go new file mode 100644 index 000000000..78008270a --- /dev/null +++ b/connect/testing.go @@ -0,0 +1,230 @@ +package connect + +import ( + "bytes" + "crypto" + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/sha256" + "crypto/x509" + "crypto/x509/pkix" + "encoding/pem" + "fmt" + "net/url" + "sync/atomic" + "time" + + "github.com/hashicorp/consul/agent/structs" + "github.com/mitchellh/go-testing-interface" +) + +// testClusterID is the Consul cluster ID for testing. +// +// NOTE(mitchellh): This might have to change some other constant for +// real testing once we integrate the Cluster ID into the core. For now it +// is unchecked. +const testClusterID = "11111111-2222-3333-4444-555555555555" + +// testCACounter is just an atomically incremented counter for creating +// unique names for the CA certs. +var testCACounter uint64 = 0 + +// TestCA creates a test CA certificate and signing key and returns it +// in the CARoot structure format. The CARoot returned will NOT have an ID +// set. +// +// If xc is non-nil, then the returned certificate will have a signing cert +// that is cross-signed with the previous cert, and this will be set as +// SigningCert. +func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot { + var result structs.CARoot + result.Name = fmt.Sprintf("Test CA %d", atomic.AddUint64(&testCACounter, 1)) + + // Create the private key we'll use for this CA cert. 
+ signer := testPrivateKey(t, &result) + + // The serial number for the cert + sn, err := SerialNumber() + if err != nil { + t.Fatalf("error generating serial number: %s", err) + } + + // The URI (SPIFFE compatible) for the cert + uri, err := url.Parse(fmt.Sprintf("spiffe://%s.consul", testClusterID)) + if err != nil { + t.Fatalf("error parsing CA URI: %s", err) + } + + // Create the CA cert + template := x509.Certificate{ + SerialNumber: sn, + Subject: pkix.Name{CommonName: result.Name}, + URIs: []*url.URL{uri}, + PermittedDNSDomainsCritical: true, + PermittedDNSDomains: []string{uri.Hostname()}, + BasicConstraintsValid: true, + KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageCRLSign, + IsCA: true, + NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: testKeyID(t, signer.Public()), + SubjectKeyId: testKeyID(t, signer.Public()), + } + + bs, err := x509.CreateCertificate( + rand.Reader, &template, &template, signer.Public(), signer) + if err != nil { + t.Fatalf("error generating CA certificate: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) + if err != nil { + t.Fatalf("error encoding private key: %s", err) + } + result.RootCert = buf.String() + + // If there is a prior CA to cross-sign with, then we need to create that + // and set it as the signing cert. + if xc != nil { + xccert, err := ParseCert(xc.RootCert) + if err != nil { + t.Fatalf("error parsing CA cert: %s", err) + } + xcsigner, err := ParseSigner(xc.SigningKey) + if err != nil { + t.Fatalf("error parsing signing key: %s", err) + } + + // Set the authority key to be the previous one + template.AuthorityKeyId = testKeyID(t, xcsigner.Public()) + + // Create the new certificate where the parent is the previous + // CA, the public key is the new public key, and the signing private + // key is the old private key. + bs, err := x509.CreateCertificate( + rand.Reader, &template, xccert, signer.Public(), xcsigner) + if err != nil { + t.Fatalf("error generating CA certificate: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) + if err != nil { + t.Fatalf("error encoding private key: %s", err) + } + result.SigningCert = buf.String() + } + + return &result +} + +// TestLeaf returns a valid leaf certificate for the named service with +// the given CA Root. +func TestLeaf(t testing.T, service string, root *structs.CARoot) string { + // Parse the CA cert and signing key from the root + caCert, err := ParseCert(root.RootCert) + if err != nil { + t.Fatalf("error parsing CA cert: %s", err) + } + signer, err := ParseSigner(root.SigningKey) + if err != nil { + t.Fatalf("error parsing signing key: %s", err) + } + + // The serial number for the cert + sn, err := SerialNumber() + if err != nil { + t.Fatalf("error generating serial number: %s", err) + } + + // Cert template for generation + template := x509.Certificate{ + SerialNumber: sn, + Subject: pkix.Name{CommonName: service}, + SignatureAlgorithm: x509.ECDSAWithSHA256, + BasicConstraintsValid: true, + KeyUsage: x509.KeyUsageDataEncipherment | x509.KeyUsageKeyAgreement, + ExtKeyUsage: []x509.ExtKeyUsage{ + x509.ExtKeyUsageClientAuth, + x509.ExtKeyUsageServerAuth, + }, + NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: testKeyID(t, signer.Public()), + SubjectKeyId: testKeyID(t, signer.Public()), + } + + // Create the certificate, PEM encode it and return that value. 
+
+	var buf bytes.Buffer
+	bs, err := x509.CreateCertificate(
+		rand.Reader, &template, caCert, signer.Public(), signer)
+	if err != nil {
+		t.Fatalf("error generating certificate: %s", err)
+	}
+	err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs})
+	if err != nil {
+		t.Fatalf("error encoding certificate: %s", err)
+	}
+
+	return buf.String()
+}
+
+// testKeyID returns a KeyID from the given public key. The "raw" must be
+// an *ecdsa.PublicKey, but is an interface type to support crypto.Signer.Public
+// values.
+func testKeyID(t testing.T, raw interface{}) []byte {
+	pub, ok := raw.(*ecdsa.PublicKey)
+	if !ok {
+		t.Fatalf("raw is type %T, expected *ecdsa.PublicKey", raw)
+	}
+
+	// This is not standard; RFC allows any unique identifier as long as they
+	// match in subject/authority chains but suggests specific hashing of DER
+	// bytes of public key including DER tags. I can't be bothered to do that,
+	// especially since ECDSA keys don't have a handy way to marshal the public
+	// key alone.
+	h := sha256.New()
+	h.Write(pub.X.Bytes())
+	h.Write(pub.Y.Bytes())
+	return h.Sum([]byte{})
+}
+
+// testMemoizePK is the private key that we memoize once we generate it
+// so that our tests don't rely on too much system entropy.
+var testMemoizePK atomic.Value
+
+// testPrivateKey creates an ECDSA-based private key.
+func testPrivateKey(t testing.T, ca *structs.CARoot) crypto.Signer {
+	// If we already generated a private key, use that
+	var pk *ecdsa.PrivateKey
+	if v := testMemoizePK.Load(); v != nil {
+		pk = v.(*ecdsa.PrivateKey)
+	}
+
+	// If we have no key, then create a new one.
+	if pk == nil {
+		var err error
+		pk, err = ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
+		if err != nil {
+			t.Fatalf("error generating private key: %s", err)
+		}
+	}
+
+	bs, err := x509.MarshalECPrivateKey(pk)
+	if err != nil {
+		t.Fatalf("error marshaling private key: %s", err)
+	}
+
+	var buf bytes.Buffer
+	err = pem.Encode(&buf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: bs})
+	if err != nil {
+		t.Fatalf("error encoding private key: %s", err)
+	}
+	ca.SigningKey = buf.String()
+
+	// Memoize the key
+	testMemoizePK.Store(pk)
+
+	return pk
+}
diff --git a/connect/testing_test.go b/connect/testing_test.go
new file mode 100644
index 000000000..d07aac201
--- /dev/null
+++ b/connect/testing_test.go
@@ -0,0 +1,109 @@
+package connect
+
+import (
+	"io/ioutil"
+	"os"
+	"os/exec"
+	"path/filepath"
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+)
+
+// hasOpenSSL is used to determine if the openssl CLI exists for unit tests.
+var hasOpenSSL bool
+
+func init() {
+	_, err := exec.LookPath("openssl")
+	hasOpenSSL = err == nil
+}
+
+// Test that the TestCA and TestLeaf functions generate valid certificates.
+func TestTestCAAndLeaf(t *testing.T) {
+	if !hasOpenSSL {
+		t.Skip("openssl not found")
+		return
+	}
+
+	assert := assert.New(t)
+
+	// Create the certs
+	ca := TestCA(t, nil)
+	leaf := TestLeaf(t, "web", ca)
+
+	// Create a temporary directory for storing the certs
+	td, err := ioutil.TempDir("", "consul")
+	assert.Nil(err)
+	defer os.RemoveAll(td)
+
+	// Write the cert
+	assert.Nil(ioutil.WriteFile(filepath.Join(td, "ca.pem"), []byte(ca.RootCert), 0644))
+	assert.Nil(ioutil.WriteFile(filepath.Join(td, "leaf.pem"), []byte(leaf), 0644))
+
+	// Use OpenSSL to verify so we have an external, known-working process
+	// that can verify this outside of our own implementations.
+ cmd := exec.Command( + "openssl", "verify", "-verbose", "-CAfile", "ca.pem", "leaf.pem") + cmd.Dir = td + output, err := cmd.Output() + t.Log(string(output)) + assert.Nil(err) +} + +// Test cross-signing. +func TestTestCAAndLeaf_xc(t *testing.T) { + if !hasOpenSSL { + t.Skip("openssl not found") + return + } + + assert := assert.New(t) + + // Create the certs + ca1 := TestCA(t, nil) + ca2 := TestCA(t, ca1) + leaf1 := TestLeaf(t, "web", ca1) + leaf2 := TestLeaf(t, "web", ca2) + + // Create a temporary directory for storing the certs + td, err := ioutil.TempDir("", "consul") + assert.Nil(err) + defer os.RemoveAll(td) + + // Write the cert + xcbundle := []byte(ca1.RootCert) + xcbundle = append(xcbundle, '\n') + xcbundle = append(xcbundle, []byte(ca2.SigningCert)...) + assert.Nil(ioutil.WriteFile(filepath.Join(td, "ca.pem"), xcbundle, 0644)) + assert.Nil(ioutil.WriteFile(filepath.Join(td, "leaf1.pem"), []byte(leaf1), 0644)) + assert.Nil(ioutil.WriteFile(filepath.Join(td, "leaf2.pem"), []byte(leaf2), 0644)) + + // OpenSSL verify the cross-signed leaf (leaf2) + { + cmd := exec.Command( + "openssl", "verify", "-verbose", "-CAfile", "ca.pem", "leaf2.pem") + cmd.Dir = td + output, err := cmd.Output() + t.Log(string(output)) + assert.Nil(err) + } + + // OpenSSL verify the old leaf (leaf1) + { + cmd := exec.Command( + "openssl", "verify", "-verbose", "-CAfile", "ca.pem", "leaf1.pem") + cmd.Dir = td + output, err := cmd.Output() + t.Log(string(output)) + assert.Nil(err) + } +} + +// Test that the private key is memoized to preseve system entropy. +func TestTestPrivateKey_memoize(t *testing.T) { + ca1 := TestCA(t, nil) + ca2 := TestCA(t, nil) + if ca1.SigningKey != ca2.SigningKey { + t.Fatal("should have the same signing keys for tests") + } +} From 6550ff949248502fcea2ba9a605a5eb57227a730 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 19 Mar 2018 13:53:57 -0700 Subject: [PATCH 071/539] agent/connect: package for agent-related Connect, parse SPIFFE IDs --- agent/connect/ca.go | 59 ++++++++++++++++++++++++++++++++++++ agent/connect/spiffe.go | 55 +++++++++++++++++++++++++++++++++ agent/connect/spiffe_test.go | 57 ++++++++++++++++++++++++++++++++++ 3 files changed, 171 insertions(+) create mode 100644 agent/connect/ca.go create mode 100644 agent/connect/spiffe.go create mode 100644 agent/connect/spiffe_test.go diff --git a/agent/connect/ca.go b/agent/connect/ca.go new file mode 100644 index 000000000..90c484529 --- /dev/null +++ b/agent/connect/ca.go @@ -0,0 +1,59 @@ +package connect + +import ( + "crypto" + "crypto/ecdsa" + "crypto/sha256" + "crypto/x509" + "encoding/pem" + "fmt" +) + +// ParseCert parses the x509 certificate from a PEM-encoded value. +func ParseCert(pemValue string) (*x509.Certificate, error) { + block, _ := pem.Decode([]byte(pemValue)) + if block == nil { + return nil, fmt.Errorf("no PEM-encoded data found") + } + + if block.Type != "CERTIFICATE" { + return nil, fmt.Errorf("first PEM-block should be CERTIFICATE type") + } + + return x509.ParseCertificate(block.Bytes) +} + +// ParseSigner parses a crypto.Signer from a PEM-encoded key. The private key +// is expected to be the first block in the PEM value. 
+func ParseSigner(pemValue string) (crypto.Signer, error) { + block, _ := pem.Decode([]byte(pemValue)) + if block == nil { + return nil, fmt.Errorf("no PEM-encoded data found") + } + + switch block.Type { + case "EC PRIVATE KEY": + return x509.ParseECPrivateKey(block.Bytes) + + default: + return nil, fmt.Errorf("unknown PEM block type for signing key: %s", block.Type) + } +} + +// KeyId returns a x509 KeyId from the given signing key. The key must be +// an *ecdsa.PublicKey, but is an interface type to support crypto.Signer. +func KeyId(raw interface{}) ([]byte, error) { + pub, ok := raw.(*ecdsa.PublicKey) + if !ok { + return nil, fmt.Errorf("invalid key type: %T", raw) + } + + // This is not standard; RFC allows any unique identifier as long as they + // match in subject/authority chains but suggests specific hashing of DER + // bytes of public key including DER tags. I can't be bothered to do esp. + // since ECDSA keys don't have a handy way to marshal the publick key alone. + h := sha256.New() + h.Write(pub.X.Bytes()) + h.Write(pub.Y.Bytes()) + return h.Sum([]byte{}), nil +} diff --git a/agent/connect/spiffe.go b/agent/connect/spiffe.go new file mode 100644 index 000000000..58a6b83e3 --- /dev/null +++ b/agent/connect/spiffe.go @@ -0,0 +1,55 @@ +package connect + +import ( + "fmt" + "net/url" + "regexp" +) + +// SpiffeID represents a Connect-valid SPIFFE ID. The user should type switch +// on the various implementations in this package to determine the type of ID. +type SpiffeID interface { + URI() *url.URL +} + +var ( + spiffeIDServiceRegexp = regexp.MustCompile( + `^/ns/(\w+)/dc/(\w+)/svc/(\w+)$`) +) + +// ParseSpiffeID parses a SPIFFE ID from the input URI. +func ParseSpiffeID(input *url.URL) (SpiffeID, error) { + if input.Scheme != "spiffe" { + return nil, fmt.Errorf("SPIFFE ID must have 'spiffe' scheme") + } + + // Test for service IDs + if v := spiffeIDServiceRegexp.FindStringSubmatch(input.Path); v != nil { + return &SpiffeIDService{ + Host: input.Host, + Namespace: v[1], + Datacenter: v[2], + Service: v[3], + }, nil + } + + return nil, fmt.Errorf("SPIFFE ID is not in the expected format") +} + +// SpiffeIDService is the structure to represent the SPIFFE ID for a service. +type SpiffeIDService struct { + Host string + Namespace string + Datacenter string + Service string +} + +// URI returns the *url.URL for this SPIFFE ID. +func (id *SpiffeIDService) URI() *url.URL { + var result url.URL + result.Scheme = "spiffe" + result.Host = id.Host + result.Path = fmt.Sprintf("/ns/%s/dc/%s/svc/%s", + id.Namespace, id.Datacenter, id.Service) + return &result +} diff --git a/agent/connect/spiffe_test.go b/agent/connect/spiffe_test.go new file mode 100644 index 000000000..861a4fa63 --- /dev/null +++ b/agent/connect/spiffe_test.go @@ -0,0 +1,57 @@ +package connect + +import ( + "net/url" + "testing" + + "github.com/stretchr/testify/assert" +) + +// testSpiffeIDCases contains the test cases for parsing and encoding +// the SPIFFE IDs. This is a global since it is used in multiple test functions. 
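+//
+// The service IDs exercised here follow the layout that ParseSpiffeID accepts
+// for services:
+//
+//	spiffe://<host>/ns/<namespace>/dc/<datacenter>/svc/<service>
+//
+// for example "spiffe://1234.consul/ns/default/dc/dc01/svc/web".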
+var testSpiffeIDCases = []struct { + Name string + URI string + Struct interface{} + ParseError string +}{ + { + "invalid scheme", + "http://google.com/", + nil, + "scheme", + }, + + { + "basic service ID", + "spiffe://1234.consul/ns/default/dc/dc01/svc/web", + &SpiffeIDService{ + Host: "1234.consul", + Namespace: "default", + Datacenter: "dc01", + Service: "web", + }, + "", + }, +} + +func TestParseSpiffeID(t *testing.T) { + for _, tc := range testSpiffeIDCases { + t.Run(tc.Name, func(t *testing.T) { + assert := assert.New(t) + + // Parse the URI, should always be valid + uri, err := url.Parse(tc.URI) + assert.Nil(err) + + // Parse the ID and check the error/return value + actual, err := ParseSpiffeID(uri) + assert.Equal(tc.ParseError != "", err != nil, "error value") + if err != nil { + assert.Contains(err.Error(), tc.ParseError) + return + } + assert.Equal(tc.Struct, actual) + }) + } +} From a360c5cca41e7561ff2d3d6ad07835cc8e090e32 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 19 Mar 2018 14:36:17 -0700 Subject: [PATCH 072/539] agent/consul: basic sign endpoint not tested yet --- agent/connect/ca.go | 15 +++ {connect => agent/connect}/testing.go | 56 +++++++++++- {connect => agent/connect}/testing_test.go | 0 agent/consul/connect_ca_endpoint.go | 101 +++++++++++++++++++++ agent/structs/connect_ca.go | 21 ++++- connect/ca.go | 48 ---------- connect/connect.go | 3 - 7 files changed, 188 insertions(+), 56 deletions(-) rename {connect => agent/connect}/testing.go (83%) rename {connect => agent/connect}/testing_test.go (100%) delete mode 100644 connect/ca.go delete mode 100644 connect/connect.go diff --git a/agent/connect/ca.go b/agent/connect/ca.go index 90c484529..a0a65ece6 100644 --- a/agent/connect/ca.go +++ b/agent/connect/ca.go @@ -40,6 +40,21 @@ func ParseSigner(pemValue string) (crypto.Signer, error) { } } +// ParseCSR parses a CSR from a PEM-encoded value. The certificate request +// must be the the first block in the PEM value. +func ParseCSR(pemValue string) (*x509.CertificateRequest, error) { + block, _ := pem.Decode([]byte(pemValue)) + if block == nil { + return nil, fmt.Errorf("no PEM-encoded data found") + } + + if block.Type != "CERTIFICATE REQUEST" { + return nil, fmt.Errorf("first PEM-block should be CERTIFICATE REQUEST type") + } + + return x509.ParseCertificateRequest(block.Bytes) +} + // KeyId returns a x509 KeyId from the given signing key. The key must be // an *ecdsa.PublicKey, but is an interface type to support crypto.Signer. func KeyId(raw interface{}) ([]byte, error) { diff --git a/connect/testing.go b/agent/connect/testing.go similarity index 83% rename from connect/testing.go rename to agent/connect/testing.go index 78008270a..96b13dcf5 100644 --- a/connect/testing.go +++ b/agent/connect/testing.go @@ -11,6 +11,7 @@ import ( "crypto/x509/pkix" "encoding/pem" "fmt" + "math/big" "net/url" "sync/atomic" "time" @@ -45,7 +46,7 @@ func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot { signer := testPrivateKey(t, &result) // The serial number for the cert - sn, err := SerialNumber() + sn, err := testSerialNumber() if err != nil { t.Fatalf("error generating serial number: %s", err) } @@ -124,7 +125,11 @@ func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot { // the given CA Root. 
func TestLeaf(t testing.T, service string, root *structs.CARoot) string { // Parse the CA cert and signing key from the root - caCert, err := ParseCert(root.RootCert) + cert := root.SigningCert + if cert == "" { + cert = root.RootCert + } + caCert, err := ParseCert(cert) if err != nil { t.Fatalf("error parsing CA cert: %s", err) } @@ -133,8 +138,16 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) string { t.Fatalf("error parsing signing key: %s", err) } + // Build the SPIFFE ID + spiffeId := &SpiffeIDService{ + Host: fmt.Sprintf("%s.consul", testClusterID), + Namespace: "default", + Datacenter: "dc01", + Service: service, + } + // The serial number for the cert - sn, err := SerialNumber() + sn, err := testSerialNumber() if err != nil { t.Fatalf("error generating serial number: %s", err) } @@ -143,6 +156,7 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) string { template := x509.Certificate{ SerialNumber: sn, Subject: pkix.Name{CommonName: service}, + URIs: []*url.URL{spiffeId.URI()}, SignatureAlgorithm: x509.ECDSAWithSHA256, BasicConstraintsValid: true, KeyUsage: x509.KeyUsageDataEncipherment | x509.KeyUsageKeyAgreement, @@ -171,6 +185,30 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) string { return buf.String() } +// TestCSR returns a CSR to sign the given service. +func TestCSR(t testing.T, id SpiffeID) string { + template := &x509.CertificateRequest{ + URIs: []*url.URL{id.URI()}, + } + + // Create the private key we'll use + signer := testPrivateKey(t, nil) + + // Create the CSR itself + bs, err := x509.CreateCertificateRequest(rand.Reader, template, signer) + if err != nil { + t.Fatalf("error creating CSR: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: bs}) + if err != nil { + t.Fatalf("error encoding CSR: %s", err) + } + + return buf.String() +} + // testKeyID returns a KeyID from the given public key. The "raw" must be // an *ecdsa.PublicKey, but is an interface type to suppot crypto.Signer.Public // values. @@ -221,10 +259,20 @@ func testPrivateKey(t testing.T, ca *structs.CARoot) crypto.Signer { if err != nil { t.Fatalf("error encoding private key: %s", err) } - ca.SigningKey = buf.String() + if ca != nil { + ca.SigningKey = buf.String() + } // Memoize the key testMemoizePK.Store(pk) return pk } + +// testSerialNumber generates a serial number suitable for a certificate. +// For testing, this just sets it to a random number. +// +// This function is taken directly from the Vault implementation. +func testSerialNumber() (*big.Int, error) { + return rand.Int(rand.Reader, (&big.Int{}).Exp(big.NewInt(2), big.NewInt(159), nil)) +} diff --git a/connect/testing_test.go b/agent/connect/testing_test.go similarity index 100% rename from connect/testing_test.go rename to agent/connect/testing_test.go diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 3f35ad79f..f07fbd90f 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -1,6 +1,16 @@ package consul import ( + "bytes" + "crypto/rand" + "crypto/x509" + "crypto/x509/pkix" + "encoding/pem" + "fmt" + "math/big" + "time" + + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/consul/state" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-memdb" @@ -53,3 +63,94 @@ func (s *ConnectCA) Roots( }, ) } + +// Sign signs a certificate for a service. 
+//
+// NOTE(mitchellh): There is a LOT missing from this. I do next to zero
+// validation of the incoming CSR, the way the cert is signed probably
+// isn't right, we're not using enough of the CSR fields, etc.
+func (s *ConnectCA) Sign(
+	args *structs.CASignRequest,
+	reply *structs.IndexedCARoots) error {
+	// Parse the CSR
+	csr, err := connect.ParseCSR(args.CSR)
+	if err != nil {
+		return err
+	}
+
+	// Parse the SPIFFE ID
+	spiffeId, err := connect.ParseSpiffeID(csr.URIs[0])
+	if err != nil {
+		return err
+	}
+	serviceId, ok := spiffeId.(*connect.SpiffeIDService)
+	if !ok {
+		return fmt.Errorf("SPIFFE ID in CSR must be a service ID")
+	}
+
+	var root *structs.CARoot
+
+	// Determine the signing certificate. It is the set signing cert
+	// unless that is empty, in which case it is identical to the public
+	// cert.
+	certPem := root.SigningCert
+	if certPem == "" {
+		certPem = root.RootCert
+	}
+
+	// Parse the CA cert and signing key from the root
+	caCert, err := connect.ParseCert(certPem)
+	if err != nil {
+		return fmt.Errorf("error parsing CA cert: %s", err)
+	}
+	signer, err := connect.ParseSigner(root.SigningKey)
+	if err != nil {
+		return fmt.Errorf("error parsing signing key: %s", err)
+	}
+
+	// The serial number for the cert. NOTE(mitchellh): in the final
+	// implementation this should be monotonically increasing based on
+	// some raft state.
+	sn, err := rand.Int(rand.Reader, (&big.Int{}).Exp(big.NewInt(2), big.NewInt(159), nil))
+	if err != nil {
+		return fmt.Errorf("error generating serial number: %s", err)
+	}
+
+	// Create the keyId for the cert from the signing public key.
+	keyId, err := connect.KeyId(signer.Public())
+	if err != nil {
+		return err
+	}
+
+	// Cert template for generation
+	template := x509.Certificate{
+		SerialNumber: sn,
+		Subject: pkix.Name{CommonName: serviceId.Service},
+		URIs: csr.URIs,
+		SignatureAlgorithm: x509.ECDSAWithSHA256,
+		BasicConstraintsValid: true,
+		KeyUsage: x509.KeyUsageDataEncipherment | x509.KeyUsageKeyAgreement,
+		ExtKeyUsage: []x509.ExtKeyUsage{
+			x509.ExtKeyUsageClientAuth,
+			x509.ExtKeyUsageServerAuth,
+		},
+		NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour),
+		NotBefore: time.Now(),
+		AuthorityKeyId: keyId,
+		SubjectKeyId: keyId,
+	}
+
+	// Create the certificate, PEM encode it and return that value.
+	var buf bytes.Buffer
+	bs, err := x509.CreateCertificate(
+		rand.Reader, &template, caCert, signer.Public(), signer)
+	if err != nil {
+		return fmt.Errorf("error generating certificate: %s", err)
+	}
+	err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs})
+	if err != nil {
+		return fmt.Errorf("error encoding certificate: %s", err)
+	}
+
+	return nil
+}
diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go
index 46725dcc7..992fce85a 100644
--- a/agent/structs/connect_ca.go
+++ b/agent/structs/connect_ca.go
@@ -28,7 +28,8 @@ type CARoot struct {
 	RootCert string
 
 	// SigningCert is the PEM-encoded signing certificate and SigningKey
-	// is the PEM-encoded private key for the signing certificate.
+	// is the PEM-encoded private key for the signing certificate. These
+	// may actually be empty if the CA plugin in use manages these for us.
 	SigningCert string
 	SigningKey string
 
@@ -37,3 +38,21 @@ type CARoot struct {
 
 // CARoots is a list of CARoot structures.
 type CARoots []*CARoot
+
+// CASignRequest is the request for signing a service certificate.
+type CASignRequest struct {
+	// Datacenter is the target for this request.
+	Datacenter string
+
+	// CSR is the PEM-encoded CSR.
+ CSR string + + // WriteRequest is a common struct containing ACL tokens and other + // write-related common elements for requests. + WriteRequest +} + +// RequestDatacenter returns the datacenter for a given request. +func (q *CASignRequest) RequestDatacenter() string { + return q.Datacenter +} diff --git a/connect/ca.go b/connect/ca.go deleted file mode 100644 index e9ada4953..000000000 --- a/connect/ca.go +++ /dev/null @@ -1,48 +0,0 @@ -package connect - -import ( - "crypto" - "crypto/rand" - "crypto/x509" - "encoding/pem" - "fmt" - "math/big" -) - -// ParseCert parses the x509 certificate from a PEM-encoded value. -func ParseCert(pemValue string) (*x509.Certificate, error) { - block, _ := pem.Decode([]byte(pemValue)) - if block == nil { - return nil, fmt.Errorf("no PEM-encoded data found") - } - - if block.Type != "CERTIFICATE" { - return nil, fmt.Errorf("first PEM-block should be CERTIFICATE type") - } - - return x509.ParseCertificate(block.Bytes) -} - -// ParseSigner parses a crypto.Signer from a PEM-encoded key. The private key -// is expected to be the first block in the PEM value. -func ParseSigner(pemValue string) (crypto.Signer, error) { - block, _ := pem.Decode([]byte(pemValue)) - if block == nil { - return nil, fmt.Errorf("no PEM-encoded data found") - } - - switch block.Type { - case "EC PRIVATE KEY": - return x509.ParseECPrivateKey(block.Bytes) - - default: - return nil, fmt.Errorf("unknown PEM block type for signing key: %s", block.Type) - } -} - -// SerialNumber generates a serial number suitable for a certificate. -// -// This function is taken directly from the Vault implementation. -func SerialNumber() (*big.Int, error) { - return rand.Int(rand.Reader, (&big.Int{}).Exp(big.NewInt(2), big.NewInt(159), nil)) -} diff --git a/connect/connect.go b/connect/connect.go deleted file mode 100644 index b2ad85f71..000000000 --- a/connect/connect.go +++ /dev/null @@ -1,3 +0,0 @@ -// Package connect contains utilities and helpers for working with the -// Connect feature of Consul. -package connect From 9a8653f45e1b003bf1ac93cfe2419f7c1f913f94 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 19 Mar 2018 20:29:14 -0700 Subject: [PATCH 073/539] agent/consul: test for ConnectCA.Sign --- agent/connect/{testing.go => testing_ca.go} | 16 ++++++- .../{testing_test.go => testing_ca_test.go} | 0 agent/connect/testing_spiffe.go | 15 +++++++ agent/consul/connect_ca_endpoint.go | 7 +++- agent/consul/connect_ca_endpoint_test.go | 42 +++++++++++++++++++ agent/consul/state/connect_ca.go | 18 ++++++++ agent/structs/connect_ca.go | 6 +++ 7 files changed, 101 insertions(+), 3 deletions(-) rename agent/connect/{testing.go => testing_ca.go} (95%) rename agent/connect/{testing_test.go => testing_ca_test.go} (100%) create mode 100644 agent/connect/testing_spiffe.go create mode 100644 agent/consul/connect_ca_endpoint_test.go diff --git a/agent/connect/testing.go b/agent/connect/testing_ca.go similarity index 95% rename from agent/connect/testing.go rename to agent/connect/testing_ca.go index 96b13dcf5..b6140bb04 100644 --- a/agent/connect/testing.go +++ b/agent/connect/testing_ca.go @@ -17,6 +17,7 @@ import ( "time" "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/go-uuid" "github.com/mitchellh/go-testing-interface" ) @@ -32,14 +33,15 @@ const testClusterID = "11111111-2222-3333-4444-555555555555" var testCACounter uint64 = 0 // TestCA creates a test CA certificate and signing key and returns it -// in the CARoot structure format. 
The CARoot returned will NOT have an ID -// set. +// in the CARoot structure format. The returned CA will be set as Active = true. // // If xc is non-nil, then the returned certificate will have a signing cert // that is cross-signed with the previous cert, and this will be set as // SigningCert. func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot { var result structs.CARoot + result.ID = testUUID(t) + result.Active = true result.Name = fmt.Sprintf("Test CA %d", atomic.AddUint64(&testCACounter, 1)) // Create the private key we'll use for this CA cert. @@ -276,3 +278,13 @@ func testPrivateKey(t testing.T, ca *structs.CARoot) crypto.Signer { func testSerialNumber() (*big.Int, error) { return rand.Int(rand.Reader, (&big.Int{}).Exp(big.NewInt(2), big.NewInt(159), nil)) } + +// testUUID generates a UUID for testing. +func testUUID(t testing.T) string { + ret, err := uuid.GenerateUUID() + if err != nil { + t.Fatalf("Unable to generate a UUID, %s", err) + } + + return ret +} diff --git a/agent/connect/testing_test.go b/agent/connect/testing_ca_test.go similarity index 100% rename from agent/connect/testing_test.go rename to agent/connect/testing_ca_test.go diff --git a/agent/connect/testing_spiffe.go b/agent/connect/testing_spiffe.go new file mode 100644 index 000000000..e2e7a470f --- /dev/null +++ b/agent/connect/testing_spiffe.go @@ -0,0 +1,15 @@ +package connect + +import ( + "github.com/mitchellh/go-testing-interface" +) + +// TestSpiffeIDService returns a SPIFFE ID representing a service. +func TestSpiffeIDService(t testing.T, service string) *SpiffeIDService { + return &SpiffeIDService{ + Host: testClusterID + ".consul", + Namespace: "default", + Datacenter: "dc01", + Service: service, + } +} diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index f07fbd90f..2e26c4e2b 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -88,7 +88,12 @@ func (s *ConnectCA) Sign( return fmt.Errorf("SPIFFE ID in CSR must be a service ID") } - var root *structs.CARoot + // Get the currently active root + state := s.srv.fsm.State() + _, root, err := state.CARootActive(nil) + if err != nil { + return err + } // Determine the signing certificate. It is the set signing cert // unless that is empty, in which case it is identically to the public diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go new file mode 100644 index 000000000..a08e31e04 --- /dev/null +++ b/agent/consul/connect_ca_endpoint_test.go @@ -0,0 +1,42 @@ +package consul + +import ( + "os" + "testing" + + "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/consul/testrpc" + "github.com/hashicorp/net-rpc-msgpackrpc" + "github.com/stretchr/testify/assert" +) + +// Test CA signing +// +// NOTE(mitchellh): Just testing the happy path and not all the other validation +// issues because the internals of this method will probably be gutted for the +// CA plugins then we can just test mocks. 
+func TestConnectCASign(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Insert a CA + state := s1.fsm.State() + assert.Nil(state.CARootSet(1, connect.TestCA(t, nil))) + + // Generate a CSR and request signing + args := &structs.CASignRequest{ + Datacenter: "dc01", + CSR: connect.TestCSR(t, connect.TestSpiffeIDService(t, "web")), + } + var reply interface{} + assert.Nil(msgpackrpc.CallWithCodec(codec, "ConnectCA.Sign", args, &reply)) +} diff --git a/agent/consul/state/connect_ca.go b/agent/consul/state/connect_ca.go index 9e3195918..9c19b65c2 100644 --- a/agent/consul/state/connect_ca.go +++ b/agent/consul/state/connect_ca.go @@ -55,6 +55,24 @@ func (s *Store) CARoots(ws memdb.WatchSet) (uint64, structs.CARoots, error) { return idx, results, nil } +// CARootActive returns the currently active CARoot. +func (s *Store) CARootActive(ws memdb.WatchSet) (uint64, *structs.CARoot, error) { + // Get all the roots since there should never be that many and just + // do the filtering in this method. + var result *structs.CARoot + idx, roots, err := s.CARoots(ws) + if err == nil { + for _, r := range roots { + if r.Active { + result = r + break + } + } + } + + return idx, result, err +} + // CARootSet creates or updates a CA root. // // NOTE(mitchellh): I have a feeling we'll want a CARootMultiSetCAS to diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 992fce85a..045ebea90 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -33,6 +33,12 @@ type CARoot struct { SigningCert string SigningKey string + // Active is true if this is the current active CA. This must only + // be true for exactly one CA. For any method that modifies roots in the + // state store, tests should be written to verify that multiple roots + // cannot be active. + Active bool + RaftIndex } From 1928c07d0c4faaa64a24105557a25edc05e64323 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 19 Mar 2018 21:00:01 -0700 Subject: [PATCH 074/539] agent/consul: key the public key of the CSR, verify in test --- agent/connect/testing_ca.go | 3 ++- agent/consul/connect_ca_endpoint.go | 15 ++++++++++++--- agent/consul/connect_ca_endpoint_test.go | 16 ++++++++++++++-- agent/structs/connect_ca.go | 14 ++++++++++++++ 4 files changed, 42 insertions(+), 6 deletions(-) diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index b6140bb04..b7f436834 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -190,7 +190,8 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) string { // TestCSR returns a CSR to sign the given service. func TestCSR(t testing.T, id SpiffeID) string { template := &x509.CertificateRequest{ - URIs: []*url.URL{id.URI()}, + URIs: []*url.URL{id.URI()}, + SignatureAlgorithm: x509.ECDSAWithSHA256, } // Create the private key we'll use diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 2e26c4e2b..9e6b8a4b1 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -71,7 +71,7 @@ func (s *ConnectCA) Roots( // isn't right, we're not using enough of the CSR fields, etc. 
func (s *ConnectCA) Sign( args *structs.CASignRequest, - reply *structs.IndexedCARoots) error { + reply *structs.IssuedCert) error { // Parse the CSR csr, err := connect.ParseCSR(args.CSR) if err != nil { @@ -132,14 +132,17 @@ func (s *ConnectCA) Sign( SerialNumber: sn, Subject: pkix.Name{CommonName: serviceId.Service}, URIs: csr.URIs, - SignatureAlgorithm: x509.ECDSAWithSHA256, + Signature: csr.Signature, + SignatureAlgorithm: csr.SignatureAlgorithm, + PublicKeyAlgorithm: csr.PublicKeyAlgorithm, + PublicKey: csr.PublicKey, BasicConstraintsValid: true, KeyUsage: x509.KeyUsageDataEncipherment | x509.KeyUsageKeyAgreement, ExtKeyUsage: []x509.ExtKeyUsage{ x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth, }, - NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), + NotAfter: time.Now().Add(3 * 24 * time.Hour), NotBefore: time.Now(), AuthorityKeyId: keyId, SubjectKeyId: keyId, @@ -157,5 +160,11 @@ func (s *ConnectCA) Sign( return fmt.Errorf("error encoding private key: %s", err) } + // Set the response + *reply = structs.IssuedCert{ + SerialNumber: template.SerialNumber, + Cert: buf.String(), + } + return nil } diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go index a08e31e04..d658c7ade 100644 --- a/agent/consul/connect_ca_endpoint_test.go +++ b/agent/consul/connect_ca_endpoint_test.go @@ -1,6 +1,7 @@ package consul import ( + "crypto/x509" "os" "testing" @@ -30,13 +31,24 @@ func TestConnectCASign(t *testing.T) { // Insert a CA state := s1.fsm.State() - assert.Nil(state.CARootSet(1, connect.TestCA(t, nil))) + ca := connect.TestCA(t, nil) + assert.Nil(state.CARootSet(1, ca)) // Generate a CSR and request signing args := &structs.CASignRequest{ Datacenter: "dc01", CSR: connect.TestCSR(t, connect.TestSpiffeIDService(t, "web")), } - var reply interface{} + var reply structs.IssuedCert assert.Nil(msgpackrpc.CallWithCodec(codec, "ConnectCA.Sign", args, &reply)) + + // Verify that the cert is signed by the CA + roots := x509.NewCertPool() + assert.True(roots.AppendCertsFromPEM([]byte(ca.RootCert))) + leaf, err := connect.ParseCert(reply.Cert) + assert.Nil(err) + _, err = leaf.Verify(x509.VerifyOptions{ + Roots: roots, + }) + assert.Nil(err) } diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 045ebea90..6723d9b98 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -1,5 +1,9 @@ package structs +import ( + "math/big" +) + // IndexedCARoots is the list of currently trusted CA Roots. type IndexedCARoots struct { // ActiveRootID is the ID of a root in Roots that is the active CA root. @@ -62,3 +66,13 @@ type CASignRequest struct { func (q *CASignRequest) RequestDatacenter() string { return q.Datacenter } + +// IssuedCert is a certificate that has been issued by a Connect CA. +type IssuedCert struct { + // SerialNumber is the unique serial number for this certificate. + SerialNumber *big.Int + + // Cert is the PEM-encoded certificate. This should not be stored in the + // state store, but is present in the sign API response. 
+ Cert string `json:",omitempty"` +} From 712888258b659d1f08ed9e90fe7efbe69adb6f74 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 20 Mar 2018 10:36:05 -0700 Subject: [PATCH 075/539] agent/consul: tests for CA endpoints --- agent/agent_endpoint.go | 9 +++--- agent/connect_ca_endpoint_test.go | 27 +++++++++++++++++ agent/consul/connect_ca_endpoint.go | 5 ++++ agent/consul/connect_ca_endpoint_test.go | 38 ++++++++++++++++++++++++ 4 files changed, 74 insertions(+), 5 deletions(-) create mode 100644 agent/connect_ca_endpoint_test.go diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index e3e8fcd51..97c52512a 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -839,9 +839,8 @@ func (s *HTTPServer) AgentToken(resp http.ResponseWriter, req *http.Request) (in // AgentConnectCARoots returns the trusted CA roots. func (s *HTTPServer) AgentConnectCARoots(resp http.ResponseWriter, req *http.Request) (interface{}, error) { - if req.Method != "GET" { - return nil, MethodNotAllowedError{req.Method, []string{"GET"}} - } - - return nil, nil + // NOTE(mitchellh): for now this is identical to /v1/connect/ca/roots. + // In the future, we're going to do some agent-local caching and the + // behavior will differ. + return s.ConnectCARoots(resp, req) } diff --git a/agent/connect_ca_endpoint_test.go b/agent/connect_ca_endpoint_test.go new file mode 100644 index 000000000..ee30f57a9 --- /dev/null +++ b/agent/connect_ca_endpoint_test.go @@ -0,0 +1,27 @@ +package agent + +import ( + "net/http" + "net/http/httptest" + "testing" + + "github.com/hashicorp/consul/agent/structs" + "github.com/stretchr/testify/assert" +) + +func TestConnectCARoots_empty(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + req, _ := http.NewRequest("GET", "/v1/connect/ca/roots", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.ConnectCARoots(resp, req) + assert.Nil(err) + + value := obj.(structs.IndexedCARoots) + assert.Equal(value.ActiveRootID, "") + assert.Len(value.Roots, 0) +} diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 9e6b8a4b1..1702c8740 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -56,6 +56,11 @@ func (s *ConnectCA) Roots( Name: r.Name, RootCert: r.RootCert, RaftIndex: r.RaftIndex, + Active: r.Active, + } + + if r.Active { + reply.ActiveRootID = r.ID } } diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go index d658c7ade..375d75115 100644 --- a/agent/consul/connect_ca_endpoint_test.go +++ b/agent/consul/connect_ca_endpoint_test.go @@ -12,6 +12,44 @@ import ( "github.com/stretchr/testify/assert" ) +// Test listing root CAs. 
+func TestConnectCARoots(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Insert some CAs + state := s1.fsm.State() + ca1 := connect.TestCA(t, nil) + ca2 := connect.TestCA(t, nil) + ca2.Active = false + assert.Nil(state.CARootSet(1, ca1)) + assert.Nil(state.CARootSet(2, ca2)) + + // Request + args := &structs.DCSpecificRequest{ + Datacenter: "dc1", + } + var reply structs.IndexedCARoots + assert.Nil(msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", args, &reply)) + + // Verify + assert.Equal(ca1.ID, reply.ActiveRootID) + assert.Len(reply.Roots, 2) + for _, r := range reply.Roots { + // These must never be set, for security + assert.Equal("", r.SigningCert) + assert.Equal("", r.SigningKey) + } +} + // Test CA signing // // NOTE(mitchellh): Just testing the happy path and not all the other validation From 80a058a573ee992123dd8292d2b0ce97023dfcbf Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 21 Mar 2018 10:10:53 -0700 Subject: [PATCH 076/539] agent/consul: CAS operations for setting the CA root --- agent/agent_test.go | 9 +++ agent/connect_ca_endpoint_test.go | 21 ++++++ agent/consul/connect_ca_endpoint.go | 3 + agent/consul/connect_ca_endpoint_test.go | 9 ++- agent/consul/fsm/commands_oss.go | 26 +++++++ agent/consul/state/connect_ca.go | 88 +++++++++++++----------- agent/consul/testing.go | 26 +++++++ agent/consul/testing_endpoint.go | 43 ++++++++++++ agent/consul/testing_endpoint_test.go | 42 +++++++++++ agent/consul/testing_test.go | 13 ++++ agent/structs/connect_ca.go | 22 ++++++ agent/structs/structs.go | 1 + 12 files changed, 259 insertions(+), 44 deletions(-) create mode 100644 agent/consul/testing.go create mode 100644 agent/consul/testing_endpoint.go create mode 100644 agent/consul/testing_endpoint_test.go create mode 100644 agent/consul/testing_test.go diff --git a/agent/agent_test.go b/agent/agent_test.go index 58ada5561..df1593bd9 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -16,6 +16,7 @@ import ( "time" "github.com/hashicorp/consul/agent/checks" + "github.com/hashicorp/consul/agent/consul" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/testutil" @@ -25,6 +26,14 @@ import ( "github.com/pascaldekloe/goe/verify" ) +// TestMain is the main entrypoint for `go test`. 
+func TestMain(m *testing.M) { + // Enable the test RPC endpoints + consul.TestEndpoint() + + os.Exit(m.Run()) +} + func externalIP() (string, error) { addrs, err := net.InterfaceAddrs() if err != nil { diff --git a/agent/connect_ca_endpoint_test.go b/agent/connect_ca_endpoint_test.go index ee30f57a9..cec8382c0 100644 --- a/agent/connect_ca_endpoint_test.go +++ b/agent/connect_ca_endpoint_test.go @@ -5,6 +5,7 @@ import ( "net/http/httptest" "testing" + "github.com/hashicorp/consul/agent/consul" "github.com/hashicorp/consul/agent/structs" "github.com/stretchr/testify/assert" ) @@ -25,3 +26,23 @@ func TestConnectCARoots_empty(t *testing.T) { assert.Equal(value.ActiveRootID, "") assert.Len(value.Roots, 0) } + +func TestConnectCARoots_list(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + state := consul.TestServerState(a.Agent.delegate.(*consul.Server)) + t.Log(state.CARoots(nil)) + + req, _ := http.NewRequest("GET", "/v1/connect/ca/roots", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.ConnectCARoots(resp, req) + assert.Nil(err) + + value := obj.(structs.IndexedCARoots) + assert.Equal(value.ActiveRootID, "") + assert.Len(value.Roots, 0) +} diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 1702c8740..d6ddaef58 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -99,6 +99,9 @@ func (s *ConnectCA) Sign( if err != nil { return err } + if root == nil { + return fmt.Errorf("no active CA found") + } // Determine the signing certificate. It is the set signing cert // unless that is empty, in which case it is identically to the public diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go index 375d75115..8a3f1b4f2 100644 --- a/agent/consul/connect_ca_endpoint_test.go +++ b/agent/consul/connect_ca_endpoint_test.go @@ -30,8 +30,9 @@ func TestConnectCARoots(t *testing.T) { ca1 := connect.TestCA(t, nil) ca2 := connect.TestCA(t, nil) ca2.Active = false - assert.Nil(state.CARootSet(1, ca1)) - assert.Nil(state.CARootSet(2, ca2)) + ok, err := state.CARootSetCAS(1, 0, []*structs.CARoot{ca1, ca2}) + assert.True(ok) + assert.Nil(err) // Request args := &structs.DCSpecificRequest{ @@ -70,7 +71,9 @@ func TestConnectCASign(t *testing.T) { // Insert a CA state := s1.fsm.State() ca := connect.TestCA(t, nil) - assert.Nil(state.CARootSet(1, ca)) + ok, err := state.CARootSetCAS(1, 0, []*structs.CARoot{ca}) + assert.True(ok) + assert.Nil(err) // Generate a CSR and request signing args := &structs.CASignRequest{ diff --git a/agent/consul/fsm/commands_oss.go b/agent/consul/fsm/commands_oss.go index 51f127899..2d2627748 100644 --- a/agent/consul/fsm/commands_oss.go +++ b/agent/consul/fsm/commands_oss.go @@ -21,6 +21,7 @@ func init() { registerCommand(structs.TxnRequestType, (*FSM).applyTxn) registerCommand(structs.AutopilotRequestType, (*FSM).applyAutopilotUpdate) registerCommand(structs.IntentionRequestType, (*FSM).applyIntentionOperation) + registerCommand(structs.ConnectCARequestType, (*FSM).applyConnectCAOperation) } func (c *FSM) applyRegister(buf []byte, index uint64) interface{} { @@ -269,3 +270,28 @@ func (c *FSM) applyIntentionOperation(buf []byte, index uint64) interface{} { return fmt.Errorf("Invalid Intention operation '%s'", req.Op) } } + +// applyConnectCAOperation applies the given CA operation to the state store. 
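+//
+// The decoded log entry mirrors structs.CARequest; a representative payload
+// (values are illustrative) looks like:
+//
+//	structs.CARequest{Op: structs.CAOpSet, Index: prevIdx, Roots: roots}
+//
+// where Index is the check-and-set index handed through to CARootSetCAS.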
+func (c *FSM) applyConnectCAOperation(buf []byte, index uint64) interface{} { + var req structs.CARequest + if err := structs.Decode(buf, &req); err != nil { + panic(fmt.Errorf("failed to decode request: %v", err)) + } + + defer metrics.MeasureSinceWithLabels([]string{"consul", "fsm", "ca"}, time.Now(), + []metrics.Label{{Name: "op", Value: string(req.Op)}}) + defer metrics.MeasureSinceWithLabels([]string{"fsm", "ca"}, time.Now(), + []metrics.Label{{Name: "op", Value: string(req.Op)}}) + switch req.Op { + case structs.CAOpSet: + act, err := c.state.CARootSetCAS(index, req.Index, req.Roots) + if err != nil { + return err + } + + return act + default: + c.logger.Printf("[WARN] consul.fsm: Invalid CA operation '%s'", req.Op) + return fmt.Errorf("Invalid CA operation '%s'", req.Op) + } +} diff --git a/agent/consul/state/connect_ca.go b/agent/consul/state/connect_ca.go index 9c19b65c2..3b66a07c6 100644 --- a/agent/consul/state/connect_ca.go +++ b/agent/consul/state/connect_ca.go @@ -73,52 +73,58 @@ func (s *Store) CARootActive(ws memdb.WatchSet) (uint64, *structs.CARoot, error) return idx, result, err } -// CARootSet creates or updates a CA root. +// CARootSetCAS sets the current CA root state using a check-and-set operation. +// On success, this will replace the previous set of CARoots completely with +// the given set of roots. // -// NOTE(mitchellh): I have a feeling we'll want a CARootMultiSetCAS to -// perform a check-and-set on the entire set of CARoots versus an individual -// set, since we'll want to modify them atomically during events such as -// rotation. -func (s *Store) CARootSet(idx uint64, v *structs.CARoot) error { +// The first boolean result returns whether the transaction succeeded or not. +func (s *Store) CARootSetCAS(idx, cidx uint64, rs []*structs.CARoot) (bool, error) { tx := s.db.Txn(true) defer tx.Abort() - if err := s.caRootSetTxn(tx, idx, v); err != nil { - return err + // Get the current max index + if midx := maxIndexTxn(tx, caRootTableName); midx != cidx { + return false, nil + } + + // Go through and find any existing matching CAs so we can preserve and + // update their Create/ModifyIndex values. + for _, r := range rs { + if r.ID == "" { + return false, ErrMissingCARootID + } + + existing, err := tx.First(caRootTableName, "id", r.ID) + if err != nil { + return false, fmt.Errorf("failed CA root lookup: %s", err) + } + + if existing != nil { + r.CreateIndex = existing.(*structs.CARoot).CreateIndex + } else { + r.CreateIndex = idx + } + r.ModifyIndex = idx + } + + // Delete all + _, err := tx.DeleteAll(caRootTableName, "id") + if err != nil { + return false, err + } + + // Insert all + for _, r := range rs { + if err := tx.Insert(caRootTableName, r); err != nil { + return false, err + } + } + + // Update the index + if err := tx.Insert("index", &IndexEntry{caRootTableName, idx}); err != nil { + return false, fmt.Errorf("failed updating index: %s", err) } tx.Commit() - return nil -} - -// caRootSetTxn is the inner method used to insert or update a CA root with -// the proper indexes into the state store. 
-func (s *Store) caRootSetTxn(tx *memdb.Txn, idx uint64, v *structs.CARoot) error {
-	// ID is required
-	if v.ID == "" {
-		return ErrMissingCARootID
-	}
-
-	// Check for an existing value
-	existing, err := tx.First(caRootTableName, "id", v.ID)
-	if err != nil {
-		return fmt.Errorf("failed CA root lookup: %s", err)
-	}
-	if existing != nil {
-		old := existing.(*structs.CARoot)
-		v.CreateIndex = old.CreateIndex
-	} else {
-		v.CreateIndex = idx
-	}
-	v.ModifyIndex = idx
-
-	// Insert
-	if err := tx.Insert(caRootTableName, v); err != nil {
-		return err
-	}
-	if err := tx.Insert("index", &IndexEntry{caRootTableName, idx}); err != nil {
-		return fmt.Errorf("failed updating index: %s", err)
-	}
-
-	return nil
+	return true, nil
 }
diff --git a/agent/consul/testing.go b/agent/consul/testing.go
new file mode 100644
index 000000000..afae7c1a1
--- /dev/null
+++ b/agent/consul/testing.go
@@ -0,0 +1,26 @@
+package consul
+
+import (
+	"sync"
+)
+
+// testEndpointsOnce ensures that endpoints for testing are registered once.
+var testEndpointsOnce sync.Once
+
+// TestEndpoint registers RPC endpoints specifically for testing. These
+// endpoints enable some internal data access that we normally disallow, but
+// are useful for modifying server state.
+//
+// To use this, modify TestMain to call this function prior to running tests.
+//
+// These should NEVER be registered outside of tests.
+//
+// NOTE(mitchellh): This was created so that the downstream agent tests can
+// modify internal Connect CA state. When the CA plugin work comes in with
+// a more complete CA API, this may no longer be necessary and we can remove it.
+// That would be ideal.
+func TestEndpoint() {
+	testEndpointsOnce.Do(func() {
+		registerEndpoint(func(s *Server) interface{} { return &Test{s} })
+	})
+}
diff --git a/agent/consul/testing_endpoint.go b/agent/consul/testing_endpoint.go
new file mode 100644
index 000000000..e47e0e737
--- /dev/null
+++ b/agent/consul/testing_endpoint.go
@@ -0,0 +1,43 @@
+package consul
+
+import (
+	"github.com/hashicorp/consul/agent/structs"
+)
+
+// Test is an RPC endpoint that is only available during `go test` when
+// `TestEndpoint` is called. This is not and must not ever be available
+// during a real running Consul agent, since this endpoint bypasses
+// critical ACL checks.
+type Test struct {
+	// srv is a pointer back to the server.
+	srv *Server
+}
+
+// ConnectCASetRoots sets the current CA roots state.
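+//
+// A rough usage sketch, mirroring how the agent tests in this branch invoke
+// it over RPC (the variable names are illustrative):
+//
+//	var reply interface{}
+//	err := a.RPC("Test.ConnectCASetRoots", []*structs.CARoot{ca1, ca2}, &reply)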
+func (s *Test) ConnectCASetRoots(
+	args []*structs.CARoot,
+	reply *interface{}) error {
+
+	// Get the highest index
+	state := s.srv.fsm.State()
+	idx, _, err := state.CARoots(nil)
+	if err != nil {
+		return err
+	}
+
+	// Commit
+	resp, err := s.srv.raftApply(structs.ConnectCARequestType, &structs.CARequest{
+		Op: structs.CAOpSet,
+		Index: idx,
+		Roots: args,
+	})
+	if err != nil {
+		s.srv.logger.Printf("[ERR] consul.test: Apply failed %v", err)
+		return err
+	}
+	if respErr, ok := resp.(error); ok {
+		return respErr
+	}
+
+	return nil
+}
diff --git a/agent/consul/testing_endpoint_test.go b/agent/consul/testing_endpoint_test.go
new file mode 100644
index 000000000..e20213695
--- /dev/null
+++ b/agent/consul/testing_endpoint_test.go
@@ -0,0 +1,42 @@
+package consul
+
+import (
+	"os"
+	"testing"
+
+	"github.com/hashicorp/consul/agent/connect"
+	"github.com/hashicorp/consul/agent/structs"
+	"github.com/hashicorp/consul/testrpc"
+	"github.com/hashicorp/net-rpc-msgpackrpc"
+	"github.com/stretchr/testify/assert"
+)
+
+// Test setting the CAs
+func TestTestConnectCASetRoots(t *testing.T) {
+	t.Parallel()
+
+	assert := assert.New(t)
+	dir1, s1 := testServer(t)
+	defer os.RemoveAll(dir1)
+	defer s1.Shutdown()
+	codec := rpcClient(t, s1)
+	defer codec.Close()
+
+	testrpc.WaitForLeader(t, s1.RPC, "dc1")
+
+	// Prepare
+	ca1 := connect.TestCA(t, nil)
+	ca2 := connect.TestCA(t, nil)
+	ca2.Active = false
+
+	// Request
+	args := []*structs.CARoot{ca1, ca2}
+	var reply interface{}
+	assert.Nil(msgpackrpc.CallWithCodec(codec, "Test.ConnectCASetRoots", args, &reply))
+
+	// Verify they're there
+	state := s1.fsm.State()
+	_, actual, err := state.CARoots(nil)
+	assert.Nil(err)
+	assert.Len(actual, 2)
+}
diff --git a/agent/consul/testing_test.go b/agent/consul/testing_test.go
new file mode 100644
index 000000000..98e8dd743
--- /dev/null
+++ b/agent/consul/testing_test.go
@@ -0,0 +1,13 @@
+package consul
+
+import (
+	"os"
+	"testing"
+)
+
+func TestMain(m *testing.M) {
+	// Register the test RPC endpoint
+	TestEndpoint()
+
+	os.Exit(m.Run())
+}
diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go
index 6723d9b98..0437b27cf 100644
--- a/agent/structs/connect_ca.go
+++ b/agent/structs/connect_ca.go
@@ -76,3 +76,25 @@ type IssuedCert struct {
 	// state store, but is present in the sign API response.
 	Cert string `json:",omitempty"`
 }
+
+// CAOp is the operation for a request related to the Connect CA.
+type CAOp string
+
+const (
+	CAOpSet CAOp = "set"
+)
+
+// CARequest is used to modify connect CA data. This is used by the
+// FSM (agent/consul/fsm) to apply changes.
+type CARequest struct {
+	// Op is the type of operation being requested. This determines what
+	// other fields are required.
+	Op CAOp
+
+	// Index is used by CAOpSet for a CAS operation.
+	Index uint64
+
+	// Roots is a list of roots. This is used for CAOpSet. One root must
+	// always be active.
+	Roots []*CARoot
+}
diff --git a/agent/structs/structs.go b/agent/structs/structs.go
index 95c0ba069..a4e942230 100644
--- a/agent/structs/structs.go
+++ b/agent/structs/structs.go
@@ -41,6 +41,7 @@ const (
 	AreaRequestType = 10
 	ACLBootstrapRequestType = 11 // FSM snapshots only.
IntentionRequestType = 12 + ConnectCARequestType = 13 ) const ( From 748a0bb82440919a6d122831f74c5acc048ac5f3 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 21 Mar 2018 10:20:35 -0700 Subject: [PATCH 077/539] agent: CA root HTTP endpoints --- agent/agent_endpoint_test.go | 50 +++++++++++++++++++++++++++++++ agent/connect_ca_endpoint_test.go | 22 ++++++++++---- 2 files changed, 67 insertions(+), 5 deletions(-) diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 566d397cf..4c2f9f1d6 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -16,6 +16,7 @@ import ( "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/checks" "github.com/hashicorp/consul/agent/config" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/logger" @@ -2024,3 +2025,52 @@ func TestAgent_Token(t *testing.T) { } }) } + +func TestAgentConnectCARoots_empty(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/roots", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentConnectCARoots(resp, req) + assert.Nil(err) + + value := obj.(structs.IndexedCARoots) + assert.Equal(value.ActiveRootID, "") + assert.Len(value.Roots, 0) +} + +func TestAgentConnectCARoots_list(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Set some CAs + var reply interface{} + ca1 := connect.TestCA(t, nil) + ca1.Active = false + ca2 := connect.TestCA(t, nil) + assert.Nil(a.RPC("Test.ConnectCASetRoots", + []*structs.CARoot{ca1, ca2}, &reply)) + + // List + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/roots", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentConnectCARoots(resp, req) + assert.Nil(err) + + value := obj.(structs.IndexedCARoots) + assert.Equal(value.ActiveRootID, ca2.ID) + assert.Len(value.Roots, 2) + + // We should never have the secret information + for _, r := range value.Roots { + assert.Equal("", r.SigningCert) + assert.Equal("", r.SigningKey) + } +} diff --git a/agent/connect_ca_endpoint_test.go b/agent/connect_ca_endpoint_test.go index cec8382c0..bcf209ffe 100644 --- a/agent/connect_ca_endpoint_test.go +++ b/agent/connect_ca_endpoint_test.go @@ -5,7 +5,7 @@ import ( "net/http/httptest" "testing" - "github.com/hashicorp/consul/agent/consul" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/structs" "github.com/stretchr/testify/assert" ) @@ -34,15 +34,27 @@ func TestConnectCARoots_list(t *testing.T) { a := NewTestAgent(t.Name(), "") defer a.Shutdown() - state := consul.TestServerState(a.Agent.delegate.(*consul.Server)) - t.Log(state.CARoots(nil)) + // Set some CAs + var reply interface{} + ca1 := connect.TestCA(t, nil) + ca1.Active = false + ca2 := connect.TestCA(t, nil) + assert.Nil(a.RPC("Test.ConnectCASetRoots", + []*structs.CARoot{ca1, ca2}, &reply)) + // List req, _ := http.NewRequest("GET", "/v1/connect/ca/roots", nil) resp := httptest.NewRecorder() obj, err := a.srv.ConnectCARoots(resp, req) assert.Nil(err) value := obj.(structs.IndexedCARoots) - assert.Equal(value.ActiveRootID, "") - assert.Len(value.Roots, 0) + assert.Equal(value.ActiveRootID, ca2.ID) + assert.Len(value.Roots, 2) + + // We should never have the secret information + for _, r := range value.Roots { + assert.Equal("", r.SigningCert) + 
assert.Equal("", r.SigningKey) + } } From 58b6f476e877cafd62ef8f56955fd8ad7280d4b1 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 21 Mar 2018 10:55:39 -0700 Subject: [PATCH 078/539] agent: /v1/connect/ca/leaf/:service_id --- agent/agent_endpoint.go | 41 +++++++++++++++++++++ agent/agent_endpoint_test.go | 56 +++++++++++++++++++++++++++++ agent/connect/testing_ca.go | 40 +++++++++++++++------ agent/consul/connect_ca_endpoint.go | 2 +- agent/http_oss.go | 1 + agent/structs/connect_ca.go | 19 ++++++++-- 6 files changed, 145 insertions(+), 14 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 97c52512a..bc684f115 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/checks" "github.com/hashicorp/consul/agent/config" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/ipaddr" @@ -21,6 +22,9 @@ import ( "github.com/hashicorp/serf/serf" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/promhttp" + + // NOTE(mitcehllh): This is temporary while certs are stubbed out. + "github.com/mitchellh/go-testing-interface" ) type Self struct { @@ -844,3 +848,40 @@ func (s *HTTPServer) AgentConnectCARoots(resp http.ResponseWriter, req *http.Req // behavior will differ. return s.ConnectCARoots(resp, req) } + +// AgentConnectCALeafCert returns the certificate bundle for a service +// instance. This supports blocking queries to update the returned bundle. +func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Test the method + if req.Method != "GET" { + return nil, MethodNotAllowedError{req.Method, []string{"GET"}} + } + + // Get the service ID. Note that this is the ID of a service instance. + id := strings.TrimPrefix(req.URL.Path, "/v1/agent/connect/ca/leaf/") + + // Retrieve the service specified + service := s.agent.State.Service(id) + if service == nil { + return nil, fmt.Errorf("unknown service ID: %s", id) + } + + // Create a CSR. + // TODO(mitchellh): This is obviously not production ready! 
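	// Editor's note, an illustrative sketch that is not part of this patch:
	// once a service instance "foo" is registered on the local agent, a
	// client would exercise this handler roughly as
	//
	//   GET /v1/agent/connect/ca/leaf/foo
	//
	// and receive a structs.IssuedCert whose CertPEM is signed by the
	// cluster CA and whose PrivateKeyPEM is the key generated below.
	// The service ID "foo" is a made-up example value.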
+ csr, pk := connect.TestCSR(&testing.RuntimeT{}, &connect.SpiffeIDService{ + Host: "1234.consul", + Namespace: "default", + Datacenter: s.agent.config.Datacenter, + Service: service.Service, + }) + + // Request signing + var reply structs.IssuedCert + args := structs.CASignRequest{CSR: csr} + if err := s.agent.RPC("ConnectCA.Sign", &args, &reply); err != nil { + return nil, err + } + reply.PrivateKeyPEM = pk + + return &reply, nil +} diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 4c2f9f1d6..15267107a 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2,6 +2,7 @@ package agent import ( "bytes" + "crypto/x509" "fmt" "io" "io/ioutil" @@ -2074,3 +2075,58 @@ func TestAgentConnectCARoots_list(t *testing.T) { assert.Equal("", r.SigningKey) } } + +func TestAgentConnectCALeafCert_good(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Set CAs + var reply interface{} + ca1 := connect.TestCA(t, nil) + assert.Nil(a.RPC("Test.ConnectCASetRoots", []*structs.CARoot{ca1}, &reply)) + + { + // Register a local service + args := &structs.ServiceDefinition{ + ID: "foo", + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + } + req, _ := http.NewRequest("PUT", "/v1/agent/service/register", jsonReader(args)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + assert.Nil(err) + if !assert.Equal(200, resp.Code) { + t.Log("Body: ", resp.Body.String()) + } + } + + // List + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/foo", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentConnectCALeafCert(resp, req) + assert.Nil(err) + + // Get the issued cert + issued, ok := obj.(*structs.IssuedCert) + assert.True(ok) + + // Verify that the cert is signed by the CA + roots := x509.NewCertPool() + assert.True(roots.AppendCertsFromPEM([]byte(ca1.RootCert))) + leaf, err := connect.ParseCert(issued.CertPEM) + assert.Nil(err) + _, err = leaf.Verify(x509.VerifyOptions{ + Roots: roots, + }) + assert.Nil(err) + + // TODO(mitchellh): verify the private key matches the cert +} diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index b7f436834..f79849016 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -187,29 +187,47 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) string { return buf.String() } -// TestCSR returns a CSR to sign the given service. -func TestCSR(t testing.T, id SpiffeID) string { +// TestCSR returns a CSR to sign the given service along with the PEM-encoded +// private key for this certificate. 
+func TestCSR(t testing.T, id SpiffeID) (string, string) { template := &x509.CertificateRequest{ URIs: []*url.URL{id.URI()}, SignatureAlgorithm: x509.ECDSAWithSHA256, } + // Result buffers + var csrBuf, pkBuf bytes.Buffer + // Create the private key we'll use signer := testPrivateKey(t, nil) - // Create the CSR itself - bs, err := x509.CreateCertificateRequest(rand.Reader, template, signer) - if err != nil { - t.Fatalf("error creating CSR: %s", err) + { + // Create the private key PEM + bs, err := x509.MarshalECPrivateKey(signer.(*ecdsa.PrivateKey)) + if err != nil { + t.Fatalf("error marshalling PK: %s", err) + } + + err = pem.Encode(&pkBuf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: bs}) + if err != nil { + t.Fatalf("error encoding PK: %s", err) + } } - var buf bytes.Buffer - err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: bs}) - if err != nil { - t.Fatalf("error encoding CSR: %s", err) + { + // Create the CSR itself + bs, err := x509.CreateCertificateRequest(rand.Reader, template, signer) + if err != nil { + t.Fatalf("error creating CSR: %s", err) + } + + err = pem.Encode(&csrBuf, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: bs}) + if err != nil { + t.Fatalf("error encoding CSR: %s", err) + } } - return buf.String() + return csrBuf.String(), pkBuf.String() } // testKeyID returns a KeyID from the given public key. The "raw" must be diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index d6ddaef58..1f732490b 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -171,7 +171,7 @@ func (s *ConnectCA) Sign( // Set the response *reply = structs.IssuedCert{ SerialNumber: template.SerialNumber, - Cert: buf.String(), + CertPEM: buf.String(), } return nil diff --git a/agent/http_oss.go b/agent/http_oss.go index 3cb18b2e1..d2e86622f 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -30,6 +30,7 @@ func init() { registerEndpoint("/v1/agent/check/fail/", []string{"PUT"}, (*HTTPServer).AgentCheckFail) registerEndpoint("/v1/agent/check/update/", []string{"PUT"}, (*HTTPServer).AgentCheckUpdate) registerEndpoint("/v1/agent/connect/ca/roots", []string{"GET"}, (*HTTPServer).AgentConnectCARoots) + registerEndpoint("/v1/agent/connect/ca/leaf/", []string{"GET"}, (*HTTPServer).AgentConnectCALeafCert) registerEndpoint("/v1/agent/service/register", []string{"PUT"}, (*HTTPServer).AgentRegisterService) registerEndpoint("/v1/agent/service/deregister/", []string{"PUT"}, (*HTTPServer).AgentDeregisterService) registerEndpoint("/v1/agent/service/maintenance/", []string{"PUT"}, (*HTTPServer).AgentServiceMaintenance) diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 0437b27cf..6dc2dbf30 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -2,6 +2,7 @@ package structs import ( "math/big" + "time" ) // IndexedCARoots is the list of currently trusted CA Roots. @@ -72,9 +73,23 @@ type IssuedCert struct { // SerialNumber is the unique serial number for this certificate. SerialNumber *big.Int - // Cert is the PEM-encoded certificate. This should not be stored in the + // CertPEM and PrivateKeyPEM are the PEM-encoded certificate and private + // key for that cert, respectively. This should not be stored in the // state store, but is present in the sign API response. - Cert string `json:",omitempty"` + CertPEM string `json:",omitempty"` + PrivateKeyPEM string + + // Service is the name of the service for which the cert was issued. + // ServiceURI is the cert URI value. 
+ Service string + ServiceURI string + + // ValidAfter and ValidBefore are the validity periods for the + // certificate. + ValidAfter time.Time + ValidBefore time.Time + + RaftIndex } // CAOp is the operation for a request related to intentions. From a8510f8224d9998312cfa540e31d7a9974518f8a Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 21 Mar 2018 11:00:46 -0700 Subject: [PATCH 079/539] agent/consul: set more fields on the issued cert --- agent/consul/connect_ca_endpoint.go | 4 ++++ agent/consul/connect_ca_endpoint_test.go | 10 ++++++++-- 2 files changed, 12 insertions(+), 2 deletions(-) diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 1f732490b..60e631cef 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -172,6 +172,10 @@ func (s *ConnectCA) Sign( *reply = structs.IssuedCert{ SerialNumber: template.SerialNumber, CertPEM: buf.String(), + Service: serviceId.Service, + ServiceURI: template.URIs[0].String(), + ValidAfter: template.NotBefore, + ValidBefore: template.NotAfter, } return nil diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go index 8a3f1b4f2..f2404eb4c 100644 --- a/agent/consul/connect_ca_endpoint_test.go +++ b/agent/consul/connect_ca_endpoint_test.go @@ -76,9 +76,11 @@ func TestConnectCASign(t *testing.T) { assert.Nil(err) // Generate a CSR and request signing + spiffeId := connect.TestSpiffeIDService(t, "web") + csr, _ := connect.TestCSR(t, spiffeId) args := &structs.CASignRequest{ Datacenter: "dc01", - CSR: connect.TestCSR(t, connect.TestSpiffeIDService(t, "web")), + CSR: csr, } var reply structs.IssuedCert assert.Nil(msgpackrpc.CallWithCodec(codec, "ConnectCA.Sign", args, &reply)) @@ -86,10 +88,14 @@ func TestConnectCASign(t *testing.T) { // Verify that the cert is signed by the CA roots := x509.NewCertPool() assert.True(roots.AppendCertsFromPEM([]byte(ca.RootCert))) - leaf, err := connect.ParseCert(reply.Cert) + leaf, err := connect.ParseCert(reply.CertPEM) assert.Nil(err) _, err = leaf.Verify(x509.VerifyOptions{ Roots: roots, }) assert.Nil(err) + + // Verify other fields + assert.Equal("web", reply.Service) + assert.Equal(spiffeId.URI().String(), reply.ServiceURI) } From 17d6b437d2401ce63c59280c54271551825e0ebc Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 21 Mar 2018 11:20:17 -0700 Subject: [PATCH 080/539] agent/consul/fsm,state: tests for CA root related changes --- agent/consul/fsm/commands_oss_test.go | 33 +++++ agent/consul/state/connect_ca_test.go | 204 ++++++++++++++++++++++++++ 2 files changed, 237 insertions(+) create mode 100644 agent/consul/state/connect_ca_test.go diff --git a/agent/consul/fsm/commands_oss_test.go b/agent/consul/fsm/commands_oss_test.go index acf67c5fb..81852a9c4 100644 --- a/agent/consul/fsm/commands_oss_test.go +++ b/agent/consul/fsm/commands_oss_test.go @@ -8,6 +8,7 @@ import ( "testing" "time" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/consul/autopilot" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/api" @@ -1217,3 +1218,35 @@ func TestFSM_Intention_CRUD(t *testing.T) { assert.Nil(actual) } } + +func TestFSM_CARoots(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + fsm, err := New(nil, os.Stderr) + assert.Nil(err) + + // Roots + ca1 := connect.TestCA(t, nil) + ca2 := connect.TestCA(t, nil) + ca2.Active = false + + // Create a new request. 
+ req := structs.CARequest{ + Op: structs.CAOpSet, + Roots: []*structs.CARoot{ca1, ca2}, + } + + { + buf, err := structs.Encode(structs.ConnectCARequestType, req) + assert.Nil(err) + assert.True(fsm.Apply(makeLog(buf)).(bool)) + } + + // Verify it's in the state store. + { + _, roots, err := fsm.state.CARoots(nil) + assert.Nil(err) + assert.Len(roots, 2) + } +} diff --git a/agent/consul/state/connect_ca_test.go b/agent/consul/state/connect_ca_test.go new file mode 100644 index 000000000..14b5caf54 --- /dev/null +++ b/agent/consul/state/connect_ca_test.go @@ -0,0 +1,204 @@ +package state + +import ( + "testing" + + "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/go-memdb" + "github.com/stretchr/testify/assert" +) + +func TestStore_CARootSetList(t *testing.T) { + assert := assert.New(t) + s := testStateStore(t) + + // Call list to populate the watch set + ws := memdb.NewWatchSet() + _, _, err := s.CARoots(ws) + assert.Nil(err) + + // Build a valid value + ca1 := connect.TestCA(t, nil) + + // Set + ok, err := s.CARootSetCAS(1, 0, []*structs.CARoot{ca1}) + assert.Nil(err) + assert.True(ok) + + // Make sure the index got updated. + assert.Equal(s.maxIndex(caRootTableName), uint64(1)) + assert.True(watchFired(ws), "watch fired") + + // Read it back out and verify it. + expected := *ca1 + expected.RaftIndex = structs.RaftIndex{ + CreateIndex: 1, + ModifyIndex: 1, + } + + ws = memdb.NewWatchSet() + _, roots, err := s.CARoots(ws) + assert.Nil(err) + assert.Len(roots, 1) + actual := roots[0] + assert.Equal(&expected, actual) +} + +func TestStore_CARootSet_emptyID(t *testing.T) { + assert := assert.New(t) + s := testStateStore(t) + + // Call list to populate the watch set + ws := memdb.NewWatchSet() + _, _, err := s.CARoots(ws) + assert.Nil(err) + + // Build a valid value + ca1 := connect.TestCA(t, nil) + ca1.ID = "" + + // Set + ok, err := s.CARootSetCAS(1, 0, []*structs.CARoot{ca1}) + assert.NotNil(err) + assert.Contains(err.Error(), ErrMissingCARootID.Error()) + assert.False(ok) + + // Make sure the index got updated. + assert.Equal(s.maxIndex(caRootTableName), uint64(0)) + assert.False(watchFired(ws), "watch fired") + + // Read it back out and verify it. + ws = memdb.NewWatchSet() + _, roots, err := s.CARoots(ws) + assert.Nil(err) + assert.Len(roots, 0) +} + +func TestStore_CARootActive_valid(t *testing.T) { + assert := assert.New(t) + s := testStateStore(t) + + // Build a valid value + ca1 := connect.TestCA(t, nil) + ca1.Active = false + ca2 := connect.TestCA(t, nil) + ca3 := connect.TestCA(t, nil) + ca3.Active = false + + // Set + ok, err := s.CARootSetCAS(1, 0, []*structs.CARoot{ca1, ca2, ca3}) + assert.Nil(err) + assert.True(ok) + + // Query + ws := memdb.NewWatchSet() + idx, res, err := s.CARootActive(ws) + assert.Equal(idx, uint64(1)) + assert.Nil(err) + assert.NotNil(res) + assert.Equal(ca2.ID, res.ID) +} + +// Test that querying the active CA returns the correct value. +func TestStore_CARootActive_none(t *testing.T) { + assert := assert.New(t) + s := testStateStore(t) + + // Querying with no results returns nil. + ws := memdb.NewWatchSet() + idx, res, err := s.CARootActive(ws) + assert.Equal(idx, uint64(0)) + assert.Nil(res) + assert.Nil(err) +} + +/* +func TestStore_Intention_Snapshot_Restore(t *testing.T) { + assert := assert.New(t) + s := testStateStore(t) + + // Create some intentions. 
+ ixns := structs.Intentions{ + &structs.Intention{ + DestinationName: "foo", + }, + &structs.Intention{ + DestinationName: "bar", + }, + &structs.Intention{ + DestinationName: "baz", + }, + } + + // Force the sort order of the UUIDs before we create them so the + // order is deterministic. + id := testUUID() + ixns[0].ID = "a" + id[1:] + ixns[1].ID = "b" + id[1:] + ixns[2].ID = "c" + id[1:] + + // Now create + for i, ixn := range ixns { + assert.Nil(s.IntentionSet(uint64(4+i), ixn)) + } + + // Snapshot the queries. + snap := s.Snapshot() + defer snap.Close() + + // Alter the real state store. + assert.Nil(s.IntentionDelete(7, ixns[0].ID)) + + // Verify the snapshot. + assert.Equal(snap.LastIndex(), uint64(6)) + expected := structs.Intentions{ + &structs.Intention{ + ID: ixns[0].ID, + DestinationName: "foo", + Meta: map[string]string{}, + RaftIndex: structs.RaftIndex{ + CreateIndex: 4, + ModifyIndex: 4, + }, + }, + &structs.Intention{ + ID: ixns[1].ID, + DestinationName: "bar", + Meta: map[string]string{}, + RaftIndex: structs.RaftIndex{ + CreateIndex: 5, + ModifyIndex: 5, + }, + }, + &structs.Intention{ + ID: ixns[2].ID, + DestinationName: "baz", + Meta: map[string]string{}, + RaftIndex: structs.RaftIndex{ + CreateIndex: 6, + ModifyIndex: 6, + }, + }, + } + dump, err := snap.Intentions() + assert.Nil(err) + assert.Equal(expected, dump) + + // Restore the values into a new state store. + func() { + s := testStateStore(t) + restore := s.Restore() + for _, ixn := range dump { + assert.Nil(restore.Intention(ixn)) + } + restore.Commit() + + // Read the restored values back out and verify that they match. + idx, actual, err := s.Intentions(nil) + assert.Nil(err) + assert.Equal(idx, uint64(6)) + assert.Equal(expected, actual) + }() +} +*/ From 2dfca5dbc281b8df0574e10b585852b218dd00b0 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 21 Mar 2018 11:33:19 -0700 Subject: [PATCH 081/539] agent/consul/fsm,state: snapshot/restore for CA roots --- agent/consul/fsm/snapshot_oss.go | 33 +++++++++++ agent/consul/fsm/snapshot_oss_test.go | 18 ++++++ agent/consul/state/connect_ca.go | 28 +++++++++ agent/consul/state/connect_ca_test.go | 82 ++++++++------------------- 4 files changed, 104 insertions(+), 57 deletions(-) diff --git a/agent/consul/fsm/snapshot_oss.go b/agent/consul/fsm/snapshot_oss.go index 1dde3ab0b..b042c7831 100644 --- a/agent/consul/fsm/snapshot_oss.go +++ b/agent/consul/fsm/snapshot_oss.go @@ -21,6 +21,7 @@ func init() { registerRestorer(structs.PreparedQueryRequestType, restorePreparedQuery) registerRestorer(structs.AutopilotRequestType, restoreAutopilot) registerRestorer(structs.IntentionRequestType, restoreIntention) + registerRestorer(structs.ConnectCARequestType, restoreConnectCA) } func persistOSS(s *snapshot, sink raft.SnapshotSink, encoder *codec.Encoder) error { @@ -48,6 +49,9 @@ func persistOSS(s *snapshot, sink raft.SnapshotSink, encoder *codec.Encoder) err if err := s.persistIntentions(sink, encoder); err != nil { return err } + if err := s.persistConnectCA(sink, encoder); err != nil { + return err + } return nil } @@ -262,6 +266,24 @@ func (s *snapshot) persistAutopilot(sink raft.SnapshotSink, return nil } +func (s *snapshot) persistConnectCA(sink raft.SnapshotSink, + encoder *codec.Encoder) error { + roots, err := s.state.CARoots() + if err != nil { + return err + } + + for _, r := range roots { + if _, err := sink.Write([]byte{byte(structs.ConnectCARequestType)}); err != nil { + return err + } + if err := encoder.Encode(r); err != nil { + return err + } + } + 
return nil +} + func (s *snapshot) persistIntentions(sink raft.SnapshotSink, encoder *codec.Encoder) error { ixns, err := s.state.Intentions() @@ -397,3 +419,14 @@ func restoreIntention(header *snapshotHeader, restore *state.Restore, decoder *c } return nil } + +func restoreConnectCA(header *snapshotHeader, restore *state.Restore, decoder *codec.Decoder) error { + var req structs.CARoot + if err := decoder.Decode(&req); err != nil { + return err + } + if err := restore.CARoot(&req); err != nil { + return err + } + return nil +} diff --git a/agent/consul/fsm/snapshot_oss_test.go b/agent/consul/fsm/snapshot_oss_test.go index 63f1ab1d3..971e6bbf5 100644 --- a/agent/consul/fsm/snapshot_oss_test.go +++ b/agent/consul/fsm/snapshot_oss_test.go @@ -7,6 +7,7 @@ import ( "testing" "time" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/consul/autopilot" "github.com/hashicorp/consul/agent/consul/state" "github.com/hashicorp/consul/agent/structs" @@ -110,6 +111,18 @@ func TestFSM_SnapshotRestore_OSS(t *testing.T) { } assert.Nil(fsm.state.IntentionSet(14, ixn)) + // CA Roots + roots := []*structs.CARoot{ + connect.TestCA(t, nil), + connect.TestCA(t, nil), + } + for _, r := range roots[1:] { + r.Active = false + } + ok, err := fsm.state.CARootSetCAS(15, 0, roots) + assert.Nil(err) + assert.True(ok) + // Snapshot snap, err := fsm.Snapshot() if err != nil { @@ -278,6 +291,11 @@ func TestFSM_SnapshotRestore_OSS(t *testing.T) { assert.Len(ixns, 1) assert.Equal(ixn, ixns[0]) + // Verify CA roots are restored. + _, roots, err = fsm2.state.CARoots(nil) + assert.Nil(err) + assert.Len(roots, 2) + // Snapshot snap, err = fsm2.Snapshot() if err != nil { diff --git a/agent/consul/state/connect_ca.go b/agent/consul/state/connect_ca.go index 3b66a07c6..05313ce2e 100644 --- a/agent/consul/state/connect_ca.go +++ b/agent/consul/state/connect_ca.go @@ -33,6 +33,34 @@ func init() { registerSchema(caRootTableSchema) } +// CARoots is used to pull all the CA roots for the snapshot. +func (s *Snapshot) CARoots() (structs.CARoots, error) { + ixns, err := s.tx.Get(caRootTableName, "id") + if err != nil { + return nil, err + } + + var ret structs.CARoots + for wrapped := ixns.Next(); wrapped != nil; wrapped = ixns.Next() { + ret = append(ret, wrapped.(*structs.CARoot)) + } + + return ret, nil +} + +// CARoots is used when restoring from a snapshot. +func (s *Restore) CARoot(r *structs.CARoot) error { + // Insert + if err := s.tx.Insert(caRootTableName, r); err != nil { + return fmt.Errorf("failed restoring CA root: %s", err) + } + if err := indexUpdateMaxTxn(s.tx, r.ModifyIndex, caRootTableName); err != nil { + return fmt.Errorf("failed updating index: %s", err) + } + + return nil +} + // CARoots returns the list of all CA roots. func (s *Store) CARoots(ws memdb.WatchSet) (uint64, structs.CARoots, error) { tx := s.db.Txn(false) diff --git a/agent/consul/state/connect_ca_test.go b/agent/consul/state/connect_ca_test.go index 14b5caf54..bbbac0f0f 100644 --- a/agent/consul/state/connect_ca_test.go +++ b/agent/consul/state/connect_ca_test.go @@ -113,92 +113,60 @@ func TestStore_CARootActive_none(t *testing.T) { assert.Nil(err) } -/* -func TestStore_Intention_Snapshot_Restore(t *testing.T) { +func TestStore_CARoot_Snapshot_Restore(t *testing.T) { assert := assert.New(t) s := testStateStore(t) // Create some intentions. 
- ixns := structs.Intentions{ - &structs.Intention{ - DestinationName: "foo", - }, - &structs.Intention{ - DestinationName: "bar", - }, - &structs.Intention{ - DestinationName: "baz", - }, + roots := structs.CARoots{ + connect.TestCA(t, nil), + connect.TestCA(t, nil), + connect.TestCA(t, nil), + } + for _, r := range roots[1:] { + r.Active = false } // Force the sort order of the UUIDs before we create them so the // order is deterministic. id := testUUID() - ixns[0].ID = "a" + id[1:] - ixns[1].ID = "b" + id[1:] - ixns[2].ID = "c" + id[1:] + roots[0].ID = "a" + id[1:] + roots[1].ID = "b" + id[1:] + roots[2].ID = "c" + id[1:] // Now create - for i, ixn := range ixns { - assert.Nil(s.IntentionSet(uint64(4+i), ixn)) - } + ok, err := s.CARootSetCAS(1, 0, roots) + assert.Nil(err) + assert.True(ok) // Snapshot the queries. snap := s.Snapshot() defer snap.Close() // Alter the real state store. - assert.Nil(s.IntentionDelete(7, ixns[0].ID)) + ok, err = s.CARootSetCAS(2, 1, roots[:1]) + assert.Nil(err) + assert.True(ok) // Verify the snapshot. - assert.Equal(snap.LastIndex(), uint64(6)) - expected := structs.Intentions{ - &structs.Intention{ - ID: ixns[0].ID, - DestinationName: "foo", - Meta: map[string]string{}, - RaftIndex: structs.RaftIndex{ - CreateIndex: 4, - ModifyIndex: 4, - }, - }, - &structs.Intention{ - ID: ixns[1].ID, - DestinationName: "bar", - Meta: map[string]string{}, - RaftIndex: structs.RaftIndex{ - CreateIndex: 5, - ModifyIndex: 5, - }, - }, - &structs.Intention{ - ID: ixns[2].ID, - DestinationName: "baz", - Meta: map[string]string{}, - RaftIndex: structs.RaftIndex{ - CreateIndex: 6, - ModifyIndex: 6, - }, - }, - } - dump, err := snap.Intentions() + assert.Equal(snap.LastIndex(), uint64(1)) + dump, err := snap.CARoots() assert.Nil(err) - assert.Equal(expected, dump) + assert.Equal(roots, dump) // Restore the values into a new state store. func() { s := testStateStore(t) restore := s.Restore() - for _, ixn := range dump { - assert.Nil(restore.Intention(ixn)) + for _, r := range dump { + assert.Nil(restore.CARoot(r)) } restore.Commit() // Read the restored values back out and verify that they match. 
- idx, actual, err := s.Intentions(nil) + idx, actual, err := s.CARoots(nil) assert.Nil(err) - assert.Equal(idx, uint64(6)) - assert.Equal(expected, actual) + assert.Equal(idx, uint64(2)) + assert.Equal(roots, actual) }() } -*/ From 746f80639adaab67004af1225349ea269f4b4369 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 21 Mar 2018 12:42:42 -0700 Subject: [PATCH 082/539] agent: /v1/connect/ca/configuration PUT for setting configuration --- agent/connect_ca_endpoint.go | 28 ++++++++++ agent/consul/connect_ca_endpoint.go | 87 +++++++++++++++++++++++++++++ agent/http_oss.go | 1 + agent/structs/connect_ca.go | 11 ++++ 4 files changed, 127 insertions(+) diff --git a/agent/connect_ca_endpoint.go b/agent/connect_ca_endpoint.go index 1c7871015..7832ba36f 100644 --- a/agent/connect_ca_endpoint.go +++ b/agent/connect_ca_endpoint.go @@ -1,6 +1,7 @@ package agent import ( + "fmt" "net/http" "github.com/hashicorp/consul/agent/structs" @@ -26,3 +27,30 @@ func (s *HTTPServer) ConnectCARoots(resp http.ResponseWriter, req *http.Request) return reply, nil } + +// /v1/connect/ca/configuration +func (s *HTTPServer) ConnectCAConfiguration(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + switch req.Method { + case "PUT": + return s.ConnectCAConfigurationSet(resp, req) + + default: + return nil, MethodNotAllowedError{req.Method, []string{"GET", "POST"}} + } +} + +// PUT /v1/connect/ca/configuration +func (s *HTTPServer) ConnectCAConfigurationSet(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Method is tested in ConnectCAConfiguration + + var args structs.CAConfiguration + if err := decodeBody(req, &args, nil); err != nil { + resp.WriteHeader(http.StatusBadRequest) + fmt.Fprintf(resp, "Request decode failed: %v", err) + return nil, nil + } + + var reply interface{} + err := s.agent.RPC("ConnectCA.ConfigurationSet", &args, &reply) + return nil, err +} diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 60e631cef..a4cb569d8 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -14,6 +14,9 @@ import ( "github.com/hashicorp/consul/agent/consul/state" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-memdb" + "github.com/hashicorp/go-uuid" + "github.com/mitchellh/go-testing-interface" + "github.com/mitchellh/mapstructure" ) // ConnectCA manages the Connect CA. @@ -22,6 +25,90 @@ type ConnectCA struct { srv *Server } +// ConfigurationSet updates the configuration for the CA. +// +// NOTE(mitchellh): This whole implementation is temporary until the real +// CA plugin work comes in. For now, this is only used to configure a single +// static CA root. +func (s *ConnectCA) ConfigurationSet( + args *structs.CAConfiguration, + reply *interface{}) error { + // NOTE(mitchellh): This is the temporary hardcoding of a static CA + // provider. This will allow us to test agent implementations and so on + // with an incomplete CA for now. 
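	// Editor's note, an illustrative sketch that is not part of this patch:
	// with the temporary static provider, the body of a
	// PUT /v1/connect/ca/configuration request is expected to look roughly
	// like
	//
	//   {
	//     "Provider": "static",
	//     "Config": {
	//       "Name": "dev-root",
	//       "Generate": true
	//     }
	//   }
	//
	// where Name, Generate, CertPEM and PrivateKeyPEM are the fields decoded
	// into the anonymous config struct below, and "dev-root" is a made-up
	// value.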
+ if args.Provider != "static" { + return fmt.Errorf("The CA provider can only be 'static' for now") + } + + // Config is the configuration allowed for our static provider + var config struct { + Name string + CertPEM string + PrivateKeyPEM string + Generate bool + } + if err := mapstructure.Decode(args.Config, &config); err != nil { + return fmt.Errorf("error decoding config: %s", err) + } + + // Basic validation so demos aren't super jank + if config.Name == "" { + return fmt.Errorf("Name must be set") + } + if config.CertPEM == "" || config.PrivateKeyPEM == "" { + if !config.Generate { + return fmt.Errorf( + "CertPEM and PrivateKeyPEM must be set, or Generate must be true") + } + } + + // Convenience to auto-generate the cert + if config.Generate { + ca := connect.TestCA(&testing.RuntimeT{}, nil) + config.CertPEM = ca.RootCert + config.PrivateKeyPEM = ca.SigningKey + } + + // TODO(mitchellh): verify that the private key is valid for the cert + + // Generate an ID for this + id, err := uuid.GenerateUUID() + if err != nil { + return err + } + + // Get the highest index + state := s.srv.fsm.State() + idx, _, err := state.CARoots(nil) + if err != nil { + return err + } + + // Commit + resp, err := s.srv.raftApply(structs.ConnectCARequestType, &structs.CARequest{ + Op: structs.CAOpSet, + Index: idx, + Roots: []*structs.CARoot{ + &structs.CARoot{ + ID: id, + Name: config.Name, + RootCert: config.CertPEM, + SigningKey: config.PrivateKeyPEM, + Active: true, + }, + }, + }) + if err != nil { + s.srv.logger.Printf("[ERR] consul.test: Apply failed %v", err) + return err + } + if respErr, ok := resp.(error); ok { + return respErr + } + + return nil +} + // Roots returns the currently trusted root certificates. func (s *ConnectCA) Roots( args *structs.DCSpecificRequest, diff --git a/agent/http_oss.go b/agent/http_oss.go index d2e86622f..6c2e697ea 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -42,6 +42,7 @@ func init() { registerEndpoint("/v1/catalog/services", []string{"GET"}, (*HTTPServer).CatalogServices) registerEndpoint("/v1/catalog/service/", []string{"GET"}, (*HTTPServer).CatalogServiceNodes) registerEndpoint("/v1/catalog/node/", []string{"GET"}, (*HTTPServer).CatalogNodeServices) + registerEndpoint("/v1/connect/ca/configuration", []string{"PUT"}, (*HTTPServer).ConnectCAConfiguration) registerEndpoint("/v1/connect/ca/roots", []string{"GET"}, (*HTTPServer).ConnectCARoots) registerEndpoint("/v1/connect/intentions", []string{"GET", "POST"}, (*HTTPServer).IntentionEndpoint) registerEndpoint("/v1/connect/intentions/match", []string{"GET"}, (*HTTPServer).IntentionMatch) diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 6dc2dbf30..8576a1b41 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -113,3 +113,14 @@ type CARequest struct { // always be active. Roots []*CARoot } + +// CAConfiguration is the configuration for the current CA plugin. +type CAConfiguration struct { + // Provider is the CA provider implementation to use. + Provider string + + // Configuration is arbitrary configuration for the provider. This + // should only contain primitive values and containers (such as lists + // and maps). 
+ Config map[string]interface{} +} From deb55c436d356f75a44cc62e88da82f74066ab48 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 21 Mar 2018 12:55:43 -0700 Subject: [PATCH 083/539] agent/structs: hide some fields from JSON --- agent/structs/connect_ca.go | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 8576a1b41..f75efed5c 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -35,8 +35,8 @@ type CARoot struct { // SigningCert is the PEM-encoded signing certificate and SigningKey // is the PEM-encoded private key for the signing certificate. These // may actually be empty if the CA plugin in use manages these for us. - SigningCert string - SigningKey string + SigningCert string `json:",omitempty"` + SigningKey string `json:",omitempty"` // Active is true if this is the current active CA. This must only // be true for exactly one CA. For any method that modifies roots in the @@ -77,7 +77,7 @@ type IssuedCert struct { // key for that cert, respectively. This should not be stored in the // state store, but is present in the sign API response. CertPEM string `json:",omitempty"` - PrivateKeyPEM string + PrivateKeyPEM string `json:",omitempty"` // Service is the name of the service for which the cert was issued. // ServiceURI is the cert URI value. From 2026cf3753364a2145f32de225001e3ecb507a48 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 21 Mar 2018 12:54:51 -1000 Subject: [PATCH 084/539] agent/consul: encode issued cert serial number as hex encoded --- agent/connect/ca.go | 7 +++++++ agent/consul/connect_ca_endpoint.go | 2 +- agent/structs/connect_ca.go | 4 ++-- 3 files changed, 10 insertions(+), 3 deletions(-) diff --git a/agent/connect/ca.go b/agent/connect/ca.go index a0a65ece6..efe7c14f3 100644 --- a/agent/connect/ca.go +++ b/agent/connect/ca.go @@ -7,6 +7,7 @@ import ( "crypto/x509" "encoding/pem" "fmt" + "strings" ) // ParseCert parses the x509 certificate from a PEM-encoded value. @@ -72,3 +73,9 @@ func KeyId(raw interface{}) ([]byte, error) { h.Write(pub.Y.Bytes()) return h.Sum([]byte{}), nil } + +// HexString returns a standard colon-separated hex value for the input +// byte slice. This should be used with cert serial numbers and so on. +func HexString(input []byte) string { + return strings.Replace(fmt.Sprintf("% x", input), " ", ":", -1) +} diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index a4cb569d8..f7557578c 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -257,7 +257,7 @@ func (s *ConnectCA) Sign( // Set the response *reply = structs.IssuedCert{ - SerialNumber: template.SerialNumber, + SerialNumber: connect.HexString(template.SerialNumber.Bytes()), CertPEM: buf.String(), Service: serviceId.Service, ServiceURI: template.URIs[0].String(), diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index f75efed5c..5ac8a0fc2 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -1,7 +1,6 @@ package structs import ( - "math/big" "time" ) @@ -71,7 +70,8 @@ func (q *CASignRequest) RequestDatacenter() string { // IssuedCert is a certificate that has been issued by a Connect CA. type IssuedCert struct { // SerialNumber is the unique serial number for this certificate. - SerialNumber *big.Int + // This is encoded in standard hex separated by :. 
+ SerialNumber string // CertPEM and PrivateKeyPEM are the PEM-encoded certificate and private // key for that cert, respectively. This should not be stored in the From e0562f1c21774b149fd39a6e760384173df3a565 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 21 Mar 2018 13:02:46 -1000 Subject: [PATCH 085/539] agent: implement an always-200 authorize endpoint --- agent/agent_endpoint.go | 13 +++++++++++++ agent/http_oss.go | 1 + 2 files changed, 14 insertions(+) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index bc684f115..1cbb4b1da 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -885,3 +885,16 @@ func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http. return &reply, nil } + +// AgentConnectAuthorize +// +// POST /v1/agent/connect/authorize +func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Test the method + if req.Method != "POST" { + return nil, MethodNotAllowedError{req.Method, []string{"POST"}} + } + + // NOTE(mitchellh): return 200 for now. To be implemented later. + return nil, nil +} diff --git a/agent/http_oss.go b/agent/http_oss.go index 6c2e697ea..774388ad3 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -29,6 +29,7 @@ func init() { registerEndpoint("/v1/agent/check/warn/", []string{"PUT"}, (*HTTPServer).AgentCheckWarn) registerEndpoint("/v1/agent/check/fail/", []string{"PUT"}, (*HTTPServer).AgentCheckFail) registerEndpoint("/v1/agent/check/update/", []string{"PUT"}, (*HTTPServer).AgentCheckUpdate) + registerEndpoint("/v1/agent/connect/authorize", []string{"POST"}, (*HTTPServer).AgentConnectAuthorize) registerEndpoint("/v1/agent/connect/ca/roots", []string{"GET"}, (*HTTPServer).AgentConnectCARoots) registerEndpoint("/v1/agent/connect/ca/leaf/", []string{"GET"}, (*HTTPServer).AgentConnectCALeafCert) registerEndpoint("/v1/agent/service/register", []string{"PUT"}, (*HTTPServer).AgentRegisterService) From 434d8750ae25939a9928d00f35dcfcc61208ff38 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 24 Mar 2018 08:27:44 -1000 Subject: [PATCH 086/539] agent/connect: address PR feedback for the CA.go file --- agent/connect/ca.go | 20 +++++++++++++------- 1 file changed, 13 insertions(+), 7 deletions(-) diff --git a/agent/connect/ca.go b/agent/connect/ca.go index efe7c14f3..bca9392d3 100644 --- a/agent/connect/ca.go +++ b/agent/connect/ca.go @@ -12,6 +12,7 @@ import ( // ParseCert parses the x509 certificate from a PEM-encoded value. func ParseCert(pemValue string) (*x509.Certificate, error) { + // The _ result below is not an error but the remaining PEM bytes. block, _ := pem.Decode([]byte(pemValue)) if block == nil { return nil, fmt.Errorf("no PEM-encoded data found") @@ -27,6 +28,7 @@ func ParseCert(pemValue string) (*x509.Certificate, error) { // ParseSigner parses a crypto.Signer from a PEM-encoded key. The private key // is expected to be the first block in the PEM value. func ParseSigner(pemValue string) (crypto.Signer, error) { + // The _ result below is not an error but the remaining PEM bytes. block, _ := pem.Decode([]byte(pemValue)) if block == nil { return nil, fmt.Errorf("no PEM-encoded data found") @@ -44,6 +46,7 @@ func ParseSigner(pemValue string) (crypto.Signer, error) { // ParseCSR parses a CSR from a PEM-encoded value. The certificate request // must be the the first block in the PEM value. 
func ParseCSR(pemValue string) (*x509.CertificateRequest, error) { + // The _ result below is not an error but the remaining PEM bytes. block, _ := pem.Decode([]byte(pemValue)) if block == nil { return nil, fmt.Errorf("no PEM-encoded data found") @@ -57,7 +60,7 @@ func ParseCSR(pemValue string) (*x509.CertificateRequest, error) { } // KeyId returns a x509 KeyId from the given signing key. The key must be -// an *ecdsa.PublicKey, but is an interface type to support crypto.Signer. +// an *ecdsa.PublicKey currently, but may support more types in the future. func KeyId(raw interface{}) ([]byte, error) { pub, ok := raw.(*ecdsa.PublicKey) if !ok { @@ -66,12 +69,15 @@ func KeyId(raw interface{}) ([]byte, error) { // This is not standard; RFC allows any unique identifier as long as they // match in subject/authority chains but suggests specific hashing of DER - // bytes of public key including DER tags. I can't be bothered to do esp. - // since ECDSA keys don't have a handy way to marshal the publick key alone. - h := sha256.New() - h.Write(pub.X.Bytes()) - h.Write(pub.Y.Bytes()) - return h.Sum([]byte{}), nil + // bytes of public key including DER tags. + bs, err := x509.MarshalPKIXPublicKey(pub) + if err != nil { + return nil, err + } + + // String formatted + kID := sha256.Sum256(bs) + return []byte(strings.Replace(fmt.Sprintf("% x", kID), " ", ":", -1)), nil } // HexString returns a standard colon-separated hex value for the input From b0315811b9547217abb4de382f2982c0e8414947 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 24 Mar 2018 08:32:42 -1000 Subject: [PATCH 087/539] agent/connect: use proper keyusage fields for CA and leaf --- agent/connect/testing_ca.go | 46 ++++++++++++++--------------- agent/consul/connect_ca_endpoint.go | 5 +++- 2 files changed, 27 insertions(+), 24 deletions(-) diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index f79849016..95115536e 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -6,7 +6,6 @@ import ( "crypto/ecdsa" "crypto/elliptic" "crypto/rand" - "crypto/sha256" "crypto/x509" "crypto/x509/pkix" "encoding/pem" @@ -67,12 +66,14 @@ func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot { PermittedDNSDomainsCritical: true, PermittedDNSDomains: []string{uri.Hostname()}, BasicConstraintsValid: true, - KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageCRLSign, - IsCA: true, - NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), - NotBefore: time.Now(), - AuthorityKeyId: testKeyID(t, signer.Public()), - SubjectKeyId: testKeyID(t, signer.Public()), + KeyUsage: x509.KeyUsageCertSign | + x509.KeyUsageCRLSign | + x509.KeyUsageDigitalSignature, + IsCA: true, + NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: testKeyID(t, signer.Public()), + SubjectKeyId: testKeyID(t, signer.Public()), } bs, err := x509.CreateCertificate( @@ -100,7 +101,11 @@ func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot { t.Fatalf("error parsing signing key: %s", err) } - // Set the authority key to be the previous one + // Set the authority key to be the previous one. + // NOTE(mitchellh): From Paul Banks: if we have to cross-sign a cert + // that came from outside (e.g. vault) we can't rely on them using the + // same KeyID hashing algo we do so we'd need to actually copy this + // from the xc cert's subjectKeyIdentifier extension. 
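	// Editor's note, an illustrative sketch that is not part of this patch:
	// copying the key ID from the cross-signing certificate rather than
	// re-deriving it would look roughly like
	//
	//   xcCert, _ := ParseCert(xc.RootCert)
	//   template.AuthorityKeyId = xcCert.SubjectKeyId
	//
	// where xcCert is assumed to parse cleanly; error handling is omitted.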
template.AuthorityKeyId = testKeyID(t, xcsigner.Public()) // Create the new certificate where the parent is the previous @@ -161,7 +166,10 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) string { URIs: []*url.URL{spiffeId.URI()}, SignatureAlgorithm: x509.ECDSAWithSHA256, BasicConstraintsValid: true, - KeyUsage: x509.KeyUsageDataEncipherment | x509.KeyUsageKeyAgreement, + KeyUsage: x509.KeyUsageDataEncipherment | + x509.KeyUsageKeyAgreement | + x509.KeyUsageDigitalSignature | + x509.KeyUsageKeyEncipherment, ExtKeyUsage: []x509.ExtKeyUsage{ x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth, @@ -230,23 +238,15 @@ func TestCSR(t testing.T, id SpiffeID) (string, string) { return csrBuf.String(), pkBuf.String() } -// testKeyID returns a KeyID from the given public key. The "raw" must be -// an *ecdsa.PublicKey, but is an interface type to suppot crypto.Signer.Public -// values. +// testKeyID returns a KeyID from the given public key. This just calls +// KeyId but handles errors for tests. func testKeyID(t testing.T, raw interface{}) []byte { - pub, ok := raw.(*ecdsa.PublicKey) - if !ok { - t.Fatalf("raw is type %T, expected *ecdsa.PublicKey", raw) + result, err := KeyId(raw) + if err != nil { + t.Fatalf("KeyId error: %s", err) } - // This is not standard; RFC allows any unique identifier as long as they - // match in subject/authority chains but suggests specific hashing of DER - // bytes of public key including DER tags. I can't be bothered to do esp. - // since ECDSA keys don't have a handy way to marshal the publick key alone. - h := sha256.New() - h.Write(pub.X.Bytes()) - h.Write(pub.Y.Bytes()) - return h.Sum([]byte{}) + return result } // testMemoizePK is the private key that we memoize once we generate it diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index f7557578c..b3aca757e 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -232,7 +232,10 @@ func (s *ConnectCA) Sign( PublicKeyAlgorithm: csr.PublicKeyAlgorithm, PublicKey: csr.PublicKey, BasicConstraintsValid: true, - KeyUsage: x509.KeyUsageDataEncipherment | x509.KeyUsageKeyAgreement, + KeyUsage: x509.KeyUsageDataEncipherment | + x509.KeyUsageKeyAgreement | + x509.KeyUsageDigitalSignature | + x509.KeyUsageKeyEncipherment, ExtKeyUsage: []x509.ExtKeyUsage{ x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth, From da1bc48372ab0b5da86eb7492a504e4d2331945b Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 24 Mar 2018 08:39:43 -1000 Subject: [PATCH 088/539] agent/connect: rename SpiffeID to CertURI --- agent/connect/testing_ca.go | 4 ++-- agent/connect/{spiffe.go => uri.go} | 15 ++++++++++----- agent/connect/{spiffe_test.go => uri_test.go} | 10 +++++----- agent/consul/connect_ca_endpoint.go | 2 +- 4 files changed, 18 insertions(+), 13 deletions(-) rename agent/connect/{spiffe.go => uri.go} (64%) rename agent/connect/{spiffe_test.go => uri_test.go} (81%) diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index 95115536e..6ce5362ac 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -197,9 +197,9 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) string { // TestCSR returns a CSR to sign the given service along with the PEM-encoded // private key for this certificate. 
-func TestCSR(t testing.T, id SpiffeID) (string, string) {
+func TestCSR(t testing.T, uri CertURI) (string, string) {
 	template := &x509.CertificateRequest{
-		URIs:               []*url.URL{id.URI()},
+		URIs:               []*url.URL{uri.URI()},
 		SignatureAlgorithm: x509.ECDSAWithSHA256,
 	}
 
diff --git a/agent/connect/spiffe.go b/agent/connect/uri.go
similarity index 64%
rename from agent/connect/spiffe.go
rename to agent/connect/uri.go
index 58a6b83e3..b33fb10ef 100644
--- a/agent/connect/spiffe.go
+++ b/agent/connect/uri.go
@@ -6,9 +6,14 @@ import (
 	"regexp"
 )
 
-// SpiffeID represents a Connect-valid SPIFFE ID. The user should type switch
-// on the various implementations in this package to determine the type of ID.
-type SpiffeID interface {
+// CertURI represents a Connect-valid URI value for a TLS certificate.
+// The user should type switch on the various implementations in this
+// package to determine the type of URI and the data encoded within it.
+//
+// Note that the current implementations of this are all also SPIFFE IDs.
+// However, we anticipate that we may accept URIs that are also not SPIFFE
+// compliant and therefore the interface is named as such.
+type CertURI interface {
 	URI() *url.URL
 }
 
@@ -17,8 +22,8 @@ var (
 		`^/ns/(\w+)/dc/(\w+)/svc/(\w+)$`)
 )
 
-// ParseSpiffeID parses a SPIFFE ID from the input URI.
-func ParseSpiffeID(input *url.URL) (SpiffeID, error) {
+// ParseCertURI parses the URI value from a TLS certificate.
+func ParseCertURI(input *url.URL) (CertURI, error) {
 	if input.Scheme != "spiffe" {
 		return nil, fmt.Errorf("SPIFFE ID must have 'spiffe' scheme")
 	}
diff --git a/agent/connect/spiffe_test.go b/agent/connect/uri_test.go
similarity index 81%
rename from agent/connect/spiffe_test.go
rename to agent/connect/uri_test.go
index 861a4fa63..370e3c420 100644
--- a/agent/connect/spiffe_test.go
+++ b/agent/connect/uri_test.go
@@ -7,9 +7,9 @@ import (
 	"github.com/stretchr/testify/assert"
 )
 
-// testSpiffeIDCases contains the test cases for parsing and encoding
+// testCertURICases contains the test cases for parsing and encoding
 // the SPIFFE IDs. This is a global since it is used in multiple test functions.
-var testSpiffeIDCases = []struct { +var testCertURICases = []struct { Name string URI string Struct interface{} @@ -35,8 +35,8 @@ var testSpiffeIDCases = []struct { }, } -func TestParseSpiffeID(t *testing.T) { - for _, tc := range testSpiffeIDCases { +func TestParseCertURI(t *testing.T) { + for _, tc := range testCertURICases { t.Run(tc.Name, func(t *testing.T) { assert := assert.New(t) @@ -45,7 +45,7 @@ func TestParseSpiffeID(t *testing.T) { assert.Nil(err) // Parse the ID and check the error/return value - actual, err := ParseSpiffeID(uri) + actual, err := ParseCertURI(uri) assert.Equal(tc.ParseError != "", err != nil, "error value") if err != nil { assert.Contains(err.Error(), tc.ParseError) diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index b3aca757e..4efdafc06 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -171,7 +171,7 @@ func (s *ConnectCA) Sign( } // Parse the SPIFFE ID - spiffeId, err := connect.ParseSpiffeID(csr.URIs[0]) + spiffeId, err := connect.ParseCertURI(csr.URIs[0]) if err != nil { return err } From 8934f00d03746c3eeaa0d08f42e7c97e46bc16e1 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 24 Mar 2018 08:46:12 -1000 Subject: [PATCH 089/539] agent/connect: support SpiffeIDSigning --- agent/connect/testing_ca.go | 9 +++------ agent/connect/uri.go | 27 +++++++++++++++++++++++++++ agent/connect/uri_test.go | 10 ++++++++++ 3 files changed, 40 insertions(+), 6 deletions(-) diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index 6ce5362ac..a2f711763 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -53,18 +53,15 @@ func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot { } // The URI (SPIFFE compatible) for the cert - uri, err := url.Parse(fmt.Sprintf("spiffe://%s.consul", testClusterID)) - if err != nil { - t.Fatalf("error parsing CA URI: %s", err) - } + id := &SpiffeIDSigning{ClusterID: testClusterID, Domain: "consul"} // Create the CA cert template := x509.Certificate{ SerialNumber: sn, Subject: pkix.Name{CommonName: result.Name}, - URIs: []*url.URL{uri}, + URIs: []*url.URL{id.URI()}, PermittedDNSDomainsCritical: true, - PermittedDNSDomains: []string{uri.Hostname()}, + PermittedDNSDomains: []string{id.URI().Hostname()}, BasicConstraintsValid: true, KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageCRLSign | diff --git a/agent/connect/uri.go b/agent/connect/uri.go index b33fb10ef..3b56ec4ae 100644 --- a/agent/connect/uri.go +++ b/agent/connect/uri.go @@ -4,6 +4,7 @@ import ( "fmt" "net/url" "regexp" + "strings" ) // CertURI represents a Connect-valid URI value for a TLS certificate. @@ -38,6 +39,17 @@ func ParseCertURI(input *url.URL) (CertURI, error) { }, nil } + // Test for signing ID + if input.Path == "" { + idx := strings.Index(input.Host, ".") + if idx > 0 { + return &SpiffeIDSigning{ + ClusterID: input.Host[:idx], + Domain: input.Host[idx+1:], + }, nil + } + } + return nil, fmt.Errorf("SPIFFE ID is not in the expected format") } @@ -58,3 +70,18 @@ func (id *SpiffeIDService) URI() *url.URL { id.Namespace, id.Datacenter, id.Service) return &result } + +// SpiffeIDSigning is the structure to represent the SPIFFE ID for a +// signing certificate (not a leaf service). +type SpiffeIDSigning struct { + ClusterID string // Unique cluster ID + Domain string // The domain, usually "consul" +} + +// URI returns the *url.URL for this SPIFFE ID. 
+func (id *SpiffeIDSigning) URI() *url.URL { + var result url.URL + result.Scheme = "spiffe" + result.Host = fmt.Sprintf("%s.%s", id.ClusterID, id.Domain) + return &result +} diff --git a/agent/connect/uri_test.go b/agent/connect/uri_test.go index 370e3c420..247170f53 100644 --- a/agent/connect/uri_test.go +++ b/agent/connect/uri_test.go @@ -33,6 +33,16 @@ var testCertURICases = []struct { }, "", }, + + { + "signing ID", + "spiffe://1234.consul", + &SpiffeIDSigning{ + ClusterID: "1234", + Domain: "consul", + }, + "", + }, } func TestParseCertURI(t *testing.T) { From 9d93c520984b7c92cc748e4f346f1971b3b5f372 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 26 Mar 2018 20:31:17 -0700 Subject: [PATCH 090/539] agent/connect: support any values in the URL --- agent/connect/uri.go | 37 ++++++++++++++++++++++++++++++++----- agent/connect/uri_test.go | 15 +++++++++++++++ 2 files changed, 47 insertions(+), 5 deletions(-) diff --git a/agent/connect/uri.go b/agent/connect/uri.go index 3b56ec4ae..3562f2d6c 100644 --- a/agent/connect/uri.go +++ b/agent/connect/uri.go @@ -20,7 +20,7 @@ type CertURI interface { var ( spiffeIDServiceRegexp = regexp.MustCompile( - `^/ns/(\w+)/dc/(\w+)/svc/(\w+)$`) + `^/ns/([^/]+)/dc/([^/]+)/svc/([^/]+)$`) ) // ParseCertURI parses a the URI value from a TLS certificate. @@ -29,13 +29,40 @@ func ParseCertURI(input *url.URL) (CertURI, error) { return nil, fmt.Errorf("SPIFFE ID must have 'spiffe' scheme") } + // Path is the raw value of the path without url decoding values. + // RawPath is empty if there were no encoded values so we must + // check both. + path := input.Path + if input.RawPath != "" { + path = input.RawPath + } + // Test for service IDs - if v := spiffeIDServiceRegexp.FindStringSubmatch(input.Path); v != nil { + if v := spiffeIDServiceRegexp.FindStringSubmatch(path); v != nil { + // Determine the values. We assume they're sane to save cycles, + // but if the raw path is not empty that means that something is + // URL encoded so we go to the slow path. 
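		// Editor's note, an illustrative sketch that is not part of this patch:
		// for a URI such as
		//
		//   spiffe://1234.consul/ns/foo%2Fbar/dc/dc1/svc/web
		//
		// the regexp captures are kept verbatim ("foo%2Fbar"), and the
		// url.PathUnescape calls below turn them back into "foo/bar". The
		// cluster ID and names are made-up example values.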
+ ns := v[1] + dc := v[2] + service := v[3] + if input.RawPath != "" { + var err error + if ns, err = url.PathUnescape(v[1]); err != nil { + return nil, fmt.Errorf("Invalid namespace: %s", err) + } + if dc, err = url.PathUnescape(v[2]); err != nil { + return nil, fmt.Errorf("Invalid datacenter: %s", err) + } + if service, err = url.PathUnescape(v[3]); err != nil { + return nil, fmt.Errorf("Invalid service: %s", err) + } + } + return &SpiffeIDService{ Host: input.Host, - Namespace: v[1], - Datacenter: v[2], - Service: v[3], + Namespace: ns, + Datacenter: dc, + Service: service, }, nil } diff --git a/agent/connect/uri_test.go b/agent/connect/uri_test.go index 247170f53..2f28c940d 100644 --- a/agent/connect/uri_test.go +++ b/agent/connect/uri_test.go @@ -34,6 +34,18 @@ var testCertURICases = []struct { "", }, + { + "service with URL-encoded values", + "spiffe://1234.consul/ns/foo%2Fbar/dc/bar%2Fbaz/svc/baz%2Fqux", + &SpiffeIDService{ + Host: "1234.consul", + Namespace: "foo/bar", + Datacenter: "bar/baz", + Service: "baz/qux", + }, + "", + }, + { "signing ID", "spiffe://1234.consul", @@ -56,6 +68,9 @@ func TestParseCertURI(t *testing.T) { // Parse the ID and check the error/return value actual, err := ParseCertURI(uri) + if err != nil { + t.Logf("parse error: %s", err.Error()) + } assert.Equal(tc.ParseError != "", err != nil, "error value") if err != nil { assert.Contains(err.Error(), tc.ParseError) From 1985655dffd68bca5f16c816837a87a5532decc0 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 26 Mar 2018 20:38:39 -0700 Subject: [PATCH 091/539] agent/consul/state: ensure exactly one active CA exists when setting --- agent/consul/state/connect_ca.go | 11 +++++++ agent/consul/state/connect_ca_test.go | 42 +++++++++++++++++++++++++++ 2 files changed, 53 insertions(+) diff --git a/agent/consul/state/connect_ca.go b/agent/consul/state/connect_ca.go index 05313ce2e..95e763b8b 100644 --- a/agent/consul/state/connect_ca.go +++ b/agent/consul/state/connect_ca.go @@ -110,6 +110,17 @@ func (s *Store) CARootSetCAS(idx, cidx uint64, rs []*structs.CARoot) (bool, erro tx := s.db.Txn(true) defer tx.Abort() + // There must be exactly one active CA root. 
+ activeCount := 0 + for _, r := range rs { + if r.Active { + activeCount++ + } + } + if activeCount != 1 { + return false, fmt.Errorf("there must be exactly one active CA") + } + // Get the current max index if midx := maxIndexTxn(tx, caRootTableName); midx != cidx { return false, nil diff --git a/agent/consul/state/connect_ca_test.go b/agent/consul/state/connect_ca_test.go index bbbac0f0f..cd77eac7c 100644 --- a/agent/consul/state/connect_ca_test.go +++ b/agent/consul/state/connect_ca_test.go @@ -75,6 +75,48 @@ func TestStore_CARootSet_emptyID(t *testing.T) { assert.Len(roots, 0) } +func TestStore_CARootSet_noActive(t *testing.T) { + assert := assert.New(t) + s := testStateStore(t) + + // Call list to populate the watch set + ws := memdb.NewWatchSet() + _, _, err := s.CARoots(ws) + assert.Nil(err) + + // Build a valid value + ca1 := connect.TestCA(t, nil) + ca1.Active = false + ca2 := connect.TestCA(t, nil) + ca2.Active = false + + // Set + ok, err := s.CARootSetCAS(1, 0, []*structs.CARoot{ca1, ca2}) + assert.NotNil(err) + assert.Contains(err.Error(), "exactly one active") + assert.False(ok) +} + +func TestStore_CARootSet_multipleActive(t *testing.T) { + assert := assert.New(t) + s := testStateStore(t) + + // Call list to populate the watch set + ws := memdb.NewWatchSet() + _, _, err := s.CARoots(ws) + assert.Nil(err) + + // Build a valid value + ca1 := connect.TestCA(t, nil) + ca2 := connect.TestCA(t, nil) + + // Set + ok, err := s.CARootSetCAS(1, 0, []*structs.CARoot{ca1, ca2}) + assert.NotNil(err) + assert.Contains(err.Error(), "exactly one active") + assert.False(ok) +} + func TestStore_CARootActive_valid(t *testing.T) { assert := assert.New(t) s := testStateStore(t) From 894ee3c5b043cf9ec9ea8c0ade0afa2b2e9e5753 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Mon, 26 Mar 2018 16:51:43 +0100 Subject: [PATCH 092/539] Add Connect agent, catalog and health endpoints to api Client --- agent/structs/structs.go | 2 +- api/agent.go | 20 +++++++ api/agent_test.go | 45 ++++++++++++++++ api/catalog.go | 15 +++++- api/catalog_test.go | 113 +++++++++++++++++++++++++++++++++++++++ api/health.go | 19 ++++++- api/health_test.go | 51 ++++++++++++++++++ 7 files changed, 262 insertions(+), 3 deletions(-) diff --git a/agent/structs/structs.go b/agent/structs/structs.go index a4e942230..4f25e50f0 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -464,7 +464,7 @@ type ServiceKind string const ( // ServiceKindTypical is a typical, classic Consul service. This is - // represented by the absense of a value. This was chosen for ease of + // represented by the absence of a value. This was chosen for ease of // backwards compatibility: existing services in the catalog would // default to the typical service. ServiceKindTypical ServiceKind = "" diff --git a/api/agent.go b/api/agent.go index 23690d48a..359206c54 100644 --- a/api/agent.go +++ b/api/agent.go @@ -5,6 +5,22 @@ import ( "fmt" ) +// ServiceKind is the kind of service being registered. +type ServiceKind string + +const ( + // ServiceKindTypical is a typical, classic Consul service. This is + // represented by the absence of a value. This was chosen for ease of + // backwards compatibility: existing services in the catalog would + // default to the typical service. + ServiceKindTypical ServiceKind = "" + + // ServiceKindConnectProxy is a proxy for the Connect feature. This + // service proxies another service within Consul and speaks the connect + // protocol. 
+ ServiceKindConnectProxy ServiceKind = "connect-proxy" +) + // AgentCheck represents a check known to the agent type AgentCheck struct { Node string @@ -20,6 +36,7 @@ type AgentCheck struct { // AgentService represents a service known to the agent type AgentService struct { + Kind ServiceKind ID string Service string Tags []string @@ -29,6 +46,7 @@ type AgentService struct { EnableTagOverride bool CreateIndex uint64 ModifyIndex uint64 + ProxyDestination string } // AgentMember represents a cluster member known to the agent @@ -61,6 +79,7 @@ type MembersOpts struct { // AgentServiceRegistration is used to register a new service type AgentServiceRegistration struct { + Kind ServiceKind `json:",omitempty"` ID string `json:",omitempty"` Name string `json:",omitempty"` Tags []string `json:",omitempty"` @@ -70,6 +89,7 @@ type AgentServiceRegistration struct { Meta map[string]string `json:",omitempty"` Check *AgentServiceCheck Checks AgentServiceChecks + ProxyDestination string `json:",omitempty"` } // AgentCheckRegistration is used to register a new check diff --git a/api/agent_test.go b/api/agent_test.go index b195fed29..d45a9a131 100644 --- a/api/agent_test.go +++ b/api/agent_test.go @@ -185,6 +185,51 @@ func TestAPI_AgentServices(t *testing.T) { } } +func TestAPI_AgentServices_ConnectProxy(t *testing.T) { + t.Parallel() + c, s := makeClient(t) + defer s.Stop() + + agent := c.Agent() + + // Register service + reg := &AgentServiceRegistration{ + Name: "foo", + Port: 8000, + } + if err := agent.ServiceRegister(reg); err != nil { + t.Fatalf("err: %v", err) + } + // Register proxy + reg = &AgentServiceRegistration{ + Kind: ServiceKindConnectProxy, + Name: "foo-proxy", + Port: 8001, + ProxyDestination: "foo", + } + if err := agent.ServiceRegister(reg); err != nil { + t.Fatalf("err: %v", err) + } + + services, err := agent.Services() + if err != nil { + t.Fatalf("err: %v", err) + } + if _, ok := services["foo"]; !ok { + t.Fatalf("missing service: %v", services) + } + if _, ok := services["foo-proxy"]; !ok { + t.Fatalf("missing proxy service: %v", services) + } + + if err := agent.ServiceDeregister("foo"); err != nil { + t.Fatalf("err: %v", err) + } + if err := agent.ServiceDeregister("foo-proxy"); err != nil { + t.Fatalf("err: %v", err) + } +} + func TestAPI_AgentServices_CheckPassing(t *testing.T) { t.Parallel() c, s := makeClient(t) diff --git a/api/catalog.go b/api/catalog.go index 80ce1bc81..1a6bbc3b3 100644 --- a/api/catalog.go +++ b/api/catalog.go @@ -156,7 +156,20 @@ func (c *Catalog) Services(q *QueryOptions) (map[string][]string, *QueryMeta, er // Service is used to query catalog entries for a given service func (c *Catalog) Service(service, tag string, q *QueryOptions) ([]*CatalogService, *QueryMeta, error) { - r := c.c.newRequest("GET", "/v1/catalog/service/"+service) + return c.service(service, tag, q, false) +} + +// Connect is used to query catalog entries for a given Connect-enabled service +func (c *Catalog) Connect(service, tag string, q *QueryOptions) ([]*CatalogService, *QueryMeta, error) { + return c.service(service, tag, q, true) +} + +func (c *Catalog) service(service, tag string, q *QueryOptions, connect bool) ([]*CatalogService, *QueryMeta, error) { + path := "/v1/catalog/service/" + service + if connect { + path = "/v1/catalog/connect/" + service + } + r := c.c.newRequest("GET", path) r.setQueryOptions(q) if tag != "" { r.params.Set("tag", tag) diff --git a/api/catalog_test.go b/api/catalog_test.go index 11f50a919..9db640b9d 100644 --- a/api/catalog_test.go +++ 
b/api/catalog_test.go @@ -186,6 +186,7 @@ func TestAPI_CatalogService(t *testing.T) { defer s.Stop() catalog := c.Catalog() + retry.Run(t, func(r *retry.R) { services, meta, err := catalog.Service("consul", "", nil) if err != nil { @@ -235,6 +236,80 @@ func TestAPI_CatalogService_NodeMetaFilter(t *testing.T) { }) } +func TestAPI_CatalogConnect(t *testing.T) { + t.Parallel() + c, s := makeClient(t) + defer s.Stop() + + catalog := c.Catalog() + + // Register service and proxy instances to test against. + service := &AgentService{ + ID: "redis1", + Service: "redis", + Port: 8000, + } + proxy := &AgentService{ + Kind: ServiceKindConnectProxy, + ProxyDestination: "redis", + ID: "redis-proxy1", + Service: "redis-proxy", + Port: 8001, + } + check := &AgentCheck{ + Node: "foobar", + CheckID: "service:redis1", + Name: "Redis health check", + Notes: "Script based health check", + Status: HealthPassing, + ServiceID: "redis1", + } + + reg := &CatalogRegistration{ + Datacenter: "dc1", + Node: "foobar", + Address: "192.168.10.10", + Service: service, + Check: check, + } + proxyReg := &CatalogRegistration{ + Datacenter: "dc1", + Node: "foobar", + Address: "192.168.10.10", + Service: proxy, + } + + retry.Run(t, func(r *retry.R) { + if _, err := catalog.Register(reg, nil); err != nil { + r.Fatal(err) + } + if _, err := catalog.Register(proxyReg, nil); err != nil { + r.Fatal(err) + } + + services, meta, err := catalog.Connect("redis", "", nil) + if err != nil { + r.Fatal(err) + } + + if meta.LastIndex == 0 { + r.Fatalf("Bad: %v", meta) + } + + if len(services) == 0 { + r.Fatalf("Bad: %v", services) + } + + if services[0].Datacenter != "dc1" { + r.Fatalf("Bad datacenter: %v", services[0]) + } + + if services[0].ServicePort != proxy.Port { + r.Fatalf("Returned port should be for proxy: %v", services[0]) + } + }) +} + func TestAPI_CatalogNode(t *testing.T) { t.Parallel() c, s := makeClient(t) @@ -297,10 +372,28 @@ func TestAPI_CatalogRegistration(t *testing.T) { Service: service, Check: check, } + // Register a connect proxy for that service too + proxy := &AgentService{ + ID: "redis-proxy1", + Service: "redis-proxy", + Port: 8001, + Kind: ServiceKindConnectProxy, + ProxyDestination: service.ID, + } + proxyReg := &CatalogRegistration{ + Datacenter: "dc1", + Node: "foobar", + Address: "192.168.10.10", + NodeMeta: map[string]string{"somekey": "somevalue"}, + Service: proxy, + } retry.Run(t, func(r *retry.R) { if _, err := catalog.Register(reg, nil); err != nil { r.Fatal(err) } + if _, err := catalog.Register(proxyReg, nil); err != nil { + r.Fatal(err) + } node, _, err := catalog.Node("foobar", nil) if err != nil { @@ -311,6 +404,10 @@ func TestAPI_CatalogRegistration(t *testing.T) { r.Fatal("missing service: redis1") } + if _, ok := node.Services["redis-proxy1"]; !ok { + r.Fatal("missing service: redis-proxy1") + } + health, _, err := c.Health().Node("foobar", nil) if err != nil { r.Fatal(err) @@ -333,10 +430,22 @@ func TestAPI_CatalogRegistration(t *testing.T) { ServiceID: "redis1", } + // ... 
and proxy + deregProxy := &CatalogDeregistration{ + Datacenter: "dc1", + Node: "foobar", + Address: "192.168.10.10", + ServiceID: "redis-proxy1", + } + if _, err := catalog.Deregister(dereg, nil); err != nil { t.Fatalf("err: %v", err) } + if _, err := catalog.Deregister(deregProxy, nil); err != nil { + t.Fatalf("err: %v", err) + } + retry.Run(t, func(r *retry.R) { node, _, err := catalog.Node("foobar", nil) if err != nil { @@ -346,6 +455,10 @@ func TestAPI_CatalogRegistration(t *testing.T) { if _, ok := node.Services["redis1"]; ok { r.Fatal("ServiceID:redis1 is not deregistered") } + + if _, ok := node.Services["redis-proxy1"]; ok { + r.Fatal("ServiceID:redis-proxy1 is not deregistered") + } }) // Test deregistration of the previously registered check diff --git a/api/health.go b/api/health.go index 53f3de4f7..5fcb39b5c 100644 --- a/api/health.go +++ b/api/health.go @@ -159,7 +159,24 @@ func (h *Health) Checks(service string, q *QueryOptions) (HealthChecks, *QueryMe // for a given service. It can optionally do server-side filtering on a tag // or nodes with passing health checks only. func (h *Health) Service(service, tag string, passingOnly bool, q *QueryOptions) ([]*ServiceEntry, *QueryMeta, error) { - r := h.c.newRequest("GET", "/v1/health/service/"+service) + return h.service(service, tag, passingOnly, q, false) +} + +// Connect is equivalent to Service except that it will only return services +// which are Connect-enabled and will returns the connection address for Connect +// client's to use which may be a proxy in front of the named service. TODO: If +// passingOnly is true only instances where both the service and any proxy are +// healthy will be returned. +func (h *Health) Connect(service, tag string, passingOnly bool, q *QueryOptions) ([]*ServiceEntry, *QueryMeta, error) { + return h.service(service, tag, passingOnly, q, true) +} + +func (h *Health) service(service, tag string, passingOnly bool, q *QueryOptions, connect bool) ([]*ServiceEntry, *QueryMeta, error) { + path := "/v1/health/service/" + service + if connect { + path = "/v1/health/connect/" + service + } + r := h.c.newRequest("GET", path) r.setQueryOptions(q) if tag != "" { r.params.Set("tag", tag) diff --git a/api/health_test.go b/api/health_test.go index c4ef11651..5c3c2b6a2 100644 --- a/api/health_test.go +++ b/api/health_test.go @@ -7,6 +7,7 @@ import ( "github.com/hashicorp/consul/testutil" "github.com/hashicorp/consul/testutil/retry" "github.com/pascaldekloe/goe/verify" + "github.com/stretchr/testify/require" ) func TestAPI_HealthNode(t *testing.T) { @@ -282,6 +283,56 @@ func TestAPI_HealthService(t *testing.T) { }) } +func TestAPI_HealthConnect(t *testing.T) { + t.Parallel() + c, s := makeClient(t) + defer s.Stop() + + agent := c.Agent() + health := c.Health() + + // Make a service with a proxy + reg := &AgentServiceRegistration{ + Name: "foo", + Port: 8000, + } + err := agent.ServiceRegister(reg) + require.Nil(t, err) + defer agent.ServiceDeregister("foo") + + // Register the proxy + proxyReg := &AgentServiceRegistration{ + Name: "foo-proxy", + Port: 8001, + Kind: ServiceKindConnectProxy, + ProxyDestination: "foo", + } + err = agent.ServiceRegister(proxyReg) + require.Nil(t, err) + defer agent.ServiceDeregister("foo-proxy") + + retry.Run(t, func(r *retry.R) { + services, meta, err := health.Connect("foo", "", true, nil) + if err != nil { + r.Fatal(err) + } + if meta.LastIndex == 0 { + r.Fatalf("bad: %v", meta) + } + // Should be exactly 1 service - the original shouldn't show up as a connect + // endpoint, only 
it's proxy. + if len(services) != 1 { + r.Fatalf("Bad: %v", services) + } + if services[0].Node.Datacenter != "dc1" { + r.Fatalf("Bad datacenter: %v", services[0].Node) + } + if services[0].Service.Port != proxyReg.Port { + r.Fatalf("Bad port: %v", services[0]) + } + }) +} + func TestAPI_HealthService_NodeMetaFilter(t *testing.T) { t.Parallel() meta := map[string]string{"somekey": "somevalue"} From 3efe3f8affd03fde42433353144ca30bb5aa2dfa Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Tue, 27 Mar 2018 10:49:27 +0100 Subject: [PATCH 093/539] require -> assert until rebase --- api/health_test.go | 12 +++++++++--- 1 file changed, 9 insertions(+), 3 deletions(-) diff --git a/api/health_test.go b/api/health_test.go index 5c3c2b6a2..78867909f 100644 --- a/api/health_test.go +++ b/api/health_test.go @@ -7,7 +7,7 @@ import ( "github.com/hashicorp/consul/testutil" "github.com/hashicorp/consul/testutil/retry" "github.com/pascaldekloe/goe/verify" - "github.com/stretchr/testify/require" + "github.com/stretchr/testify/assert" ) func TestAPI_HealthNode(t *testing.T) { @@ -297,7 +297,10 @@ func TestAPI_HealthConnect(t *testing.T) { Port: 8000, } err := agent.ServiceRegister(reg) - require.Nil(t, err) + // TODO replace with require.Nil when we have it vendored in OSS and rebased + if !assert.Nil(t, err) { + return + } defer agent.ServiceDeregister("foo") // Register the proxy @@ -308,7 +311,10 @@ func TestAPI_HealthConnect(t *testing.T) { ProxyDestination: "foo", } err = agent.ServiceRegister(proxyReg) - require.Nil(t, err) + // TODO replace with require.Nil when we have it vendored in OSS and rebased + if !assert.Nil(t, err) { + return + } defer agent.ServiceDeregister("foo-proxy") retry.Run(t, func(r *retry.R) { From 68fa4a83b1c84340fd845dcc9d4318b431661787 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 27 Mar 2018 09:52:06 -0700 Subject: [PATCH 094/539] agent: get rid of method checks since they're done in the http layer --- agent/agent_endpoint.go | 10 ---------- agent/connect_ca_endpoint.go | 5 ----- agent/health_endpoint.go | 4 ---- agent/intentions_endpoint.go | 5 ----- 4 files changed, 24 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 1cbb4b1da..89bf16b62 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -852,11 +852,6 @@ func (s *HTTPServer) AgentConnectCARoots(resp http.ResponseWriter, req *http.Req // AgentConnectCALeafCert returns the certificate bundle for a service // instance. This supports blocking queries to update the returned bundle. func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http.Request) (interface{}, error) { - // Test the method - if req.Method != "GET" { - return nil, MethodNotAllowedError{req.Method, []string{"GET"}} - } - // Get the service ID. Note that this is the ID of a service instance. id := strings.TrimPrefix(req.URL.Path, "/v1/agent/connect/ca/leaf/") @@ -890,11 +885,6 @@ func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http. // // POST /v1/agent/connect/authorize func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.Request) (interface{}, error) { - // Test the method - if req.Method != "POST" { - return nil, MethodNotAllowedError{req.Method, []string{"POST"}} - } - // NOTE(mitchellh): return 200 for now. To be implemented later. 
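// Sketch, not from the patch: the deletions in this commit assume HTTP method
// validation now happens once in the shared HTTP layer instead of inside each
// handler. A generic illustration of that pattern (not Consul's actual http.go):
//
//	func allowMethods(h http.HandlerFunc, methods ...string) http.HandlerFunc {
//		return func(w http.ResponseWriter, r *http.Request) {
//			for _, m := range methods {
//				if r.Method == m {
//					h(w, r)
//					return
//				}
//			}
//			w.Header().Set("Allow", strings.Join(methods, ", "))
//			w.WriteHeader(http.StatusMethodNotAllowed)
//		}
//	}
//
// With that in place, handlers such as AgentConnectAuthorize no longer need the
// per-handler MethodNotAllowedError guards removed in this commit.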
return nil, nil } diff --git a/agent/connect_ca_endpoint.go b/agent/connect_ca_endpoint.go index 7832ba36f..43eeb8644 100644 --- a/agent/connect_ca_endpoint.go +++ b/agent/connect_ca_endpoint.go @@ -9,11 +9,6 @@ import ( // GET /v1/connect/ca/roots func (s *HTTPServer) ConnectCARoots(resp http.ResponseWriter, req *http.Request) (interface{}, error) { - // Test the method - if req.Method != "GET" { - return nil, MethodNotAllowedError{req.Method, []string{"GET"}} - } - var args structs.DCSpecificRequest if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { return nil, nil diff --git a/agent/health_endpoint.go b/agent/health_endpoint.go index e57b5f48b..a0fb177b6 100644 --- a/agent/health_endpoint.go +++ b/agent/health_endpoint.go @@ -152,10 +152,6 @@ func (s *HTTPServer) HealthServiceNodes(resp http.ResponseWriter, req *http.Requ } func (s *HTTPServer) healthServiceNodes(resp http.ResponseWriter, req *http.Request, connect bool) (interface{}, error) { - if req.Method != "GET" { - return nil, MethodNotAllowedError{req.Method, []string{"GET"}} - } - // Set default DC args := structs.ServiceSpecificRequest{Connect: connect} s.parseSource(req, &args.Source) diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go index 5196f06c5..5a2e0e809 100644 --- a/agent/intentions_endpoint.go +++ b/agent/intentions_endpoint.go @@ -65,11 +65,6 @@ func (s *HTTPServer) IntentionCreate(resp http.ResponseWriter, req *http.Request // GET /v1/connect/intentions/match func (s *HTTPServer) IntentionMatch(resp http.ResponseWriter, req *http.Request) (interface{}, error) { - // Test the method - if req.Method != "GET" { - return nil, MethodNotAllowedError{req.Method, []string{"GET"}} - } - // Prepare args args := &structs.IntentionQueryRequest{Match: &structs.IntentionQueryMatch{}} if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { From 7af99667b64331c4a32b9a29170c7b28c005e7ac Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 25 Mar 2018 14:39:18 -1000 Subject: [PATCH 095/539] agent/connect: Authorize for CertURI --- agent/connect/uri.go | 43 +++--------- agent/connect/uri_service.go | 42 ++++++++++++ agent/connect/uri_service_test.go | 104 ++++++++++++++++++++++++++++++ agent/connect/uri_signing.go | 29 +++++++++ agent/connect/uri_signing_test.go | 15 +++++ 5 files changed, 200 insertions(+), 33 deletions(-) create mode 100644 agent/connect/uri_service.go create mode 100644 agent/connect/uri_service_test.go create mode 100644 agent/connect/uri_signing.go create mode 100644 agent/connect/uri_signing_test.go diff --git a/agent/connect/uri.go b/agent/connect/uri.go index 3562f2d6c..48bfd3686 100644 --- a/agent/connect/uri.go +++ b/agent/connect/uri.go @@ -5,6 +5,8 @@ import ( "net/url" "regexp" "strings" + + "github.com/hashicorp/consul/agent/structs" ) // CertURI represents a Connect-valid URI value for a TLS certificate. @@ -15,6 +17,14 @@ import ( // However, we anticipate that we may accept URIs that are also not SPIFFE // compliant and therefore the interface is named as such. type CertURI interface { + // Authorize tests the authorization for this URI as a client + // for the given intention. The return value `auth` is only valid if + // the second value `match` is true. If the second value `match` is + // false, then the intention doesn't match this client and any + // result should be ignored. + Authorize(*structs.Intention) (auth bool, match bool) + + // URI is the valid URI value used in the cert. 
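// Sketch, not from the patch: a consumer of this interface (for example a proxy
// that has just verified a client certificate) would parse the certificate's URI
// SAN and walk the intentions matched for the destination service. The variable
// names here are assumptions of the sketch:
//
//	certURI, err := connect.ParseCertURI(uriSAN) // uriSAN is a *url.URL
//	if err != nil {
//		// reject the connection
//	}
//	authorized := false
//	for _, ixn := range matchedIntentions {
//		if auth, match := certURI.Authorize(ixn); match {
//			authorized = auth
//			break // the first matching intention decides
//		}
//	}
//
// When nothing matches, the caller falls back to its default behavior, as the
// agent endpoint added later in this series does.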
URI() *url.URL } @@ -79,36 +89,3 @@ func ParseCertURI(input *url.URL) (CertURI, error) { return nil, fmt.Errorf("SPIFFE ID is not in the expected format") } - -// SpiffeIDService is the structure to represent the SPIFFE ID for a service. -type SpiffeIDService struct { - Host string - Namespace string - Datacenter string - Service string -} - -// URI returns the *url.URL for this SPIFFE ID. -func (id *SpiffeIDService) URI() *url.URL { - var result url.URL - result.Scheme = "spiffe" - result.Host = id.Host - result.Path = fmt.Sprintf("/ns/%s/dc/%s/svc/%s", - id.Namespace, id.Datacenter, id.Service) - return &result -} - -// SpiffeIDSigning is the structure to represent the SPIFFE ID for a -// signing certificate (not a leaf service). -type SpiffeIDSigning struct { - ClusterID string // Unique cluster ID - Domain string // The domain, usually "consul" -} - -// URI returns the *url.URL for this SPIFFE ID. -func (id *SpiffeIDSigning) URI() *url.URL { - var result url.URL - result.Scheme = "spiffe" - result.Host = fmt.Sprintf("%s.%s", id.ClusterID, id.Domain) - return &result -} diff --git a/agent/connect/uri_service.go b/agent/connect/uri_service.go new file mode 100644 index 000000000..3e53e8e36 --- /dev/null +++ b/agent/connect/uri_service.go @@ -0,0 +1,42 @@ +package connect + +import ( + "fmt" + "net/url" + + "github.com/hashicorp/consul/agent/structs" +) + +// SpiffeIDService is the structure to represent the SPIFFE ID for a service. +type SpiffeIDService struct { + Host string + Namespace string + Datacenter string + Service string +} + +// URI returns the *url.URL for this SPIFFE ID. +func (id *SpiffeIDService) URI() *url.URL { + var result url.URL + result.Scheme = "spiffe" + result.Host = id.Host + result.Path = fmt.Sprintf("/ns/%s/dc/%s/svc/%s", + id.Namespace, id.Datacenter, id.Service) + return &result +} + +// CertURI impl. 
+func (id *SpiffeIDService) Authorize(ixn *structs.Intention) (bool, bool) { + if ixn.SourceNS != structs.IntentionWildcard && ixn.SourceNS != id.Namespace { + // Non-matching namespace + return false, false + } + + if ixn.SourceName != structs.IntentionWildcard && ixn.SourceName != id.Service { + // Non-matching name + return false, false + } + + // Match, return allow value + return ixn.Action == structs.IntentionActionAllow, true +} diff --git a/agent/connect/uri_service_test.go b/agent/connect/uri_service_test.go new file mode 100644 index 000000000..ac21bca28 --- /dev/null +++ b/agent/connect/uri_service_test.go @@ -0,0 +1,104 @@ +package connect + +import ( + "testing" + + "github.com/hashicorp/consul/agent/structs" + "github.com/stretchr/testify/assert" +) + +func TestSpiffeIDServiceAuthorize(t *testing.T) { + ns := structs.IntentionDefaultNamespace + serviceWeb := &SpiffeIDService{ + Host: "1234.consul", + Namespace: structs.IntentionDefaultNamespace, + Datacenter: "dc01", + Service: "web", + } + + cases := []struct { + Name string + URI *SpiffeIDService + Ixn *structs.Intention + Auth bool + Match bool + }{ + { + "exact source, not matching namespace", + serviceWeb, + &structs.Intention{ + SourceNS: "different", + SourceName: "db", + }, + false, + false, + }, + + { + "exact source, not matching name", + serviceWeb, + &structs.Intention{ + SourceNS: ns, + SourceName: "db", + }, + false, + false, + }, + + { + "exact source, allow", + serviceWeb, + &structs.Intention{ + SourceNS: serviceWeb.Namespace, + SourceName: serviceWeb.Service, + Action: structs.IntentionActionAllow, + }, + true, + true, + }, + + { + "exact source, deny", + serviceWeb, + &structs.Intention{ + SourceNS: serviceWeb.Namespace, + SourceName: serviceWeb.Service, + Action: structs.IntentionActionDeny, + }, + false, + true, + }, + + { + "exact namespace, wildcard service, deny", + serviceWeb, + &structs.Intention{ + SourceNS: serviceWeb.Namespace, + SourceName: structs.IntentionWildcard, + Action: structs.IntentionActionDeny, + }, + false, + true, + }, + + { + "exact namespace, wildcard service, allow", + serviceWeb, + &structs.Intention{ + SourceNS: serviceWeb.Namespace, + SourceName: structs.IntentionWildcard, + Action: structs.IntentionActionAllow, + }, + true, + true, + }, + } + + for _, tc := range cases { + t.Run(tc.Name, func(t *testing.T) { + auth, match := tc.URI.Authorize(tc.Ixn) + assert.Equal(t, tc.Auth, auth) + assert.Equal(t, tc.Match, match) + }) + } +} diff --git a/agent/connect/uri_signing.go b/agent/connect/uri_signing.go new file mode 100644 index 000000000..213f744d1 --- /dev/null +++ b/agent/connect/uri_signing.go @@ -0,0 +1,29 @@ +package connect + +import ( + "fmt" + "net/url" + + "github.com/hashicorp/consul/agent/structs" +) + +// SpiffeIDSigning is the structure to represent the SPIFFE ID for a +// signing certificate (not a leaf service). +type SpiffeIDSigning struct { + ClusterID string // Unique cluster ID + Domain string // The domain, usually "consul" +} + +// URI returns the *url.URL for this SPIFFE ID. +func (id *SpiffeIDSigning) URI() *url.URL { + var result url.URL + result.Scheme = "spiffe" + result.Host = fmt.Sprintf("%s.%s", id.ClusterID, id.Domain) + return &result +} + +// CertURI impl. +func (id *SpiffeIDSigning) Authorize(ixn *structs.Intention) (bool, bool) { + // Never authorize as a client. 
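// Note, not from the patch: returning match=true here (rather than false) makes
// the caller treat the denial as final instead of ignoring it and moving on. A
// signing certificate identifies a cluster's CA, not a workload, so it can never
// be a legitimate Connect client regardless of which intention is being checked.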
+ return false, true +} diff --git a/agent/connect/uri_signing_test.go b/agent/connect/uri_signing_test.go new file mode 100644 index 000000000..a9be3c5e2 --- /dev/null +++ b/agent/connect/uri_signing_test.go @@ -0,0 +1,15 @@ +package connect + +import ( + "testing" + + "github.com/stretchr/testify/assert" +) + +// Signing ID should never authorize +func TestSpiffeIDSigningAuthorize(t *testing.T) { + var id SpiffeIDSigning + auth, ok := id.Authorize(nil) + assert.False(t, auth) + assert.True(t, ok) +} From 5364a8cd90cb482f3cab58cef4fcadd381ea1e94 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 25 Mar 2018 14:52:26 -1000 Subject: [PATCH 096/539] agent: /v1/agent/connect/authorize is functional, with tests --- agent/agent_endpoint.go | 84 +++++++++++++++++- agent/agent_endpoint_test.go | 162 +++++++++++++++++++++++++++++++++++ agent/structs/connect.go | 17 ++++ 3 files changed, 261 insertions(+), 2 deletions(-) create mode 100644 agent/structs/connect.go diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 89bf16b62..cb4d06c59 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -4,6 +4,7 @@ import ( "fmt" "log" "net/http" + "net/url" "strconv" "strings" @@ -885,6 +886,85 @@ func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http. // // POST /v1/agent/connect/authorize func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.Request) (interface{}, error) { - // NOTE(mitchellh): return 200 for now. To be implemented later. - return nil, nil + // Decode the request from the request body + var authReq structs.ConnectAuthorizeRequest + if err := decodeBody(req, &authReq, nil); err != nil { + resp.WriteHeader(http.StatusBadRequest) + fmt.Fprintf(resp, "Request decode failed: %v", err) + return nil, nil + } + + // We need to have a target to check intentions + if authReq.Target == "" { + resp.WriteHeader(http.StatusBadRequest) + fmt.Fprintf(resp, "Target service must be specified") + return nil, nil + } + + // Parse the certificate URI from the client ID + uriRaw, err := url.Parse(authReq.ClientID) + if err != nil { + return &connectAuthorizeResp{ + Authorized: false, + Reason: fmt.Sprintf("Client ID must be a URI: %s", err), + }, nil + } + uri, err := connect.ParseCertURI(uriRaw) + if err != nil { + return &connectAuthorizeResp{ + Authorized: false, + Reason: fmt.Sprintf("Invalid client ID: %s", err), + }, nil + } + + uriService, ok := uri.(*connect.SpiffeIDService) + if !ok { + return &connectAuthorizeResp{ + Authorized: false, + Reason: fmt.Sprintf("Client ID must be a valid SPIFFE service URI"), + }, nil + } + + // Get the intentions for this target service + args := &structs.IntentionQueryRequest{ + Datacenter: s.agent.config.Datacenter, + Match: &structs.IntentionQueryMatch{ + Type: structs.IntentionMatchDestination, + Entries: []structs.IntentionMatchEntry{ + { + Namespace: structs.IntentionDefaultNamespace, + Name: authReq.Target, + }, + }, + }, + } + var reply structs.IndexedIntentionMatches + if err := s.agent.RPC("Intention.Match", args, &reply); err != nil { + return nil, err + } + if len(reply.Matches) != 1 { + return nil, fmt.Errorf("Internal error loading matches") + } + + // Test the authorization for each match + for _, ixn := range reply.Matches[0] { + if auth, ok := uriService.Authorize(ixn); ok { + return &connectAuthorizeResp{ + Authorized: auth, + Reason: fmt.Sprintf("Matched intention %s", ixn.ID), + }, nil + } + } + + // TODO(mitchellh): default behavior here for now is "deny" but we 
+ // should consider how this is determined. + return &connectAuthorizeResp{ + Authorized: false, + Reason: "No matching intention, using default behavior", + }, nil +} + +type connectAuthorizeResp struct { + Authorized bool + Reason string } diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 15267107a..cae7a4ccc 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2130,3 +2130,165 @@ func TestAgentConnectCALeafCert_good(t *testing.T) { // TODO(mitchellh): verify the private key matches the cert } + +func TestAgentConnectAuthorize_badBody(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + args := []string{} + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentConnectAuthorize(resp, req) + assert.Nil(err) + assert.Equal(400, resp.Code) + assert.Contains(resp.Body.String(), "decode") +} + +func TestAgentConnectAuthorize_noTarget(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + args := &structs.ConnectAuthorizeRequest{} + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentConnectAuthorize(resp, req) + assert.Nil(err) + assert.Equal(400, resp.Code) + assert.Contains(resp.Body.String(), "Target service") +} + +// Client ID is not in the valid URI format +func TestAgentConnectAuthorize_idInvalidFormat(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + args := &structs.ConnectAuthorizeRequest{ + Target: "web", + ClientID: "tubes", + } + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) + resp := httptest.NewRecorder() + respRaw, err := a.srv.AgentConnectAuthorize(resp, req) + assert.Nil(err) + assert.Equal(200, resp.Code) + + obj := respRaw.(*connectAuthorizeResp) + assert.False(obj.Authorized) + assert.Contains(obj.Reason, "Invalid client") +} + +// Client ID is a valid URI but its not a service URI +func TestAgentConnectAuthorize_idNotService(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + args := &structs.ConnectAuthorizeRequest{ + Target: "web", + ClientID: "spiffe://1234.consul", + } + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) + resp := httptest.NewRecorder() + respRaw, err := a.srv.AgentConnectAuthorize(resp, req) + assert.Nil(err) + assert.Equal(200, resp.Code) + + obj := respRaw.(*connectAuthorizeResp) + assert.False(obj.Authorized) + assert.Contains(obj.Reason, "must be a valid") +} + +// Test when there is an intention allowing the connection +func TestAgentConnectAuthorize_allow(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + target := "db" + + // Create some intentions + { + req := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + req.Intention.SourceNS = structs.IntentionDefaultNamespace + req.Intention.SourceName = "web" + req.Intention.DestinationNS = structs.IntentionDefaultNamespace + req.Intention.DestinationName = target + req.Intention.Action = structs.IntentionActionAllow + + var reply string + assert.Nil(a.RPC("Intention.Apply", &req, &reply)) + } + + args := 
&structs.ConnectAuthorizeRequest{ + Target: target, + ClientID: connect.TestSpiffeIDService(t, "web").URI().String(), + } + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) + resp := httptest.NewRecorder() + respRaw, err := a.srv.AgentConnectAuthorize(resp, req) + assert.Nil(err) + assert.Equal(200, resp.Code) + + obj := respRaw.(*connectAuthorizeResp) + assert.True(obj.Authorized) + assert.Contains(obj.Reason, "Matched") +} + +// Test when there is an intention denying the connection +func TestAgentConnectAuthorize_deny(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + target := "db" + + // Create some intentions + { + req := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + req.Intention.SourceNS = structs.IntentionDefaultNamespace + req.Intention.SourceName = "web" + req.Intention.DestinationNS = structs.IntentionDefaultNamespace + req.Intention.DestinationName = target + req.Intention.Action = structs.IntentionActionDeny + + var reply string + assert.Nil(a.RPC("Intention.Apply", &req, &reply)) + } + + args := &structs.ConnectAuthorizeRequest{ + Target: target, + ClientID: connect.TestSpiffeIDService(t, "web").URI().String(), + } + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) + resp := httptest.NewRecorder() + respRaw, err := a.srv.AgentConnectAuthorize(resp, req) + assert.Nil(err) + assert.Equal(200, resp.Code) + + obj := respRaw.(*connectAuthorizeResp) + assert.False(obj.Authorized) + assert.Contains(obj.Reason, "Matched") +} diff --git a/agent/structs/connect.go b/agent/structs/connect.go new file mode 100644 index 000000000..1a2e03da8 --- /dev/null +++ b/agent/structs/connect.go @@ -0,0 +1,17 @@ +package structs + +// ConnectAuthorizeRequest is the structure of a request to authorize +// a connection. +type ConnectAuthorizeRequest struct { + // Target is the name of the service that is being requested. + Target string + + // ClientID is a unique identifier for the requesting client. This + // is currently the URI SAN from the TLS client certificate. + // + // ClientCertSerial is a colon-hex-encoded of the serial number for + // the requesting client cert. This is used to check against revocation + // lists. + ClientID string + ClientCertSerial string +} From c6269cda371c876163e0198e76bbafe48d38b1ae Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 25 Mar 2018 15:00:59 -1000 Subject: [PATCH 097/539] agent: default deny on connect authorize endpoint --- agent/agent_endpoint.go | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index cb4d06c59..0f9ccb852 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -956,11 +956,15 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R } } - // TODO(mitchellh): default behavior here for now is "deny" but we - // should consider how this is determined. + // If there was no matching intention, we always deny. Connect does + // support a blacklist (default allow) mode, but this works by appending + // */* => */* ALLOW intention to all Match requests. This means that + // the above should've matched. Therefore, if we reached here, something + // strange has happened and we should just deny the connection and err + // on the side of safety. 
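// Sketch, not from the patch: the catch-all intention referred to above would
// look like this in terms of the structs fields used elsewhere in this series:
//
//	catchAll := &structs.Intention{
//		SourceNS:        structs.IntentionWildcard,
//		SourceName:      structs.IntentionWildcard,
//		DestinationNS:   structs.IntentionWildcard,
//		DestinationName: structs.IntentionWildcard,
//		Action:          structs.IntentionActionAllow,
//	}
//
// If default-allow mode appended such an intention, the Match call above would
// have returned it, so reaching this point means even the catch-all is absent
// and the safe answer is to deny.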
return &connectAuthorizeResp{ Authorized: false, - Reason: "No matching intention, using default behavior", + Reason: "No matching intention, denying", }, nil } From 3f80808379a8ce4bba5e54e49d65d95bfbe73d58 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 25 Mar 2018 15:02:25 -1000 Subject: [PATCH 098/539] agent: bolster commenting for clearer understandability --- agent/agent_endpoint.go | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 0f9ccb852..7d6a19470 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -968,7 +968,9 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R }, nil } +// connectAuthorizeResp is the response format/structure for the +// /v1/agent/connect/authorize endpoint. type connectAuthorizeResp struct { - Authorized bool - Reason string + Authorized bool // True if authorized, false if not + Reason string // Reason for the Authorized value (whether true or false) } From 3e0e0a94a7f6903654ef98a7c1584be8bcf3e07a Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 25 Mar 2018 15:06:10 -1000 Subject: [PATCH 099/539] agent/structs: String format for Intention, used for logging --- agent/agent_endpoint.go | 4 ++-- agent/structs/intention.go | 9 +++++++++ 2 files changed, 11 insertions(+), 2 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 7d6a19470..02682e592 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -921,7 +921,7 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R if !ok { return &connectAuthorizeResp{ Authorized: false, - Reason: fmt.Sprintf("Client ID must be a valid SPIFFE service URI"), + Reason: "Client ID must be a valid SPIFFE service URI", }, nil } @@ -951,7 +951,7 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R if auth, ok := uriService.Authorize(ixn); ok { return &connectAuthorizeResp{ Authorized: auth, - Reason: fmt.Sprintf("Matched intention %s", ixn.ID), + Reason: fmt.Sprintf("Matched intention: %s", ixn.String()), }, nil } } diff --git a/agent/structs/intention.go b/agent/structs/intention.go index fb83f85da..d801635c9 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -164,6 +164,15 @@ func (x *Intention) GetACLPrefix() (string, bool) { return x.DestinationName, x.DestinationName != "" } +// String returns a human-friendly string for this intention. +func (x *Intention) String() string { + return fmt.Sprintf("%s %s/%s => %s/%s (ID: %s", + strings.ToUpper(string(x.Action)), + x.SourceNS, x.SourceName, + x.DestinationNS, x.DestinationName, + x.ID) +} + // IntentionAction is the action that the intention represents. This // can be "allow" or "deny" to whitelist or blacklist intentions. type IntentionAction string From b3584b63555248d5c3817ccb72ccf655da47bb1d Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 25 Mar 2018 15:50:05 -1000 Subject: [PATCH 100/539] agent: ACL checks for authorize, default behavior --- acl/acl.go | 15 ++++++ acl/acl_test.go | 19 ++++++++ agent/agent_endpoint.go | 44 ++++++++++++++---- agent/agent_endpoint_test.go | 90 ++++++++++++++++++++++++++++++++++++ 4 files changed, 159 insertions(+), 9 deletions(-) diff --git a/acl/acl.go b/acl/acl.go index 4ac88f01c..49dc569b9 100644 --- a/acl/acl.go +++ b/acl/acl.go @@ -60,6 +60,10 @@ type ACL interface { // EventWrite determines if a specific event may be fired. 
EventWrite(string) bool + // IntentionDefault determines the default authorized behavior + // when no intentions match a Connect request. + IntentionDefault() bool + // IntentionRead determines if a specific intention can be read. IntentionRead(string) bool @@ -161,6 +165,10 @@ func (s *StaticACL) EventWrite(string) bool { return s.defaultAllow } +func (s *StaticACL) IntentionDefault() bool { + return s.defaultAllow +} + func (s *StaticACL) IntentionRead(string) bool { return s.defaultAllow } @@ -493,6 +501,13 @@ func (p *PolicyACL) EventWrite(name string) bool { return p.parent.EventWrite(name) } +// IntentionDefault returns whether the default behavior when there are +// no matching intentions is to allow or deny. +func (p *PolicyACL) IntentionDefault() bool { + // We always go up, this can't be determined by a policy. + return p.parent.IntentionDefault() +} + // IntentionRead checks if writing (creating, updating, or deleting) of an // intention is allowed. func (p *PolicyACL) IntentionRead(prefix string) bool { diff --git a/acl/acl_test.go b/acl/acl_test.go index 85f35f606..263af0656 100644 --- a/acl/acl_test.go +++ b/acl/acl_test.go @@ -53,6 +53,9 @@ func TestStaticACL(t *testing.T) { if !all.EventWrite("foobar") { t.Fatalf("should allow") } + if !all.IntentionDefault() { + t.Fatalf("should allow") + } if !all.IntentionWrite("foobar") { t.Fatalf("should allow") } @@ -126,6 +129,9 @@ func TestStaticACL(t *testing.T) { if none.EventWrite("") { t.Fatalf("should not allow") } + if none.IntentionDefault() { + t.Fatalf("should not allow") + } if none.IntentionWrite("foo") { t.Fatalf("should not allow") } @@ -193,6 +199,9 @@ func TestStaticACL(t *testing.T) { if !manage.EventWrite("foobar") { t.Fatalf("should allow") } + if !manage.IntentionDefault() { + t.Fatalf("should allow") + } if !manage.IntentionWrite("foobar") { t.Fatalf("should allow") } @@ -454,6 +463,11 @@ func TestPolicyACL(t *testing.T) { t.Fatalf("Prepared query fail: %#v", c) } } + + // Check default intentions bubble up + if !acl.IntentionDefault() { + t.Fatal("should allow") + } } func TestPolicyACL_Parent(t *testing.T) { @@ -607,6 +621,11 @@ func TestPolicyACL_Parent(t *testing.T) { if acl.Snapshot() { t.Fatalf("should not allow") } + + // Check default intentions + if acl.IntentionDefault() { + t.Fatal("should not allow") + } } func TestPolicyACL_Agent(t *testing.T) { diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 02682e592..5a9218c37 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -886,6 +886,10 @@ func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http. // // POST /v1/agent/connect/authorize func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Fetch the token + var token string + s.parseToken(req, &token) + // Decode the request from the request body var authReq structs.ConnectAuthorizeRequest if err := decodeBody(req, &authReq, nil); err != nil { @@ -925,7 +929,18 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R }, nil } - // Get the intentions for this target service + // We need to verify service:write permissions for the given token. + // We do this manually here since the RPC request below only verifies + // service:read. 
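// Sketch, not from the patch: in practice this means the token presented to
// /v1/agent/connect/authorize needs service:write on the destination. Mirroring
// the ACL-create payload used by the tests in this commit, a sufficient token
// could be minted with:
//
//	args := map[string]interface{}{
//		"Name":  "proxy token", // illustrative values
//		"Type":  "client",
//		"Rules": `service "web" { policy = "write" }`,
//	}
//	// PUT /v1/acl/create?token=root with this body returns a token ID that
//	// passes the check below; a read-only rule is rejected with a
//	// permission-denied error (see TestAgentConnectAuthorize_serviceWrite).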
+ rule, err := s.agent.resolveToken(token) + if err != nil { + return nil, err + } + if rule != nil && !rule.ServiceWrite(authReq.Target, nil) { + return nil, acl.ErrPermissionDenied + } + + // Get the intentions for this target service. args := &structs.IntentionQueryRequest{ Datacenter: s.agent.config.Datacenter, Match: &structs.IntentionQueryMatch{ @@ -938,6 +953,7 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R }, }, } + args.Token = token var reply structs.IndexedIntentionMatches if err := s.agent.RPC("Intention.Match", args, &reply); err != nil { return nil, err @@ -956,15 +972,25 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R } } - // If there was no matching intention, we always deny. Connect does - // support a blacklist (default allow) mode, but this works by appending - // */* => */* ALLOW intention to all Match requests. This means that - // the above should've matched. Therefore, if we reached here, something - // strange has happened and we should just deny the connection and err - // on the side of safety. + // No match, we need to determine the default behavior. We do this by + // specifying the anonymous token token, which will get that behavior. + // The default behavior if ACLs are disabled is to allow connections + // to mimic the behavior of Consul itself: everything is allowed if + // ACLs are disabled. + rule, err = s.agent.resolveToken("") + if err != nil { + return nil, err + } + authz := true + reason := "ACLs disabled, access is allowed by default" + if rule != nil { + authz = rule.IntentionDefault() + reason = "Default behavior configured by ACLs" + } + return &connectAuthorizeResp{ - Authorized: false, - Reason: "No matching intention, denying", + Authorized: authz, + Reason: reason, }, nil } diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index cae7a4ccc..bc59f3700 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2292,3 +2292,93 @@ func TestAgentConnectAuthorize_deny(t *testing.T) { assert.False(obj.Authorized) assert.Contains(obj.Reason, "Matched") } + +// Test that authorize fails without service:write for the target service. 
+func TestAgentConnectAuthorize_serviceWrite(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), TestACLConfig()) + defer a.Shutdown() + + // Create an ACL + var token string + { + args := map[string]interface{}{ + "Name": "User Token", + "Type": "client", + "Rules": `service "foo" { policy = "read" }`, + } + req, _ := http.NewRequest("PUT", "/v1/acl/create?token=root", jsonReader(args)) + resp := httptest.NewRecorder() + obj, err := a.srv.ACLCreate(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + aclResp := obj.(aclCreateResponse) + token = aclResp.ID + } + + args := &structs.ConnectAuthorizeRequest{ + Target: "foo", + ClientID: connect.TestSpiffeIDService(t, "web").URI().String(), + } + req, _ := http.NewRequest("POST", + "/v1/agent/connect/authorize?token="+token, jsonReader(args)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentConnectAuthorize(resp, req) + assert.True(acl.IsErrPermissionDenied(err)) +} + +// Test when no intentions match w/ a default deny policy +func TestAgentConnectAuthorize_defaultDeny(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), TestACLConfig()) + defer a.Shutdown() + + args := &structs.ConnectAuthorizeRequest{ + Target: "foo", + ClientID: connect.TestSpiffeIDService(t, "web").URI().String(), + } + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize?token=root", jsonReader(args)) + resp := httptest.NewRecorder() + respRaw, err := a.srv.AgentConnectAuthorize(resp, req) + assert.Nil(err) + assert.Equal(200, resp.Code) + + obj := respRaw.(*connectAuthorizeResp) + assert.False(obj.Authorized) + assert.Contains(obj.Reason, "Default behavior") +} + +// Test when no intentions match w/ a default allow policy +func TestAgentConnectAuthorize_defaultAllow(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), ` + acl_datacenter = "dc1" + acl_default_policy = "allow" + acl_master_token = "root" + acl_agent_token = "root" + acl_agent_master_token = "towel" + acl_enforce_version_8 = true + `) + defer a.Shutdown() + + args := &structs.ConnectAuthorizeRequest{ + Target: "foo", + ClientID: connect.TestSpiffeIDService(t, "web").URI().String(), + } + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize?token=root", jsonReader(args)) + resp := httptest.NewRecorder() + respRaw, err := a.srv.AgentConnectAuthorize(resp, req) + assert.Nil(err) + assert.Equal(200, resp.Code) + + obj := respRaw.(*connectAuthorizeResp) + assert.True(obj.Authorized) + assert.Contains(obj.Reason, "Default behavior") +} From f983978fb821fec7120814d30e85f8d6457f7fab Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 27 Mar 2018 10:08:20 -0700 Subject: [PATCH 101/539] acl: IntentionDefault => IntentionDefaultAllow --- acl/acl.go | 12 ++++++------ acl/acl_test.go | 10 +++++----- agent/agent_endpoint.go | 2 +- 3 files changed, 12 insertions(+), 12 deletions(-) diff --git a/acl/acl.go b/acl/acl.go index 49dc569b9..a8ad0de96 100644 --- a/acl/acl.go +++ b/acl/acl.go @@ -60,9 +60,9 @@ type ACL interface { // EventWrite determines if a specific event may be fired. EventWrite(string) bool - // IntentionDefault determines the default authorized behavior + // IntentionDefaultAllow determines the default authorized behavior // when no intentions match a Connect request. - IntentionDefault() bool + IntentionDefaultAllow() bool // IntentionRead determines if a specific intention can be read. 
IntentionRead(string) bool @@ -165,7 +165,7 @@ func (s *StaticACL) EventWrite(string) bool { return s.defaultAllow } -func (s *StaticACL) IntentionDefault() bool { +func (s *StaticACL) IntentionDefaultAllow() bool { return s.defaultAllow } @@ -501,11 +501,11 @@ func (p *PolicyACL) EventWrite(name string) bool { return p.parent.EventWrite(name) } -// IntentionDefault returns whether the default behavior when there are +// IntentionDefaultAllow returns whether the default behavior when there are // no matching intentions is to allow or deny. -func (p *PolicyACL) IntentionDefault() bool { +func (p *PolicyACL) IntentionDefaultAllow() bool { // We always go up, this can't be determined by a policy. - return p.parent.IntentionDefault() + return p.parent.IntentionDefaultAllow() } // IntentionRead checks if writing (creating, updating, or deleting) of an diff --git a/acl/acl_test.go b/acl/acl_test.go index 263af0656..faf6f092f 100644 --- a/acl/acl_test.go +++ b/acl/acl_test.go @@ -53,7 +53,7 @@ func TestStaticACL(t *testing.T) { if !all.EventWrite("foobar") { t.Fatalf("should allow") } - if !all.IntentionDefault() { + if !all.IntentionDefaultAllow() { t.Fatalf("should allow") } if !all.IntentionWrite("foobar") { @@ -129,7 +129,7 @@ func TestStaticACL(t *testing.T) { if none.EventWrite("") { t.Fatalf("should not allow") } - if none.IntentionDefault() { + if none.IntentionDefaultAllow() { t.Fatalf("should not allow") } if none.IntentionWrite("foo") { @@ -199,7 +199,7 @@ func TestStaticACL(t *testing.T) { if !manage.EventWrite("foobar") { t.Fatalf("should allow") } - if !manage.IntentionDefault() { + if !manage.IntentionDefaultAllow() { t.Fatalf("should allow") } if !manage.IntentionWrite("foobar") { @@ -465,7 +465,7 @@ func TestPolicyACL(t *testing.T) { } // Check default intentions bubble up - if !acl.IntentionDefault() { + if !acl.IntentionDefaultAllow() { t.Fatal("should allow") } } @@ -623,7 +623,7 @@ func TestPolicyACL_Parent(t *testing.T) { } // Check default intentions - if acl.IntentionDefault() { + if acl.IntentionDefaultAllow() { t.Fatal("should not allow") } } diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 5a9218c37..20cb047b2 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -984,7 +984,7 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R authz := true reason := "ACLs disabled, access is allowed by default" if rule != nil { - authz = rule.IntentionDefault() + authz = rule.IntentionDefaultAllow() reason = "Default behavior configured by ACLs" } From 94e7a0a3c106b1ee1b90a0f45fee582096788d42 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 27 Mar 2018 10:09:13 -0700 Subject: [PATCH 102/539] agent: add TODO for verification --- agent/agent_endpoint.go | 3 +++ 1 file changed, 3 insertions(+) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 20cb047b2..a6b67816d 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -940,6 +940,9 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R return nil, acl.ErrPermissionDenied } + // TODO(mitchellh): we need to verify more things here, such as the + // trust domain, blacklist lookup of the serial, etc. + // Get the intentions for this target service. 
args := &structs.IntentionQueryRequest{ Datacenter: s.agent.config.Datacenter, From b5b301aa2a698dec4280a14820e01c66ebcbbed3 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 27 Mar 2018 16:50:17 -0700 Subject: [PATCH 103/539] api: endpoints for working with CA roots, agent authorize, etc. --- api/agent.go | 82 +++++++++++++++++++++++++++++++++++++++++++++ api/agent_test.go | 15 +++++++++ api/connect.go | 65 +++++++++++++++++++++++++++++++++++ api/connect_test.go | 26 ++++++++++++++ 4 files changed, 188 insertions(+) create mode 100644 api/connect.go create mode 100644 api/connect_test.go diff --git a/api/agent.go b/api/agent.go index 359206c54..860483671 100644 --- a/api/agent.go +++ b/api/agent.go @@ -172,6 +172,19 @@ type SampledValue struct { Labels map[string]string } +// AgentAuthorizeParams are the request parameters for authorizing a request. +type AgentAuthorizeParams struct { + Target string + ClientID string + ClientCertSerial string +} + +// AgentAuthorize is the response structure for Connect authorization. +type AgentAuthorize struct { + Authorized bool + Reason string +} + // Agent can be used to query the Agent endpoints type Agent struct { c *Client @@ -505,6 +518,75 @@ func (a *Agent) ForceLeave(node string) error { return nil } +// ConnectAuthorize is used to authorize an incoming connection +// to a natively integrated Connect service. +// +// TODO(mitchellh): we need to test this better once we have a way to +// configure CAs from the API package (when the CA work is done). +func (a *Agent) ConnectAuthorize(auth *AgentAuthorizeParams) (*AgentAuthorize, error) { + r := a.c.newRequest("POST", "/v1/agent/connect/authorize") + r.obj = auth + _, resp, err := requireOK(a.c.doRequest(r)) + if err != nil { + return nil, err + } + resp.Body.Close() + + var out AgentAuthorize + if err := decodeBody(resp, &out); err != nil { + return nil, err + } + return &out, nil +} + +// ConnectCARoots returns the list of roots. +// +// TODO(mitchellh): we need to test this better once we have a way to +// configure CAs from the API package (when the CA work is done). +func (a *Agent) ConnectCARoots(q *QueryOptions) (*CARootList, *QueryMeta, error) { + r := a.c.newRequest("GET", "/v1/agent/connect/ca/roots") + r.setQueryOptions(q) + rtt, resp, err := requireOK(a.c.doRequest(r)) + if err != nil { + return nil, nil, err + } + defer resp.Body.Close() + + qm := &QueryMeta{} + parseQueryMeta(resp, qm) + qm.RequestTime = rtt + + var out CARootList + if err := decodeBody(resp, &out); err != nil { + return nil, nil, err + } + return &out, qm, nil +} + +// ConnectCALeaf gets the leaf certificate for the given service ID. +// +// TODO(mitchellh): we need to test this better once we have a way to +// configure CAs from the API package (when the CA work is done). +func (a *Agent) ConnectCALeaf(serviceID string, q *QueryOptions) (*IssuedCert, *QueryMeta, error) { + r := a.c.newRequest("GET", "/v1/agent/connect/ca/leaf/"+serviceID) + r.setQueryOptions(q) + rtt, resp, err := requireOK(a.c.doRequest(r)) + if err != nil { + return nil, nil, err + } + defer resp.Body.Close() + + qm := &QueryMeta{} + parseQueryMeta(resp, qm) + qm.RequestTime = rtt + + var out IssuedCert + if err := decodeBody(resp, &out); err != nil { + return nil, nil, err + } + return &out, qm, nil +} + // EnableServiceMaintenance toggles service maintenance mode on // for the given service ID. 
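// Sketch, not from the patch: a natively integrated service could combine the
// Agent Connect helpers added in this commit roughly as follows (client setup
// elided; local variable names are assumptions):
//
//	agent := client.Agent()
//	leaf, _, err := agent.ConnectCALeaf("web", nil) // certificate to present
//	roots, _, err2 := agent.ConnectCARoots(nil)     // roots to verify peers against
//	auth, err3 := agent.ConnectAuthorize(&AgentAuthorizeParams{
//		Target:   "web",
//		ClientID: clientURISAN, // URI SAN from the verified client certificate
//	})
//	if err == nil && err2 == nil && err3 == nil && auth.Authorized {
//		// accept the connection; auth.Reason explains the decision either way
//	}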
func (a *Agent) EnableServiceMaintenance(serviceID, reason string) error { diff --git a/api/agent_test.go b/api/agent_test.go index d45a9a131..653512be9 100644 --- a/api/agent_test.go +++ b/api/agent_test.go @@ -11,6 +11,7 @@ import ( "github.com/hashicorp/consul/testutil" "github.com/hashicorp/consul/testutil/retry" "github.com/hashicorp/serf/serf" + "github.com/stretchr/testify/require" ) func TestAPI_AgentSelf(t *testing.T) { @@ -981,3 +982,17 @@ func TestAPI_AgentUpdateToken(t *testing.T) { t.Fatalf("err: %v", err) } } + +func TestAPI_AgentConnectCARoots_empty(t *testing.T) { + t.Parallel() + + require := require.New(t) + c, s := makeClient(t) + defer s.Stop() + + agent := c.Agent() + list, meta, err := agent.ConnectCARoots(nil) + require.Nil(err) + require.Equal(uint64(0), meta.LastIndex) + require.Len(list.Roots, 0) +} diff --git a/api/connect.go b/api/connect.go new file mode 100644 index 000000000..0f75a45fa --- /dev/null +++ b/api/connect.go @@ -0,0 +1,65 @@ +package api + +import ( + "time" +) + +// CARootList is the structure for the results of listing roots. +type CARootList struct { + ActiveRootID string + Roots []*CARoot +} + +// CARoot is a single CA within Connect. +type CARoot struct { + ID string + Name string + RootCert string + Active bool + CreateIndex uint64 + ModifyIndex uint64 +} + +type IssuedCert struct { + SerialNumber string + CertPEM string + PrivateKeyPEM string + Service string + ServiceURI string + ValidAfter time.Time + ValidBefore time.Time + CreateIndex uint64 + ModifyIndex uint64 +} + +// Connect can be used to work with endpoints related to Connect, the +// feature for securely connecting services within Consul. +type Connect struct { + c *Client +} + +// Health returns a handle to the health endpoints +func (c *Client) Connect() *Connect { + return &Connect{c} +} + +// CARoots queries the list of available roots. +func (h *Connect) CARoots(q *QueryOptions) (*CARootList, *QueryMeta, error) { + r := h.c.newRequest("GET", "/v1/connect/ca/roots") + r.setQueryOptions(q) + rtt, resp, err := requireOK(h.c.doRequest(r)) + if err != nil { + return nil, nil, err + } + defer resp.Body.Close() + + qm := &QueryMeta{} + parseQueryMeta(resp, qm) + qm.RequestTime = rtt + + var out CARootList + if err := decodeBody(resp, &out); err != nil { + return nil, nil, err + } + return &out, qm, nil +} diff --git a/api/connect_test.go b/api/connect_test.go new file mode 100644 index 000000000..3ad7cb078 --- /dev/null +++ b/api/connect_test.go @@ -0,0 +1,26 @@ +package api + +import ( + "testing" + + "github.com/stretchr/testify/require" +) + +// NOTE(mitchellh): we don't have a way to test CA roots yet since there +// is no API public way to configure the root certs. This wll be resolved +// in the future and we can write tests then. This is tested in agent and +// agent/consul which do have internal access to manually create roots. 
+ +func TestAPI_ConnectCARoots_empty(t *testing.T) { + t.Parallel() + + require := require.New(t) + c, s := makeClient(t) + defer s.Stop() + + connect := c.Connect() + list, meta, err := connect.CARoots(nil) + require.Nil(err) + require.Equal(uint64(0), meta.LastIndex) + require.Len(list.Roots, 0) +} From 9c33068394bba45890cd440538154759cfb590e5 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 27 Mar 2018 21:33:05 -0700 Subject: [PATCH 104/539] api: starting intention endpoints, reorganize files slightly --- api/connect.go | 53 --------- api/connect_ca.go | 80 ++++++++++++++ api/{connect_test.go => connect_ca_test.go} | 0 api/connect_intention.go | 112 ++++++++++++++++++++ api/connect_intention_test.go | 48 +++++++++ 5 files changed, 240 insertions(+), 53 deletions(-) create mode 100644 api/connect_ca.go rename api/{connect_test.go => connect_ca_test.go} (100%) create mode 100644 api/connect_intention.go create mode 100644 api/connect_intention_test.go diff --git a/api/connect.go b/api/connect.go index 0f75a45fa..4b4e06900 100644 --- a/api/connect.go +++ b/api/connect.go @@ -1,37 +1,5 @@ package api -import ( - "time" -) - -// CARootList is the structure for the results of listing roots. -type CARootList struct { - ActiveRootID string - Roots []*CARoot -} - -// CARoot is a single CA within Connect. -type CARoot struct { - ID string - Name string - RootCert string - Active bool - CreateIndex uint64 - ModifyIndex uint64 -} - -type IssuedCert struct { - SerialNumber string - CertPEM string - PrivateKeyPEM string - Service string - ServiceURI string - ValidAfter time.Time - ValidBefore time.Time - CreateIndex uint64 - ModifyIndex uint64 -} - // Connect can be used to work with endpoints related to Connect, the // feature for securely connecting services within Consul. type Connect struct { @@ -42,24 +10,3 @@ type Connect struct { func (c *Client) Connect() *Connect { return &Connect{c} } - -// CARoots queries the list of available roots. -func (h *Connect) CARoots(q *QueryOptions) (*CARootList, *QueryMeta, error) { - r := h.c.newRequest("GET", "/v1/connect/ca/roots") - r.setQueryOptions(q) - rtt, resp, err := requireOK(h.c.doRequest(r)) - if err != nil { - return nil, nil, err - } - defer resp.Body.Close() - - qm := &QueryMeta{} - parseQueryMeta(resp, qm) - qm.RequestTime = rtt - - var out CARootList - if err := decodeBody(resp, &out); err != nil { - return nil, nil, err - } - return &out, qm, nil -} diff --git a/api/connect_ca.go b/api/connect_ca.go new file mode 100644 index 000000000..19046c2ab --- /dev/null +++ b/api/connect_ca.go @@ -0,0 +1,80 @@ +package api + +import ( + "time" +) + +// CARootList is the structure for the results of listing roots. +type CARootList struct { + ActiveRootID string + Roots []*CARoot +} + +// CARoot represents a root CA certificate that is trusted. +type CARoot struct { + // ID is a globally unique ID (UUID) representing this CA root. + ID string + + // Name is a human-friendly name for this CA root. This value is + // opaque to Consul and is not used for anything internally. + Name string + + // RootCert is the PEM-encoded public certificate. + RootCert string + + // Active is true if this is the current active CA. This must only + // be true for exactly one CA. For any method that modifies roots in the + // state store, tests should be written to verify that multiple roots + // cannot be active. + Active bool + + CreateIndex uint64 + ModifyIndex uint64 +} + +// IssuedCert is a certificate that has been issued by a Connect CA. 
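// Sketch, not from the patch: the PEM fields on IssuedCert below are what a
// service would hand to the standard library to present its Connect identity
// (issued here is assumed to come from Agent.ConnectCALeaf):
//
//	cert, err := tls.X509KeyPair([]byte(issued.CertPEM), []byte(issued.PrivateKeyPEM))
//	if err != nil {
//		// handle the error
//	}
//	cfg := &tls.Config{Certificates: []tls.Certificate{cert}}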
+type IssuedCert struct { + // SerialNumber is the unique serial number for this certificate. + // This is encoded in standard hex separated by :. + SerialNumber string + + // CertPEM and PrivateKeyPEM are the PEM-encoded certificate and private + // key for that cert, respectively. This should not be stored in the + // state store, but is present in the sign API response. + CertPEM string `json:",omitempty"` + PrivateKeyPEM string `json:",omitempty"` + + // Service is the name of the service for which the cert was issued. + // ServiceURI is the cert URI value. + Service string + ServiceURI string + + // ValidAfter and ValidBefore are the validity periods for the + // certificate. + ValidAfter time.Time + ValidBefore time.Time + + CreateIndex uint64 + ModifyIndex uint64 +} + +// CARoots queries the list of available roots. +func (h *Connect) CARoots(q *QueryOptions) (*CARootList, *QueryMeta, error) { + r := h.c.newRequest("GET", "/v1/connect/ca/roots") + r.setQueryOptions(q) + rtt, resp, err := requireOK(h.c.doRequest(r)) + if err != nil { + return nil, nil, err + } + defer resp.Body.Close() + + qm := &QueryMeta{} + parseQueryMeta(resp, qm) + qm.RequestTime = rtt + + var out CARootList + if err := decodeBody(resp, &out); err != nil { + return nil, nil, err + } + return &out, qm, nil +} diff --git a/api/connect_test.go b/api/connect_ca_test.go similarity index 100% rename from api/connect_test.go rename to api/connect_ca_test.go diff --git a/api/connect_intention.go b/api/connect_intention.go new file mode 100644 index 000000000..b138dd4ae --- /dev/null +++ b/api/connect_intention.go @@ -0,0 +1,112 @@ +package api + +import ( + "time" +) + +// Intention defines an intention for the Connect Service Graph. This defines +// the allowed or denied behavior of a connection between two services using +// Connect. +type Intention struct { + // ID is the UUID-based ID for the intention, always generated by Consul. + ID string + + // Description is a human-friendly description of this intention. + // It is opaque to Consul and is only stored and transferred in API + // requests. + Description string + + // SourceNS, SourceName are the namespace and name, respectively, of + // the source service. Either of these may be the wildcard "*", but only + // the full value can be a wildcard. Partial wildcards are not allowed. + // The source may also be a non-Consul service, as specified by SourceType. + // + // DestinationNS, DestinationName is the same, but for the destination + // service. The same rules apply. The destination is always a Consul + // service. + SourceNS, SourceName string + DestinationNS, DestinationName string + + // SourceType is the type of the value for the source. + SourceType IntentionSourceType + + // Action is whether this is a whitelist or blacklist intention. + Action IntentionAction + + // DefaultAddr, DefaultPort of the local listening proxy (if any) to + // make this connection. + DefaultAddr string + DefaultPort int + + // Meta is arbitrary metadata associated with the intention. This is + // opaque to Consul but is served in API responses. + Meta map[string]string + + // CreatedAt and UpdatedAt keep track of when this record was created + // or modified. + CreatedAt, UpdatedAt time.Time + + CreateIndex uint64 + ModifyIndex uint64 +} + +// IntentionAction is the action that the intention represents. This +// can be "allow" or "deny" to whitelist or blacklist intentions. 
+type IntentionAction string + +const ( + IntentionActionAllow IntentionAction = "allow" + IntentionActionDeny IntentionAction = "deny" +) + +// IntentionSourceType is the type of the source within an intention. +type IntentionSourceType string + +const ( + // IntentionSourceConsul is a service within the Consul catalog. + IntentionSourceConsul IntentionSourceType = "consul" +) + +// Intentions returns the list of intentions. +func (h *Connect) Intentions(q *QueryOptions) ([]*Intention, *QueryMeta, error) { + r := h.c.newRequest("GET", "/v1/connect/intentions") + r.setQueryOptions(q) + rtt, resp, err := requireOK(h.c.doRequest(r)) + if err != nil { + return nil, nil, err + } + defer resp.Body.Close() + + qm := &QueryMeta{} + parseQueryMeta(resp, qm) + qm.RequestTime = rtt + + var out []*Intention + if err := decodeBody(resp, &out); err != nil { + return nil, nil, err + } + return out, qm, nil +} + +// IntentionCreate will create a new intention. The ID in the given +// structure must be empty and a generate ID will be returned on +// success. +func (c *Connect) IntentionCreate(ixn *Intention, q *WriteOptions) (string, *WriteMeta, error) { + r := c.c.newRequest("POST", "/v1/connect/intentions") + r.setWriteOptions(q) + r.obj = ixn + rtt, resp, err := requireOK(c.c.doRequest(r)) + if err != nil { + return "", nil, err + } + defer resp.Body.Close() + + wm := &WriteMeta{} + wm.RequestTime = rtt + + var out struct{ ID string } + if err := decodeBody(resp, &out); err != nil { + return "", nil, err + } + return out.ID, wm, nil +} diff --git a/api/connect_intention_test.go b/api/connect_intention_test.go new file mode 100644 index 000000000..2fc742602 --- /dev/null +++ b/api/connect_intention_test.go @@ -0,0 +1,48 @@ +package api + +import ( + "testing" + + "github.com/stretchr/testify/require" +) + +func TestAPI_ConnectIntentionCreate(t *testing.T) { + t.Parallel() + + require := require.New(t) + c, s := makeClient(t) + defer s.Stop() + + connect := c.Connect() + + // Create + ixn := testIntention() + id, _, err := connect.IntentionCreate(ixn, nil) + require.Nil(err) + require.NotEmpty(id) + + // List it + list, _, err := connect.Intentions(nil) + require.Nil(err) + require.Len(list, 1) + + actual := list[0] + ixn.ID = id + ixn.CreatedAt = actual.CreatedAt + ixn.UpdatedAt = actual.UpdatedAt + ixn.CreateIndex = actual.CreateIndex + ixn.ModifyIndex = actual.ModifyIndex + require.Equal(ixn, actual) +} + +func testIntention() *Intention { + return &Intention{ + SourceNS: "eng", + SourceName: "api", + DestinationNS: "eng", + DestinationName: "db", + Action: IntentionActionAllow, + SourceType: IntentionSourceConsul, + Meta: map[string]string{}, + } +} From c0894f0f50a961ec0a0395ab0bb11d7c9dd4aee3 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Mar 2018 10:14:32 -0700 Subject: [PATCH 105/539] api: IntentionMatch --- api/connect_intention.go | 68 +++++++++++++++++++++++++++++++++++ api/connect_intention_test.go | 54 +++++++++++++++++++++++++++- 2 files changed, 121 insertions(+), 1 deletion(-) diff --git a/api/connect_intention.go b/api/connect_intention.go index b138dd4ae..aa2f82d3d 100644 --- a/api/connect_intention.go +++ b/api/connect_intention.go @@ -67,6 +67,22 @@ const ( IntentionSourceConsul IntentionSourceType = "consul" ) +// IntentionMatch are the arguments for the intention match API. +type IntentionMatch struct { + By IntentionMatchType + Names []string +} + +// IntentionMatchType is the target for a match request. 
For example, +// matching by source will look for all intentions that match the given +// source value. +type IntentionMatchType string + +const ( + IntentionMatchSource IntentionMatchType = "source" + IntentionMatchDestination IntentionMatchType = "destination" +) + // Intentions returns the list of intentions. func (h *Connect) Intentions(q *QueryOptions) ([]*Intention, *QueryMeta, error) { r := h.c.newRequest("GET", "/v1/connect/intentions") @@ -88,6 +104,58 @@ func (h *Connect) Intentions(q *QueryOptions) ([]*Intention, *QueryMeta, error) return out, qm, nil } +// IntentionGet retrieves a single intention. +func (h *Connect) IntentionGet(id string, q *QueryOptions) (*Intention, *QueryMeta, error) { + r := h.c.newRequest("GET", "/v1/connect/intentions/"+id) + r.setQueryOptions(q) + rtt, resp, err := requireOK(h.c.doRequest(r)) + if err != nil { + return nil, nil, err + } + defer resp.Body.Close() + + qm := &QueryMeta{} + parseQueryMeta(resp, qm) + qm.RequestTime = rtt + + var out Intention + if err := decodeBody(resp, &out); err != nil { + return nil, nil, err + } + return &out, qm, nil +} + +// IntentionMatch returns the list of intentions that match a given source +// or destination. The returned intentions are ordered by precedence where +// result[0] is the highest precedence (if that matches, then that rule overrides +// all other rules). +// +// Matching can be done for multiple names at the same time. The resulting +// map is keyed by the given names. Casing is preserved. +func (h *Connect) IntentionMatch(args *IntentionMatch, q *QueryOptions) (map[string][]*Intention, *QueryMeta, error) { + r := h.c.newRequest("GET", "/v1/connect/intentions/match") + r.setQueryOptions(q) + r.params.Set("by", string(args.By)) + for _, name := range args.Names { + r.params.Add("name", name) + } + rtt, resp, err := requireOK(h.c.doRequest(r)) + if err != nil { + return nil, nil, err + } + defer resp.Body.Close() + + qm := &QueryMeta{} + parseQueryMeta(resp, qm) + qm.RequestTime = rtt + + var out map[string][]*Intention + if err := decodeBody(resp, &out); err != nil { + return nil, nil, err + } + return out, qm, nil +} + // IntentionCreate will create a new intention. The ID in the given // structure must be empty and a generate ID will be returned on // success. 
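To make the precedence ordering described above concrete, here is a minimal sketch of calling the match endpoint from the Go `api` package. It assumes a local agent with some intentions already created (for example via `IntentionCreate`); the program structure is illustrative, while `IntentionMatch`, `IntentionMatchDestination`, and the `Intention` fields are those defined in this patch.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	connect := client.Connect()

	// Find every intention whose destination matches the "db" service.
	// Results come back keyed by the requested name, ordered from
	// highest to lowest precedence.
	matches, _, err := connect.IntentionMatch(&api.IntentionMatch{
		By:    api.IntentionMatchDestination,
		Names: []string{"db"},
	}, nil)
	if err != nil {
		log.Fatal(err)
	}

	for _, ixn := range matches["db"] {
		fmt.Printf("%s/%s -> %s/%s: %s\n",
			ixn.SourceNS, ixn.SourceName,
			ixn.DestinationNS, ixn.DestinationName,
			ixn.Action)
	}
}
```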
diff --git a/api/connect_intention_test.go b/api/connect_intention_test.go index 2fc742602..0edcf4c49 100644 --- a/api/connect_intention_test.go +++ b/api/connect_intention_test.go @@ -6,7 +6,7 @@ import ( "github.com/stretchr/testify/require" ) -func TestAPI_ConnectIntentionCreate(t *testing.T) { +func TestAPI_ConnectIntentionCreateListGet(t *testing.T) { t.Parallel() require := require.New(t) @@ -33,6 +33,58 @@ func TestAPI_ConnectIntentionCreate(t *testing.T) { ixn.CreateIndex = actual.CreateIndex ixn.ModifyIndex = actual.ModifyIndex require.Equal(ixn, actual) + + // Get it + actual, _, err = connect.IntentionGet(id, nil) + require.Nil(err) + require.Equal(ixn, actual) +} + +func TestAPI_ConnectIntentionMatch(t *testing.T) { + t.Parallel() + + require := require.New(t) + c, s := makeClient(t) + defer s.Stop() + + connect := c.Connect() + + // Create + { + insert := [][]string{ + {"foo", "*"}, + {"foo", "bar"}, + {"foo", "baz"}, // shouldn't match + {"bar", "bar"}, // shouldn't match + {"bar", "*"}, // shouldn't match + {"*", "*"}, + } + + for _, v := range insert { + ixn := testIntention() + ixn.DestinationNS = v[0] + ixn.DestinationName = v[1] + id, _, err := connect.IntentionCreate(ixn, nil) + require.Nil(err) + require.NotEmpty(id) + } + } + + // Match it + result, _, err := connect.IntentionMatch(&IntentionMatch{ + By: IntentionMatchDestination, + Names: []string{"foo/bar"}, + }, nil) + require.Nil(err) + require.Len(result, 1) + + var actual [][]string + expected := [][]string{{"foo", "bar"}, {"foo", "*"}, {"*", "*"}} + for _, ixn := range result["foo/bar"] { + actual = append(actual, []string{ixn.DestinationNS, ixn.DestinationName}) + } + + require.Equal(expected, actual) } func testIntention() *Intention { From 9de861d722e4b04feddbe51d10cafd289561a3e5 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Mar 2018 14:16:41 -0700 Subject: [PATCH 106/539] api: fix up some comments and rename IssuedCert to LeafCert --- api/agent.go | 4 ++-- api/connect.go | 2 +- api/connect_ca.go | 8 ++++---- 3 files changed, 7 insertions(+), 7 deletions(-) diff --git a/api/agent.go b/api/agent.go index 860483671..7a810bcab 100644 --- a/api/agent.go +++ b/api/agent.go @@ -567,7 +567,7 @@ func (a *Agent) ConnectCARoots(q *QueryOptions) (*CARootList, *QueryMeta, error) // // TODO(mitchellh): we need to test this better once we have a way to // configure CAs from the API package (when the CA work is done). 
-func (a *Agent) ConnectCALeaf(serviceID string, q *QueryOptions) (*IssuedCert, *QueryMeta, error) { +func (a *Agent) ConnectCALeaf(serviceID string, q *QueryOptions) (*LeafCert, *QueryMeta, error) { r := a.c.newRequest("GET", "/v1/agent/connect/ca/leaf/"+serviceID) r.setQueryOptions(q) rtt, resp, err := requireOK(a.c.doRequest(r)) @@ -580,7 +580,7 @@ func (a *Agent) ConnectCALeaf(serviceID string, q *QueryOptions) (*IssuedCert, * parseQueryMeta(resp, qm) qm.RequestTime = rtt - var out IssuedCert + var out LeafCert if err := decodeBody(resp, &out); err != nil { return nil, nil, err } diff --git a/api/connect.go b/api/connect.go index 4b4e06900..a40d1e232 100644 --- a/api/connect.go +++ b/api/connect.go @@ -6,7 +6,7 @@ type Connect struct { c *Client } -// Health returns a handle to the health endpoints +// Connect returns a handle to the connect-related endpoints func (c *Client) Connect() *Connect { return &Connect{c} } diff --git a/api/connect_ca.go b/api/connect_ca.go index 19046c2ab..00951c75d 100644 --- a/api/connect_ca.go +++ b/api/connect_ca.go @@ -19,8 +19,8 @@ type CARoot struct { // opaque to Consul and is not used for anything internally. Name string - // RootCert is the PEM-encoded public certificate. - RootCert string + // RootCertPEM is the PEM-encoded public certificate. + RootCertPEM string `json:"RootCert"` // Active is true if this is the current active CA. This must only // be true for exactly one CA. For any method that modifies roots in the @@ -32,8 +32,8 @@ type CARoot struct { ModifyIndex uint64 } -// IssuedCert is a certificate that has been issued by a Connect CA. -type IssuedCert struct { +// LeafCert is a certificate that has been issued by a Connect CA. +type LeafCert struct { // SerialNumber is the unique serial number for this certificate. // This is encoded in standard hex separated by :. SerialNumber string From 26f254fac0fbb1e22fb69b1fde710573498ed19a Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Mar 2018 14:20:25 -0700 Subject: [PATCH 107/539] api: rename Authorize field to ClientCertURI --- api/agent.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/api/agent.go b/api/agent.go index 7a810bcab..50d334d71 100644 --- a/api/agent.go +++ b/api/agent.go @@ -175,7 +175,7 @@ type SampledValue struct { // AgentAuthorizeParams are the request parameters for authorizing a request. 
type AgentAuthorizeParams struct { Target string - ClientID string + ClientCertURI string ClientCertSerial string } From 62b746c380f7f94158610dc72b0baac52439d1f2 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 28 Mar 2018 14:29:35 -0700 Subject: [PATCH 108/539] agent: rename authorize param ClientID to ClientCertURI --- agent/agent_endpoint.go | 2 +- agent/agent_endpoint_test.go | 28 ++++++++++++++-------------- agent/structs/connect.go | 4 ++-- 3 files changed, 17 insertions(+), 17 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index a6b67816d..722909467 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -906,7 +906,7 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R } // Parse the certificate URI from the client ID - uriRaw, err := url.Parse(authReq.ClientID) + uriRaw, err := url.Parse(authReq.ClientCertURI) if err != nil { return &connectAuthorizeResp{ Authorized: false, diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index bc59f3700..1b017fa78 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2172,8 +2172,8 @@ func TestAgentConnectAuthorize_idInvalidFormat(t *testing.T) { defer a.Shutdown() args := &structs.ConnectAuthorizeRequest{ - Target: "web", - ClientID: "tubes", + Target: "web", + ClientCertURI: "tubes", } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() @@ -2195,8 +2195,8 @@ func TestAgentConnectAuthorize_idNotService(t *testing.T) { defer a.Shutdown() args := &structs.ConnectAuthorizeRequest{ - Target: "web", - ClientID: "spiffe://1234.consul", + Target: "web", + ClientCertURI: "spiffe://1234.consul", } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() @@ -2237,8 +2237,8 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { } args := &structs.ConnectAuthorizeRequest{ - Target: target, - ClientID: connect.TestSpiffeIDService(t, "web").URI().String(), + Target: target, + ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() @@ -2279,8 +2279,8 @@ func TestAgentConnectAuthorize_deny(t *testing.T) { } args := &structs.ConnectAuthorizeRequest{ - Target: target, - ClientID: connect.TestSpiffeIDService(t, "web").URI().String(), + Target: target, + ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() @@ -2320,8 +2320,8 @@ func TestAgentConnectAuthorize_serviceWrite(t *testing.T) { } args := &structs.ConnectAuthorizeRequest{ - Target: "foo", - ClientID: connect.TestSpiffeIDService(t, "web").URI().String(), + Target: "foo", + ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize?token="+token, jsonReader(args)) @@ -2339,8 +2339,8 @@ func TestAgentConnectAuthorize_defaultDeny(t *testing.T) { defer a.Shutdown() args := &structs.ConnectAuthorizeRequest{ - Target: "foo", - ClientID: connect.TestSpiffeIDService(t, "web").URI().String(), + Target: "foo", + ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize?token=root", jsonReader(args)) resp := httptest.NewRecorder() @@ -2369,8 +2369,8 @@ func 
TestAgentConnectAuthorize_defaultAllow(t *testing.T) { defer a.Shutdown() args := &structs.ConnectAuthorizeRequest{ - Target: "foo", - ClientID: connect.TestSpiffeIDService(t, "web").URI().String(), + Target: "foo", + ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize?token=root", jsonReader(args)) resp := httptest.NewRecorder() diff --git a/agent/structs/connect.go b/agent/structs/connect.go index 1a2e03da8..7f08615d3 100644 --- a/agent/structs/connect.go +++ b/agent/structs/connect.go @@ -6,12 +6,12 @@ type ConnectAuthorizeRequest struct { // Target is the name of the service that is being requested. Target string - // ClientID is a unique identifier for the requesting client. This + // ClientCertURI is a unique identifier for the requesting client. This // is currently the URI SAN from the TLS client certificate. // // ClientCertSerial is a colon-hex-encoded of the serial number for // the requesting client cert. This is used to check against revocation // lists. - ClientID string + ClientCertURI string ClientCertSerial string } From 800deb693c542af3413c0bba353a2f92b9b7b0f4 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Wed, 21 Mar 2018 22:35:00 +0000 Subject: [PATCH 109/539] Original proxy and connect.Client implementation. Working end to end. --- command/commands_oss.go | 4 + command/connect/connect.go | 40 +++ command/connect/connect_test.go | 13 + command/connect/proxy/proxy.go | 166 +++++++++++ command/connect/proxy/proxy_test.go | 1 + connect/auth.go | 43 +++ connect/client.go | 256 +++++++++++++++++ connect/client_test.go | 148 ++++++++++ .../testdata/ca1-ca-consul-internal.cert.pem | 14 + .../testdata/ca1-ca-consul-internal.key.pem | 5 + connect/testdata/ca1-svc-cache.cert.pem | 14 + connect/testdata/ca1-svc-cache.key.pem | 5 + connect/testdata/ca1-svc-db.cert.pem | 13 + connect/testdata/ca1-svc-db.key.pem | 5 + connect/testdata/ca1-svc-web.cert.pem | 13 + connect/testdata/ca1-svc-web.key.pem | 5 + connect/testdata/ca2-ca-vault.cert.pem | 14 + connect/testdata/ca2-ca-vault.key.pem | 5 + connect/testdata/ca2-svc-cache.cert.pem | 13 + connect/testdata/ca2-svc-cache.key.pem | 5 + connect/testdata/ca2-svc-db.cert.pem | 13 + connect/testdata/ca2-svc-db.key.pem | 5 + connect/testdata/ca2-svc-web.cert.pem | 13 + connect/testdata/ca2-svc-web.key.pem | 5 + connect/testdata/ca2-xc-by-ca1.cert.pem | 14 + connect/testdata/mkcerts.go | 243 ++++++++++++++++ connect/testing.go | 88 ++++++ connect/tls.go | 124 +++++++++ connect/tls_test.go | 45 +++ proxy/config.go | 111 ++++++++ proxy/config_test.go | 46 +++ proxy/conn.go | 48 ++++ proxy/conn_test.go | 119 ++++++++ proxy/manager.go | 140 ++++++++++ proxy/manager_test.go | 76 +++++ proxy/proxier.go | 32 +++ proxy/proxy.go | 112 ++++++++ proxy/public_listener.go | 119 ++++++++ proxy/public_listener_test.go | 38 +++ proxy/runner.go | 118 ++++++++ proxy/testdata/config-kitchensink.hcl | 36 +++ proxy/testing.go | 170 ++++++++++++ proxy/upstream.go | 261 ++++++++++++++++++ proxy/upstream_test.go | 75 +++++ 44 files changed, 2833 insertions(+) create mode 100644 command/connect/connect.go create mode 100644 command/connect/connect_test.go create mode 100644 command/connect/proxy/proxy.go create mode 100644 command/connect/proxy/proxy_test.go create mode 100644 connect/auth.go create mode 100644 connect/client.go create mode 100644 connect/client_test.go create mode 100644 connect/testdata/ca1-ca-consul-internal.cert.pem create mode 100644 
connect/testdata/ca1-ca-consul-internal.key.pem create mode 100644 connect/testdata/ca1-svc-cache.cert.pem create mode 100644 connect/testdata/ca1-svc-cache.key.pem create mode 100644 connect/testdata/ca1-svc-db.cert.pem create mode 100644 connect/testdata/ca1-svc-db.key.pem create mode 100644 connect/testdata/ca1-svc-web.cert.pem create mode 100644 connect/testdata/ca1-svc-web.key.pem create mode 100644 connect/testdata/ca2-ca-vault.cert.pem create mode 100644 connect/testdata/ca2-ca-vault.key.pem create mode 100644 connect/testdata/ca2-svc-cache.cert.pem create mode 100644 connect/testdata/ca2-svc-cache.key.pem create mode 100644 connect/testdata/ca2-svc-db.cert.pem create mode 100644 connect/testdata/ca2-svc-db.key.pem create mode 100644 connect/testdata/ca2-svc-web.cert.pem create mode 100644 connect/testdata/ca2-svc-web.key.pem create mode 100644 connect/testdata/ca2-xc-by-ca1.cert.pem create mode 100644 connect/testdata/mkcerts.go create mode 100644 connect/testing.go create mode 100644 connect/tls.go create mode 100644 connect/tls_test.go create mode 100644 proxy/config.go create mode 100644 proxy/config_test.go create mode 100644 proxy/conn.go create mode 100644 proxy/conn_test.go create mode 100644 proxy/manager.go create mode 100644 proxy/manager_test.go create mode 100644 proxy/proxier.go create mode 100644 proxy/proxy.go create mode 100644 proxy/public_listener.go create mode 100644 proxy/public_listener_test.go create mode 100644 proxy/runner.go create mode 100644 proxy/testdata/config-kitchensink.hcl create mode 100644 proxy/testing.go create mode 100644 proxy/upstream.go create mode 100644 proxy/upstream_test.go diff --git a/command/commands_oss.go b/command/commands_oss.go index 43fbeb29c..c1e3e794a 100644 --- a/command/commands_oss.go +++ b/command/commands_oss.go @@ -6,6 +6,8 @@ import ( catlistdc "github.com/hashicorp/consul/command/catalog/list/dc" catlistnodes "github.com/hashicorp/consul/command/catalog/list/nodes" catlistsvc "github.com/hashicorp/consul/command/catalog/list/services" + "github.com/hashicorp/consul/command/connect" + "github.com/hashicorp/consul/command/connect/proxy" "github.com/hashicorp/consul/command/event" "github.com/hashicorp/consul/command/exec" "github.com/hashicorp/consul/command/forceleave" @@ -58,6 +60,8 @@ func init() { Register("catalog datacenters", func(ui cli.Ui) (cli.Command, error) { return catlistdc.New(ui), nil }) Register("catalog nodes", func(ui cli.Ui) (cli.Command, error) { return catlistnodes.New(ui), nil }) Register("catalog services", func(ui cli.Ui) (cli.Command, error) { return catlistsvc.New(ui), nil }) + Register("connect", func(ui cli.Ui) (cli.Command, error) { return connect.New(), nil }) + Register("connect proxy", func(ui cli.Ui) (cli.Command, error) { return proxy.New(ui, MakeShutdownCh()), nil }) Register("event", func(ui cli.Ui) (cli.Command, error) { return event.New(ui), nil }) Register("exec", func(ui cli.Ui) (cli.Command, error) { return exec.New(ui, MakeShutdownCh()), nil }) Register("force-leave", func(ui cli.Ui) (cli.Command, error) { return forceleave.New(ui), nil }) diff --git a/command/connect/connect.go b/command/connect/connect.go new file mode 100644 index 000000000..60c238876 --- /dev/null +++ b/command/connect/connect.go @@ -0,0 +1,40 @@ +package connect + +import ( + "github.com/hashicorp/consul/command/flags" + "github.com/mitchellh/cli" +) + +func New() *cmd { + return &cmd{} +} + +type cmd struct{} + +func (c *cmd) Run(args []string) int { + return cli.RunResultHelp +} + +func (c *cmd) 
Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return flags.Usage(help, nil) +} + +const synopsis = "Interact with Consul Connect" +const help = ` +Usage: consul connect [options] [args] + + This command has subcommands for interacting with Consul Connect. + + Here are some simple examples, and more detailed examples are available + in the subcommands or the documentation. + + Run the built-in Connect mTLS proxy + + $ consul connect proxy + + For more examples, ask for subcommand help or view the documentation. +` diff --git a/command/connect/connect_test.go b/command/connect/connect_test.go new file mode 100644 index 000000000..95c8ebd58 --- /dev/null +++ b/command/connect/connect_test.go @@ -0,0 +1,13 @@ +package connect + +import ( + "strings" + "testing" +) + +func TestCatalogCommand_noTabs(t *testing.T) { + t.Parallel() + if strings.ContainsRune(New().Help(), '\t') { + t.Fatal("help has tabs") + } +} diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go new file mode 100644 index 000000000..237f4b7e2 --- /dev/null +++ b/command/connect/proxy/proxy.go @@ -0,0 +1,166 @@ +package proxy + +import ( + "context" + "flag" + "fmt" + "io" + "log" + "net/http" + // Expose pprof if configured + _ "net/http/pprof" + + "github.com/hashicorp/consul/command/flags" + proxyImpl "github.com/hashicorp/consul/proxy" + + "github.com/hashicorp/consul/logger" + "github.com/hashicorp/logutils" + "github.com/mitchellh/cli" +) + +func New(ui cli.Ui, shutdownCh <-chan struct{}) *cmd { + c := &cmd{UI: ui, shutdownCh: shutdownCh} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + shutdownCh <-chan struct{} + + logFilter *logutils.LevelFilter + logOutput io.Writer + logger *log.Logger + + // flags + logLevel string + cfgFile string + proxyID string + pprofAddr string +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + + c.flags.StringVar(&c.cfgFile, "insecure-dev-config", "", + "If set, proxy config is read on startup from this file (in HCL or JSON"+ + "format). If a config file is given, the proxy will use that instead of "+ + "querying the local agent for it's configuration. It will not reload it "+ + "except on startup. In this mode the proxy WILL NOT authorize incoming "+ + "connections with the local agent which is totally insecure. This is "+ + "ONLY for development and testing.") + + c.flags.StringVar(&c.proxyID, "proxy-id", "", + "The proxy's ID on the local agent.") + + c.flags.StringVar(&c.logLevel, "log-level", "INFO", + "Specifies the log level.") + + c.flags.StringVar(&c.pprofAddr, "pprof-addr", "", + "Enable debugging via pprof. 
Providing a host:port (or just ':port') "+ + "enables profiling HTTP endpoints on that address.") + + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.ServerFlags()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + return 1 + } + + // Setup the log outputs + logConfig := &logger.Config{ + LogLevel: c.logLevel, + } + logFilter, logGate, _, logOutput, ok := logger.Setup(logConfig, c.UI) + if !ok { + return 1 + } + c.logFilter = logFilter + c.logOutput = logOutput + c.logger = log.New(logOutput, "", log.LstdFlags) + + // Enable Pprof if needed + if c.pprofAddr != "" { + go func() { + c.UI.Output(fmt.Sprintf("Starting pprof HTTP endpoints on "+ + "http://%s/debug/pprof", c.pprofAddr)) + log.Fatal(http.ListenAndServe(c.pprofAddr, nil)) + }() + } + + // Setup Consul client + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err)) + return 1 + } + + var p *proxyImpl.Proxy + if c.cfgFile != "" { + c.UI.Info("Configuring proxy locally from " + c.cfgFile) + + p, err = proxyImpl.NewFromConfigFile(client, c.cfgFile, c.logger) + if err != nil { + c.UI.Error(fmt.Sprintf("Failed configuring from file: %s", err)) + return 1 + } + + } else { + p, err = proxyImpl.New(client, c.proxyID, c.logger) + if err != nil { + c.UI.Error(fmt.Sprintf("Failed configuring from agent: %s", err)) + return 1 + } + } + + ctx, cancel := context.WithCancel(context.Background()) + go func() { + err := p.Run(ctx) + if err != nil { + c.UI.Error(fmt.Sprintf("Failed running proxy: %s", err)) + } + // If we exited early due to a fatal error, need to unblock the main + // routine. But we can't close shutdownCh since it might already be closed + // by a signal and there is no way to tell. We also can't send on it to + // unblock main routine since it's typed as receive only. So the best thing + // we can do is cancel the context and have the main routine select on both. + cancel() + }() + + c.UI.Output("Consul Connect proxy running!") + + c.UI.Output("Log data will now stream in as it occurs:\n") + logGate.Flush() + + // Wait for shutdown or context cancel (see Run() goroutine above) + select { + case <-c.shutdownCh: + cancel() + case <-ctx.Done(): + } + c.UI.Output("Consul Connect proxy shutdown") + return 0 +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return c.help +} + +const synopsis = "Runs a Consul Connect proxy" +const help = ` +Usage: consul proxy [options] + + Starts a Consul Connect proxy and runs until an interrupt is received. +` diff --git a/command/connect/proxy/proxy_test.go b/command/connect/proxy/proxy_test.go new file mode 100644 index 000000000..943b369ff --- /dev/null +++ b/command/connect/proxy/proxy_test.go @@ -0,0 +1 @@ +package proxy diff --git a/connect/auth.go b/connect/auth.go new file mode 100644 index 000000000..73c16f0bf --- /dev/null +++ b/connect/auth.go @@ -0,0 +1,43 @@ +package connect + +import "crypto/x509" + +// Auther is the interface that provides both Authentication and Authorization +// for an mTLS connection. It's only method is compatible with +// tls.Config.VerifyPeerCertificate. +type Auther interface { + // Auth is called during tls Connection establishment to Authenticate and + // Authorize the presented peer. 
Note that verifiedChains must not be relied + // upon as we typically have to skip Go's internal verification so the + // implementation takes full responsibility to validating the certificate + // against known roots. It is also up to the user of the interface to ensure + // appropriate validation is performed for client or server end by arranging + // for an appropriate implementation to be hooked into the tls.Config used. + Auth(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error +} + +// ClientAuther is used to auth Clients connecting to a Server. +type ClientAuther struct{} + +// Auth implements Auther +func (a *ClientAuther) Auth(rawCerts [][]byte, + verifiedChains [][]*x509.Certificate) error { + + // TODO(banks): implement path validation and AuthZ + return nil +} + +// ServerAuther is used to auth the Server identify from a connecting Client. +type ServerAuther struct { + // TODO(banks): We'll need a way to pass the expected service identity (name, + // namespace, dc, cluster) here based on discovery result. +} + +// Auth implements Auther +func (a *ServerAuther) Auth(rawCerts [][]byte, + verifiedChains [][]*x509.Certificate) error { + + // TODO(banks): implement path validation and verify URI matches the target + // service we intended to connect to. + return nil +} diff --git a/connect/client.go b/connect/client.go new file mode 100644 index 000000000..867bf0db5 --- /dev/null +++ b/connect/client.go @@ -0,0 +1,256 @@ +package connect + +import ( + "context" + "crypto/tls" + "fmt" + "math/rand" + "net" + + "github.com/hashicorp/consul/api" +) + +// CertStatus indicates whether the Client currently has valid certificates for +// incoming and outgoing connections. +type CertStatus int + +const ( + // CertStatusUnknown is the zero value for CertStatus which may be returned + // when a watch channel is closed on shutdown. It has no other meaning. + CertStatusUnknown CertStatus = iota + + // CertStatusOK indicates the client has valid certificates and trust roots to + // Authenticate incoming and outgoing connections. + CertStatusOK + + // CertStatusPending indicates the client is waiting to be issued initial + // certificates, or that it's certificates have expired and it's waiting to be + // issued new ones. In this state all incoming and outgoing connections will + // fail. + CertStatusPending +) + +func (s CertStatus) String() string { + switch s { + case CertStatusOK: + return "OK" + case CertStatusPending: + return "pending" + case CertStatusUnknown: + fallthrough + default: + return "unknown" + } +} + +// Client is the interface a basic client implementation must support. +type Client interface { + // TODO(banks): build this and test it + // CertStatus returns the current status of the client's certificates. It can + // be used to determine if the Client is able to service requests at the + // current time. + //CertStatus() CertStatus + + // TODO(banks): build this and test it + // WatchCertStatus returns a channel that is notified on all status changes. + // Note that a message on the channel isn't guaranteed to be different so it's + // value should be inspected. During Client shutdown the channel will be + // closed returning a zero type which is equivalent to CertStatusUnknown. + //WatchCertStatus() <-chan CertStatus + + // ServerTLSConfig returns the *tls.Config to be used when creating a TCP + // listener that should accept Connect connections. 
It is likely that at + // startup the tlsCfg returned will not be immediately usable since + // certificates are typically fetched from the agent asynchronously. In this + // case it's still safe to listen with the provided config, but auth failures + // will occur until initial certificate discovery is complete. In general at + // any time it is possible for certificates to expire before new replacements + // have been issued due to local network errors so the server may not actually + // have a working certificate configuration at any time, however as soon as + // valid certs can be issued it will automatically start working again so + // should take no action. + ServerTLSConfig() (*tls.Config, error) + + // DialService opens a new connection to the named service registered in + // Consul. It will perform service discovery to find healthy instances. If + // there is an error during connection it is returned and the caller may call + // again. The client implementation makes a best effort to make consecutive + // Dials against different instances either by randomising the list and/or + // maintaining a local memory of which instances recently failed. If the + // context passed times out before connection is established and verified an + // error is returned. + DialService(ctx context.Context, namespace, name string) (net.Conn, error) + + // DialPreparedQuery opens a new connection by executing the named Prepared + // Query against the local Consul agent, and picking one of the returned + // instances to connect to. It will perform service discovery with the same + // semantics as DialService. + DialPreparedQuery(ctx context.Context, namespace, name string) (net.Conn, error) +} + +/* + +Maybe also convenience wrappers for: + - listening TLS conn with right config + - http.ListenAndServeTLS equivalent + +*/ + +// AgentClient is the primary implementation of a connect.Client which +// communicates with the local Consul agent. +type AgentClient struct { + agent *api.Client + tlsCfg *ReloadableTLSConfig +} + +// NewClient returns an AgentClient to allow consuming and providing +// Connect-enabled network services. +func NewClient(agent *api.Client) Client { + // TODO(banks): hook up fetching certs from Agent and updating tlsCfg on cert + // delivery/change. Perhaps need to make + return &AgentClient{ + agent: agent, + tlsCfg: NewReloadableTLSConfig(defaultTLSConfig()), + } +} + +// NewInsecureDevClientWithLocalCerts returns an AgentClient that will still do +// service discovery via the local agent but will use externally provided +// certificates and skip authorization. This is intended just for development +// and must not be used ever in production. 
+func NewInsecureDevClientWithLocalCerts(agent *api.Client, caFile, certFile, + keyFile string) (Client, error) { + + cfg, err := devTLSConfigFromFiles(caFile, certFile, keyFile) + if err != nil { + return nil, err + } + + return &AgentClient{ + agent: agent, + tlsCfg: NewReloadableTLSConfig(cfg), + }, nil +} + +// ServerTLSConfig implements Client +func (c *AgentClient) ServerTLSConfig() (*tls.Config, error) { + return c.tlsCfg.ServerTLSConfig(), nil +} + +// DialService implements Client +func (c *AgentClient) DialService(ctx context.Context, namespace, + name string) (net.Conn, error) { + return c.dial(ctx, "service", namespace, name) +} + +// DialPreparedQuery implements Client +func (c *AgentClient) DialPreparedQuery(ctx context.Context, namespace, + name string) (net.Conn, error) { + return c.dial(ctx, "prepared_query", namespace, name) +} + +func (c *AgentClient) dial(ctx context.Context, discoveryType, namespace, + name string) (net.Conn, error) { + + svcs, err := c.discoverInstances(ctx, discoveryType, namespace, name) + if err != nil { + return nil, err + } + + svc, err := c.pickInstance(svcs) + if err != nil { + return nil, err + } + if svc == nil { + return nil, fmt.Errorf("no healthy services discovered") + } + + // OK we have a service we can dial! We need a ClientAuther that will validate + // the connection is legit. + + // TODO(banks): implement ClientAuther properly to actually verify connected + // cert matches the expected service/cluster etc. based on svc. + auther := &ClientAuther{} + tlsConfig := c.tlsCfg.TLSConfig(auther) + + // Resolve address TODO(banks): I expected this to happen magically in the + // agent at registration time if I register with no explicit address but + // apparently doesn't. This is a quick hack to make it work for now, need to + // see if there is a better shared code path for doing this. + addr := svc.Service.Address + if addr == "" { + addr = svc.Node.Address + } + var dialer net.Dialer + tcpConn, err := dialer.DialContext(ctx, "tcp", + fmt.Sprintf("%s:%d", addr, svc.Service.Port)) + if err != nil { + return nil, err + } + + tlsConn := tls.Client(tcpConn, tlsConfig) + err = tlsConn.Handshake() + if err != nil { + tlsConn.Close() + return nil, err + } + + return tlsConn, nil +} + +// pickInstance returns an instance from the given list to try to connect to. It +// may be made pluggable later, for now it just picks a random one regardless of +// whether the list is already shuffled. +func (c *AgentClient) pickInstance(svcs []*api.ServiceEntry) (*api.ServiceEntry, error) { + if len(svcs) < 1 { + return nil, nil + } + idx := rand.Intn(len(svcs)) + return svcs[idx], nil +} + +// discoverInstances returns all instances for the given discoveryType, +// namespace and name. The returned service entries may or may not be shuffled +func (c *AgentClient) discoverInstances(ctx context.Context, discoverType, + namespace, name string) ([]*api.ServiceEntry, error) { + + q := &api.QueryOptions{ + // TODO(banks): make this configurable? + AllowStale: true, + } + q = q.WithContext(ctx) + + switch discoverType { + case "service": + svcs, _, err := c.agent.Health().Connect(name, "", true, q) + if err != nil { + return nil, err + } + return svcs, err + + case "prepared_query": + // TODO(banks): it's not super clear to me how this should work eventually. + // How do we distinguise between a PreparedQuery for the actual services and + // one that should return the connect proxies where that differs? 
If we + // can't then we end up with a janky UX where user specifies a reasonable + // prepared query but we try to connect to non-connect services and fail + // with a confusing TLS error. Maybe just a way to filter PreparedQuery + // results by connect-enabled would be sufficient (or even metadata to do + // that ourselves in the response although less efficient). + resp, _, err := c.agent.PreparedQuery().Execute(name, q) + if err != nil { + return nil, err + } + + // Awkward, we have a slice of api.ServiceEntry here but want a slice of + // *api.ServiceEntry for compat with Connect/Service APIs. Have to convert + // them to keep things type-happy. + svcs := make([]*api.ServiceEntry, len(resp.Nodes)) + for idx, se := range resp.Nodes { + svcs[idx] = &se + } + return svcs, err + default: + return nil, fmt.Errorf("unsupported discovery type: %s", discoverType) + } +} diff --git a/connect/client_test.go b/connect/client_test.go new file mode 100644 index 000000000..fcb18e600 --- /dev/null +++ b/connect/client_test.go @@ -0,0 +1,148 @@ +package connect + +import ( + "context" + "crypto/x509" + "crypto/x509/pkix" + "encoding/asn1" + "io/ioutil" + "net" + "net/http" + "net/http/httptest" + "net/url" + "strconv" + "testing" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/testutil" + "github.com/stretchr/testify/require" +) + +func TestNewInsecureDevClientWithLocalCerts(t *testing.T) { + + agent, err := api.NewClient(api.DefaultConfig()) + require.Nil(t, err) + + got, err := NewInsecureDevClientWithLocalCerts(agent, + "testdata/ca1-ca-consul-internal.cert.pem", + "testdata/ca1-svc-web.cert.pem", + "testdata/ca1-svc-web.key.pem", + ) + require.Nil(t, err) + + // Sanity check correct certs were loaded + serverCfg, err := got.ServerTLSConfig() + require.Nil(t, err) + caSubjects := serverCfg.RootCAs.Subjects() + require.Len(t, caSubjects, 1) + caSubject, err := testNameFromRawDN(caSubjects[0]) + require.Nil(t, err) + require.Equal(t, "Consul Internal", caSubject.CommonName) + + require.Len(t, serverCfg.Certificates, 1) + cert, err := x509.ParseCertificate(serverCfg.Certificates[0].Certificate[0]) + require.Nil(t, err) + require.Equal(t, "web", cert.Subject.CommonName) +} + +func testNameFromRawDN(raw []byte) (*pkix.Name, error) { + var seq pkix.RDNSequence + if _, err := asn1.Unmarshal(raw, &seq); err != nil { + return nil, err + } + + var name pkix.Name + name.FillFromRDNSequence(&seq) + return &name, nil +} + +func testAgent(t *testing.T) (*testutil.TestServer, *api.Client) { + t.Helper() + + // Make client config + conf := api.DefaultConfig() + + // Create server + server, err := testutil.NewTestServerConfigT(t, nil) + require.Nil(t, err) + + conf.Address = server.HTTPAddr + + // Create client + agent, err := api.NewClient(conf) + require.Nil(t, err) + + return server, agent +} + +func testService(t *testing.T, ca, name string, client *api.Client) *httptest.Server { + t.Helper() + + // Run a test service to discover + server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Write([]byte("svc: " + name)) + })) + server.TLS = TestTLSConfig(t, ca, name) + server.StartTLS() + + u, err := url.Parse(server.URL) + require.Nil(t, err) + + port, err := strconv.Atoi(u.Port()) + require.Nil(t, err) + + // If client is passed, register the test service instance + if client != nil { + svc := &api.AgentServiceRegistration{ + // TODO(banks): we don't really have a good way to represent + // connect-native apps yet so we have to pretend out 
little server is a + // proxy for now. + Kind: api.ServiceKindConnectProxy, + ProxyDestination: name, + Name: name + "-proxy", + Address: u.Hostname(), + Port: port, + } + err := client.Agent().ServiceRegister(svc) + require.Nil(t, err) + } + + return server +} + +func TestDialService(t *testing.T) { + consulServer, agent := testAgent(t) + defer consulServer.Stop() + + svc := testService(t, "ca1", "web", agent) + defer svc.Close() + + c, err := NewInsecureDevClientWithLocalCerts(agent, + "testdata/ca1-ca-consul-internal.cert.pem", + "testdata/ca1-svc-web.cert.pem", + "testdata/ca1-svc-web.key.pem", + ) + require.Nil(t, err) + + conn, err := c.DialService(context.Background(), "default", "web") + require.Nilf(t, err, "err: %s", err) + + // Inject our conn into http.Transport + httpClient := &http.Client{ + Transport: &http.Transport{ + DialTLS: func(network, addr string) (net.Conn, error) { + return conn, nil + }, + }, + } + + // Don't be fooled the hostname here is ignored since we did the dialling + // ourselves + resp, err := httpClient.Get("https://web.connect.consul/") + require.Nil(t, err) + defer resp.Body.Close() + body, err := ioutil.ReadAll(resp.Body) + require.Nil(t, err) + + require.Equal(t, "svc: web", string(body)) +} diff --git a/connect/testdata/ca1-ca-consul-internal.cert.pem b/connect/testdata/ca1-ca-consul-internal.cert.pem new file mode 100644 index 000000000..6a557775f --- /dev/null +++ b/connect/testdata/ca1-ca-consul-internal.cert.pem @@ -0,0 +1,14 @@ +-----BEGIN CERTIFICATE----- +MIICIDCCAcagAwIBAgIBATAKBggqhkjOPQQDAjAaMRgwFgYDVQQDEw9Db25zdWwg +SW50ZXJuYWwwHhcNMTgwMzIzMjIwNDI1WhcNMjgwMzIwMjIwNDI1WjAaMRgwFgYD +VQQDEw9Db25zdWwgSW50ZXJuYWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAT3 +IPiDHugKYEVaSpIzBjqU5lQrmirC6N1XHyOAhF2psGGxcxezpf8Vgy5Iv6XbmeHr +cttyzUYtUKhrFBhxkPYRo4H8MIH5MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8E +BTADAQH/MCkGA1UdDgQiBCCrnNQy2IQS73Co9WbrPXtq/YP9SvIBOJ8iYRWTOxjC +qTArBgNVHSMEJDAigCCrnNQy2IQS73Co9WbrPXtq/YP9SvIBOJ8iYRWTOxjCqTA/ +BgNVHREEODA2hjRzcGlmZmU6Ly8xMTExMTExMS0yMjIyLTMzMzMtNDQ0NC01NTU1 +NTU1NTU1NTUuY29uc3VsMD0GA1UdHgEB/wQzMDGgLzAtgisxMTExMTExMS0yMjIy +LTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsMAoGCCqGSM49BAMCA0gAMEUC +IQDwWL6ZuszKrZjSJwDzdhRQtj1ppezJrKaDTJx+4F/tyQIgEaQCR935ztIqZzgO +Ka6ozcH2Ubd4j4cDC1XswVMW6zs= +-----END CERTIFICATE----- diff --git a/connect/testdata/ca1-ca-consul-internal.key.pem b/connect/testdata/ca1-ca-consul-internal.key.pem new file mode 100644 index 000000000..8c40fd26b --- /dev/null +++ b/connect/testdata/ca1-ca-consul-internal.key.pem @@ -0,0 +1,5 @@ +-----BEGIN EC PRIVATE KEY----- +MHcCAQEEIDUDO3I7WKbLTTWkNKA4unB2RLq/RX+L+XIFssDE/AD7oAoGCCqGSM49 +AwEHoUQDQgAE9yD4gx7oCmBFWkqSMwY6lOZUK5oqwujdVx8jgIRdqbBhsXMXs6X/ +FYMuSL+l25nh63Lbcs1GLVCoaxQYcZD2EQ== +-----END EC PRIVATE KEY----- diff --git a/connect/testdata/ca1-svc-cache.cert.pem b/connect/testdata/ca1-svc-cache.cert.pem new file mode 100644 index 000000000..097a2b6a6 --- /dev/null +++ b/connect/testdata/ca1-svc-cache.cert.pem @@ -0,0 +1,14 @@ +-----BEGIN CERTIFICATE----- +MIICEDCCAbagAwIBAgIBBTAKBggqhkjOPQQDAjAaMRgwFgYDVQQDEw9Db25zdWwg +SW50ZXJuYWwwHhcNMTgwMzIzMjIwNDI1WhcNMjgwMzIwMjIwNDI1WjAQMQ4wDAYD +VQQDEwVjYWNoZTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABOWw8369v4DHJAI6 +k061hU8rxaQs87mZFQ52JfleJjRoDUuZIPLhZHMFbvbI8pDWi7YdjluNbzNNh6nu +fAivylujgfYwgfMwDgYDVR0PAQH/BAQDAgO4MB0GA1UdJQQWMBQGCCsGAQUFBwMC +BggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMCkGA1UdDgQiBCCHhMqV2/R8meSsXtwh +OLC9hP7WQfuvwJ6V6uwKZdEofTArBgNVHSMEJDAigCCrnNQy2IQS73Co9WbrPXtq 
+/YP9SvIBOJ8iYRWTOxjCqTBcBgNVHREEVTBThlFzcGlmZmU6Ly8xMTExMTExMS0y +MjIyLTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsL25zL2RlZmF1bHQvZGMv +ZGMwMS9zdmMvY2FjaGUwCgYIKoZIzj0EAwIDSAAwRQIgPfekKBd/ltpVkdjnB0Hp +cV9HPwy12tXp4suR2nspSNkCIQD1Th/hvxuBKkRYy9Bl+jgTbrFdd4fLCWPeFbaM +sgLK7g== +-----END CERTIFICATE----- diff --git a/connect/testdata/ca1-svc-cache.key.pem b/connect/testdata/ca1-svc-cache.key.pem new file mode 100644 index 000000000..f780f63db --- /dev/null +++ b/connect/testdata/ca1-svc-cache.key.pem @@ -0,0 +1,5 @@ +-----BEGIN EC PRIVATE KEY----- +MHcCAQEEIPTSPV2cWNnO69f+vYyCg5frpoBtK6L+kZVLrGCv3TdnoAoGCCqGSM49 +AwEHoUQDQgAE5bDzfr2/gMckAjqTTrWFTyvFpCzzuZkVDnYl+V4mNGgNS5kg8uFk +cwVu9sjykNaLth2OW41vM02Hqe58CK/KWw== +-----END EC PRIVATE KEY----- diff --git a/connect/testdata/ca1-svc-db.cert.pem b/connect/testdata/ca1-svc-db.cert.pem new file mode 100644 index 000000000..d00a38ea0 --- /dev/null +++ b/connect/testdata/ca1-svc-db.cert.pem @@ -0,0 +1,13 @@ +-----BEGIN CERTIFICATE----- +MIICCjCCAbCgAwIBAgIBBDAKBggqhkjOPQQDAjAaMRgwFgYDVQQDEw9Db25zdWwg +SW50ZXJuYWwwHhcNMTgwMzIzMjIwNDI1WhcNMjgwMzIwMjIwNDI1WjANMQswCQYD +VQQDEwJkYjBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABEcTyr2l7yYWZuh++02M +usR20QrZtHdd7goKmYrIpQ3ekmHuLLgJWgTTaIhCj8fzbryep+s8oM7EiPhRQ14l +uSujgfMwgfAwDgYDVR0PAQH/BAQDAgO4MB0GA1UdJQQWMBQGCCsGAQUFBwMCBggr +BgEFBQcDATAMBgNVHRMBAf8EAjAAMCkGA1UdDgQiBCAy6jHCBBT2bii+aMJCDJ33 +bFJtR72bxDBUi5b+YWyWwDArBgNVHSMEJDAigCCrnNQy2IQS73Co9WbrPXtq/YP9 +SvIBOJ8iYRWTOxjCqTBZBgNVHREEUjBQhk5zcGlmZmU6Ly8xMTExMTExMS0yMjIy +LTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsL25zL2RlZmF1bHQvZGMvZGMw +MS9zdmMvZGIwCgYIKoZIzj0EAwIDSAAwRQIhALCW4cOEpuYfLJ0NGwEmYG5Fko0N +WMccL0gEQzKUbIWrAiAIw8wkTSf1O8vTHeKdR1fCmdVoDRFRKB643PaofUzFxA== +-----END CERTIFICATE----- diff --git a/connect/testdata/ca1-svc-db.key.pem b/connect/testdata/ca1-svc-db.key.pem new file mode 100644 index 000000000..3ec23a33b --- /dev/null +++ b/connect/testdata/ca1-svc-db.key.pem @@ -0,0 +1,5 @@ +-----BEGIN EC PRIVATE KEY----- +MHcCAQEEIMHv1pjt75IjKXzl8l4rBtEFS1pEuOM4WNgeHg5Qn1RroAoGCCqGSM49 +AwEHoUQDQgAERxPKvaXvJhZm6H77TYy6xHbRCtm0d13uCgqZisilDd6SYe4suAla +BNNoiEKPx/NuvJ6n6zygzsSI+FFDXiW5Kw== +-----END EC PRIVATE KEY----- diff --git a/connect/testdata/ca1-svc-web.cert.pem b/connect/testdata/ca1-svc-web.cert.pem new file mode 100644 index 000000000..a786a2c06 --- /dev/null +++ b/connect/testdata/ca1-svc-web.cert.pem @@ -0,0 +1,13 @@ +-----BEGIN CERTIFICATE----- +MIICDDCCAbKgAwIBAgIBAzAKBggqhkjOPQQDAjAaMRgwFgYDVQQDEw9Db25zdWwg +SW50ZXJuYWwwHhcNMTgwMzIzMjIwNDI1WhcNMjgwMzIwMjIwNDI1WjAOMQwwCgYD +VQQDEwN3ZWIwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARF47lERGXziNBC74Kh +U3W29/M7JO9LIUaJgK0LJbhgf0MuPxf7gX+PnxH5ImI5yfXRv0SSxeCq7377IkXP +XS6Fo4H0MIHxMA4GA1UdDwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcDAgYI +KwYBBQUHAwEwDAYDVR0TAQH/BAIwADApBgNVHQ4EIgQg26hfNYiVwYRm7CQJvdOd +NIOmG3t8vNwXCtktC782cf8wKwYDVR0jBCQwIoAgq5zUMtiEEu9wqPVm6z17av2D +/UryATifImEVkzsYwqkwWgYDVR0RBFMwUYZPc3BpZmZlOi8vMTExMTExMTEtMjIy +Mi0zMzMzLTQ0NDQtNTU1NTU1NTU1NTU1LmNvbnN1bC9ucy9kZWZhdWx0L2RjL2Rj +MDEvc3ZjL3dlYjAKBggqhkjOPQQDAgNIADBFAiAzi8uBs+ApPfAZZm5eO/hhVZiv +E8p84VKCqPeF3tFfoAIhANVkdSnp2AKU5T7SlJHmieu3DFNWCVpajlHJvf286J94 +-----END CERTIFICATE----- diff --git a/connect/testdata/ca1-svc-web.key.pem b/connect/testdata/ca1-svc-web.key.pem new file mode 100644 index 000000000..8ed82c13c --- /dev/null +++ b/connect/testdata/ca1-svc-web.key.pem @@ -0,0 +1,5 @@ +-----BEGIN EC PRIVATE KEY----- +MHcCAQEEIPOIj4BFS0fknG+uAVKZIWRpnzp7O3OKpBDgEmuml7lcoAoGCCqGSM49 +AwEHoUQDQgAEReO5RERl84jQQu+CoVN1tvfzOyTvSyFGiYCtCyW4YH9DLj8X+4F/ 
+j58R+SJiOcn10b9EksXgqu9++yJFz10uhQ== +-----END EC PRIVATE KEY----- diff --git a/connect/testdata/ca2-ca-vault.cert.pem b/connect/testdata/ca2-ca-vault.cert.pem new file mode 100644 index 000000000..a7f617468 --- /dev/null +++ b/connect/testdata/ca2-ca-vault.cert.pem @@ -0,0 +1,14 @@ +-----BEGIN CERTIFICATE----- +MIICDDCCAbKgAwIBAgIBAjAKBggqhkjOPQQDAjAQMQ4wDAYDVQQDEwVWYXVsdDAe +Fw0xODAzMjMyMjA0MjVaFw0yODAzMjAyMjA0MjVaMBAxDjAMBgNVBAMTBVZhdWx0 +MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEAjGVnRy/7Q2SU4ePbKbsurRAHKYA +CuA3r9QrowgZOr7yptF54shiobMIORpfKYkoYkhzL1lhWKI06BUJ4xuPd6OB/DCB ++TAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zApBgNVHQ4EIgQgqEc5 +ZrELD5ySxapbU+eRb+aEv1MEoCvjC0mCA1uJecMwKwYDVR0jBCQwIoAgqEc5ZrEL +D5ySxapbU+eRb+aEv1MEoCvjC0mCA1uJecMwPwYDVR0RBDgwNoY0c3BpZmZlOi8v +MTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1NTU1NTU1NTU1LmNvbnN1bDA9BgNV +HR4BAf8EMzAxoC8wLYIrMTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1NTU1NTU1 +NTU1LmNvbnN1bDAKBggqhkjOPQQDAgNIADBFAiEA6pBdeglhq//A7sYnYk85XL+3 +4IDrXrGN3KjC9qo3J9ICIDS9pEoTPWAWDfn1ccPafKVBrJm6KrmljcvymQ2QUDIZ +-----END CERTIFICATE----- +---- diff --git a/connect/testdata/ca2-ca-vault.key.pem b/connect/testdata/ca2-ca-vault.key.pem new file mode 100644 index 000000000..43534b961 --- /dev/null +++ b/connect/testdata/ca2-ca-vault.key.pem @@ -0,0 +1,5 @@ +-----BEGIN EC PRIVATE KEY----- +MHcCAQEEIKnuCctuvtyzf+M6B8jGqejG4T5o7NMRYjO2M3dZITCboAoGCCqGSM49 +AwEHoUQDQgAEAjGVnRy/7Q2SU4ePbKbsurRAHKYACuA3r9QrowgZOr7yptF54shi +obMIORpfKYkoYkhzL1lhWKI06BUJ4xuPdw== +-----END EC PRIVATE KEY----- diff --git a/connect/testdata/ca2-svc-cache.cert.pem b/connect/testdata/ca2-svc-cache.cert.pem new file mode 100644 index 000000000..32110e232 --- /dev/null +++ b/connect/testdata/ca2-svc-cache.cert.pem @@ -0,0 +1,13 @@ +-----BEGIN CERTIFICATE----- +MIICBzCCAaygAwIBAgIBCDAKBggqhkjOPQQDAjAQMQ4wDAYDVQQDEwVWYXVsdDAe +Fw0xODAzMjMyMjA0MjVaFw0yODAzMjAyMjA0MjVaMBAxDjAMBgNVBAMTBWNhY2hl +MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEyB6D+Eqi/71EhUrBWlcZOV2vjS9Y +xnUQ3jfH+QUZur7WOuGLnO7eArbAHcDbqKGyDWxlkZH04sGYOXaEW7UUd6OB9jCB +8zAOBgNVHQ8BAf8EBAMCA7gwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMB +MAwGA1UdEwEB/wQCMAAwKQYDVR0OBCIEIGapiHFxlbYbNKFlwdPMpKhIypvNZXo8 +k/OZLki/vurQMCsGA1UdIwQkMCKAIKhHOWaxCw+cksWqW1PnkW/mhL9TBKAr4wtJ +ggNbiXnDMFwGA1UdEQRVMFOGUXNwaWZmZTovLzExMTExMTExLTIyMjItMzMzMy00 +NDQ0LTU1NTU1NTU1NTU1NS5jb25zdWwvbnMvZGVmYXVsdC9kYy9kYzAxL3N2Yy9j +YWNoZTAKBggqhkjOPQQDAgNJADBGAiEA/vRLXbkigS6l89MxFk0RFE7Zo4vorv7s +E1juCOsVJBICIQDXlpmYH9fPon6DYMyOxQttNjkuWbJgnPv7rPg+CllRyA== +-----END CERTIFICATE----- diff --git a/connect/testdata/ca2-svc-cache.key.pem b/connect/testdata/ca2-svc-cache.key.pem new file mode 100644 index 000000000..cabad8179 --- /dev/null +++ b/connect/testdata/ca2-svc-cache.key.pem @@ -0,0 +1,5 @@ +-----BEGIN EC PRIVATE KEY----- +MHcCAQEEIEbQOv4odF2Tu8ZnJTJuytvOd2HOF9HxgGw5ei1pkP4moAoGCCqGSM49 +AwEHoUQDQgAEyB6D+Eqi/71EhUrBWlcZOV2vjS9YxnUQ3jfH+QUZur7WOuGLnO7e +ArbAHcDbqKGyDWxlkZH04sGYOXaEW7UUdw== +-----END EC PRIVATE KEY----- diff --git a/connect/testdata/ca2-svc-db.cert.pem b/connect/testdata/ca2-svc-db.cert.pem new file mode 100644 index 000000000..33273058a --- /dev/null +++ b/connect/testdata/ca2-svc-db.cert.pem @@ -0,0 +1,13 @@ +-----BEGIN CERTIFICATE----- +MIICADCCAaagAwIBAgIBBzAKBggqhkjOPQQDAjAQMQ4wDAYDVQQDEwVWYXVsdDAe +Fw0xODAzMjMyMjA0MjVaFw0yODAzMjAyMjA0MjVaMA0xCzAJBgNVBAMTAmRiMFkw +EwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEFeB4DynO6IeKOE4zFLlBVFv+4HeWRvK3 +6cQ9L6v5uhLfdcYyqhT/QLbQ4R8ks1vUTTiq0XJsAGdkvkt71fiEl6OB8zCB8DAO +BgNVHQ8BAf8EBAMCA7gwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwG 
+A1UdEwEB/wQCMAAwKQYDVR0OBCIEIKjVz8n91cej8q6WpDNd0hwSMAE2ddY056PH +hMfaBM6GMCsGA1UdIwQkMCKAIKhHOWaxCw+cksWqW1PnkW/mhL9TBKAr4wtJggNb +iXnDMFkGA1UdEQRSMFCGTnNwaWZmZTovLzExMTExMTExLTIyMjItMzMzMy00NDQ0 +LTU1NTU1NTU1NTU1NS5jb25zdWwvbnMvZGVmYXVsdC9kYy9kYzAxL3N2Yy9kYjAK +BggqhkjOPQQDAgNIADBFAiAdYkokbeZr7W32NhjcNoTMNwpz9CqJpK6Yzu4N6EJc +pAIhALHpRM57zdiMouDOlhGPX5XKzbSl2AnBjFvbPqgFV09Z +-----END CERTIFICATE----- diff --git a/connect/testdata/ca2-svc-db.key.pem b/connect/testdata/ca2-svc-db.key.pem new file mode 100644 index 000000000..7f7ab9ff8 --- /dev/null +++ b/connect/testdata/ca2-svc-db.key.pem @@ -0,0 +1,5 @@ +-----BEGIN EC PRIVATE KEY----- +MHcCAQEEIHnzia+DNTFB7uYQEuWvLR2czGCuDfOTt1FfcBo1uBJioAoGCCqGSM49 +AwEHoUQDQgAEFeB4DynO6IeKOE4zFLlBVFv+4HeWRvK36cQ9L6v5uhLfdcYyqhT/ +QLbQ4R8ks1vUTTiq0XJsAGdkvkt71fiElw== +-----END EC PRIVATE KEY----- diff --git a/connect/testdata/ca2-svc-web.cert.pem b/connect/testdata/ca2-svc-web.cert.pem new file mode 100644 index 000000000..ae1e338f6 --- /dev/null +++ b/connect/testdata/ca2-svc-web.cert.pem @@ -0,0 +1,13 @@ +-----BEGIN CERTIFICATE----- +MIICAjCCAaigAwIBAgIBBjAKBggqhkjOPQQDAjAQMQ4wDAYDVQQDEwVWYXVsdDAe +Fw0xODAzMjMyMjA0MjVaFw0yODAzMjAyMjA0MjVaMA4xDDAKBgNVBAMTA3dlYjBZ +MBMGByqGSM49AgEGCCqGSM49AwEHA0IABM9XzxWFCa80uQDfJEGboUC15Yr+FwDp +OemThalQxFpkL7gQSIgpzgGULIx+jCiu+clJ0QhbWT2dnS8vFUKq35qjgfQwgfEw +DgYDVR0PAQH/BAQDAgO4MB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDATAM +BgNVHRMBAf8EAjAAMCkGA1UdDgQiBCCN+TKHPCOr48hxRCx4rqbWQg5QHkCSNzjZ +qi1JGs13njArBgNVHSMEJDAigCCoRzlmsQsPnJLFqltT55Fv5oS/UwSgK+MLSYID +W4l5wzBaBgNVHREEUzBRhk9zcGlmZmU6Ly8xMTExMTExMS0yMjIyLTMzMzMtNDQ0 +NC01NTU1NTU1NTU1NTUuY29uc3VsL25zL2RlZmF1bHQvZGMvZGMwMS9zdmMvd2Vi +MAoGCCqGSM49BAMCA0gAMEUCIBd6gpL6E8rms5BU+cJeeyv0Rjc18edn2g3q2wLN +r1zAAiEAv16whKwR0DeKkldGLDQIu9nCNvfDZrEWgywIBYbzLxY= +-----END CERTIFICATE----- diff --git a/connect/testdata/ca2-svc-web.key.pem b/connect/testdata/ca2-svc-web.key.pem new file mode 100644 index 000000000..65f0bc48e --- /dev/null +++ b/connect/testdata/ca2-svc-web.key.pem @@ -0,0 +1,5 @@ +-----BEGIN EC PRIVATE KEY----- +MHcCAQEEIOCMjjRexX3qHjixpRwLxggJd9yuskqUoPy8/MepafP+oAoGCCqGSM49 +AwEHoUQDQgAEz1fPFYUJrzS5AN8kQZuhQLXliv4XAOk56ZOFqVDEWmQvuBBIiCnO +AZQsjH6MKK75yUnRCFtZPZ2dLy8VQqrfmg== +-----END EC PRIVATE KEY----- diff --git a/connect/testdata/ca2-xc-by-ca1.cert.pem b/connect/testdata/ca2-xc-by-ca1.cert.pem new file mode 100644 index 000000000..e864f6c00 --- /dev/null +++ b/connect/testdata/ca2-xc-by-ca1.cert.pem @@ -0,0 +1,14 @@ +-----BEGIN CERTIFICATE----- +MIICFjCCAbygAwIBAgIBAjAKBggqhkjOPQQDAjAaMRgwFgYDVQQDEw9Db25zdWwg +SW50ZXJuYWwwHhcNMTgwMzIzMjIwNDI1WhcNMjgwMzIwMjIwNDI1WjAQMQ4wDAYD +VQQDEwVWYXVsdDBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABAIxlZ0cv+0NklOH +j2ym7Lq0QBymAArgN6/UK6MIGTq+8qbReeLIYqGzCDkaXymJKGJIcy9ZYViiNOgV +CeMbj3ejgfwwgfkwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wKQYD +VR0OBCIEIKhHOWaxCw+cksWqW1PnkW/mhL9TBKAr4wtJggNbiXnDMCsGA1UdIwQk +MCKAIKuc1DLYhBLvcKj1Zus9e2r9g/1K8gE4nyJhFZM7GMKpMD8GA1UdEQQ4MDaG +NHNwaWZmZTovLzExMTExMTExLTIyMjItMzMzMy00NDQ0LTU1NTU1NTU1NTU1NS5j +b25zdWwwPQYDVR0eAQH/BDMwMaAvMC2CKzExMTExMTExLTIyMjItMzMzMy00NDQ0 +LTU1NTU1NTU1NTU1NS5jb25zdWwwCgYIKoZIzj0EAwIDSAAwRQIgWWWj8/6SaY2y +wzOtIphwZLewCuLMG6KG8uY4S7UsosgCIQDhCbT/LUKq/A21khQncBmM79ng9Gbx +/4Zw8zbVmnZJKg== +-----END CERTIFICATE----- diff --git a/connect/testdata/mkcerts.go b/connect/testdata/mkcerts.go new file mode 100644 index 000000000..7fe82f53a --- /dev/null +++ b/connect/testdata/mkcerts.go @@ -0,0 +1,243 @@ +package main + +import ( + "crypto/ecdsa" + "crypto/elliptic" + 
"crypto/rand" + "crypto/sha256" + "crypto/x509" + "crypto/x509/pkix" + "encoding/pem" + "fmt" + "log" + "math/big" + "net/url" + "os" + "regexp" + "strings" + "time" +) + +// You can verify a given leaf with a given root using: +// +// $ openssl verify -verbose -CAfile ca2-ca-vault.cert.pem ca1-svc-db.cert.pem +// +// Note that to verify via the cross-signed intermediate, openssl requires it to +// be bundled with the _root_ CA bundle and will ignore the cert if it's passed +// with the subject. You can do that with: +// +// $ openssl verify -verbose -CAfile \ +// <(cat ca1-ca-consul-internal.cert.pem ca2-xc-by-ca1.cert.pem) \ +// ca2-svc-db.cert.pem +// ca2-svc-db.cert.pem: OK +// +// Note that the same leaf and root without the intermediate should fail: +// +// $ openssl verify -verbose -CAfile ca1-ca-consul-internal.cert.pem ca2-svc-db.cert.pem +// ca2-svc-db.cert.pem: CN = db +// error 20 at 0 depth lookup:unable to get local issuer certificate +// +// NOTE: THIS IS A QUIRK OF OPENSSL; in Connect we will distribute the roots +// alone and stable intermediates like the XC cert to the _leaf_. + +var clusterID = "11111111-2222-3333-4444-555555555555" +var cAs = []string{"Consul Internal", "Vault"} +var services = []string{"web", "db", "cache"} +var slugRe = regexp.MustCompile("[^a-zA-Z0-9]+") +var serial int64 + +type caInfo struct { + id int + name string + slug string + uri *url.URL + pk *ecdsa.PrivateKey + cert *x509.Certificate +} + +func main() { + // Make CA certs + caInfos := make(map[string]caInfo) + var previousCA *caInfo + for idx, name := range cAs { + ca := caInfo{ + id: idx + 1, + name: name, + slug: strings.ToLower(slugRe.ReplaceAllString(name, "-")), + } + pk, err := makePK(fmt.Sprintf("ca%d-ca-%s.key.pem", ca.id, ca.slug)) + if err != nil { + log.Fatal(err) + } + ca.pk = pk + caURI, err := url.Parse(fmt.Sprintf("spiffe://%s.consul", clusterID)) + if err != nil { + log.Fatal(err) + } + ca.uri = caURI + cert, err := makeCACert(ca, previousCA) + if err != nil { + log.Fatal(err) + } + ca.cert = cert + caInfos[name] = ca + previousCA = &ca + } + + // For each CA, make a leaf cert for each service + for _, ca := range caInfos { + for _, svc := range services { + _, err := makeLeafCert(ca, svc) + if err != nil { + log.Fatal(err) + } + } + } +} + +func makePK(path string) (*ecdsa.PrivateKey, error) { + log.Printf("Writing PK file: %s", path) + priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + if err != nil { + return nil, err + } + + bs, err := x509.MarshalECPrivateKey(priv) + if err != nil { + return nil, err + } + + err = writePEM(path, "EC PRIVATE KEY", bs) + return priv, nil +} + +func makeCACert(ca caInfo, previousCA *caInfo) (*x509.Certificate, error) { + path := fmt.Sprintf("ca%d-ca-%s.cert.pem", ca.id, ca.slug) + log.Printf("Writing CA cert file: %s", path) + serial++ + subj := pkix.Name{ + CommonName: ca.name, + } + template := x509.Certificate{ + SerialNumber: big.NewInt(serial), + Subject: subj, + // New in go 1.10 + URIs: []*url.URL{ca.uri}, + // Add DNS name constraint + PermittedDNSDomainsCritical: true, + PermittedDNSDomains: []string{ca.uri.Hostname()}, + SignatureAlgorithm: x509.ECDSAWithSHA256, + BasicConstraintsValid: true, + KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageCRLSign | x509.KeyUsageDigitalSignature, + IsCA: true, + NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: keyID(&ca.pk.PublicKey), + SubjectKeyId: keyID(&ca.pk.PublicKey), + } + bs, err := x509.CreateCertificate(rand.Reader, &template, 
&template, + &ca.pk.PublicKey, ca.pk) + if err != nil { + return nil, err + } + + err = writePEM(path, "CERTIFICATE", bs) + if err != nil { + return nil, err + } + + cert, err := x509.ParseCertificate(bs) + if err != nil { + return nil, err + } + + if previousCA != nil { + // Also create cross-signed cert as we would use during rotation between + // previous CA and this one. + template.AuthorityKeyId = keyID(&previousCA.pk.PublicKey) + bs, err := x509.CreateCertificate(rand.Reader, &template, + previousCA.cert, &ca.pk.PublicKey, previousCA.pk) + if err != nil { + return nil, err + } + + path := fmt.Sprintf("ca%d-xc-by-ca%d.cert.pem", ca.id, previousCA.id) + err = writePEM(path, "CERTIFICATE", bs) + if err != nil { + return nil, err + } + } + + return cert, err +} + +func keyID(pub *ecdsa.PublicKey) []byte { + // This is not standard; RFC allows any unique identifier as long as they + // match in subject/authority chains but suggests specific hashing of DER + // bytes of public key including DER tags. I can't be bothered to do esp. + // since ECDSA keys don't have a handy way to marshal the publick key alone. + h := sha256.New() + h.Write(pub.X.Bytes()) + h.Write(pub.Y.Bytes()) + return h.Sum([]byte{}) +} + +func makeLeafCert(ca caInfo, svc string) (*x509.Certificate, error) { + svcURI := ca.uri + svcURI.Path = "/ns/default/dc/dc01/svc/" + svc + + keyPath := fmt.Sprintf("ca%d-svc-%s.key.pem", ca.id, svc) + cPath := fmt.Sprintf("ca%d-svc-%s.cert.pem", ca.id, svc) + + pk, err := makePK(keyPath) + if err != nil { + return nil, err + } + + log.Printf("Writing Service Cert: %s", cPath) + + serial++ + subj := pkix.Name{ + CommonName: svc, + } + template := x509.Certificate{ + SerialNumber: big.NewInt(serial), + Subject: subj, + // New in go 1.10 + URIs: []*url.URL{svcURI}, + SignatureAlgorithm: x509.ECDSAWithSHA256, + BasicConstraintsValid: true, + KeyUsage: x509.KeyUsageDataEncipherment | + x509.KeyUsageKeyAgreement | x509.KeyUsageDigitalSignature | + x509.KeyUsageKeyEncipherment, + ExtKeyUsage: []x509.ExtKeyUsage{ + x509.ExtKeyUsageClientAuth, + x509.ExtKeyUsageServerAuth, + }, + NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: keyID(&ca.pk.PublicKey), + SubjectKeyId: keyID(&pk.PublicKey), + } + bs, err := x509.CreateCertificate(rand.Reader, &template, ca.cert, + &pk.PublicKey, ca.pk) + if err != nil { + return nil, err + } + + err = writePEM(cPath, "CERTIFICATE", bs) + if err != nil { + return nil, err + } + + return x509.ParseCertificate(bs) +} + +func writePEM(name, typ string, bs []byte) error { + f, err := os.OpenFile(name, os.O_WRONLY|os.O_CREATE, 0600) + if err != nil { + return err + } + defer f.Close() + return pem.Encode(f, &pem.Block{Type: typ, Bytes: bs}) +} diff --git a/connect/testing.go b/connect/testing.go new file mode 100644 index 000000000..90db332a2 --- /dev/null +++ b/connect/testing.go @@ -0,0 +1,88 @@ +package connect + +import ( + "crypto/tls" + "crypto/x509" + "fmt" + "io/ioutil" + "path" + "path/filepath" + "runtime" + + "github.com/mitchellh/go-testing-interface" + "github.com/stretchr/testify/require" +) + +// testDataDir is a janky temporary hack to allow use of these methods from +// proxy package. We need to revisit where all this lives since it logically +// overlaps with consul/agent in Mitchell's PR and that one generates certs on +// the fly which will make this unecessary but I want to get things working for +// now with what I've got :). 
This wonderful heap kinda-sorta gets the path +// relative to _this_ file so it works even if the Test* method is being called +// from a test binary in another package dir. +func testDataDir() string { + _, filename, _, ok := runtime.Caller(0) + if !ok { + panic("no caller information") + } + return path.Dir(filename) + "/testdata" +} + +// TestCAPool returns an *x509.CertPool containing the named CA certs from the +// testdata dir. +func TestCAPool(t testing.T, caNames ...string) *x509.CertPool { + t.Helper() + pool := x509.NewCertPool() + for _, name := range caNames { + certs, err := filepath.Glob(testDataDir() + "/" + name + "-ca-*.cert.pem") + require.Nil(t, err) + for _, cert := range certs { + caPem, err := ioutil.ReadFile(cert) + require.Nil(t, err) + pool.AppendCertsFromPEM(caPem) + } + } + return pool +} + +// TestSvcKeyPair returns an tls.Certificate containing both cert and private +// key for a given service under a given CA from the testdata dir. +func TestSvcKeyPair(t testing.T, ca, name string) tls.Certificate { + t.Helper() + prefix := fmt.Sprintf(testDataDir()+"/%s-svc-%s", ca, name) + cert, err := tls.LoadX509KeyPair(prefix+".cert.pem", prefix+".key.pem") + require.Nil(t, err) + return cert +} + +// TestTLSConfig returns a *tls.Config suitable for use during tests. +func TestTLSConfig(t testing.T, ca, svc string) *tls.Config { + t.Helper() + return &tls.Config{ + Certificates: []tls.Certificate{TestSvcKeyPair(t, ca, svc)}, + MinVersion: tls.VersionTLS12, + RootCAs: TestCAPool(t, ca), + ClientCAs: TestCAPool(t, ca), + ClientAuth: tls.RequireAndVerifyClientCert, + // In real life we'll need to do this too since otherwise Go will attempt to + // verify DNS names match DNS SAN/CN which we don't want, but we'll hook + // VerifyPeerCertificates and do our own x509 path validation as well as + // AuthZ upcall. For now we are just testing the basic proxy mechanism so + // this is fine. + InsecureSkipVerify: true, + } +} + +// TestAuther is a simple Auther implementation that does nothing but what you +// tell it to! +type TestAuther struct { + // Return is the value returned from an Auth() call. Set it to nil to have all + // certificates unconditionally accepted or a value to have them fail. + Return error +} + +// Auth implements Auther +func (a *TestAuther) Auth(rawCerts [][]byte, + verifiedChains [][]*x509.Certificate) error { + return a.Return +} diff --git a/connect/tls.go b/connect/tls.go new file mode 100644 index 000000000..af66d9c0c --- /dev/null +++ b/connect/tls.go @@ -0,0 +1,124 @@ +package connect + +import ( + "crypto/tls" + "crypto/x509" + "io/ioutil" + "sync" +) + +// defaultTLSConfig returns the standard config for connect clients and servers. +func defaultTLSConfig() *tls.Config { + serverAuther := &ServerAuther{} + return &tls.Config{ + MinVersion: tls.VersionTLS12, + ClientAuth: tls.RequireAndVerifyClientCert, + // We don't have access to go internals that decide if AES hardware + // acceleration is available in order to prefer CHA CHA if not. So let's + // just always prefer AES for now. We can look into doing something uglier + // later like using an external lib for AES checking if it seems important. 
+	// https://github.com/golang/go/blob/df91b8044dbe790c69c16058330f545be069cc1f/src/crypto/tls/common.go#L919:14
+	CipherSuites: []uint16{
+		tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+		tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
+		tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+		tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
+		tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
+		tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
+	},
+	// We have to set this since otherwise Go will attempt to verify DNS names
+	// match DNS SAN/CN which we don't want. We hook up VerifyPeerCertificate to
+	// do our own path validation as well as Connect AuthZ.
+	InsecureSkipVerify: true,
+	// By default, authenticate as if we are a server. Clients need to override
+	// this with an Auther that performs correct validation of the server
+	// identity they intend to connect to.
+	VerifyPeerCertificate: serverAuther.Auth,
+	}
+}
+
+// ReloadableTLSConfig exposes a tls.Config that can have its certificates
+// reloaded. It works by handing out the currently stored config for each new
+// connection (servers fetch it per-connection via GetConfigForClient) while
+// SetTLSConfig swaps the stored config out at any time.
+type ReloadableTLSConfig struct {
+	mu sync.Mutex
+
+	// cfg is the current config to use for new connections
+	cfg *tls.Config
+}
+
+// NewReloadableTLSConfig returns a reloadable config currently set to base. The
+// Auther used to verify certificates for incoming connections on a Server will
+// just be copied from the VerifyPeerCertificate passed. Clients will need to
+// pass a specific Auther instance when they call TLSConfig that is configured
+// to perform the necessary validation of the server's identity.
+func NewReloadableTLSConfig(base *tls.Config) *ReloadableTLSConfig {
+	return &ReloadableTLSConfig{cfg: base}
+}
+
+// ServerTLSConfig returns a *tls.Config that will dynamically load certs for
+// each inbound connection via the GetConfigForClient callback.
+func (c *ReloadableTLSConfig) ServerTLSConfig() *tls.Config {
+	// Set up the base config with current params even though we will be using
+	// a different config for each new conn.
+	c.mu.Lock()
+	base := c.cfg
+	c.mu.Unlock()
+
+	// Dynamically fetch the current config for each new inbound connection
+	base.GetConfigForClient = func(info *tls.ClientHelloInfo) (*tls.Config, error) {
+		return c.TLSConfig(nil), nil
+	}
+
+	return base
+}
+
+// TLSConfig returns the current value for the config. It is safe to call from
+// any goroutine. The passed Auther is inserted into the config's
+// VerifyPeerCertificate. Passing a nil Auther will leave the default one in the
+// base config.
+func (c *ReloadableTLSConfig) TLSConfig(auther Auther) *tls.Config {
+	c.mu.Lock()
+	cfgCopy := c.cfg
+	c.mu.Unlock()
+	if auther != nil {
+		cfgCopy.VerifyPeerCertificate = auther.Auth
+	}
+	return cfgCopy
+}
+
+// SetTLSConfig sets the config used for future connections. It is safe to call
+// from any goroutine.
+func (c *ReloadableTLSConfig) SetTLSConfig(cfg *tls.Config) error {
+	c.mu.Lock()
+	defer c.mu.Unlock()
+	c.cfg = cfg
+	return nil
+}
+
+// devTLSConfigFromFiles returns a default TLS Config but with certs and CAs
+// based on local files for dev.
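+//
+// For example (illustrative only; the paths assume the test certificates
+// generated by testdata/mkcerts.go, relative to this package):
+//
+//   cfg, err := devTLSConfigFromFiles(
+//       "testdata/ca1-ca-consul-internal.cert.pem",
+//       "testdata/ca1-svc-web.cert.pem",
+//       "testdata/ca1-svc-web.key.pem")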
+func devTLSConfigFromFiles(caFile, certFile, + keyFile string) (*tls.Config, error) { + + roots := x509.NewCertPool() + + bs, err := ioutil.ReadFile(caFile) + if err != nil { + return nil, err + } + + roots.AppendCertsFromPEM(bs) + + cert, err := tls.LoadX509KeyPair(certFile, keyFile) + if err != nil { + return nil, err + } + + cfg := defaultTLSConfig() + + cfg.Certificates = []tls.Certificate{cert} + cfg.RootCAs = roots + cfg.ClientCAs = roots + + return cfg, nil +} diff --git a/connect/tls_test.go b/connect/tls_test.go new file mode 100644 index 000000000..0c99df3ad --- /dev/null +++ b/connect/tls_test.go @@ -0,0 +1,45 @@ +package connect + +import ( + "crypto/tls" + "testing" + + "github.com/stretchr/testify/require" +) + +func TestReloadableTLSConfig(t *testing.T) { + base := TestTLSConfig(t, "ca1", "web") + + c := NewReloadableTLSConfig(base) + + a := &TestAuther{ + Return: nil, + } + + // The dynamic config should be the one we loaded, but with the passed auther + expect := base + expect.VerifyPeerCertificate = a.Auth + require.Equal(t, base, c.TLSConfig(a)) + + // The server config should return same too for new connections + serverCfg := c.ServerTLSConfig() + require.NotNil(t, serverCfg.GetConfigForClient) + got, err := serverCfg.GetConfigForClient(&tls.ClientHelloInfo{}) + require.Nil(t, err) + require.Equal(t, base, got) + + // Now change the config as if we just rotated to a new CA + new := TestTLSConfig(t, "ca2", "web") + err = c.SetTLSConfig(new) + require.Nil(t, err) + + // The dynamic config should be the one we loaded (with same auther due to nil) + require.Equal(t, new, c.TLSConfig(nil)) + + // The server config should return same too for new connections + serverCfg = c.ServerTLSConfig() + require.NotNil(t, serverCfg.GetConfigForClient) + got, err = serverCfg.GetConfigForClient(&tls.ClientHelloInfo{}) + require.Nil(t, err) + require.Equal(t, new, got) +} diff --git a/proxy/config.go b/proxy/config.go new file mode 100644 index 000000000..a5958135a --- /dev/null +++ b/proxy/config.go @@ -0,0 +1,111 @@ +package proxy + +import ( + "io/ioutil" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/hcl" +) + +// Config is the publicly configurable state for an entire proxy instance. It's +// mostly used as the format for the local-file config mode which is mostly for +// dev/testing. In normal use, different parts of this config are pulled from +// different locations (e.g. command line, agent config endpoint, agent +// certificate endpoints). +type Config struct { + // ProxyID is the identifier for this proxy as registered in Consul. It's only + // guaranteed to be unique per agent. + ProxyID string `json:"proxy_id" hcl:"proxy_id"` + + // Token is the authentication token provided for queries to the local agent. + Token string `json:"token" hcl:"token"` + + // ProxiedServiceName is the name of the service this proxy is representing. + ProxiedServiceName string `json:"proxied_service_name" hcl:"proxied_service_name"` + + // ProxiedServiceNamespace is the namespace of the service this proxy is + // representing. + ProxiedServiceNamespace string `json:"proxied_service_namespace" hcl:"proxied_service_namespace"` + + // PublicListener configures the mTLS listener. + PublicListener PublicListenerConfig `json:"public_listener" hcl:"public_listener"` + + // Upstreams configures outgoing proxies for remote connect services. 
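+	// Each entry is described by UpstreamConfig in upstream.go.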
+	Upstreams []UpstreamConfig `json:"upstreams" hcl:"upstreams"`
+
+	// DevCAFile allows passing the file path to PEM encoded root certificate
+	// bundle to be used in development instead of the ones supplied by Connect.
+	DevCAFile string `json:"dev_ca_file" hcl:"dev_ca_file"`
+
+	// DevServiceCertFile allows passing the file path to PEM encoded service
+	// certificate (client and server) to be used in development instead of the
+	// ones supplied by Connect.
+	DevServiceCertFile string `json:"dev_service_cert_file" hcl:"dev_service_cert_file"`
+
+	// DevServiceKeyFile allows passing the file path to PEM encoded service
+	// private key to be used in development instead of the ones supplied by
+	// Connect.
+	DevServiceKeyFile string `json:"dev_service_key_file" hcl:"dev_service_key_file"`
+}
+
+// ConfigWatcher is a simple interface to allow dynamic configurations from
+// pluggable sources.
+type ConfigWatcher interface {
+	// Watch returns a channel that will deliver new Configs if something external
+	// provokes it.
+	Watch() <-chan *Config
+}
+
+// StaticConfigWatcher is a simple ConfigWatcher that delivers a static Config
+// once and then never changes it.
+type StaticConfigWatcher struct {
+	ch chan *Config
+}
+
+// NewStaticConfigWatcher returns a ConfigWatcher for a config that never
+// changes. It assumes only one "watcher" will ever call Watch. The config is
+// delivered on the first call but will never be delivered again to allow
+// callers to call repeatedly (e.g. select in a loop).
+func NewStaticConfigWatcher(cfg *Config) *StaticConfigWatcher {
+	sc := &StaticConfigWatcher{
+		// Buffer it so we can queue up the config for first delivery.
+		ch: make(chan *Config, 1),
+	}
+	sc.ch <- cfg
+	return sc
+}
+
+// Watch implements ConfigWatcher on a static configuration for compatibility.
+// It delivers the static Config on the channel once and then leaves it open.
+func (sc *StaticConfigWatcher) Watch() <-chan *Config {
+	return sc.ch
+}
+
+// ParseConfigFile parses proxy configuration from a file for local dev.
+func ParseConfigFile(filename string) (*Config, error) {
+	bs, err := ioutil.ReadFile(filename)
+	if err != nil {
+		return nil, err
+	}
+
+	var cfg Config
+
+	err = hcl.Unmarshal(bs, &cfg)
+	if err != nil {
+		return nil, err
+	}
+
+	return &cfg, nil
+}
+
+// AgentConfigWatcher watches the local Consul agent for proxy config changes.
+type AgentConfigWatcher struct {
+	client *api.Client
+}
+
+// Watch implements ConfigWatcher.
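+// NOTE: this is currently a stub; the returned channel is unbuffered and never
+// written to, so no config is delivered until the TODO below is implemented.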
+func (w *AgentConfigWatcher) Watch() <-chan *Config { + watch := make(chan *Config) + // TODO implement me + return watch +} diff --git a/proxy/config_test.go b/proxy/config_test.go new file mode 100644 index 000000000..89287d573 --- /dev/null +++ b/proxy/config_test.go @@ -0,0 +1,46 @@ +package proxy + +import ( + "testing" + + "github.com/stretchr/testify/require" +) + +func TestParseConfigFile(t *testing.T) { + cfg, err := ParseConfigFile("testdata/config-kitchensink.hcl") + require.Nil(t, err) + + expect := &Config{ + ProxyID: "foo", + Token: "11111111-2222-3333-4444-555555555555", + ProxiedServiceName: "web", + ProxiedServiceNamespace: "default", + PublicListener: PublicListenerConfig{ + BindAddress: ":9999", + LocalServiceAddress: "127.0.0.1:5000", + LocalConnectTimeoutMs: 1000, + HandshakeTimeoutMs: 5000, + }, + Upstreams: []UpstreamConfig{ + { + LocalBindAddress: "127.0.0.1:6000", + DestinationName: "db", + DestinationNamespace: "default", + DestinationType: "service", + ConnectTimeoutMs: 10000, + }, + { + LocalBindAddress: "127.0.0.1:6001", + DestinationName: "geo-cache", + DestinationNamespace: "default", + DestinationType: "prepared_query", + ConnectTimeoutMs: 10000, + }, + }, + DevCAFile: "connect/testdata/ca1-ca-consul-internal.cert.pem", + DevServiceCertFile: "connect/testdata/ca1-svc-web.cert.pem", + DevServiceKeyFile: "connect/testdata/ca1-svc-web.key.pem", + } + + require.Equal(t, expect, cfg) +} diff --git a/proxy/conn.go b/proxy/conn.go new file mode 100644 index 000000000..dfad81db7 --- /dev/null +++ b/proxy/conn.go @@ -0,0 +1,48 @@ +package proxy + +import ( + "io" + "net" + "sync/atomic" +) + +// Conn represents a single proxied TCP connection. +type Conn struct { + src, dst net.Conn + stopping int32 +} + +// NewConn returns a conn joining the two given net.Conn +func NewConn(src, dst net.Conn) *Conn { + return &Conn{ + src: src, + dst: dst, + stopping: 0, + } +} + +// Close closes both connections. +func (c *Conn) Close() { + atomic.StoreInt32(&c.stopping, 1) + c.src.Close() + c.dst.Close() +} + +// CopyBytes will continuously copy bytes in both directions between src and dst +// until either connection is closed. +func (c *Conn) CopyBytes() error { + defer c.Close() + + go func() { + // Need this since Copy is only guaranteed to stop when it's source reader + // (second arg) hits EOF or error but either conn might close first possibly + // causing this goroutine to exit but not the outer one. See TestSc + //defer c.Close() + io.Copy(c.dst, c.src) + }() + _, err := io.Copy(c.src, c.dst) + if atomic.LoadInt32(&c.stopping) == 1 { + return nil + } + return err +} diff --git a/proxy/conn_test.go b/proxy/conn_test.go new file mode 100644 index 000000000..ac907238d --- /dev/null +++ b/proxy/conn_test.go @@ -0,0 +1,119 @@ +package proxy + +import ( + "bufio" + "net" + "testing" + + "github.com/stretchr/testify/require" +) + +// testConnSetup listens on a random TCP port and passes the accepted net.Conn +// back to test code on returned channel. It then creates a source and +// destination Conn. 
And a cleanup func +func testConnSetup(t *testing.T) (net.Conn, net.Conn, func()) { + t.Helper() + + l, err := net.Listen("tcp", "localhost:0") + require.Nil(t, err) + + ch := make(chan net.Conn, 1) + go func(ch chan net.Conn) { + src, err := l.Accept() + require.Nil(t, err) + ch <- src + }(ch) + + dst, err := net.Dial("tcp", l.Addr().String()) + require.Nil(t, err) + + src := <-ch + + stopper := func() { + l.Close() + src.Close() + dst.Close() + } + + return src, dst, stopper +} + +func TestConn(t *testing.T) { + src, dst, stop := testConnSetup(t) + defer stop() + + c := NewConn(src, dst) + + retCh := make(chan error, 1) + go func() { + retCh <- c.CopyBytes() + }() + + srcR := bufio.NewReader(src) + dstR := bufio.NewReader(dst) + + _, err := src.Write([]byte("ping 1\n")) + require.Nil(t, err) + _, err = dst.Write([]byte("ping 2\n")) + require.Nil(t, err) + + got, err := dstR.ReadString('\n') + require.Equal(t, "ping 1\n", got) + + got, err = srcR.ReadString('\n') + require.Equal(t, "ping 2\n", got) + + _, err = src.Write([]byte("pong 1\n")) + require.Nil(t, err) + _, err = dst.Write([]byte("pong 2\n")) + require.Nil(t, err) + + got, err = dstR.ReadString('\n') + require.Equal(t, "pong 1\n", got) + + got, err = srcR.ReadString('\n') + require.Equal(t, "pong 2\n", got) + + c.Close() + + ret := <-retCh + require.Nil(t, ret, "Close() should not cause error return") +} + +func TestConnSrcClosing(t *testing.T) { + src, dst, stop := testConnSetup(t) + defer stop() + + c := NewConn(src, dst) + retCh := make(chan error, 1) + go func() { + retCh <- c.CopyBytes() + }() + + // If we close the src conn, we expect CopyBytes to return and src to be + // closed too. No good way to assert that the conn is closed really other than + // assume the retCh receive will hand unless CopyBytes exits and that + // CopyBytes defers Closing both. i.e. if this test doesn't time out it's + // good! + src.Close() + <-retCh +} + +func TestConnDstClosing(t *testing.T) { + src, dst, stop := testConnSetup(t) + defer stop() + + c := NewConn(src, dst) + retCh := make(chan error, 1) + go func() { + retCh <- c.CopyBytes() + }() + + // If we close the dst conn, we expect CopyBytes to return and src to be + // closed too. No good way to assert that the conn is closed really other than + // assume the retCh receive will hand unless CopyBytes exits and that + // CopyBytes defers Closing both. i.e. if this test doesn't time out it's + // good! + dst.Close() + <-retCh +} diff --git a/proxy/manager.go b/proxy/manager.go new file mode 100644 index 000000000..c22a1b7ff --- /dev/null +++ b/proxy/manager.go @@ -0,0 +1,140 @@ +package proxy + +import ( + "errors" + "log" + "os" +) + +var ( + // ErrExists is the error returned when adding a proxy that exists already. + ErrExists = errors.New("proxy with that name already exists") + // ErrNotExist is the error returned when removing a proxy that doesn't exist. + ErrNotExist = errors.New("proxy with that name doesn't exist") +) + +// Manager implements the logic for configuring and running a set of proxiers. +// Typically it's used to run one PublicListener and zero or more Upstreams. +type Manager struct { + ch chan managerCmd + + // stopped is used to signal the caller of StopAll when the run loop exits + // after stopping all runners. It's only closed. + stopped chan struct{} + + // runners holds the currently running instances. It should only by accessed + // from within the `run` goroutine. 
+ runners map[string]*Runner + + logger *log.Logger +} + +type managerCmd struct { + name string + p Proxier + errCh chan error +} + +// NewManager creates a manager of proxier instances. +func NewManager() *Manager { + return NewManagerWithLogger(log.New(os.Stdout, "", log.LstdFlags)) +} + +// NewManagerWithLogger creates a manager of proxier instances with the +// specified logger. +func NewManagerWithLogger(logger *log.Logger) *Manager { + m := &Manager{ + ch: make(chan managerCmd), + stopped: make(chan struct{}), + runners: make(map[string]*Runner), + logger: logger, + } + go m.run() + return m +} + +// RunProxier starts a new Proxier instance in the manager. It is safe to call +// from separate goroutines. If there is already a running proxy with the same +// name it returns ErrExists. +func (m *Manager) RunProxier(name string, p Proxier) error { + cmd := managerCmd{ + name: name, + p: p, + errCh: make(chan error), + } + m.ch <- cmd + return <-cmd.errCh +} + +// StopProxier stops a Proxier instance by name. It is safe to call from +// separate goroutines. If the instance with that name doesn't exist it returns +// ErrNotExist. +func (m *Manager) StopProxier(name string) error { + cmd := managerCmd{ + name: name, + p: nil, + errCh: make(chan error), + } + m.ch <- cmd + return <-cmd.errCh +} + +// StopAll shuts down the manager instance and stops all running proxies. It is +// safe to call from any goroutine but must only be called once. +func (m *Manager) StopAll() error { + close(m.ch) + <-m.stopped + return nil +} + +// run is the main manager processing loop. It keeps all actions in a single +// goroutine triggered by channel commands to keep it simple to reason about +// lifecycle events for each proxy. +func (m *Manager) run() { + defer close(m.stopped) + + // range over channel blocks and loops on each message received until channel + // is closed. + for cmd := range m.ch { + if cmd.p == nil { + m.remove(&cmd) + } else { + m.add(&cmd) + } + } + + // Shutting down, Stop all the runners + for _, r := range m.runners { + r.Stop() + } +} + +// add the named proxier instance and stop it. Should only be called from the +// run loop. +func (m *Manager) add(cmd *managerCmd) { + // Check existing + if _, ok := m.runners[cmd.name]; ok { + cmd.errCh <- ErrExists + return + } + + // Start new runner + r := NewRunnerWithLogger(cmd.name, cmd.p, m.logger) + m.runners[cmd.name] = r + go r.Listen() + cmd.errCh <- nil +} + +// remove the named proxier instance and stop it. Should only be called from the +// run loop. 
+func (m *Manager) remove(cmd *managerCmd) { + // Fetch proxier by name + r, ok := m.runners[cmd.name] + if !ok { + cmd.errCh <- ErrNotExist + return + } + err := r.Stop() + delete(m.runners, cmd.name) + cmd.errCh <- err +} diff --git a/proxy/manager_test.go b/proxy/manager_test.go new file mode 100644 index 000000000..d4fa8c5b4 --- /dev/null +++ b/proxy/manager_test.go @@ -0,0 +1,76 @@ +package proxy + +import ( + "fmt" + "net" + "testing" + + "github.com/stretchr/testify/require" +) + +func TestManager(t *testing.T) { + m := NewManager() + + addrs := TestLocalBindAddrs(t, 3) + + for i := 0; i < len(addrs); i++ { + name := fmt.Sprintf("proxier-%d", i) + // Run proxy + err := m.RunProxier(name, &TestProxier{ + Addr: addrs[i], + Prefix: name + ": ", + }) + require.Nil(t, err) + } + + // Make sure each one is echoing correctly now all are running + for i := 0; i < len(addrs); i++ { + conn, err := net.Dial("tcp", addrs[i]) + require.Nil(t, err) + TestEchoConn(t, conn, fmt.Sprintf("proxier-%d: ", i)) + conn.Close() + } + + // Stop first proxier + err := m.StopProxier("proxier-0") + require.Nil(t, err) + + // We should fail to dial it now. Note that Runner.Stop is synchronous so + // there should be a strong guarantee that it's stopped listening by now. + _, err = net.Dial("tcp", addrs[0]) + require.NotNil(t, err) + + // Rest of proxiers should still be running + for i := 1; i < len(addrs); i++ { + conn, err := net.Dial("tcp", addrs[i]) + require.Nil(t, err) + TestEchoConn(t, conn, fmt.Sprintf("proxier-%d: ", i)) + conn.Close() + } + + // Stop non-existent proxier should fail + err = m.StopProxier("foo") + require.Equal(t, ErrNotExist, err) + + // Add already-running proxier should fail + err = m.RunProxier("proxier-1", &TestProxier{}) + require.Equal(t, ErrExists, err) + + // But rest should stay running + for i := 1; i < len(addrs); i++ { + conn, err := net.Dial("tcp", addrs[i]) + require.Nil(t, err) + TestEchoConn(t, conn, fmt.Sprintf("proxier-%d: ", i)) + conn.Close() + } + + // StopAll should stop everything + err = m.StopAll() + require.Nil(t, err) + + // Verify failures + for i := 0; i < len(addrs); i++ { + _, err = net.Dial("tcp", addrs[i]) + require.NotNilf(t, err, "proxier-%d should not be running", i) + } +} diff --git a/proxy/proxier.go b/proxy/proxier.go new file mode 100644 index 000000000..23940c6ad --- /dev/null +++ b/proxy/proxier.go @@ -0,0 +1,32 @@ +package proxy + +import ( + "errors" + "net" +) + +// ErrStopped is returned for operations on a proxy that is stopped +var ErrStopped = errors.New("stopped") + +// ErrStopping is returned for operations on a proxy that is stopping +var ErrStopping = errors.New("stopping") + +// Proxier is an interface for managing different proxy implementations in a +// standard way. We have at least two different types of Proxier implementations +// needed: one for the incoming mTLS -> local proxy and another for each +// "upstream" service the app needs to talk out to (which listens locally and +// performs service discovery to find a suitable remote service). +type Proxier interface { + // Listener returns a net.Listener that is open and ready for use, the Proxy + // manager will arrange accepting new connections from it and passing them to + // the handler method. + Listener() (net.Listener, error) + + // HandleConn is called for each incoming connection accepted by the listener. + // It is called in it's own goroutine and should run until it hits an error. 
+ // When stopping the Proxier, the manager will simply close the conn provided + // and expects an error to be eventually returned. Any time spent not blocked + // on the passed conn (for example doing service discovery) should therefore + // be time-bound so that shutdown can't stall forever. + HandleConn(conn net.Conn) error +} diff --git a/proxy/proxy.go b/proxy/proxy.go new file mode 100644 index 000000000..a293466b8 --- /dev/null +++ b/proxy/proxy.go @@ -0,0 +1,112 @@ +package proxy + +import ( + "context" + "log" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/connect" +) + +// Proxy implements the built-in connect proxy. +type Proxy struct { + proxyID, token string + + connect connect.Client + manager *Manager + cfgWatch ConfigWatcher + cfg *Config + + logger *log.Logger +} + +// NewFromConfigFile returns a Proxy instance configured just from a local file. +// This is intended mostly for development and bypasses the normal mechanisms +// for fetching config and certificates from the local agent. +func NewFromConfigFile(client *api.Client, filename string, + logger *log.Logger) (*Proxy, error) { + cfg, err := ParseConfigFile(filename) + if err != nil { + return nil, err + } + + connect, err := connect.NewInsecureDevClientWithLocalCerts(client, + cfg.DevCAFile, cfg.DevServiceCertFile, cfg.DevServiceKeyFile) + if err != nil { + return nil, err + } + + p := &Proxy{ + proxyID: cfg.ProxyID, + connect: connect, + manager: NewManagerWithLogger(logger), + cfgWatch: NewStaticConfigWatcher(cfg), + logger: logger, + } + return p, nil +} + +// New returns a Proxy with the given id, consuming the provided (configured) +// agent. It is ready to Run(). +func New(client *api.Client, proxyID string, logger *log.Logger) (*Proxy, error) { + p := &Proxy{ + proxyID: proxyID, + connect: connect.NewClient(client), + manager: NewManagerWithLogger(logger), + cfgWatch: &AgentConfigWatcher{client: client}, + logger: logger, + } + return p, nil +} + +// Run the proxy instance until a fatal error occurs or ctx is cancelled. 
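+//
+// An illustrative caller-side sketch (client, logger and the config file path
+// are assumptions for the example; error handling elided):
+//
+//   p, _ := NewFromConfigFile(client, "proxy.hcl", logger)
+//   ctx, cancel := context.WithCancel(context.Background())
+//   defer cancel()
+//   _ = p.Run(ctx)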
+func (p *Proxy) Run(ctx context.Context) error { + defer p.manager.StopAll() + + // Watch for config changes (initial setup happens on first "change") + for { + select { + case newCfg := <-p.cfgWatch.Watch(): + p.logger.Printf("[DEBUG] got new config") + if p.cfg == nil { + // Initial setup + err := p.startPublicListener(ctx, newCfg.PublicListener) + if err != nil { + return err + } + } + + // TODO add/remove upstreams properly based on a diff with current + for _, uc := range newCfg.Upstreams { + uc.Client = p.connect + uc.logger = p.logger + err := p.manager.RunProxier(uc.String(), NewUpstream(uc)) + if err == ErrExists { + continue + } + if err != nil { + p.logger.Printf("[ERR] failed to start upstream %s: %s", uc.String(), + err) + } + } + p.cfg = newCfg + + case <-ctx.Done(): + return nil + } + } +} + +func (p *Proxy) startPublicListener(ctx context.Context, + cfg PublicListenerConfig) error { + + // Get TLS creds + tlsCfg, err := p.connect.ServerTLSConfig() + if err != nil { + return err + } + cfg.TLSConfig = tlsCfg + + cfg.logger = p.logger + return p.manager.RunProxier("public_listener", NewPublicListener(cfg)) +} diff --git a/proxy/public_listener.go b/proxy/public_listener.go new file mode 100644 index 000000000..1942992cf --- /dev/null +++ b/proxy/public_listener.go @@ -0,0 +1,119 @@ +package proxy + +import ( + "crypto/tls" + "fmt" + "log" + "net" + "os" + "time" +) + +// PublicListener provides an implementation of Proxier that listens for inbound +// mTLS connections, authenticates them with the local agent, and if successful +// forwards them to the locally configured app. +type PublicListener struct { + cfg *PublicListenerConfig +} + +// PublicListenerConfig contains the most basic parameters needed to start the +// proxy. +// +// Note that the tls.Configs here are expected to be "dynamic" in the sense that +// they are expected to use `GetConfigForClient` (added in go 1.8) to return +// dynamic config per connection if required. +type PublicListenerConfig struct { + // BindAddress is the host:port the public mTLS listener will bind to. + BindAddress string `json:"bind_address" hcl:"bind_address"` + + // LocalServiceAddress is the host:port for the proxied application. This + // should be on loopback or otherwise protected as it's plain TCP. + LocalServiceAddress string `json:"local_service_address" hcl:"local_service_address"` + + // TLSConfig config is used for the mTLS listener. + TLSConfig *tls.Config + + // LocalConnectTimeout is the timeout for establishing connections with the + // local backend. Defaults to 1000 (1s). + LocalConnectTimeoutMs int `json:"local_connect_timeout_ms" hcl:"local_connect_timeout_ms"` + + // HandshakeTimeout is the timeout for incoming mTLS clients to complete a + // handshake. Setting this low avoids DOS by malicious clients holding + // resources open. Defaults to 10000 (10s). + HandshakeTimeoutMs int `json:"handshake_timeout_ms" hcl:"handshake_timeout_ms"` + + logger *log.Logger +} + +func (plc *PublicListenerConfig) applyDefaults() { + if plc.LocalConnectTimeoutMs == 0 { + plc.LocalConnectTimeoutMs = 1000 + } + if plc.HandshakeTimeoutMs == 0 { + plc.HandshakeTimeoutMs = 10000 + } + if plc.logger == nil { + plc.logger = log.New(os.Stdout, "", log.LstdFlags) + } +} + +// NewPublicListener returns a proxy instance with the given config. 
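+//
+// An illustrative construction (addresses mirror the testdata config; tlsCfg is
+// assumed to come from the connect Client's ServerTLSConfig):
+//
+//   pl := NewPublicListener(PublicListenerConfig{
+//       BindAddress:         ":9999",
+//       LocalServiceAddress: "127.0.0.1:5000",
+//       TLSConfig:           tlsCfg,
+//   })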
+func NewPublicListener(cfg PublicListenerConfig) *PublicListener { + p := &PublicListener{ + cfg: &cfg, + } + p.cfg.applyDefaults() + return p +} + +// Listener implements Proxier +func (p *PublicListener) Listener() (net.Listener, error) { + l, err := net.Listen("tcp", p.cfg.BindAddress) + if err != nil { + return nil, err + } + + return tls.NewListener(l, p.cfg.TLSConfig), nil +} + +// HandleConn implements Proxier +func (p *PublicListener) HandleConn(conn net.Conn) error { + defer conn.Close() + tlsConn, ok := conn.(*tls.Conn) + if !ok { + return fmt.Errorf("non-TLS conn") + } + + // Setup Handshake timer + to := time.Duration(p.cfg.HandshakeTimeoutMs) * time.Millisecond + err := tlsConn.SetDeadline(time.Now().Add(to)) + if err != nil { + return err + } + + // Force TLS handshake so that abusive clients can't hold resources open + err = tlsConn.Handshake() + if err != nil { + return err + } + + // Handshake OK, clear the deadline + err = tlsConn.SetDeadline(time.Time{}) + if err != nil { + return err + } + + // Huzzah, open a connection to the backend and let them talk + // TODO maybe add a connection pool here? + to = time.Duration(p.cfg.LocalConnectTimeoutMs) * time.Millisecond + dst, err := net.DialTimeout("tcp", p.cfg.LocalServiceAddress, to) + if err != nil { + return fmt.Errorf("failed dialling local app: %s", err) + } + + p.cfg.logger.Printf("[DEBUG] accepted connection from %s", conn.RemoteAddr()) + + // Hand conn and dst over to Conn to manage the byte copying. + c := NewConn(conn, dst) + return c.CopyBytes() +} diff --git a/proxy/public_listener_test.go b/proxy/public_listener_test.go new file mode 100644 index 000000000..83e84d658 --- /dev/null +++ b/proxy/public_listener_test.go @@ -0,0 +1,38 @@ +package proxy + +import ( + "crypto/tls" + "testing" + + "github.com/hashicorp/consul/connect" + "github.com/stretchr/testify/require" +) + +func TestPublicListener(t *testing.T) { + addrs := TestLocalBindAddrs(t, 2) + + cfg := PublicListenerConfig{ + BindAddress: addrs[0], + LocalServiceAddress: addrs[1], + HandshakeTimeoutMs: 100, + LocalConnectTimeoutMs: 100, + TLSConfig: connect.TestTLSConfig(t, "ca1", "web"), + } + + testApp, err := NewTestTCPServer(t, cfg.LocalServiceAddress) + require.Nil(t, err) + defer testApp.Close() + + p := NewPublicListener(cfg) + + // Run proxy + r := NewRunner("test", p) + go r.Listen() + defer r.Stop() + + // Proxy and backend are running, play the part of a TLS client using same + // cert for now. + conn, err := tls.Dial("tcp", cfg.BindAddress, connect.TestTLSConfig(t, "ca1", "web")) + require.Nil(t, err) + TestEchoConn(t, conn, "") +} diff --git a/proxy/runner.go b/proxy/runner.go new file mode 100644 index 000000000..b559b22b7 --- /dev/null +++ b/proxy/runner.go @@ -0,0 +1,118 @@ +package proxy + +import ( + "log" + "net" + "os" + "sync" + "sync/atomic" +) + +// Runner manages the lifecycle of one Proxier. +type Runner struct { + name string + p Proxier + + // Stopping is if a flag that is updated and read atomically + stopping int32 + stopCh chan struct{} + // wg is used to signal back to Stop when all goroutines have stopped + wg sync.WaitGroup + + logger *log.Logger +} + +// NewRunner returns a Runner ready to Listen. +func NewRunner(name string, p Proxier) *Runner { + return NewRunnerWithLogger(name, p, log.New(os.Stdout, "", log.LstdFlags)) +} + +// NewRunnerWithLogger returns a Runner ready to Listen using the specified +// log.Logger. 
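+//
+// Runners are typically driven the way the package tests drive them:
+//
+//   r := NewRunner("public_listener", p)
+//   go r.Listen()
+//   defer r.Stop()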
+func NewRunnerWithLogger(name string, p Proxier, logger *log.Logger) *Runner { + return &Runner{ + name: name, + p: p, + stopCh: make(chan struct{}), + logger: logger, + } +} + +// Listen starts the proxier instance. It blocks until a fatal error occurs or +// Stop() is called. +func (r *Runner) Listen() error { + if atomic.LoadInt32(&r.stopping) == 1 { + return ErrStopped + } + + l, err := r.p.Listener() + if err != nil { + return err + } + r.logger.Printf("[INFO] proxy: %s listening on %s", r.name, l.Addr().String()) + + // Run goroutine that will close listener on stop + go func() { + <-r.stopCh + l.Close() + r.logger.Printf("[INFO] proxy: %s shutdown", r.name) + }() + + // Add one for the accept loop + r.wg.Add(1) + defer r.wg.Done() + + for { + conn, err := l.Accept() + if err != nil { + if atomic.LoadInt32(&r.stopping) == 1 { + return nil + } + return err + } + + go r.handle(conn) + } + + return nil +} + +func (r *Runner) handle(conn net.Conn) { + r.wg.Add(1) + defer r.wg.Done() + + // Start a goroutine that will watch for the Runner stopping and close the + // conn, or watch for the Proxier closing (e.g. because other end hung up) and + // stop the goroutine to avoid leaks + doneCh := make(chan struct{}) + defer close(doneCh) + + go func() { + select { + case <-r.stopCh: + r.logger.Printf("[DEBUG] proxy: %s: terminating conn", r.name) + conn.Close() + return + case <-doneCh: + // Connection is already closed, this goroutine not needed any more + return + } + }() + + err := r.p.HandleConn(conn) + if err != nil { + r.logger.Printf("[DEBUG] proxy: %s: connection terminated: %s", r.name, err) + } else { + r.logger.Printf("[DEBUG] proxy: %s: connection terminated", r.name) + } +} + +// Stop stops the Listener and closes any active connections immediately. +func (r *Runner) Stop() error { + old := atomic.SwapInt32(&r.stopping, 1) + if old == 0 { + close(r.stopCh) + } + r.wg.Wait() + return nil +} diff --git a/proxy/testdata/config-kitchensink.hcl b/proxy/testdata/config-kitchensink.hcl new file mode 100644 index 000000000..766928353 --- /dev/null +++ b/proxy/testdata/config-kitchensink.hcl @@ -0,0 +1,36 @@ +# Example proxy config with everything specified + +proxy_id = "foo" +token = "11111111-2222-3333-4444-555555555555" + +proxied_service_name = "web" +proxied_service_namespace = "default" + +# Assumes running consul in dev mode from the repo root... 
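+# The dev_* entries below override the CA and service certificates that would
+# normally be supplied by Connect (see the Dev* fields in proxy/config.go).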
+dev_ca_file = "connect/testdata/ca1-ca-consul-internal.cert.pem" +dev_service_cert_file = "connect/testdata/ca1-svc-web.cert.pem" +dev_service_key_file = "connect/testdata/ca1-svc-web.key.pem" + +public_listener { + bind_address = ":9999" + local_service_address = "127.0.0.1:5000" + local_connect_timeout_ms = 1000 + handshake_timeout_ms = 5000 +} + +upstreams = [ + { + local_bind_address = "127.0.0.1:6000" + destination_name = "db" + destination_namespace = "default" + destination_type = "service" + connect_timeout_ms = 10000 + }, + { + local_bind_address = "127.0.0.1:6001" + destination_name = "geo-cache" + destination_namespace = "default" + destination_type = "prepared_query" + connect_timeout_ms = 10000 + } +] diff --git a/proxy/testing.go b/proxy/testing.go new file mode 100644 index 000000000..bd132b77f --- /dev/null +++ b/proxy/testing.go @@ -0,0 +1,170 @@ +package proxy + +import ( + "context" + "crypto/tls" + "fmt" + "io" + "log" + "net" + "sync/atomic" + + "github.com/hashicorp/consul/lib/freeport" + "github.com/mitchellh/go-testing-interface" + "github.com/stretchr/testify/require" +) + +// TestLocalBindAddrs returns n localhost address:port strings with free ports +// for binding test listeners to. +func TestLocalBindAddrs(t testing.T, n int) []string { + ports := freeport.GetT(t, n) + addrs := make([]string, n) + for i, p := range ports { + addrs[i] = fmt.Sprintf("localhost:%d", p) + } + return addrs +} + +// TestTCPServer is a simple TCP echo server for use during tests. +type TestTCPServer struct { + l net.Listener + stopped int32 + accepted, closed, active int32 +} + +// NewTestTCPServer opens as a listening socket on the given address and returns +// a TestTCPServer serving requests to it. The server is already started and can +// be stopped by calling Close(). +func NewTestTCPServer(t testing.T, addr string) (*TestTCPServer, error) { + l, err := net.Listen("tcp", addr) + if err != nil { + return nil, err + } + log.Printf("test tcp server listening on %s", addr) + s := &TestTCPServer{ + l: l, + } + go s.accept() + return s, nil +} + +// Close stops the server +func (s *TestTCPServer) Close() { + atomic.StoreInt32(&s.stopped, 1) + if s.l != nil { + s.l.Close() + } +} + +func (s *TestTCPServer) accept() error { + for { + conn, err := s.l.Accept() + if err != nil { + if atomic.LoadInt32(&s.stopped) == 1 { + log.Printf("test tcp echo server %s stopped", s.l.Addr()) + return nil + } + log.Printf("test tcp echo server %s failed: %s", s.l.Addr(), err) + return err + } + + atomic.AddInt32(&s.accepted, 1) + atomic.AddInt32(&s.active, 1) + + go func(c net.Conn) { + io.Copy(c, c) + atomic.AddInt32(&s.closed, 1) + atomic.AddInt32(&s.active, -1) + }(conn) + } +} + +// TestEchoConn attempts to write some bytes to conn and expects to read them +// back within a short timeout (10ms). If prefix is not empty we expect it to be +// poresent at the start of all echoed responses (for example to distinguish +// between multiple echo server instances). +func TestEchoConn(t testing.T, conn net.Conn, prefix string) { + t.Helper() + + // Write some bytes and read them back + n, err := conn.Write([]byte("Hello World")) + require.Equal(t, 11, n) + require.Nil(t, err) + + expectLen := 11 + len(prefix) + + buf := make([]byte, expectLen) + // read until our buffer is full - it might be separate packets if prefix is + // in use. 
+ got := 0 + for got < expectLen { + n, err = conn.Read(buf[got:]) + require.Nil(t, err) + got += n + } + require.Equal(t, expectLen, got) + require.Equal(t, prefix+"Hello World", string(buf[:])) +} + +// TestConnectClient is a testing mock that implements connect.Client but +// stubs the methods to make testing simpler. +type TestConnectClient struct { + Server *TestTCPServer + TLSConfig *tls.Config + Calls []callTuple +} +type callTuple struct { + typ, ns, name string +} + +// ServerTLSConfig implements connect.Client +func (c *TestConnectClient) ServerTLSConfig() (*tls.Config, error) { + return c.TLSConfig, nil +} + +// DialService implements connect.Client +func (c *TestConnectClient) DialService(ctx context.Context, namespace, + name string) (net.Conn, error) { + + c.Calls = append(c.Calls, callTuple{"service", namespace, name}) + + // Actually returning a vanilla TCP conn not a TLS one but the caller + // shouldn't care for tests since this interface should hide all the TLS + // config and verification. + return net.Dial("tcp", c.Server.l.Addr().String()) +} + +// DialPreparedQuery implements connect.Client +func (c *TestConnectClient) DialPreparedQuery(ctx context.Context, namespace, + name string) (net.Conn, error) { + + c.Calls = append(c.Calls, callTuple{"prepared_query", namespace, name}) + + // Actually returning a vanilla TCP conn not a TLS one but the caller + // shouldn't care for tests since this interface should hide all the TLS + // config and verification. + return net.Dial("tcp", c.Server.l.Addr().String()) +} + +// TestProxier is a simple Proxier instance that can be used in tests. +type TestProxier struct { + // Addr to listen on + Addr string + // Prefix to write first before echoing on new connections + Prefix string +} + +// Listener implements Proxier +func (p *TestProxier) Listener() (net.Listener, error) { + return net.Listen("tcp", p.Addr) +} + +// HandleConn implements Proxier +func (p *TestProxier) HandleConn(conn net.Conn) error { + _, err := conn.Write([]byte(p.Prefix)) + if err != nil { + return err + } + _, err = io.Copy(conn, conn) + return err +} diff --git a/proxy/upstream.go b/proxy/upstream.go new file mode 100644 index 000000000..1101624be --- /dev/null +++ b/proxy/upstream.go @@ -0,0 +1,261 @@ +package proxy + +import ( + "context" + "fmt" + "log" + "net" + "os" + "time" + + "github.com/hashicorp/consul/connect" +) + +// Upstream provides an implementation of Proxier that listens for inbound TCP +// connections on the private network shared with the proxied application +// (typically localhost). For each accepted connection from the app, it uses the +// connect.Client to discover an instance and connect over mTLS. +type Upstream struct { + cfg *UpstreamConfig +} + +// UpstreamConfig configures the upstream +type UpstreamConfig struct { + // Client is the connect client to perform discovery with + Client connect.Client + + // LocalAddress is the host:port to listen on for local app connections. + LocalBindAddress string `json:"local_bind_address" hcl:"local_bind_address,attr"` + + // DestinationName is the service name of the destination. + DestinationName string `json:"destination_name" hcl:"destination_name,attr"` + + // DestinationNamespace is the namespace of the destination. + DestinationNamespace string `json:"destination_namespace" hcl:"destination_namespace,attr"` + + // DestinationType determines which service discovery method is used to find a + // candidate instance to connect to. 
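+	// Valid values are "service" and "prepared_query" (see discoverAndDial).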
+ DestinationType string `json:"destination_type" hcl:"destination_type,attr"` + + // ConnectTimeout is the timeout for establishing connections with the remote + // service instance. Defaults to 10,000 (10s). + ConnectTimeoutMs int `json:"connect_timeout_ms" hcl:"connect_timeout_ms,attr"` + + logger *log.Logger +} + +func (uc *UpstreamConfig) applyDefaults() { + if uc.ConnectTimeoutMs == 0 { + uc.ConnectTimeoutMs = 10000 + } + if uc.logger == nil { + uc.logger = log.New(os.Stdout, "", log.LstdFlags) + } +} + +// String returns a string that uniquely identifies the Upstream. Used for +// identifying the upstream in log output and map keys. +func (uc *UpstreamConfig) String() string { + return fmt.Sprintf("%s->%s:%s/%s", uc.LocalBindAddress, uc.DestinationType, + uc.DestinationNamespace, uc.DestinationName) +} + +// NewUpstream returns an outgoing proxy instance with the given config. +func NewUpstream(cfg UpstreamConfig) *Upstream { + u := &Upstream{ + cfg: &cfg, + } + u.cfg.applyDefaults() + return u +} + +// String returns a string that uniquely identifies the Upstream. Used for +// identifying the upstream in log output and map keys. +func (u *Upstream) String() string { + return u.cfg.String() +} + +// Listener implements Proxier +func (u *Upstream) Listener() (net.Listener, error) { + return net.Listen("tcp", u.cfg.LocalBindAddress) +} + +// HandleConn implements Proxier +func (u *Upstream) HandleConn(conn net.Conn) error { + defer conn.Close() + + // Discover destination instance + dst, err := u.discoverAndDial() + if err != nil { + return err + } + + // Hand conn and dst over to Conn to manage the byte copying. + c := NewConn(conn, dst) + return c.CopyBytes() +} + +func (u *Upstream) discoverAndDial() (net.Conn, error) { + to := time.Duration(u.cfg.ConnectTimeoutMs) * time.Millisecond + ctx, cancel := context.WithTimeout(context.Background(), to) + defer cancel() + + switch u.cfg.DestinationType { + case "service": + return u.cfg.Client.DialService(ctx, u.cfg.DestinationNamespace, + u.cfg.DestinationName) + + case "prepared_query": + return u.cfg.Client.DialPreparedQuery(ctx, u.cfg.DestinationNamespace, + u.cfg.DestinationName) + + default: + return nil, fmt.Errorf("invalid destination type %s", u.cfg.DestinationType) + } +} + +/* +// Upstream represents a service that the proxied application needs to connect +// out to. It provides a dedication local TCP listener (usually listening only +// on loopback) and forwards incoming connections to the proxy to handle. +type Upstream struct { + cfg *UpstreamConfig + wg sync.WaitGroup + + proxy *Proxy + fatalErr error +} + +// NewUpstream creates an upstream ready to attach to a proxy instance with +// Proxy.AddUpstream. An Upstream can only be attached to single Proxy instance +// at once. +func NewUpstream(p *Proxy, cfg *UpstreamConfig) *Upstream { + return &Upstream{ + cfg: cfg, + proxy: p, + shutdown: make(chan struct{}), + } +} + +// UpstreamConfig configures the upstream +type UpstreamConfig struct { + // LocalAddress is the host:port to listen on for local app connections. + LocalAddress string + + // DestinationName is the service name of the destination. + DestinationName string + + // DestinationNamespace is the namespace of the destination. + DestinationNamespace string + + // DestinationType determines which service discovery method is used to find a + // candidate instance to connect to. + DestinationType string +} + +// String returns a string representation for the upstream for debugging or +// use as a unique key. 
+func (uc *UpstreamConfig) String() string { + return fmt.Sprintf("%s->%s:%s/%s", uc.LocalAddress, uc.DestinationType, + uc.DestinationNamespace, uc.DestinationName) +} + +func (u *Upstream) listen() error { + l, err := net.Listen("tcp", u.cfg.LocalAddress) + if err != nil { + u.fatal(err) + return + } + + for { + conn, err := l.Accept() + if err != nil { + return err + } + + go u.discoverAndConnect(conn) + } +} + +func (u *Upstream) discoverAndConnect(src net.Conn) { + // First, we need an upstream instance from Consul to connect to + dstAddrs, err := u.discoverInstances() + if err != nil { + u.fatal(fmt.Errorf("failed to discover upstream instances: %s", err)) + return + } + + if len(dstAddrs) < 1 { + log.Printf("[INFO] no instances found for %s", len(dstAddrs), u) + } + + // Attempt connection to first one that works + // TODO: configurable number/deadline? + for idx, addr := range dstAddrs { + err := u.proxy.startProxyingConn(src, addr, false) + if err != nil { + log.Printf("[INFO] failed to connect to %s: %s (%d of %d)", addr, err, + idx+1, len(dstAddrs)) + continue + } + return + } + + log.Printf("[INFO] failed to connect to all %d instances for %s", + len(dstAddrs), u) +} + +func (u *Upstream) discoverInstances() ([]string, error) { + switch u.cfg.DestinationType { + case "service": + svcs, _, err := u.cfg.Consul.Health().Service(u.cfg.DestinationName, "", + true, nil) + if err != nil { + return nil, err + } + + addrs := make([]string, len(svcs)) + + // Shuffle order as we go since health endpoint doesn't + perm := rand.Perm(len(addrs)) + for i, se := range svcs { + // Pick location in output array based on next permutation position + j := perm[i] + addrs[j] = fmt.Sprintf("%s:%d", se.Service.Address, se.Service.Port) + } + + return addrs, nil + + case "prepared_query": + pqr, _, err := u.cfg.Consul.PreparedQuery().Execute(u.cfg.DestinationName, + nil) + if err != nil { + return nil, err + } + + addrs := make([]string, 0, len(svcs)) + for _, se := range pqr.Nodes { + addrs = append(addrs, fmt.Sprintf("%s:%d", se.Service.Address, + se.Service.Port)) + } + + // PreparedQuery execution already shuffles the result + return addrs, nil + + default: + u.fatal(fmt.Errorf("invalid destination type %s", u.cfg.DestinationType)) + } +} + +func (u *Upstream) fatal(err Error) { + log.Printf("[ERROR] upstream %s stopping on error: %s", u.cfg.LocalAddress, + err) + u.fatalErr = err +} + +// String returns a string representation for the upstream for debugging or +// use as a unique key. 
+func (u *Upstream) String() string { + return u.cfg.String() +} +*/ diff --git a/proxy/upstream_test.go b/proxy/upstream_test.go new file mode 100644 index 000000000..79bca0136 --- /dev/null +++ b/proxy/upstream_test.go @@ -0,0 +1,75 @@ +package proxy + +import ( + "net" + "testing" + + "github.com/hashicorp/consul/connect" + "github.com/stretchr/testify/require" +) + +func TestUpstream(t *testing.T) { + tests := []struct { + name string + cfg UpstreamConfig + }{ + { + name: "service", + cfg: UpstreamConfig{ + DestinationType: "service", + DestinationNamespace: "default", + DestinationName: "db", + ConnectTimeoutMs: 100, + }, + }, + { + name: "prepared_query", + cfg: UpstreamConfig{ + DestinationType: "prepared_query", + DestinationNamespace: "default", + DestinationName: "geo-db", + ConnectTimeoutMs: 100, + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + addrs := TestLocalBindAddrs(t, 2) + + testApp, err := NewTestTCPServer(t, addrs[0]) + require.Nil(t, err) + defer testApp.Close() + + // Create mock client that will "discover" our test tcp server as a target and + // skip TLS altogether. + client := &TestConnectClient{ + Server: testApp, + TLSConfig: connect.TestTLSConfig(t, "ca1", "web"), + } + + // Override cfg params + tt.cfg.LocalBindAddress = addrs[1] + tt.cfg.Client = client + + u := NewUpstream(tt.cfg) + + // Run proxy + r := NewRunner("test", u) + go r.Listen() + defer r.Stop() + + // Proxy and fake remote service are running, play the part of the app + // connecting to a remote connect service over TCP. + conn, err := net.Dial("tcp", tt.cfg.LocalBindAddress) + require.Nil(t, err) + TestEchoConn(t, conn, "") + + // Validate that discovery actually was called as we expected + require.Len(t, client.Calls, 1) + require.Equal(t, tt.cfg.DestinationType, client.Calls[0].typ) + require.Equal(t, tt.cfg.DestinationNamespace, client.Calls[0].ns) + require.Equal(t, tt.cfg.DestinationName, client.Calls[0].name) + }) + } +} From 2d6a2ce1e3c5c62c13cfeac96aeabd03446ad301 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 29 Mar 2018 16:25:11 +0100 Subject: [PATCH 110/539] connect.Service based implementation after review feedback. 
--- agent/connect/testing_ca.go | 108 ++--- agent/connect/testing_ca_test.go | 15 +- connect/auth.go | 43 -- connect/certgen/certgen.go | 86 ++++ connect/client.go | 436 +++++++++--------- connect/client_test.go | 238 +++++----- connect/example_test.go | 53 +++ connect/resolver.go | 131 ++++++ connect/resolver_test.go | 164 +++++++ connect/service.go | 185 ++++++++ connect/service_test.go | 105 +++++ .../testdata/ca1-ca-consul-internal.cert.pem | 14 - .../testdata/ca1-ca-consul-internal.key.pem | 5 - connect/testdata/ca1-svc-cache.cert.pem | 14 - connect/testdata/ca1-svc-cache.key.pem | 5 - connect/testdata/ca1-svc-db.cert.pem | 13 - connect/testdata/ca1-svc-db.key.pem | 5 - connect/testdata/ca1-svc-web.cert.pem | 13 - connect/testdata/ca1-svc-web.key.pem | 5 - connect/testdata/ca2-ca-vault.cert.pem | 14 - connect/testdata/ca2-ca-vault.key.pem | 5 - connect/testdata/ca2-svc-cache.cert.pem | 13 - connect/testdata/ca2-svc-cache.key.pem | 5 - connect/testdata/ca2-svc-db.cert.pem | 13 - connect/testdata/ca2-svc-db.key.pem | 5 - connect/testdata/ca2-svc-web.cert.pem | 13 - connect/testdata/ca2-svc-web.key.pem | 5 - connect/testdata/ca2-xc-by-ca1.cert.pem | 14 - connect/testdata/mkcerts.go | 243 ---------- connect/testing.go | 162 ++++++- connect/tls.go | 120 +++-- connect/tls_test.go | 114 +++-- 32 files changed, 1417 insertions(+), 947 deletions(-) delete mode 100644 connect/auth.go create mode 100644 connect/certgen/certgen.go create mode 100644 connect/example_test.go create mode 100644 connect/resolver.go create mode 100644 connect/resolver_test.go create mode 100644 connect/service.go create mode 100644 connect/service_test.go delete mode 100644 connect/testdata/ca1-ca-consul-internal.cert.pem delete mode 100644 connect/testdata/ca1-ca-consul-internal.key.pem delete mode 100644 connect/testdata/ca1-svc-cache.cert.pem delete mode 100644 connect/testdata/ca1-svc-cache.key.pem delete mode 100644 connect/testdata/ca1-svc-db.cert.pem delete mode 100644 connect/testdata/ca1-svc-db.key.pem delete mode 100644 connect/testdata/ca1-svc-web.cert.pem delete mode 100644 connect/testdata/ca1-svc-web.key.pem delete mode 100644 connect/testdata/ca2-ca-vault.cert.pem delete mode 100644 connect/testdata/ca2-ca-vault.key.pem delete mode 100644 connect/testdata/ca2-svc-cache.cert.pem delete mode 100644 connect/testdata/ca2-svc-cache.key.pem delete mode 100644 connect/testdata/ca2-svc-db.cert.pem delete mode 100644 connect/testdata/ca2-svc-db.key.pem delete mode 100644 connect/testdata/ca2-svc-web.cert.pem delete mode 100644 connect/testdata/ca2-svc-web.key.pem delete mode 100644 connect/testdata/ca2-xc-by-ca1.cert.pem delete mode 100644 connect/testdata/mkcerts.go diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index a2f711763..3fbcf2e02 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -29,7 +29,7 @@ const testClusterID = "11111111-2222-3333-4444-555555555555" // testCACounter is just an atomically incremented counter for creating // unique names for the CA certs. -var testCACounter uint64 = 0 +var testCACounter uint64 // TestCA creates a test CA certificate and signing key and returns it // in the CARoot structure format. The returned CA will be set as Active = true. @@ -44,7 +44,8 @@ func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot { result.Name = fmt.Sprintf("Test CA %d", atomic.AddUint64(&testCACounter, 1)) // Create the private key we'll use for this CA cert. 
-	signer := testPrivateKey(t, &result)
+	signer, keyPEM := testPrivateKey(t)
+	result.SigningKey = keyPEM
 
 	// The serial number for the cert
 	sn, err := testSerialNumber()
@@ -125,9 +126,9 @@ func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot {
 	return &result
 }
 
-// TestLeaf returns a valid leaf certificate for the named service with
-// the given CA Root.
-func TestLeaf(t testing.T, service string, root *structs.CARoot) string {
+// TestLeaf returns a valid leaf certificate and its private key for the named
+// service with the given CA Root.
+func TestLeaf(t testing.T, service string, root *structs.CARoot) (string, string) {
 	// Parse the CA cert and signing key from the root
 	cert := root.SigningCert
 	if cert == "" {
@@ -137,7 +138,7 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) string {
 	if err != nil {
 		t.Fatalf("error parsing CA cert: %s", err)
 	}
-	signer, err := ParseSigner(root.SigningKey)
+	caSigner, err := ParseSigner(root.SigningKey)
 	if err != nil {
 		t.Fatalf("error parsing signing key: %s", err)
 	}
@@ -156,6 +157,9 @@
 		t.Fatalf("error generating serial number: %s", err)
 	}
 
+	// Generate fresh private key
+	pkSigner, pkPEM := testPrivateKey(t)
+
 	// Cert template for generation
 	template := x509.Certificate{
 		SerialNumber: sn,
@@ -173,14 +177,14 @@
 		},
 		NotAfter:  time.Now().Add(10 * 365 * 24 * time.Hour),
 		NotBefore: time.Now(),
-		AuthorityKeyId: testKeyID(t, signer.Public()),
-		SubjectKeyId:   testKeyID(t, signer.Public()),
+		AuthorityKeyId: testKeyID(t, caSigner.Public()),
+		SubjectKeyId:   testKeyID(t, pkSigner.Public()),
 	}
 
 	// Create the certificate, PEM encode it and return that value.
var buf bytes.Buffer bs, err := x509.CreateCertificate( - rand.Reader, &template, caCert, signer.Public(), signer) + rand.Reader, &template, caCert, pkSigner.Public(), caSigner) if err != nil { t.Fatalf("error generating certificate: %s", err) } @@ -189,7 +193,7 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) string { t.Fatalf("error encoding private key: %s", err) } - return buf.String() + return buf.String(), pkPEM } // TestCSR returns a CSR to sign the given service along with the PEM-encoded @@ -200,39 +204,22 @@ func TestCSR(t testing.T, uri CertURI) (string, string) { SignatureAlgorithm: x509.ECDSAWithSHA256, } - // Result buffers - var csrBuf, pkBuf bytes.Buffer - // Create the private key we'll use - signer := testPrivateKey(t, nil) + signer, pkPEM := testPrivateKey(t) - { - // Create the private key PEM - bs, err := x509.MarshalECPrivateKey(signer.(*ecdsa.PrivateKey)) - if err != nil { - t.Fatalf("error marshalling PK: %s", err) - } - - err = pem.Encode(&pkBuf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: bs}) - if err != nil { - t.Fatalf("error encoding PK: %s", err) - } + // Create the CSR itself + var csrBuf bytes.Buffer + bs, err := x509.CreateCertificateRequest(rand.Reader, template, signer) + if err != nil { + t.Fatalf("error creating CSR: %s", err) } - { - // Create the CSR itself - bs, err := x509.CreateCertificateRequest(rand.Reader, template, signer) - if err != nil { - t.Fatalf("error creating CSR: %s", err) - } - - err = pem.Encode(&csrBuf, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: bs}) - if err != nil { - t.Fatalf("error encoding CSR: %s", err) - } + err = pem.Encode(&csrBuf, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: bs}) + if err != nil { + t.Fatalf("error encoding CSR: %s", err) } - return csrBuf.String(), pkBuf.String() + return csrBuf.String(), pkPEM } // testKeyID returns a KeyID from the given public key. This just calls @@ -246,25 +233,26 @@ func testKeyID(t testing.T, raw interface{}) []byte { return result } -// testMemoizePK is the private key that we memoize once we generate it -// once so that our tests don't rely on too much system entropy. -var testMemoizePK atomic.Value - -// testPrivateKey creates an ECDSA based private key. -func testPrivateKey(t testing.T, ca *structs.CARoot) crypto.Signer { - // If we already generated a private key, use that - var pk *ecdsa.PrivateKey - if v := testMemoizePK.Load(); v != nil { - pk = v.(*ecdsa.PrivateKey) - } - - // If we have no key, then create a new one. - if pk == nil { - var err error - pk, err = ecdsa.GenerateKey(elliptic.P256(), rand.Reader) - if err != nil { - t.Fatalf("error generating private key: %s", err) - } +// testPrivateKey creates an ECDSA based private key. Both a crypto.Signer and +// the key in PEM form are returned. +// +// NOTE(banks): this was memoized to save entropy during tests but it turns out +// crypto/rand will never block and always reads from /dev/urandom on unix OSes +// which does not consume entropy. +// +// If we find by profiling it's taking a lot of cycles we could optimise/cache +// again but we at least need to use different keys for each distinct CA (when +// multiple CAs are generated at once e.g. to test cross-signing) and a +// different one again for the leafs otherwise we risk tests that have false +// positives since signatures from different logical cert's keys are +// indistinguishable, but worse we build validation chains using AuthorityKeyID +// which will be the same for multiple CAs/Leafs. 
Also note that our UUID +// generator also reads from crypto rand and is called far more often during +// tests than this will be. +func testPrivateKey(t testing.T) (crypto.Signer, string) { + pk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + if err != nil { + t.Fatalf("error generating private key: %s", err) } bs, err := x509.MarshalECPrivateKey(pk) @@ -277,14 +265,8 @@ func testPrivateKey(t testing.T, ca *structs.CARoot) crypto.Signer { if err != nil { t.Fatalf("error encoding private key: %s", err) } - if ca != nil { - ca.SigningKey = buf.String() - } - // Memoize the key - testMemoizePK.Store(pk) - - return pk + return pk, buf.String() } // testSerialNumber generates a serial number suitable for a certificate. diff --git a/agent/connect/testing_ca_test.go b/agent/connect/testing_ca_test.go index d07aac201..193e532c3 100644 --- a/agent/connect/testing_ca_test.go +++ b/agent/connect/testing_ca_test.go @@ -29,7 +29,7 @@ func TestTestCAAndLeaf(t *testing.T) { // Create the certs ca := TestCA(t, nil) - leaf := TestLeaf(t, "web", ca) + leaf, _ := TestLeaf(t, "web", ca) // Create a temporary directory for storing the certs td, err := ioutil.TempDir("", "consul") @@ -62,8 +62,8 @@ func TestTestCAAndLeaf_xc(t *testing.T) { // Create the certs ca1 := TestCA(t, nil) ca2 := TestCA(t, ca1) - leaf1 := TestLeaf(t, "web", ca1) - leaf2 := TestLeaf(t, "web", ca2) + leaf1, _ := TestLeaf(t, "web", ca1) + leaf2, _ := TestLeaf(t, "web", ca2) // Create a temporary directory for storing the certs td, err := ioutil.TempDir("", "consul") @@ -98,12 +98,3 @@ func TestTestCAAndLeaf_xc(t *testing.T) { assert.Nil(err) } } - -// Test that the private key is memoized to preseve system entropy. -func TestTestPrivateKey_memoize(t *testing.T) { - ca1 := TestCA(t, nil) - ca2 := TestCA(t, nil) - if ca1.SigningKey != ca2.SigningKey { - t.Fatal("should have the same signing keys for tests") - } -} diff --git a/connect/auth.go b/connect/auth.go deleted file mode 100644 index 73c16f0bf..000000000 --- a/connect/auth.go +++ /dev/null @@ -1,43 +0,0 @@ -package connect - -import "crypto/x509" - -// Auther is the interface that provides both Authentication and Authorization -// for an mTLS connection. It's only method is compatible with -// tls.Config.VerifyPeerCertificate. -type Auther interface { - // Auth is called during tls Connection establishment to Authenticate and - // Authorize the presented peer. Note that verifiedChains must not be relied - // upon as we typically have to skip Go's internal verification so the - // implementation takes full responsibility to validating the certificate - // against known roots. It is also up to the user of the interface to ensure - // appropriate validation is performed for client or server end by arranging - // for an appropriate implementation to be hooked into the tls.Config used. - Auth(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error -} - -// ClientAuther is used to auth Clients connecting to a Server. -type ClientAuther struct{} - -// Auth implements Auther -func (a *ClientAuther) Auth(rawCerts [][]byte, - verifiedChains [][]*x509.Certificate) error { - - // TODO(banks): implement path validation and AuthZ - return nil -} - -// ServerAuther is used to auth the Server identify from a connecting Client. -type ServerAuther struct { - // TODO(banks): We'll need a way to pass the expected service identity (name, - // namespace, dc, cluster) here based on discovery result. 
-} - -// Auth implements Auther -func (a *ServerAuther) Auth(rawCerts [][]byte, - verifiedChains [][]*x509.Certificate) error { - - // TODO(banks): implement path validation and verify URI matches the target - // service we intended to connect to. - return nil -} diff --git a/connect/certgen/certgen.go b/connect/certgen/certgen.go new file mode 100644 index 000000000..6fecf6ae1 --- /dev/null +++ b/connect/certgen/certgen.go @@ -0,0 +1,86 @@ +// certgen: a tool for generating test certificates on disk for use as +// test-fixtures and for end-to-end testing and local development. +// +// Example usage: +// +// $ go run connect/certgen/certgen.go -out-dir /tmp/connect-certs +// +// You can verify a given leaf with a given root using: +// +// $ openssl verify -verbose -CAfile ca2-ca.cert.pem ca1-svc-db.cert.pem +// +// Note that to verify via the cross-signed intermediate, openssl requires it to +// be bundled with the _root_ CA bundle and will ignore the cert if it's passed +// with the subject. You can do that with: +// +// $ openssl verify -verbose -CAfile \ +// <(cat ca1-ca.cert.pem ca2-xc-by-ca1.cert.pem) \ +// ca2-svc-db.cert.pem +// ca2-svc-db.cert.pem: OK +// +// Note that the same leaf and root without the intermediate should fail: +// +// $ openssl verify -verbose -CAfile ca1-ca.cert.pem ca2-svc-db.cert.pem +// ca2-svc-db.cert.pem: CN = db +// error 20 at 0 depth lookup:unable to get local issuer certificate +// +// NOTE: THIS IS A QUIRK OF OPENSSL; in Connect we distribute the roots alone +// and stable intermediates like the XC cert to the _leaf_. +package main // import "github.com/hashicorp/consul/connect/certgen" +import ( + "flag" + "fmt" + "io/ioutil" + "log" + "os" + + "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/agent/structs" + "github.com/mitchellh/go-testing-interface" +) + +func main() { + var numCAs = 2 + var services = []string{"web", "db", "cache"} + //var slugRe = regexp.MustCompile("[^a-zA-Z0-9]+") + var outDir string + + flag.StringVar(&outDir, "out-dir", "", + "REQUIRED: the dir to write certificates to") + flag.Parse() + + if outDir == "" { + flag.PrintDefaults() + os.Exit(1) + } + + // Create CA certs + var prevCA *structs.CARoot + for i := 1; i <= numCAs; i++ { + ca := connect.TestCA(&testing.RuntimeT{}, prevCA) + prefix := fmt.Sprintf("%s/ca%d-ca", outDir, i) + writeFile(prefix+".cert.pem", ca.RootCert) + writeFile(prefix+".key.pem", ca.SigningKey) + if prevCA != nil { + fname := fmt.Sprintf("%s/ca%d-xc-by-ca%d.cert.pem", outDir, i, i-1) + writeFile(fname, ca.SigningCert) + } + prevCA = ca + + // Create service certs for each CA + for _, svc := range services { + certPEM, keyPEM := connect.TestLeaf(&testing.RuntimeT{}, svc, ca) + prefix := fmt.Sprintf("%s/ca%d-svc-%s", outDir, i, svc) + writeFile(prefix+".cert.pem", certPEM) + writeFile(prefix+".key.pem", keyPEM) + } + } +} + +func writeFile(name, content string) { + fmt.Println("Writing ", name) + err := ioutil.WriteFile(name, []byte(content), 0600) + if err != nil { + log.Fatalf("failed writing file: %s", err) + } +} diff --git a/connect/client.go b/connect/client.go index 867bf0db5..18e43f4cb 100644 --- a/connect/client.go +++ b/connect/client.go @@ -1,256 +1,256 @@ package connect -import ( - "context" - "crypto/tls" - "fmt" - "math/rand" - "net" +// import ( +// "context" +// "crypto/tls" +// "fmt" +// "math/rand" +// "net" - "github.com/hashicorp/consul/api" -) +// "github.com/hashicorp/consul/api" +// ) -// CertStatus indicates whether the Client currently has valid 
certificates for -// incoming and outgoing connections. -type CertStatus int +// // CertStatus indicates whether the Client currently has valid certificates for +// // incoming and outgoing connections. +// type CertStatus int -const ( - // CertStatusUnknown is the zero value for CertStatus which may be returned - // when a watch channel is closed on shutdown. It has no other meaning. - CertStatusUnknown CertStatus = iota +// const ( +// // CertStatusUnknown is the zero value for CertStatus which may be returned +// // when a watch channel is closed on shutdown. It has no other meaning. +// CertStatusUnknown CertStatus = iota - // CertStatusOK indicates the client has valid certificates and trust roots to - // Authenticate incoming and outgoing connections. - CertStatusOK +// // CertStatusOK indicates the client has valid certificates and trust roots to +// // Authenticate incoming and outgoing connections. +// CertStatusOK - // CertStatusPending indicates the client is waiting to be issued initial - // certificates, or that it's certificates have expired and it's waiting to be - // issued new ones. In this state all incoming and outgoing connections will - // fail. - CertStatusPending -) +// // CertStatusPending indicates the client is waiting to be issued initial +// // certificates, or that it's certificates have expired and it's waiting to be +// // issued new ones. In this state all incoming and outgoing connections will +// // fail. +// CertStatusPending +// ) -func (s CertStatus) String() string { - switch s { - case CertStatusOK: - return "OK" - case CertStatusPending: - return "pending" - case CertStatusUnknown: - fallthrough - default: - return "unknown" - } -} +// func (s CertStatus) String() string { +// switch s { +// case CertStatusOK: +// return "OK" +// case CertStatusPending: +// return "pending" +// case CertStatusUnknown: +// fallthrough +// default: +// return "unknown" +// } +// } -// Client is the interface a basic client implementation must support. -type Client interface { - // TODO(banks): build this and test it - // CertStatus returns the current status of the client's certificates. It can - // be used to determine if the Client is able to service requests at the - // current time. - //CertStatus() CertStatus +// // Client is the interface a basic client implementation must support. +// type Client interface { +// // TODO(banks): build this and test it +// // CertStatus returns the current status of the client's certificates. It can +// // be used to determine if the Client is able to service requests at the +// // current time. +// //CertStatus() CertStatus - // TODO(banks): build this and test it - // WatchCertStatus returns a channel that is notified on all status changes. - // Note that a message on the channel isn't guaranteed to be different so it's - // value should be inspected. During Client shutdown the channel will be - // closed returning a zero type which is equivalent to CertStatusUnknown. - //WatchCertStatus() <-chan CertStatus +// // TODO(banks): build this and test it +// // WatchCertStatus returns a channel that is notified on all status changes. +// // Note that a message on the channel isn't guaranteed to be different so it's +// // value should be inspected. During Client shutdown the channel will be +// // closed returning a zero type which is equivalent to CertStatusUnknown. 
+// //WatchCertStatus() <-chan CertStatus - // ServerTLSConfig returns the *tls.Config to be used when creating a TCP - // listener that should accept Connect connections. It is likely that at - // startup the tlsCfg returned will not be immediately usable since - // certificates are typically fetched from the agent asynchronously. In this - // case it's still safe to listen with the provided config, but auth failures - // will occur until initial certificate discovery is complete. In general at - // any time it is possible for certificates to expire before new replacements - // have been issued due to local network errors so the server may not actually - // have a working certificate configuration at any time, however as soon as - // valid certs can be issued it will automatically start working again so - // should take no action. - ServerTLSConfig() (*tls.Config, error) +// // ServerTLSConfig returns the *tls.Config to be used when creating a TCP +// // listener that should accept Connect connections. It is likely that at +// // startup the tlsCfg returned will not be immediately usable since +// // certificates are typically fetched from the agent asynchronously. In this +// // case it's still safe to listen with the provided config, but auth failures +// // will occur until initial certificate discovery is complete. In general at +// // any time it is possible for certificates to expire before new replacements +// // have been issued due to local network errors so the server may not actually +// // have a working certificate configuration at any time, however as soon as +// // valid certs can be issued it will automatically start working again so +// // should take no action. +// ServerTLSConfig() (*tls.Config, error) - // DialService opens a new connection to the named service registered in - // Consul. It will perform service discovery to find healthy instances. If - // there is an error during connection it is returned and the caller may call - // again. The client implementation makes a best effort to make consecutive - // Dials against different instances either by randomising the list and/or - // maintaining a local memory of which instances recently failed. If the - // context passed times out before connection is established and verified an - // error is returned. - DialService(ctx context.Context, namespace, name string) (net.Conn, error) +// // DialService opens a new connection to the named service registered in +// // Consul. It will perform service discovery to find healthy instances. If +// // there is an error during connection it is returned and the caller may call +// // again. The client implementation makes a best effort to make consecutive +// // Dials against different instances either by randomising the list and/or +// // maintaining a local memory of which instances recently failed. If the +// // context passed times out before connection is established and verified an +// // error is returned. +// DialService(ctx context.Context, namespace, name string) (net.Conn, error) - // DialPreparedQuery opens a new connection by executing the named Prepared - // Query against the local Consul agent, and picking one of the returned - // instances to connect to. It will perform service discovery with the same - // semantics as DialService. 
- DialPreparedQuery(ctx context.Context, namespace, name string) (net.Conn, error) -} +// // DialPreparedQuery opens a new connection by executing the named Prepared +// // Query against the local Consul agent, and picking one of the returned +// // instances to connect to. It will perform service discovery with the same +// // semantics as DialService. +// DialPreparedQuery(ctx context.Context, namespace, name string) (net.Conn, error) +// } -/* +// /* -Maybe also convenience wrappers for: - - listening TLS conn with right config - - http.ListenAndServeTLS equivalent +// Maybe also convenience wrappers for: +// - listening TLS conn with right config +// - http.ListenAndServeTLS equivalent -*/ +// */ -// AgentClient is the primary implementation of a connect.Client which -// communicates with the local Consul agent. -type AgentClient struct { - agent *api.Client - tlsCfg *ReloadableTLSConfig -} +// // AgentClient is the primary implementation of a connect.Client which +// // communicates with the local Consul agent. +// type AgentClient struct { +// agent *api.Client +// tlsCfg *ReloadableTLSConfig +// } -// NewClient returns an AgentClient to allow consuming and providing -// Connect-enabled network services. -func NewClient(agent *api.Client) Client { - // TODO(banks): hook up fetching certs from Agent and updating tlsCfg on cert - // delivery/change. Perhaps need to make - return &AgentClient{ - agent: agent, - tlsCfg: NewReloadableTLSConfig(defaultTLSConfig()), - } -} +// // NewClient returns an AgentClient to allow consuming and providing +// // Connect-enabled network services. +// func NewClient(agent *api.Client) Client { +// // TODO(banks): hook up fetching certs from Agent and updating tlsCfg on cert +// // delivery/change. Perhaps need to make +// return &AgentClient{ +// agent: agent, +// tlsCfg: NewReloadableTLSConfig(defaultTLSConfig()), +// } +// } -// NewInsecureDevClientWithLocalCerts returns an AgentClient that will still do -// service discovery via the local agent but will use externally provided -// certificates and skip authorization. This is intended just for development -// and must not be used ever in production. -func NewInsecureDevClientWithLocalCerts(agent *api.Client, caFile, certFile, - keyFile string) (Client, error) { +// // NewInsecureDevClientWithLocalCerts returns an AgentClient that will still do +// // service discovery via the local agent but will use externally provided +// // certificates and skip authorization. This is intended just for development +// // and must not be used ever in production. 
+// func NewInsecureDevClientWithLocalCerts(agent *api.Client, caFile, certFile, +// keyFile string) (Client, error) { - cfg, err := devTLSConfigFromFiles(caFile, certFile, keyFile) - if err != nil { - return nil, err - } +// cfg, err := devTLSConfigFromFiles(caFile, certFile, keyFile) +// if err != nil { +// return nil, err +// } - return &AgentClient{ - agent: agent, - tlsCfg: NewReloadableTLSConfig(cfg), - }, nil -} +// return &AgentClient{ +// agent: agent, +// tlsCfg: NewReloadableTLSConfig(cfg), +// }, nil +// } -// ServerTLSConfig implements Client -func (c *AgentClient) ServerTLSConfig() (*tls.Config, error) { - return c.tlsCfg.ServerTLSConfig(), nil -} +// // ServerTLSConfig implements Client +// func (c *AgentClient) ServerTLSConfig() (*tls.Config, error) { +// return c.tlsCfg.ServerTLSConfig(), nil +// } -// DialService implements Client -func (c *AgentClient) DialService(ctx context.Context, namespace, - name string) (net.Conn, error) { - return c.dial(ctx, "service", namespace, name) -} +// // DialService implements Client +// func (c *AgentClient) DialService(ctx context.Context, namespace, +// name string) (net.Conn, error) { +// return c.dial(ctx, "service", namespace, name) +// } -// DialPreparedQuery implements Client -func (c *AgentClient) DialPreparedQuery(ctx context.Context, namespace, - name string) (net.Conn, error) { - return c.dial(ctx, "prepared_query", namespace, name) -} +// // DialPreparedQuery implements Client +// func (c *AgentClient) DialPreparedQuery(ctx context.Context, namespace, +// name string) (net.Conn, error) { +// return c.dial(ctx, "prepared_query", namespace, name) +// } -func (c *AgentClient) dial(ctx context.Context, discoveryType, namespace, - name string) (net.Conn, error) { +// func (c *AgentClient) dial(ctx context.Context, discoveryType, namespace, +// name string) (net.Conn, error) { - svcs, err := c.discoverInstances(ctx, discoveryType, namespace, name) - if err != nil { - return nil, err - } +// svcs, err := c.discoverInstances(ctx, discoveryType, namespace, name) +// if err != nil { +// return nil, err +// } - svc, err := c.pickInstance(svcs) - if err != nil { - return nil, err - } - if svc == nil { - return nil, fmt.Errorf("no healthy services discovered") - } +// svc, err := c.pickInstance(svcs) +// if err != nil { +// return nil, err +// } +// if svc == nil { +// return nil, fmt.Errorf("no healthy services discovered") +// } - // OK we have a service we can dial! We need a ClientAuther that will validate - // the connection is legit. +// // OK we have a service we can dial! We need a ClientAuther that will validate +// // the connection is legit. - // TODO(banks): implement ClientAuther properly to actually verify connected - // cert matches the expected service/cluster etc. based on svc. - auther := &ClientAuther{} - tlsConfig := c.tlsCfg.TLSConfig(auther) +// // TODO(banks): implement ClientAuther properly to actually verify connected +// // cert matches the expected service/cluster etc. based on svc. +// auther := &ClientAuther{} +// tlsConfig := c.tlsCfg.TLSConfig(auther) - // Resolve address TODO(banks): I expected this to happen magically in the - // agent at registration time if I register with no explicit address but - // apparently doesn't. This is a quick hack to make it work for now, need to - // see if there is a better shared code path for doing this. 
- addr := svc.Service.Address - if addr == "" { - addr = svc.Node.Address - } - var dialer net.Dialer - tcpConn, err := dialer.DialContext(ctx, "tcp", - fmt.Sprintf("%s:%d", addr, svc.Service.Port)) - if err != nil { - return nil, err - } +// // Resolve address TODO(banks): I expected this to happen magically in the +// // agent at registration time if I register with no explicit address but +// // apparently doesn't. This is a quick hack to make it work for now, need to +// // see if there is a better shared code path for doing this. +// addr := svc.Service.Address +// if addr == "" { +// addr = svc.Node.Address +// } +// var dialer net.Dialer +// tcpConn, err := dialer.DialContext(ctx, "tcp", +// fmt.Sprintf("%s:%d", addr, svc.Service.Port)) +// if err != nil { +// return nil, err +// } - tlsConn := tls.Client(tcpConn, tlsConfig) - err = tlsConn.Handshake() - if err != nil { - tlsConn.Close() - return nil, err - } +// tlsConn := tls.Client(tcpConn, tlsConfig) +// err = tlsConn.Handshake() +// if err != nil { +// tlsConn.Close() +// return nil, err +// } - return tlsConn, nil -} +// return tlsConn, nil +// } -// pickInstance returns an instance from the given list to try to connect to. It -// may be made pluggable later, for now it just picks a random one regardless of -// whether the list is already shuffled. -func (c *AgentClient) pickInstance(svcs []*api.ServiceEntry) (*api.ServiceEntry, error) { - if len(svcs) < 1 { - return nil, nil - } - idx := rand.Intn(len(svcs)) - return svcs[idx], nil -} +// // pickInstance returns an instance from the given list to try to connect to. It +// // may be made pluggable later, for now it just picks a random one regardless of +// // whether the list is already shuffled. +// func (c *AgentClient) pickInstance(svcs []*api.ServiceEntry) (*api.ServiceEntry, error) { +// if len(svcs) < 1 { +// return nil, nil +// } +// idx := rand.Intn(len(svcs)) +// return svcs[idx], nil +// } -// discoverInstances returns all instances for the given discoveryType, -// namespace and name. The returned service entries may or may not be shuffled -func (c *AgentClient) discoverInstances(ctx context.Context, discoverType, - namespace, name string) ([]*api.ServiceEntry, error) { +// // discoverInstances returns all instances for the given discoveryType, +// // namespace and name. The returned service entries may or may not be shuffled +// func (c *AgentClient) discoverInstances(ctx context.Context, discoverType, +// namespace, name string) ([]*api.ServiceEntry, error) { - q := &api.QueryOptions{ - // TODO(banks): make this configurable? - AllowStale: true, - } - q = q.WithContext(ctx) +// q := &api.QueryOptions{ +// // TODO(banks): make this configurable? +// AllowStale: true, +// } +// q = q.WithContext(ctx) - switch discoverType { - case "service": - svcs, _, err := c.agent.Health().Connect(name, "", true, q) - if err != nil { - return nil, err - } - return svcs, err +// switch discoverType { +// case "service": +// svcs, _, err := c.agent.Health().Connect(name, "", true, q) +// if err != nil { +// return nil, err +// } +// return svcs, err - case "prepared_query": - // TODO(banks): it's not super clear to me how this should work eventually. - // How do we distinguise between a PreparedQuery for the actual services and - // one that should return the connect proxies where that differs? If we - // can't then we end up with a janky UX where user specifies a reasonable - // prepared query but we try to connect to non-connect services and fail - // with a confusing TLS error. 
Maybe just a way to filter PreparedQuery - // results by connect-enabled would be sufficient (or even metadata to do - // that ourselves in the response although less efficient). - resp, _, err := c.agent.PreparedQuery().Execute(name, q) - if err != nil { - return nil, err - } +// case "prepared_query": +// // TODO(banks): it's not super clear to me how this should work eventually. +// // How do we distinguise between a PreparedQuery for the actual services and +// // one that should return the connect proxies where that differs? If we +// // can't then we end up with a janky UX where user specifies a reasonable +// // prepared query but we try to connect to non-connect services and fail +// // with a confusing TLS error. Maybe just a way to filter PreparedQuery +// // results by connect-enabled would be sufficient (or even metadata to do +// // that ourselves in the response although less efficient). +// resp, _, err := c.agent.PreparedQuery().Execute(name, q) +// if err != nil { +// return nil, err +// } - // Awkward, we have a slice of api.ServiceEntry here but want a slice of - // *api.ServiceEntry for compat with Connect/Service APIs. Have to convert - // them to keep things type-happy. - svcs := make([]*api.ServiceEntry, len(resp.Nodes)) - for idx, se := range resp.Nodes { - svcs[idx] = &se - } - return svcs, err - default: - return nil, fmt.Errorf("unsupported discovery type: %s", discoverType) - } -} +// // Awkward, we have a slice of api.ServiceEntry here but want a slice of +// // *api.ServiceEntry for compat with Connect/Service APIs. Have to convert +// // them to keep things type-happy. +// svcs := make([]*api.ServiceEntry, len(resp.Nodes)) +// for idx, se := range resp.Nodes { +// svcs[idx] = &se +// } +// return svcs, err +// default: +// return nil, fmt.Errorf("unsupported discovery type: %s", discoverType) +// } +// } diff --git a/connect/client_test.go b/connect/client_test.go index fcb18e600..045bc8fd6 100644 --- a/connect/client_test.go +++ b/connect/client_test.go @@ -1,148 +1,148 @@ package connect -import ( - "context" - "crypto/x509" - "crypto/x509/pkix" - "encoding/asn1" - "io/ioutil" - "net" - "net/http" - "net/http/httptest" - "net/url" - "strconv" - "testing" +// import ( +// "context" +// "crypto/x509" +// "crypto/x509/pkix" +// "encoding/asn1" +// "io/ioutil" +// "net" +// "net/http" +// "net/http/httptest" +// "net/url" +// "strconv" +// "testing" - "github.com/hashicorp/consul/api" - "github.com/hashicorp/consul/testutil" - "github.com/stretchr/testify/require" -) +// "github.com/hashicorp/consul/api" +// "github.com/hashicorp/consul/testutil" +// "github.com/stretchr/testify/require" +// ) -func TestNewInsecureDevClientWithLocalCerts(t *testing.T) { +// func TestNewInsecureDevClientWithLocalCerts(t *testing.T) { - agent, err := api.NewClient(api.DefaultConfig()) - require.Nil(t, err) +// agent, err := api.NewClient(api.DefaultConfig()) +// require.Nil(t, err) - got, err := NewInsecureDevClientWithLocalCerts(agent, - "testdata/ca1-ca-consul-internal.cert.pem", - "testdata/ca1-svc-web.cert.pem", - "testdata/ca1-svc-web.key.pem", - ) - require.Nil(t, err) +// got, err := NewInsecureDevClientWithLocalCerts(agent, +// "testdata/ca1-ca-consul-internal.cert.pem", +// "testdata/ca1-svc-web.cert.pem", +// "testdata/ca1-svc-web.key.pem", +// ) +// require.Nil(t, err) - // Sanity check correct certs were loaded - serverCfg, err := got.ServerTLSConfig() - require.Nil(t, err) - caSubjects := serverCfg.RootCAs.Subjects() - require.Len(t, caSubjects, 1) - caSubject, err 
:= testNameFromRawDN(caSubjects[0]) - require.Nil(t, err) - require.Equal(t, "Consul Internal", caSubject.CommonName) +// // Sanity check correct certs were loaded +// serverCfg, err := got.ServerTLSConfig() +// require.Nil(t, err) +// caSubjects := serverCfg.RootCAs.Subjects() +// require.Len(t, caSubjects, 1) +// caSubject, err := testNameFromRawDN(caSubjects[0]) +// require.Nil(t, err) +// require.Equal(t, "Consul Internal", caSubject.CommonName) - require.Len(t, serverCfg.Certificates, 1) - cert, err := x509.ParseCertificate(serverCfg.Certificates[0].Certificate[0]) - require.Nil(t, err) - require.Equal(t, "web", cert.Subject.CommonName) -} +// require.Len(t, serverCfg.Certificates, 1) +// cert, err := x509.ParseCertificate(serverCfg.Certificates[0].Certificate[0]) +// require.Nil(t, err) +// require.Equal(t, "web", cert.Subject.CommonName) +// } -func testNameFromRawDN(raw []byte) (*pkix.Name, error) { - var seq pkix.RDNSequence - if _, err := asn1.Unmarshal(raw, &seq); err != nil { - return nil, err - } +// func testNameFromRawDN(raw []byte) (*pkix.Name, error) { +// var seq pkix.RDNSequence +// if _, err := asn1.Unmarshal(raw, &seq); err != nil { +// return nil, err +// } - var name pkix.Name - name.FillFromRDNSequence(&seq) - return &name, nil -} +// var name pkix.Name +// name.FillFromRDNSequence(&seq) +// return &name, nil +// } -func testAgent(t *testing.T) (*testutil.TestServer, *api.Client) { - t.Helper() +// func testAgent(t *testing.T) (*testutil.TestServer, *api.Client) { +// t.Helper() - // Make client config - conf := api.DefaultConfig() +// // Make client config +// conf := api.DefaultConfig() - // Create server - server, err := testutil.NewTestServerConfigT(t, nil) - require.Nil(t, err) +// // Create server +// server, err := testutil.NewTestServerConfigT(t, nil) +// require.Nil(t, err) - conf.Address = server.HTTPAddr +// conf.Address = server.HTTPAddr - // Create client - agent, err := api.NewClient(conf) - require.Nil(t, err) +// // Create client +// agent, err := api.NewClient(conf) +// require.Nil(t, err) - return server, agent -} +// return server, agent +// } -func testService(t *testing.T, ca, name string, client *api.Client) *httptest.Server { - t.Helper() +// func testService(t *testing.T, ca, name string, client *api.Client) *httptest.Server { +// t.Helper() - // Run a test service to discover - server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { - w.Write([]byte("svc: " + name)) - })) - server.TLS = TestTLSConfig(t, ca, name) - server.StartTLS() +// // Run a test service to discover +// server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { +// w.Write([]byte("svc: " + name)) +// })) +// server.TLS = TestTLSConfig(t, ca, name) +// server.StartTLS() - u, err := url.Parse(server.URL) - require.Nil(t, err) +// u, err := url.Parse(server.URL) +// require.Nil(t, err) - port, err := strconv.Atoi(u.Port()) - require.Nil(t, err) +// port, err := strconv.Atoi(u.Port()) +// require.Nil(t, err) - // If client is passed, register the test service instance - if client != nil { - svc := &api.AgentServiceRegistration{ - // TODO(banks): we don't really have a good way to represent - // connect-native apps yet so we have to pretend out little server is a - // proxy for now. 
- Kind: api.ServiceKindConnectProxy, - ProxyDestination: name, - Name: name + "-proxy", - Address: u.Hostname(), - Port: port, - } - err := client.Agent().ServiceRegister(svc) - require.Nil(t, err) - } +// // If client is passed, register the test service instance +// if client != nil { +// svc := &api.AgentServiceRegistration{ +// // TODO(banks): we don't really have a good way to represent +// // connect-native apps yet so we have to pretend out little server is a +// // proxy for now. +// Kind: api.ServiceKindConnectProxy, +// ProxyDestination: name, +// Name: name + "-proxy", +// Address: u.Hostname(), +// Port: port, +// } +// err := client.Agent().ServiceRegister(svc) +// require.Nil(t, err) +// } - return server -} +// return server +// } -func TestDialService(t *testing.T) { - consulServer, agent := testAgent(t) - defer consulServer.Stop() +// func TestDialService(t *testing.T) { +// consulServer, agent := testAgent(t) +// defer consulServer.Stop() - svc := testService(t, "ca1", "web", agent) - defer svc.Close() +// svc := testService(t, "ca1", "web", agent) +// defer svc.Close() - c, err := NewInsecureDevClientWithLocalCerts(agent, - "testdata/ca1-ca-consul-internal.cert.pem", - "testdata/ca1-svc-web.cert.pem", - "testdata/ca1-svc-web.key.pem", - ) - require.Nil(t, err) +// c, err := NewInsecureDevClientWithLocalCerts(agent, +// "testdata/ca1-ca-consul-internal.cert.pem", +// "testdata/ca1-svc-web.cert.pem", +// "testdata/ca1-svc-web.key.pem", +// ) +// require.Nil(t, err) - conn, err := c.DialService(context.Background(), "default", "web") - require.Nilf(t, err, "err: %s", err) +// conn, err := c.DialService(context.Background(), "default", "web") +// require.Nilf(t, err, "err: %s", err) - // Inject our conn into http.Transport - httpClient := &http.Client{ - Transport: &http.Transport{ - DialTLS: func(network, addr string) (net.Conn, error) { - return conn, nil - }, - }, - } +// // Inject our conn into http.Transport +// httpClient := &http.Client{ +// Transport: &http.Transport{ +// DialTLS: func(network, addr string) (net.Conn, error) { +// return conn, nil +// }, +// }, +// } - // Don't be fooled the hostname here is ignored since we did the dialling - // ourselves - resp, err := httpClient.Get("https://web.connect.consul/") - require.Nil(t, err) - defer resp.Body.Close() - body, err := ioutil.ReadAll(resp.Body) - require.Nil(t, err) +// // Don't be fooled the hostname here is ignored since we did the dialling +// // ourselves +// resp, err := httpClient.Get("https://web.connect.consul/") +// require.Nil(t, err) +// defer resp.Body.Close() +// body, err := ioutil.ReadAll(resp.Body) +// require.Nil(t, err) - require.Equal(t, "svc: web", string(body)) -} +// require.Equal(t, "svc: web", string(body)) +// } diff --git a/connect/example_test.go b/connect/example_test.go new file mode 100644 index 000000000..eb66bdbc0 --- /dev/null +++ b/connect/example_test.go @@ -0,0 +1,53 @@ +package connect + +import ( + "crypto/tls" + "log" + "net" + "net/http" + + "github.com/hashicorp/consul/api" +) + +type apiHandler struct{} + +func (apiHandler) ServeHTTP(http.ResponseWriter, *http.Request) {} + +// Note: this assumes a suitable Consul ACL token with 'service:write' for +// service 'web' is set in CONSUL_HTTP_TOKEN ENV var. 
+func ExampleService_ServerTLSConfig_hTTP() {
+	client, _ := api.NewClient(api.DefaultConfig())
+	svc, _ := NewService("web", client)
+	server := &http.Server{
+		Addr:      ":8080",
+		Handler:   apiHandler{},
+		TLSConfig: svc.ServerTLSConfig(),
+	}
+	// Cert and key files are blank since the tls.Config will handle providing
+	// those dynamically.
+	log.Fatal(server.ListenAndServeTLS("", ""))
+}
+
+func acceptLoop(l net.Listener) {}
+
+// Note: this assumes a suitable Consul ACL token with 'service:write' for
+// service 'web' is set in CONSUL_HTTP_TOKEN ENV var.
+func ExampleService_ServerTLSConfig_tLS() {
+	client, _ := api.NewClient(api.DefaultConfig())
+	svc, _ := NewService("web", client)
+	l, _ := tls.Listen("tcp", ":8080", svc.ServerTLSConfig())
+	acceptLoop(l)
+}
+
+func handleResponse(r *http.Response) {}
+
+// Note: this assumes a suitable Consul ACL token with 'service:write' for
+// service 'web' is set in CONSUL_HTTP_TOKEN ENV var.
+func ExampleService_HTTPClient() {
+	client, _ := api.NewClient(api.DefaultConfig())
+	svc, _ := NewService("web", client)
+
+	httpClient := svc.HTTPClient()
+	resp, _ := httpClient.Get("https://web.service.consul/foo/bar")
+	handleResponse(resp)
+}
diff --git a/connect/resolver.go b/connect/resolver.go
new file mode 100644
index 000000000..41dc70e82
--- /dev/null
+++ b/connect/resolver.go
@@ -0,0 +1,131 @@
+package connect
+
+import (
+	"context"
+	"fmt"
+	"math/rand"
+
+	"github.com/hashicorp/consul/agent/connect"
+	"github.com/hashicorp/consul/api"
+	testing "github.com/mitchellh/go-testing-interface"
+)
+
+// Resolver is the interface implemented by a service discovery mechanism.
+type Resolver interface {
+	// Resolve returns a single service instance to connect to. Implementations
+	// may attempt to ensure the instance returned is currently available. It is
+	// expected that a client will re-dial on a connection failure so making an
+	// effort to return a different service instance each time where available
+	// increases reliability. The context passed can be used to impose timeouts
+	// which may or may not be respected by implementations that make network
+	// calls to resolve the service. The addr returned is a string in any valid
+	// form for passing directly to `net.Dial("tcp", addr)`.
+	Resolve(ctx context.Context) (addr string, certURI connect.CertURI, err error)
+}
+
+// StaticResolver is a statically defined resolver. This can be used to connect
+// to a known Connect endpoint without performing service discovery.
+type StaticResolver struct {
+	// Addr is the network address (including port) of the instance. It must be
+	// the connect-enabled mTLS server and may be a proxy in front of the actual
+	// target service process. It is a string in any valid form for passing
+	// directly to `net.Dial("tcp", addr)`.
+	Addr string
+
+	// CertURI is the _identity_ we expect the server to present in its TLS
+	// certificate. It must be an exact match or the connection will be rejected.
+	CertURI connect.CertURI
+}
+
+// Resolve implements Resolver by returning the static values.
+func (sr *StaticResolver) Resolve(ctx context.Context) (string, connect.CertURI, error) {
+	return sr.Addr, sr.CertURI, nil
+}
+
+const (
+	// ConsulResolverTypeService indicates resolving healthy service nodes.
+	ConsulResolverTypeService int = iota
+
+	// ConsulResolverTypePreparedQuery indicates resolving via prepared query.
+	ConsulResolverTypePreparedQuery
+)
+
+// ConsulResolver queries Consul for a service instance.
+type ConsulResolver struct { + // Client is the Consul API client to use. Must be non-nil or Resolve will + // panic. + Client *api.Client + + // Namespace of the query target + Namespace string + + // Name of the query target + Name string + + // Type of the query target, + Type int + + // Datacenter to resolve in, empty indicates agent's local DC. + Datacenter string +} + +// Resolve performs service discovery against the local Consul agent and returns +// the address and expected identity of a suitable service instance. +func (cr *ConsulResolver) Resolve(ctx context.Context) (string, connect.CertURI, error) { + switch cr.Type { + case ConsulResolverTypeService: + return cr.resolveService(ctx) + case ConsulResolverTypePreparedQuery: + // TODO(banks): we need to figure out what API changes are needed for + // prepared queries to become connect-aware. How do we signal that we want + // connect-enabled endpoints vs the direct ones for the responses? + return "", nil, fmt.Errorf("not implemented") + default: + return "", nil, fmt.Errorf("unknown resolver type") + } +} + +func (cr *ConsulResolver) resolveService(ctx context.Context) (string, connect.CertURI, error) { + health := cr.Client.Health() + + svcs, _, err := health.Connect(cr.Name, "", true, cr.queryOptions(ctx)) + if err != nil { + return "", nil, err + } + + if len(svcs) < 1 { + return "", nil, fmt.Errorf("no healthy instances found") + } + + // Services are not shuffled by HTTP API, pick one at (pseudo) random. + idx := 0 + if len(svcs) > 1 { + idx = rand.Intn(len(svcs)) + } + + addr := svcs[idx].Service.Address + if addr == "" { + addr = svcs[idx].Node.Address + } + port := svcs[idx].Service.Port + + // Generate the expected CertURI + + // TODO(banks): when we've figured out the CA story around generating and + // propagating these trust domains we need to actually fetch the trust domain + // somehow. We also need to implement namespaces. Use of test function here is + // temporary pending the work on trust domains. + certURI := connect.TestSpiffeIDService(&testing.RuntimeT{}, cr.Name) + + return fmt.Sprintf("%s:%d", addr, port), certURI, nil +} + +func (cr *ConsulResolver) queryOptions(ctx context.Context) *api.QueryOptions { + q := &api.QueryOptions{ + // We may make this configurable one day but we may also implement our own + // caching which is even more stale so... 
+ AllowStale: true, + Datacenter: cr.Datacenter, + } + return q.WithContext(ctx) +} diff --git a/connect/resolver_test.go b/connect/resolver_test.go new file mode 100644 index 000000000..29a40e3d3 --- /dev/null +++ b/connect/resolver_test.go @@ -0,0 +1,164 @@ +package connect + +import ( + "context" + "testing" + "time" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/api" + "github.com/stretchr/testify/require" +) + +func TestStaticResolver_Resolve(t *testing.T) { + type fields struct { + Addr string + CertURI connect.CertURI + } + tests := []struct { + name string + fields fields + }{ + { + name: "simples", + fields: fields{"1.2.3.4:80", connect.TestSpiffeIDService(t, "foo")}, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + sr := StaticResolver{ + Addr: tt.fields.Addr, + CertURI: tt.fields.CertURI, + } + addr, certURI, err := sr.Resolve(context.Background()) + require := require.New(t) + require.Nil(err) + require.Equal(sr.Addr, addr) + require.Equal(sr.CertURI, certURI) + }) + } +} + +func TestConsulResolver_Resolve(t *testing.T) { + + // Setup a local test agent to query + agent := agent.NewTestAgent("test-consul", "") + defer agent.Shutdown() + + cfg := api.DefaultConfig() + cfg.Address = agent.HTTPAddr() + client, err := api.NewClient(cfg) + require.Nil(t, err) + + // Setup a service with a connect proxy instance + regSrv := &api.AgentServiceRegistration{ + Name: "web", + Port: 8080, + } + err = client.Agent().ServiceRegister(regSrv) + require.Nil(t, err) + + regProxy := &api.AgentServiceRegistration{ + Kind: "connect-proxy", + Name: "web-proxy", + Port: 9090, + ProxyDestination: "web", + } + err = client.Agent().ServiceRegister(regProxy) + require.Nil(t, err) + + // And another proxy so we can test handling with multiple endpoints returned + regProxy.Port = 9091 + regProxy.ID = "web-proxy-2" + err = client.Agent().ServiceRegister(regProxy) + require.Nil(t, err) + + proxyAddrs := []string{ + agent.Config.AdvertiseAddrLAN.String() + ":9090", + agent.Config.AdvertiseAddrLAN.String() + ":9091", + } + + type fields struct { + Namespace string + Name string + Type int + Datacenter string + } + tests := []struct { + name string + fields fields + timeout time.Duration + wantAddr string + wantCertURI connect.CertURI + wantErr bool + }{ + { + name: "basic service discovery", + fields: fields{ + Namespace: "default", + Name: "web", + Type: ConsulResolverTypeService, + }, + wantCertURI: connect.TestSpiffeIDService(t, "web"), + wantErr: false, + }, + { + name: "Bad Type errors", + fields: fields{ + Namespace: "default", + Name: "web", + Type: 123, + }, + wantErr: true, + }, + { + name: "Non-existent service errors", + fields: fields{ + Namespace: "default", + Name: "foo", + Type: ConsulResolverTypeService, + }, + wantErr: true, + }, + { + name: "timeout errors", + fields: fields{ + Namespace: "default", + Name: "web", + Type: ConsulResolverTypeService, + }, + timeout: 1 * time.Nanosecond, + wantErr: true, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + require := require.New(t) + cr := &ConsulResolver{ + Client: client, + Namespace: tt.fields.Namespace, + Name: tt.fields.Name, + Type: tt.fields.Type, + Datacenter: tt.fields.Datacenter, + } + // WithCancel just to have a cancel func in scope to assign in the if + // clause. 
+ ctx, cancel := context.WithCancel(context.Background()) + if tt.timeout > 0 { + ctx, cancel = context.WithTimeout(ctx, tt.timeout) + } + defer cancel() + gotAddr, gotCertURI, err := cr.Resolve(ctx) + if tt.wantErr { + require.NotNil(err) + return + } + + require.Nil(err) + // Address should be either of the registered proxy ports so check both + require.Contains(proxyAddrs, gotAddr) + require.Equal(tt.wantCertURI, gotCertURI) + }) + } +} diff --git a/connect/service.go b/connect/service.go new file mode 100644 index 000000000..db83ce5aa --- /dev/null +++ b/connect/service.go @@ -0,0 +1,185 @@ +package connect + +import ( + "context" + "crypto/tls" + "log" + "net" + "net/http" + "os" + "time" + + "github.com/hashicorp/consul/api" +) + +// Service represents a Consul service that accepts and/or connects via Connect. +// This can represent a service that only is a server, only is a client, or +// both. +// +// TODO(banks): API for monitoring status of certs from app +// +// TODO(banks): Agent implicit health checks based on knowing which certs are +// available should prevent clients being routed until the agent knows the +// service has been delivered valid certificates. Once built, document that here +// too. +type Service struct { + // serviceID is the unique ID for this service in the agent-local catalog. + // This is often but not always the service name. This is used to request + // Connect metadata. If the service with this ID doesn't exist on the local + // agent no error will be returned and the Service will retry periodically. + // This allows service startup and registration to happen in either order + // without coordination since they might be performed by separate processes. + serviceID string + + // client is the Consul API client. It must be configured with an appropriate + // Token that has `service:write` policy on the provided ServiceID. If an + // insufficient token is provided, the Service will abort further attempts to + // fetch certificates and print a loud error message. It will not Close() or + // kill the process since that could lead to a crash loop in every service if + // ACL token was revoked. All attempts to dial will error and any incoming + // connections will fail to verify. + client *api.Client + + // serverTLSCfg is the (reloadable) TLS config we use for serving. + serverTLSCfg *ReloadableTLSConfig + + // clientTLSCfg is the (reloadable) TLS config we use for dialling. + clientTLSCfg *ReloadableTLSConfig + + logger *log.Logger +} + +// NewService creates and starts a Service. The caller must close the returned +// service to free resources and allow the program to exit normally. This is +// typically called in a signal handler. +func NewService(serviceID string, client *api.Client) (*Service, error) { + return NewServiceWithLogger(serviceID, client, + log.New(os.Stderr, "", log.LstdFlags)) +} + +// NewServiceWithLogger starts the service with a specified log.Logger. +func NewServiceWithLogger(serviceID string, client *api.Client, + logger *log.Logger) (*Service, error) { + s := &Service{ + serviceID: serviceID, + client: client, + logger: logger, + } + s.serverTLSCfg = NewReloadableTLSConfig(defaultTLSConfig(serverVerifyCerts)) + s.clientTLSCfg = NewReloadableTLSConfig(defaultTLSConfig(clientVerifyCerts)) + + // TODO(banks) run the background certificate sync + return s, nil +} + +// NewDevServiceFromCertFiles creates a Service using certificate and key files +// passed instead of fetching them from the client. 
+func NewDevServiceFromCertFiles(serviceID string, client *api.Client, + logger *log.Logger, caFile, certFile, keyFile string) (*Service, error) { + s := &Service{ + serviceID: serviceID, + client: client, + logger: logger, + } + tlsCfg, err := devTLSConfigFromFiles(caFile, certFile, keyFile) + if err != nil { + return nil, err + } + + // Note that NewReloadableTLSConfig makes a copy so we can re-use the same + // base for both client and server with swapped verifiers. + tlsCfg.VerifyPeerCertificate = serverVerifyCerts + s.serverTLSCfg = NewReloadableTLSConfig(tlsCfg) + tlsCfg.VerifyPeerCertificate = clientVerifyCerts + s.clientTLSCfg = NewReloadableTLSConfig(tlsCfg) + return s, nil +} + +// ServerTLSConfig returns a *tls.Config that allows any TCP listener to accept +// and authorize incoming Connect clients. It will return a single static config +// with hooks to dynamically load certificates, and perform Connect +// authorization during verification. Service implementations do not need to +// reload this to get new certificates. +// +// At any time it may be possible that the Service instance does not have access +// to usable certificates due to not being initially setup yet or a prolonged +// error during renewal. The listener will be able to accept connections again +// once connectivity is restored provided the client's Token is valid. +func (s *Service) ServerTLSConfig() *tls.Config { + return s.serverTLSCfg.TLSConfig() +} + +// Dial connects to a remote Connect-enabled server. The passed Resolver is used +// to discover a single candidate instance which will be dialled and have it's +// TLS certificate verified against the expected identity. Failures are returned +// directly with no retries. Repeated dials may use different instances +// depending on the Resolver implementation. +// +// Timeout can be managed via the Context. +func (s *Service) Dial(ctx context.Context, resolver Resolver) (net.Conn, error) { + addr, certURI, err := resolver.Resolve(ctx) + if err != nil { + return nil, err + } + var dialer net.Dialer + tcpConn, err := dialer.DialContext(ctx, "tcp", addr) + if err != nil { + return nil, err + } + + tlsConn := tls.Client(tcpConn, s.clientTLSCfg.TLSConfig()) + // Set deadline for Handshake to complete. + deadline, ok := ctx.Deadline() + if ok { + tlsConn.SetDeadline(deadline) + } + err = tlsConn.Handshake() + if err != nil { + tlsConn.Close() + return nil, err + } + // Clear deadline since that was only for connection. Caller can set their own + // deadline later as necessary. + tlsConn.SetDeadline(time.Time{}) + + // Verify that the connect server's URI matches certURI + err = verifyServerCertMatchesURI(tlsConn.ConnectionState().PeerCertificates, + certURI) + if err != nil { + tlsConn.Close() + return nil, err + } + + return tlsConn, nil +} + +// HTTPDialContext is compatible with http.Transport.DialContext. It expects the +// addr hostname to be specified using Consul DNS query syntax, e.g. +// "web.service.consul". It converts that into the equivalent ConsulResolver and +// then call s.Dial with the resolver. This is low level, clients should +// typically use HTTPClient directly. +func (s *Service) HTTPDialContext(ctx context.Context, network, + addr string) (net.Conn, error) { + var r ConsulResolver + // TODO(banks): parse addr into ConsulResolver + return s.Dial(ctx, &r) +} + +// HTTPClient returns an *http.Client configured to dial remote Consul Connect +// HTTP services. 
+// HTTPClient returns an *http.Client configured to dial remote Consul Connect
+// HTTP services. The client will return an error if attempting to make requests
+// to a non-HTTPS hostname. It resolves the domain of the request with the same
+// syntax as Consul DNS queries, although it performs discovery directly via the
+// API rather than just relying on Consul DNS. Hostnames that are not valid
+// Consul DNS queries will fail.
+func (s *Service) HTTPClient() *http.Client {
+	return &http.Client{
+		Transport: &http.Transport{
+			DialContext: s.HTTPDialContext,
+		},
+	}
+}
+
+// Close stops the service and frees resources.
+func (s *Service) Close() {
+	// TODO(banks): stop background activity if started
+}
diff --git a/connect/service_test.go b/connect/service_test.go
new file mode 100644
index 000000000..a2adfe7f1
--- /dev/null
+++ b/connect/service_test.go
@@ -0,0 +1,105 @@
+package connect
+
+import (
+	"context"
+	"fmt"
+	"testing"
+	"time"
+
+	"github.com/hashicorp/consul/agent/connect"
+	"github.com/stretchr/testify/require"
+)
+
+func TestService_Dial(t *testing.T) {
+	ca := connect.TestCA(t, nil)
+
+	tests := []struct {
+		name           string
+		accept         bool
+		handshake      bool
+		presentService string
+		wantErr        string
+	}{
+		{
+			name:           "working",
+			accept:         true,
+			handshake:      true,
+			presentService: "db",
+			wantErr:        "",
+		},
+		{
+			name:           "tcp connect fail",
+			accept:         false,
+			handshake:      false,
+			presentService: "db",
+			wantErr:        "connection refused",
+		},
+		{
+			name:           "handshake timeout",
+			accept:         true,
+			handshake:      false,
+			presentService: "db",
+			wantErr:        "i/o timeout",
+		},
+		{
+			name:           "bad cert",
+			accept:         true,
+			handshake:      true,
+			presentService: "web",
+			wantErr:        "peer certificate mismatch",
+		},
+	}
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			require := require.New(t)
+
+			s, err := NewService("web", nil)
+			require.Nil(err)
+
+			// Force TLSConfig
+			s.clientTLSCfg = NewReloadableTLSConfig(TestTLSConfig(t, "web", ca))
+
+			ctx, cancel := context.WithTimeout(context.Background(),
+				100*time.Millisecond)
+			defer cancel()
+
+			testSvc := NewTestService(t, tt.presentService, ca)
+			testSvc.TimeoutHandshake = !tt.handshake
+
+			if tt.accept {
+				go func() {
+					err := testSvc.Serve()
+					require.Nil(err)
+				}()
+				defer testSvc.Close()
+			}
+
+			// Always expect to be connecting to a "DB"
+			resolver := &StaticResolver{
+				Addr:    testSvc.Addr,
+				CertURI: connect.TestSpiffeIDService(t, "db"),
+			}
+
+			// All test runs should complete in under 500ms due to the timeout above.
+			// Don't wait for the whole test run to get stuck.
+ testTimeout := 500 * time.Millisecond + testTimer := time.AfterFunc(testTimeout, func() { + panic(fmt.Sprintf("test timed out after %s", testTimeout)) + }) + + conn, err := s.Dial(ctx, resolver) + testTimer.Stop() + + if tt.wantErr == "" { + require.Nil(err) + } else { + require.NotNil(err) + require.Contains(err.Error(), tt.wantErr) + } + + if err == nil { + conn.Close() + } + }) + } +} diff --git a/connect/testdata/ca1-ca-consul-internal.cert.pem b/connect/testdata/ca1-ca-consul-internal.cert.pem deleted file mode 100644 index 6a557775f..000000000 --- a/connect/testdata/ca1-ca-consul-internal.cert.pem +++ /dev/null @@ -1,14 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICIDCCAcagAwIBAgIBATAKBggqhkjOPQQDAjAaMRgwFgYDVQQDEw9Db25zdWwg -SW50ZXJuYWwwHhcNMTgwMzIzMjIwNDI1WhcNMjgwMzIwMjIwNDI1WjAaMRgwFgYD -VQQDEw9Db25zdWwgSW50ZXJuYWwwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAAT3 -IPiDHugKYEVaSpIzBjqU5lQrmirC6N1XHyOAhF2psGGxcxezpf8Vgy5Iv6XbmeHr -cttyzUYtUKhrFBhxkPYRo4H8MIH5MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8E -BTADAQH/MCkGA1UdDgQiBCCrnNQy2IQS73Co9WbrPXtq/YP9SvIBOJ8iYRWTOxjC -qTArBgNVHSMEJDAigCCrnNQy2IQS73Co9WbrPXtq/YP9SvIBOJ8iYRWTOxjCqTA/ -BgNVHREEODA2hjRzcGlmZmU6Ly8xMTExMTExMS0yMjIyLTMzMzMtNDQ0NC01NTU1 -NTU1NTU1NTUuY29uc3VsMD0GA1UdHgEB/wQzMDGgLzAtgisxMTExMTExMS0yMjIy -LTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsMAoGCCqGSM49BAMCA0gAMEUC -IQDwWL6ZuszKrZjSJwDzdhRQtj1ppezJrKaDTJx+4F/tyQIgEaQCR935ztIqZzgO -Ka6ozcH2Ubd4j4cDC1XswVMW6zs= ------END CERTIFICATE----- diff --git a/connect/testdata/ca1-ca-consul-internal.key.pem b/connect/testdata/ca1-ca-consul-internal.key.pem deleted file mode 100644 index 8c40fd26b..000000000 --- a/connect/testdata/ca1-ca-consul-internal.key.pem +++ /dev/null @@ -1,5 +0,0 @@ ------BEGIN EC PRIVATE KEY----- -MHcCAQEEIDUDO3I7WKbLTTWkNKA4unB2RLq/RX+L+XIFssDE/AD7oAoGCCqGSM49 -AwEHoUQDQgAE9yD4gx7oCmBFWkqSMwY6lOZUK5oqwujdVx8jgIRdqbBhsXMXs6X/ -FYMuSL+l25nh63Lbcs1GLVCoaxQYcZD2EQ== ------END EC PRIVATE KEY----- diff --git a/connect/testdata/ca1-svc-cache.cert.pem b/connect/testdata/ca1-svc-cache.cert.pem deleted file mode 100644 index 097a2b6a6..000000000 --- a/connect/testdata/ca1-svc-cache.cert.pem +++ /dev/null @@ -1,14 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICEDCCAbagAwIBAgIBBTAKBggqhkjOPQQDAjAaMRgwFgYDVQQDEw9Db25zdWwg -SW50ZXJuYWwwHhcNMTgwMzIzMjIwNDI1WhcNMjgwMzIwMjIwNDI1WjAQMQ4wDAYD -VQQDEwVjYWNoZTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABOWw8369v4DHJAI6 -k061hU8rxaQs87mZFQ52JfleJjRoDUuZIPLhZHMFbvbI8pDWi7YdjluNbzNNh6nu -fAivylujgfYwgfMwDgYDVR0PAQH/BAQDAgO4MB0GA1UdJQQWMBQGCCsGAQUFBwMC -BggrBgEFBQcDATAMBgNVHRMBAf8EAjAAMCkGA1UdDgQiBCCHhMqV2/R8meSsXtwh -OLC9hP7WQfuvwJ6V6uwKZdEofTArBgNVHSMEJDAigCCrnNQy2IQS73Co9WbrPXtq -/YP9SvIBOJ8iYRWTOxjCqTBcBgNVHREEVTBThlFzcGlmZmU6Ly8xMTExMTExMS0y -MjIyLTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsL25zL2RlZmF1bHQvZGMv -ZGMwMS9zdmMvY2FjaGUwCgYIKoZIzj0EAwIDSAAwRQIgPfekKBd/ltpVkdjnB0Hp -cV9HPwy12tXp4suR2nspSNkCIQD1Th/hvxuBKkRYy9Bl+jgTbrFdd4fLCWPeFbaM -sgLK7g== ------END CERTIFICATE----- diff --git a/connect/testdata/ca1-svc-cache.key.pem b/connect/testdata/ca1-svc-cache.key.pem deleted file mode 100644 index f780f63db..000000000 --- a/connect/testdata/ca1-svc-cache.key.pem +++ /dev/null @@ -1,5 +0,0 @@ ------BEGIN EC PRIVATE KEY----- -MHcCAQEEIPTSPV2cWNnO69f+vYyCg5frpoBtK6L+kZVLrGCv3TdnoAoGCCqGSM49 -AwEHoUQDQgAE5bDzfr2/gMckAjqTTrWFTyvFpCzzuZkVDnYl+V4mNGgNS5kg8uFk -cwVu9sjykNaLth2OW41vM02Hqe58CK/KWw== ------END EC PRIVATE KEY----- diff --git a/connect/testdata/ca1-svc-db.cert.pem b/connect/testdata/ca1-svc-db.cert.pem deleted file mode 100644 index d00a38ea0..000000000 --- 
a/connect/testdata/ca1-svc-db.cert.pem +++ /dev/null @@ -1,13 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICCjCCAbCgAwIBAgIBBDAKBggqhkjOPQQDAjAaMRgwFgYDVQQDEw9Db25zdWwg -SW50ZXJuYWwwHhcNMTgwMzIzMjIwNDI1WhcNMjgwMzIwMjIwNDI1WjANMQswCQYD -VQQDEwJkYjBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABEcTyr2l7yYWZuh++02M -usR20QrZtHdd7goKmYrIpQ3ekmHuLLgJWgTTaIhCj8fzbryep+s8oM7EiPhRQ14l -uSujgfMwgfAwDgYDVR0PAQH/BAQDAgO4MB0GA1UdJQQWMBQGCCsGAQUFBwMCBggr -BgEFBQcDATAMBgNVHRMBAf8EAjAAMCkGA1UdDgQiBCAy6jHCBBT2bii+aMJCDJ33 -bFJtR72bxDBUi5b+YWyWwDArBgNVHSMEJDAigCCrnNQy2IQS73Co9WbrPXtq/YP9 -SvIBOJ8iYRWTOxjCqTBZBgNVHREEUjBQhk5zcGlmZmU6Ly8xMTExMTExMS0yMjIy -LTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsL25zL2RlZmF1bHQvZGMvZGMw -MS9zdmMvZGIwCgYIKoZIzj0EAwIDSAAwRQIhALCW4cOEpuYfLJ0NGwEmYG5Fko0N -WMccL0gEQzKUbIWrAiAIw8wkTSf1O8vTHeKdR1fCmdVoDRFRKB643PaofUzFxA== ------END CERTIFICATE----- diff --git a/connect/testdata/ca1-svc-db.key.pem b/connect/testdata/ca1-svc-db.key.pem deleted file mode 100644 index 3ec23a33b..000000000 --- a/connect/testdata/ca1-svc-db.key.pem +++ /dev/null @@ -1,5 +0,0 @@ ------BEGIN EC PRIVATE KEY----- -MHcCAQEEIMHv1pjt75IjKXzl8l4rBtEFS1pEuOM4WNgeHg5Qn1RroAoGCCqGSM49 -AwEHoUQDQgAERxPKvaXvJhZm6H77TYy6xHbRCtm0d13uCgqZisilDd6SYe4suAla -BNNoiEKPx/NuvJ6n6zygzsSI+FFDXiW5Kw== ------END EC PRIVATE KEY----- diff --git a/connect/testdata/ca1-svc-web.cert.pem b/connect/testdata/ca1-svc-web.cert.pem deleted file mode 100644 index a786a2c06..000000000 --- a/connect/testdata/ca1-svc-web.cert.pem +++ /dev/null @@ -1,13 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICDDCCAbKgAwIBAgIBAzAKBggqhkjOPQQDAjAaMRgwFgYDVQQDEw9Db25zdWwg -SW50ZXJuYWwwHhcNMTgwMzIzMjIwNDI1WhcNMjgwMzIwMjIwNDI1WjAOMQwwCgYD -VQQDEwN3ZWIwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARF47lERGXziNBC74Kh -U3W29/M7JO9LIUaJgK0LJbhgf0MuPxf7gX+PnxH5ImI5yfXRv0SSxeCq7377IkXP -XS6Fo4H0MIHxMA4GA1UdDwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcDAgYI -KwYBBQUHAwEwDAYDVR0TAQH/BAIwADApBgNVHQ4EIgQg26hfNYiVwYRm7CQJvdOd -NIOmG3t8vNwXCtktC782cf8wKwYDVR0jBCQwIoAgq5zUMtiEEu9wqPVm6z17av2D -/UryATifImEVkzsYwqkwWgYDVR0RBFMwUYZPc3BpZmZlOi8vMTExMTExMTEtMjIy -Mi0zMzMzLTQ0NDQtNTU1NTU1NTU1NTU1LmNvbnN1bC9ucy9kZWZhdWx0L2RjL2Rj -MDEvc3ZjL3dlYjAKBggqhkjOPQQDAgNIADBFAiAzi8uBs+ApPfAZZm5eO/hhVZiv -E8p84VKCqPeF3tFfoAIhANVkdSnp2AKU5T7SlJHmieu3DFNWCVpajlHJvf286J94 ------END CERTIFICATE----- diff --git a/connect/testdata/ca1-svc-web.key.pem b/connect/testdata/ca1-svc-web.key.pem deleted file mode 100644 index 8ed82c13c..000000000 --- a/connect/testdata/ca1-svc-web.key.pem +++ /dev/null @@ -1,5 +0,0 @@ ------BEGIN EC PRIVATE KEY----- -MHcCAQEEIPOIj4BFS0fknG+uAVKZIWRpnzp7O3OKpBDgEmuml7lcoAoGCCqGSM49 -AwEHoUQDQgAEReO5RERl84jQQu+CoVN1tvfzOyTvSyFGiYCtCyW4YH9DLj8X+4F/ -j58R+SJiOcn10b9EksXgqu9++yJFz10uhQ== ------END EC PRIVATE KEY----- diff --git a/connect/testdata/ca2-ca-vault.cert.pem b/connect/testdata/ca2-ca-vault.cert.pem deleted file mode 100644 index a7f617468..000000000 --- a/connect/testdata/ca2-ca-vault.cert.pem +++ /dev/null @@ -1,14 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICDDCCAbKgAwIBAgIBAjAKBggqhkjOPQQDAjAQMQ4wDAYDVQQDEwVWYXVsdDAe -Fw0xODAzMjMyMjA0MjVaFw0yODAzMjAyMjA0MjVaMBAxDjAMBgNVBAMTBVZhdWx0 -MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEAjGVnRy/7Q2SU4ePbKbsurRAHKYA -CuA3r9QrowgZOr7yptF54shiobMIORpfKYkoYkhzL1lhWKI06BUJ4xuPd6OB/DCB -+TAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zApBgNVHQ4EIgQgqEc5 -ZrELD5ySxapbU+eRb+aEv1MEoCvjC0mCA1uJecMwKwYDVR0jBCQwIoAgqEc5ZrEL -D5ySxapbU+eRb+aEv1MEoCvjC0mCA1uJecMwPwYDVR0RBDgwNoY0c3BpZmZlOi8v -MTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1NTU1NTU1NTU1LmNvbnN1bDA9BgNV 
-HR4BAf8EMzAxoC8wLYIrMTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1NTU1NTU1 -NTU1LmNvbnN1bDAKBggqhkjOPQQDAgNIADBFAiEA6pBdeglhq//A7sYnYk85XL+3 -4IDrXrGN3KjC9qo3J9ICIDS9pEoTPWAWDfn1ccPafKVBrJm6KrmljcvymQ2QUDIZ ------END CERTIFICATE----- ----- diff --git a/connect/testdata/ca2-ca-vault.key.pem b/connect/testdata/ca2-ca-vault.key.pem deleted file mode 100644 index 43534b961..000000000 --- a/connect/testdata/ca2-ca-vault.key.pem +++ /dev/null @@ -1,5 +0,0 @@ ------BEGIN EC PRIVATE KEY----- -MHcCAQEEIKnuCctuvtyzf+M6B8jGqejG4T5o7NMRYjO2M3dZITCboAoGCCqGSM49 -AwEHoUQDQgAEAjGVnRy/7Q2SU4ePbKbsurRAHKYACuA3r9QrowgZOr7yptF54shi -obMIORpfKYkoYkhzL1lhWKI06BUJ4xuPdw== ------END EC PRIVATE KEY----- diff --git a/connect/testdata/ca2-svc-cache.cert.pem b/connect/testdata/ca2-svc-cache.cert.pem deleted file mode 100644 index 32110e232..000000000 --- a/connect/testdata/ca2-svc-cache.cert.pem +++ /dev/null @@ -1,13 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICBzCCAaygAwIBAgIBCDAKBggqhkjOPQQDAjAQMQ4wDAYDVQQDEwVWYXVsdDAe -Fw0xODAzMjMyMjA0MjVaFw0yODAzMjAyMjA0MjVaMBAxDjAMBgNVBAMTBWNhY2hl -MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEyB6D+Eqi/71EhUrBWlcZOV2vjS9Y -xnUQ3jfH+QUZur7WOuGLnO7eArbAHcDbqKGyDWxlkZH04sGYOXaEW7UUd6OB9jCB -8zAOBgNVHQ8BAf8EBAMCA7gwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMB -MAwGA1UdEwEB/wQCMAAwKQYDVR0OBCIEIGapiHFxlbYbNKFlwdPMpKhIypvNZXo8 -k/OZLki/vurQMCsGA1UdIwQkMCKAIKhHOWaxCw+cksWqW1PnkW/mhL9TBKAr4wtJ -ggNbiXnDMFwGA1UdEQRVMFOGUXNwaWZmZTovLzExMTExMTExLTIyMjItMzMzMy00 -NDQ0LTU1NTU1NTU1NTU1NS5jb25zdWwvbnMvZGVmYXVsdC9kYy9kYzAxL3N2Yy9j -YWNoZTAKBggqhkjOPQQDAgNJADBGAiEA/vRLXbkigS6l89MxFk0RFE7Zo4vorv7s -E1juCOsVJBICIQDXlpmYH9fPon6DYMyOxQttNjkuWbJgnPv7rPg+CllRyA== ------END CERTIFICATE----- diff --git a/connect/testdata/ca2-svc-cache.key.pem b/connect/testdata/ca2-svc-cache.key.pem deleted file mode 100644 index cabad8179..000000000 --- a/connect/testdata/ca2-svc-cache.key.pem +++ /dev/null @@ -1,5 +0,0 @@ ------BEGIN EC PRIVATE KEY----- -MHcCAQEEIEbQOv4odF2Tu8ZnJTJuytvOd2HOF9HxgGw5ei1pkP4moAoGCCqGSM49 -AwEHoUQDQgAEyB6D+Eqi/71EhUrBWlcZOV2vjS9YxnUQ3jfH+QUZur7WOuGLnO7e -ArbAHcDbqKGyDWxlkZH04sGYOXaEW7UUdw== ------END EC PRIVATE KEY----- diff --git a/connect/testdata/ca2-svc-db.cert.pem b/connect/testdata/ca2-svc-db.cert.pem deleted file mode 100644 index 33273058a..000000000 --- a/connect/testdata/ca2-svc-db.cert.pem +++ /dev/null @@ -1,13 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICADCCAaagAwIBAgIBBzAKBggqhkjOPQQDAjAQMQ4wDAYDVQQDEwVWYXVsdDAe -Fw0xODAzMjMyMjA0MjVaFw0yODAzMjAyMjA0MjVaMA0xCzAJBgNVBAMTAmRiMFkw -EwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEFeB4DynO6IeKOE4zFLlBVFv+4HeWRvK3 -6cQ9L6v5uhLfdcYyqhT/QLbQ4R8ks1vUTTiq0XJsAGdkvkt71fiEl6OB8zCB8DAO -BgNVHQ8BAf8EBAMCA7gwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMAwG -A1UdEwEB/wQCMAAwKQYDVR0OBCIEIKjVz8n91cej8q6WpDNd0hwSMAE2ddY056PH -hMfaBM6GMCsGA1UdIwQkMCKAIKhHOWaxCw+cksWqW1PnkW/mhL9TBKAr4wtJggNb -iXnDMFkGA1UdEQRSMFCGTnNwaWZmZTovLzExMTExMTExLTIyMjItMzMzMy00NDQ0 -LTU1NTU1NTU1NTU1NS5jb25zdWwvbnMvZGVmYXVsdC9kYy9kYzAxL3N2Yy9kYjAK -BggqhkjOPQQDAgNIADBFAiAdYkokbeZr7W32NhjcNoTMNwpz9CqJpK6Yzu4N6EJc -pAIhALHpRM57zdiMouDOlhGPX5XKzbSl2AnBjFvbPqgFV09Z ------END CERTIFICATE----- diff --git a/connect/testdata/ca2-svc-db.key.pem b/connect/testdata/ca2-svc-db.key.pem deleted file mode 100644 index 7f7ab9ff8..000000000 --- a/connect/testdata/ca2-svc-db.key.pem +++ /dev/null @@ -1,5 +0,0 @@ ------BEGIN EC PRIVATE KEY----- -MHcCAQEEIHnzia+DNTFB7uYQEuWvLR2czGCuDfOTt1FfcBo1uBJioAoGCCqGSM49 -AwEHoUQDQgAEFeB4DynO6IeKOE4zFLlBVFv+4HeWRvK36cQ9L6v5uhLfdcYyqhT/ -QLbQ4R8ks1vUTTiq0XJsAGdkvkt71fiElw== ------END EC PRIVATE 
KEY----- diff --git a/connect/testdata/ca2-svc-web.cert.pem b/connect/testdata/ca2-svc-web.cert.pem deleted file mode 100644 index ae1e338f6..000000000 --- a/connect/testdata/ca2-svc-web.cert.pem +++ /dev/null @@ -1,13 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICAjCCAaigAwIBAgIBBjAKBggqhkjOPQQDAjAQMQ4wDAYDVQQDEwVWYXVsdDAe -Fw0xODAzMjMyMjA0MjVaFw0yODAzMjAyMjA0MjVaMA4xDDAKBgNVBAMTA3dlYjBZ -MBMGByqGSM49AgEGCCqGSM49AwEHA0IABM9XzxWFCa80uQDfJEGboUC15Yr+FwDp -OemThalQxFpkL7gQSIgpzgGULIx+jCiu+clJ0QhbWT2dnS8vFUKq35qjgfQwgfEw -DgYDVR0PAQH/BAQDAgO4MB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDATAM -BgNVHRMBAf8EAjAAMCkGA1UdDgQiBCCN+TKHPCOr48hxRCx4rqbWQg5QHkCSNzjZ -qi1JGs13njArBgNVHSMEJDAigCCoRzlmsQsPnJLFqltT55Fv5oS/UwSgK+MLSYID -W4l5wzBaBgNVHREEUzBRhk9zcGlmZmU6Ly8xMTExMTExMS0yMjIyLTMzMzMtNDQ0 -NC01NTU1NTU1NTU1NTUuY29uc3VsL25zL2RlZmF1bHQvZGMvZGMwMS9zdmMvd2Vi -MAoGCCqGSM49BAMCA0gAMEUCIBd6gpL6E8rms5BU+cJeeyv0Rjc18edn2g3q2wLN -r1zAAiEAv16whKwR0DeKkldGLDQIu9nCNvfDZrEWgywIBYbzLxY= ------END CERTIFICATE----- diff --git a/connect/testdata/ca2-svc-web.key.pem b/connect/testdata/ca2-svc-web.key.pem deleted file mode 100644 index 65f0bc48e..000000000 --- a/connect/testdata/ca2-svc-web.key.pem +++ /dev/null @@ -1,5 +0,0 @@ ------BEGIN EC PRIVATE KEY----- -MHcCAQEEIOCMjjRexX3qHjixpRwLxggJd9yuskqUoPy8/MepafP+oAoGCCqGSM49 -AwEHoUQDQgAEz1fPFYUJrzS5AN8kQZuhQLXliv4XAOk56ZOFqVDEWmQvuBBIiCnO -AZQsjH6MKK75yUnRCFtZPZ2dLy8VQqrfmg== ------END EC PRIVATE KEY----- diff --git a/connect/testdata/ca2-xc-by-ca1.cert.pem b/connect/testdata/ca2-xc-by-ca1.cert.pem deleted file mode 100644 index e864f6c00..000000000 --- a/connect/testdata/ca2-xc-by-ca1.cert.pem +++ /dev/null @@ -1,14 +0,0 @@ ------BEGIN CERTIFICATE----- -MIICFjCCAbygAwIBAgIBAjAKBggqhkjOPQQDAjAaMRgwFgYDVQQDEw9Db25zdWwg -SW50ZXJuYWwwHhcNMTgwMzIzMjIwNDI1WhcNMjgwMzIwMjIwNDI1WjAQMQ4wDAYD -VQQDEwVWYXVsdDBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABAIxlZ0cv+0NklOH -j2ym7Lq0QBymAArgN6/UK6MIGTq+8qbReeLIYqGzCDkaXymJKGJIcy9ZYViiNOgV -CeMbj3ejgfwwgfkwDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wKQYD -VR0OBCIEIKhHOWaxCw+cksWqW1PnkW/mhL9TBKAr4wtJggNbiXnDMCsGA1UdIwQk -MCKAIKuc1DLYhBLvcKj1Zus9e2r9g/1K8gE4nyJhFZM7GMKpMD8GA1UdEQQ4MDaG -NHNwaWZmZTovLzExMTExMTExLTIyMjItMzMzMy00NDQ0LTU1NTU1NTU1NTU1NS5j -b25zdWwwPQYDVR0eAQH/BDMwMaAvMC2CKzExMTExMTExLTIyMjItMzMzMy00NDQ0 -LTU1NTU1NTU1NTU1NS5jb25zdWwwCgYIKoZIzj0EAwIDSAAwRQIgWWWj8/6SaY2y -wzOtIphwZLewCuLMG6KG8uY4S7UsosgCIQDhCbT/LUKq/A21khQncBmM79ng9Gbx -/4Zw8zbVmnZJKg== ------END CERTIFICATE----- diff --git a/connect/testdata/mkcerts.go b/connect/testdata/mkcerts.go deleted file mode 100644 index 7fe82f53a..000000000 --- a/connect/testdata/mkcerts.go +++ /dev/null @@ -1,243 +0,0 @@ -package main - -import ( - "crypto/ecdsa" - "crypto/elliptic" - "crypto/rand" - "crypto/sha256" - "crypto/x509" - "crypto/x509/pkix" - "encoding/pem" - "fmt" - "log" - "math/big" - "net/url" - "os" - "regexp" - "strings" - "time" -) - -// You can verify a given leaf with a given root using: -// -// $ openssl verify -verbose -CAfile ca2-ca-vault.cert.pem ca1-svc-db.cert.pem -// -// Note that to verify via the cross-signed intermediate, openssl requires it to -// be bundled with the _root_ CA bundle and will ignore the cert if it's passed -// with the subject. 
You can do that with: -// -// $ openssl verify -verbose -CAfile \ -// <(cat ca1-ca-consul-internal.cert.pem ca2-xc-by-ca1.cert.pem) \ -// ca2-svc-db.cert.pem -// ca2-svc-db.cert.pem: OK -// -// Note that the same leaf and root without the intermediate should fail: -// -// $ openssl verify -verbose -CAfile ca1-ca-consul-internal.cert.pem ca2-svc-db.cert.pem -// ca2-svc-db.cert.pem: CN = db -// error 20 at 0 depth lookup:unable to get local issuer certificate -// -// NOTE: THIS IS A QUIRK OF OPENSSL; in Connect we will distribute the roots -// alone and stable intermediates like the XC cert to the _leaf_. - -var clusterID = "11111111-2222-3333-4444-555555555555" -var cAs = []string{"Consul Internal", "Vault"} -var services = []string{"web", "db", "cache"} -var slugRe = regexp.MustCompile("[^a-zA-Z0-9]+") -var serial int64 - -type caInfo struct { - id int - name string - slug string - uri *url.URL - pk *ecdsa.PrivateKey - cert *x509.Certificate -} - -func main() { - // Make CA certs - caInfos := make(map[string]caInfo) - var previousCA *caInfo - for idx, name := range cAs { - ca := caInfo{ - id: idx + 1, - name: name, - slug: strings.ToLower(slugRe.ReplaceAllString(name, "-")), - } - pk, err := makePK(fmt.Sprintf("ca%d-ca-%s.key.pem", ca.id, ca.slug)) - if err != nil { - log.Fatal(err) - } - ca.pk = pk - caURI, err := url.Parse(fmt.Sprintf("spiffe://%s.consul", clusterID)) - if err != nil { - log.Fatal(err) - } - ca.uri = caURI - cert, err := makeCACert(ca, previousCA) - if err != nil { - log.Fatal(err) - } - ca.cert = cert - caInfos[name] = ca - previousCA = &ca - } - - // For each CA, make a leaf cert for each service - for _, ca := range caInfos { - for _, svc := range services { - _, err := makeLeafCert(ca, svc) - if err != nil { - log.Fatal(err) - } - } - } -} - -func makePK(path string) (*ecdsa.PrivateKey, error) { - log.Printf("Writing PK file: %s", path) - priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) - if err != nil { - return nil, err - } - - bs, err := x509.MarshalECPrivateKey(priv) - if err != nil { - return nil, err - } - - err = writePEM(path, "EC PRIVATE KEY", bs) - return priv, nil -} - -func makeCACert(ca caInfo, previousCA *caInfo) (*x509.Certificate, error) { - path := fmt.Sprintf("ca%d-ca-%s.cert.pem", ca.id, ca.slug) - log.Printf("Writing CA cert file: %s", path) - serial++ - subj := pkix.Name{ - CommonName: ca.name, - } - template := x509.Certificate{ - SerialNumber: big.NewInt(serial), - Subject: subj, - // New in go 1.10 - URIs: []*url.URL{ca.uri}, - // Add DNS name constraint - PermittedDNSDomainsCritical: true, - PermittedDNSDomains: []string{ca.uri.Hostname()}, - SignatureAlgorithm: x509.ECDSAWithSHA256, - BasicConstraintsValid: true, - KeyUsage: x509.KeyUsageCertSign | x509.KeyUsageCRLSign | x509.KeyUsageDigitalSignature, - IsCA: true, - NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), - NotBefore: time.Now(), - AuthorityKeyId: keyID(&ca.pk.PublicKey), - SubjectKeyId: keyID(&ca.pk.PublicKey), - } - bs, err := x509.CreateCertificate(rand.Reader, &template, &template, - &ca.pk.PublicKey, ca.pk) - if err != nil { - return nil, err - } - - err = writePEM(path, "CERTIFICATE", bs) - if err != nil { - return nil, err - } - - cert, err := x509.ParseCertificate(bs) - if err != nil { - return nil, err - } - - if previousCA != nil { - // Also create cross-signed cert as we would use during rotation between - // previous CA and this one. 
- template.AuthorityKeyId = keyID(&previousCA.pk.PublicKey) - bs, err := x509.CreateCertificate(rand.Reader, &template, - previousCA.cert, &ca.pk.PublicKey, previousCA.pk) - if err != nil { - return nil, err - } - - path := fmt.Sprintf("ca%d-xc-by-ca%d.cert.pem", ca.id, previousCA.id) - err = writePEM(path, "CERTIFICATE", bs) - if err != nil { - return nil, err - } - } - - return cert, err -} - -func keyID(pub *ecdsa.PublicKey) []byte { - // This is not standard; RFC allows any unique identifier as long as they - // match in subject/authority chains but suggests specific hashing of DER - // bytes of public key including DER tags. I can't be bothered to do esp. - // since ECDSA keys don't have a handy way to marshal the publick key alone. - h := sha256.New() - h.Write(pub.X.Bytes()) - h.Write(pub.Y.Bytes()) - return h.Sum([]byte{}) -} - -func makeLeafCert(ca caInfo, svc string) (*x509.Certificate, error) { - svcURI := ca.uri - svcURI.Path = "/ns/default/dc/dc01/svc/" + svc - - keyPath := fmt.Sprintf("ca%d-svc-%s.key.pem", ca.id, svc) - cPath := fmt.Sprintf("ca%d-svc-%s.cert.pem", ca.id, svc) - - pk, err := makePK(keyPath) - if err != nil { - return nil, err - } - - log.Printf("Writing Service Cert: %s", cPath) - - serial++ - subj := pkix.Name{ - CommonName: svc, - } - template := x509.Certificate{ - SerialNumber: big.NewInt(serial), - Subject: subj, - // New in go 1.10 - URIs: []*url.URL{svcURI}, - SignatureAlgorithm: x509.ECDSAWithSHA256, - BasicConstraintsValid: true, - KeyUsage: x509.KeyUsageDataEncipherment | - x509.KeyUsageKeyAgreement | x509.KeyUsageDigitalSignature | - x509.KeyUsageKeyEncipherment, - ExtKeyUsage: []x509.ExtKeyUsage{ - x509.ExtKeyUsageClientAuth, - x509.ExtKeyUsageServerAuth, - }, - NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), - NotBefore: time.Now(), - AuthorityKeyId: keyID(&ca.pk.PublicKey), - SubjectKeyId: keyID(&pk.PublicKey), - } - bs, err := x509.CreateCertificate(rand.Reader, &template, ca.cert, - &pk.PublicKey, ca.pk) - if err != nil { - return nil, err - } - - err = writePEM(cPath, "CERTIFICATE", bs) - if err != nil { - return nil, err - } - - return x509.ParseCertificate(bs) -} - -func writePEM(name, typ string, bs []byte) error { - f, err := os.OpenFile(name, os.O_WRONLY|os.O_CREATE, 0600) - if err != nil { - return err - } - defer f.Close() - return pem.Encode(f, &pem.Block{Type: typ, Bytes: bs}) -} diff --git a/connect/testing.go b/connect/testing.go index 90db332a2..7e1b9cdac 100644 --- a/connect/testing.go +++ b/connect/testing.go @@ -4,30 +4,155 @@ import ( "crypto/tls" "crypto/x509" "fmt" - "io/ioutil" - "path" - "path/filepath" - "runtime" + "io" + "net" + "sync/atomic" - "github.com/mitchellh/go-testing-interface" + "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/consul/lib/freeport" + testing "github.com/mitchellh/go-testing-interface" "github.com/stretchr/testify/require" ) -// testDataDir is a janky temporary hack to allow use of these methods from -// proxy package. We need to revisit where all this lives since it logically -// overlaps with consul/agent in Mitchell's PR and that one generates certs on -// the fly which will make this unecessary but I want to get things working for -// now with what I've got :). This wonderful heap kinda-sorta gets the path -// relative to _this_ file so it works even if the Test* method is being called -// from a test binary in another package dir. 
-func testDataDir() string {
-	_, filename, _, ok := runtime.Caller(0)
-	if !ok {
-		panic("no caller information")
-	}
-	return path.Dir(filename) + "/testdata"
+// testVerifier creates a helper verifyFunc that can be set in a tls.Config and
+// records calls made, passing back the certificates presented via the returned
+// channel. The channel is buffered so up to 128 verification calls can be made
+// without reading the chan before verification blocks.
+func testVerifier(t testing.T, returnErr error) (verifyFunc, chan [][]byte) {
+	ch := make(chan [][]byte, 128)
+	return func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
+		ch <- rawCerts
+		return returnErr
+	}, ch
 }

+// TestTLSConfig returns a *tls.Config suitable for use during tests.
+func TestTLSConfig(t testing.T, service string, ca *structs.CARoot) *tls.Config {
+	t.Helper()
+
+	// Insecure default (nil verifier)
+	cfg := defaultTLSConfig(nil)
+	cfg.Certificates = []tls.Certificate{TestSvcKeyPair(t, service, ca)}
+	cfg.RootCAs = TestCAPool(t, ca)
+	cfg.ClientCAs = TestCAPool(t, ca)
+	return cfg
+}
+
+// TestCAPool returns an *x509.CertPool containing the passed CA's root(s).
+func TestCAPool(t testing.T, cas ...*structs.CARoot) *x509.CertPool {
+	t.Helper()
+	pool := x509.NewCertPool()
+	for _, ca := range cas {
+		pool.AppendCertsFromPEM([]byte(ca.RootCert))
+	}
+	return pool
+}
+
+// TestSvcKeyPair returns a tls.Certificate containing both cert and private
+// key for a given service under a given CA.
+func TestSvcKeyPair(t testing.T, service string, ca *structs.CARoot) tls.Certificate {
+	t.Helper()
+	certPEM, keyPEM := connect.TestLeaf(t, service, ca)
+	cert, err := tls.X509KeyPair([]byte(certPEM), []byte(keyPEM))
+	require.Nil(t, err)
+	return cert
+}
+
+// TestPeerCertificates returns a []*x509.Certificate as you'd get from
+// tls.Conn.ConnectionState().PeerCertificates, including the named certificate.
+func TestPeerCertificates(t testing.T, service string, ca *structs.CARoot) []*x509.Certificate {
+	t.Helper()
+	certPEM, _ := connect.TestLeaf(t, service, ca)
+	cert, err := connect.ParseCert(certPEM)
+	require.Nil(t, err)
+	return []*x509.Certificate{cert}
+}
+
+// TestService runs a service listener that can be used to test clients. Its
+// behaviour can be controlled by the struct members.
+type TestService struct {
+	// The service name to serve.
+	Service string
+	// The (test) CA to use for generating certs.
+	CA *structs.CARoot
+	// TimeoutHandshake, if true, causes the server to accept TCP connections but
+	// never complete the TLS handshake, simulating a handshake timeout.
+	TimeoutHandshake bool
+	// TLSCfg is the tls.Config that will be used. By default it's set up from the
+	// service name and CA given to NewTestService.
+	TLSCfg *tls.Config
+	// Addr is the listen address. It is set to a random free port on `localhost`
+	// by default.
+	Addr string
+
+	l        net.Listener
+	stopFlag int32
+	stopChan chan struct{}
+}
+
+// NewTestService returns a TestService. It should be closed when the test is
+// complete.
+func NewTestService(t testing.T, service string, ca *structs.CARoot) *TestService {
+	ports := freeport.GetT(t, 1)
+	return &TestService{
+		Service:  service,
+		CA:       ca,
+		stopChan: make(chan struct{}),
+		TLSCfg:   TestTLSConfig(t, service, ca),
+		Addr:     fmt.Sprintf("localhost:%d", ports[0]),
+	}
+}
+
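+// Illustrative sketch (assumed service names; mirrors how connect/service_test.go
+// uses these helpers): a test that needs a live Connect-style echo server can
+// combine a test CA with a TestService roughly like this:
+//
+//	ca := connect.TestCA(t, nil)
+//	svc := NewTestService(t, "db", ca)
+//	go svc.Serve()
+//	defer svc.Close()
+//
+//	// A client under test then dials svc.Addr using TestTLSConfig(t, "web", ca)
+//	// and, once the handshake completes, gets simple echo behaviour back.
+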
+// Serve runs a TestService and blocks until it is closed or errors.
+func (s *TestService) Serve() error {
+	// Just accept TCP conns so we can control the timing of accept/handshake.
+	l, err := net.Listen("tcp", s.Addr)
+	if err != nil {
+		return err
+	}
+	s.l = l
+
+	for {
+		conn, err := s.l.Accept()
+		if err != nil {
+			if atomic.LoadInt32(&s.stopFlag) == 1 {
+				return nil
+			}
+			return err
+		}
+
+		// Only upgrade and serve the conn if we aren't simulating a handshake
+		// timeout; otherwise leave it hanging without a TLS handshake.
+		if !s.TimeoutHandshake {
+			// Upgrade conn to TLS
+			conn = tls.Server(conn, s.TLSCfg)
+
+			// Run an echo service
+			go io.Copy(conn, conn)
+		}
+
+		// Close this conn when we stop
+		go func(c net.Conn) {
+			<-s.stopChan
+			c.Close()
+		}(conn)
+	}
+
+	return nil
+}
+
+// Close stops a TestService.
+func (s *TestService) Close() {
+	old := atomic.SwapInt32(&s.stopFlag, 1)
+	if old == 0 {
+		if s.l != nil {
+			s.l.Close()
+		}
+		close(s.stopChan)
+	}
+}
+
+/*
 // TestCAPool returns an *x509.CertPool containing the named CA certs from the
 // testdata dir.
 func TestCAPool(t testing.T, caNames ...string) *x509.CertPool {
@@ -86,3 +211,4 @@ func (a *TestAuther) Auth(rawCerts [][]byte,
 	verifiedChains [][]*x509.Certificate) error {
 	return a.Return
 }
+*/
diff --git a/connect/tls.go b/connect/tls.go
index af66d9c0c..8d3bc3a94 100644
--- a/connect/tls.go
+++ b/connect/tls.go
@@ -3,13 +3,18 @@ package connect
 import (
 	"crypto/tls"
 	"crypto/x509"
+	"errors"
 	"io/ioutil"
 	"sync"
+
+	"github.com/hashicorp/consul/agent/connect"
 )

-// defaultTLSConfig returns the standard config for connect clients and servers.
-func defaultTLSConfig() *tls.Config {
-	serverAuther := &ServerAuther{}
+// verifyFunc is the type of tls.Config.VerifyPeerCertificate for convenience.
+type verifyFunc func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error
+
+// defaultTLSConfig returns the standard config.
+func defaultTLSConfig(verify verifyFunc) *tls.Config {
 	return &tls.Config{
 		MinVersion: tls.VersionTLS12,
 		ClientAuth: tls.RequireAndVerifyClientCert,
@@ -29,16 +34,18 @@
 		// We have to set this since otherwise Go will attempt to verify DNS names
 		// match DNS SAN/CN which we don't want. We hook up VerifyPeerCertificate to
 		// do our own path validation as well as Connect AuthZ.
-		InsecureSkipVerify: true,
-		// By default auth as if we are a server. Clients need to override this with
-		// an Auther that is performs correct validation of the server identity they
-		// intended to connect to.
-		VerifyPeerCertificate: serverAuther.Auth,
+		InsecureSkipVerify:    true,
+		VerifyPeerCertificate: verify,
+		// Include h2 to allow connect http servers to automatically support http2.
+		// See: https://github.com/golang/go/blob/917c33fe8672116b04848cf11545296789cafd3b/src/net/http/server.go#L2724-L2731
+		NextProtos: []string{"h2"},
 	}
 }

 // ReloadableTLSConfig exposes a tls.Config that can have it's certificates
-// reloaded. This works by
+// reloaded. On a server, this uses GetConfigForClient to pass the current
+// tls.Config or client certificate for each accepted connection. On a client,
+// this uses GetClientCertificate to provide the current client certificate.
 type ReloadableTLSConfig struct {
 	mu sync.Mutex

@@ -46,52 +53,40 @@ type ReloadableTLSConfig struct {
 	cfg *tls.Config
 }

-// NewReloadableTLSConfig returns a reloadable config currently set to base. The
-// Auther used to verify certificates for incoming connections on a Server will
-// just be copied from the VerifyPeerCertificate passed.
Clients will need to -// pass a specific Auther instance when they call TLSConfig that is configured -// to perform the necessary validation of the server's identity. +// NewReloadableTLSConfig returns a reloadable config currently set to base. func NewReloadableTLSConfig(base *tls.Config) *ReloadableTLSConfig { - return &ReloadableTLSConfig{cfg: base} + c := &ReloadableTLSConfig{} + c.SetTLSConfig(base) + return c } -// ServerTLSConfig returns a *tls.Config that will dynamically load certs for -// each inbound connection via the GetConfigForClient callback. -func (c *ReloadableTLSConfig) ServerTLSConfig() *tls.Config { - // Setup the basic one with current params even though we will be using - // different config for each new conn. - c.mu.Lock() - base := c.cfg - c.mu.Unlock() - - // Dynamically fetch the current config for each new inbound connection - base.GetConfigForClient = func(info *tls.ClientHelloInfo) (*tls.Config, error) { - return c.TLSConfig(nil), nil - } - - return base -} - -// TLSConfig returns the current value for the config. It is safe to call from -// any goroutine. The passed Auther is inserted into the config's -// VerifyPeerCertificate. Passing a nil Auther will leave the default one in the -// base config -func (c *ReloadableTLSConfig) TLSConfig(auther Auther) *tls.Config { +// TLSConfig returns a *tls.Config that will dynamically load certs. It's +// suitable for use in either a client or server. +func (c *ReloadableTLSConfig) TLSConfig() *tls.Config { c.mu.Lock() cfgCopy := c.cfg c.mu.Unlock() - if auther != nil { - cfgCopy.VerifyPeerCertificate = auther.Auth - } return cfgCopy } // SetTLSConfig sets the config used for future connections. It is safe to call // from any goroutine. func (c *ReloadableTLSConfig) SetTLSConfig(cfg *tls.Config) error { + copy := cfg.Clone() + copy.GetClientCertificate = func(*tls.CertificateRequestInfo) (*tls.Certificate, error) { + current := c.TLSConfig() + if len(current.Certificates) < 1 { + return nil, errors.New("tls: no certificates configured") + } + return ¤t.Certificates[0], nil + } + copy.GetConfigForClient = func(*tls.ClientHelloInfo) (*tls.Config, error) { + return c.TLSConfig(), nil + } + c.mu.Lock() defer c.mu.Unlock() - c.cfg = cfg + c.cfg = copy return nil } @@ -114,7 +109,8 @@ func devTLSConfigFromFiles(caFile, certFile, return nil, err } - cfg := defaultTLSConfig() + // Insecure no verification + cfg := defaultTLSConfig(nil) cfg.Certificates = []tls.Certificate{cert} cfg.RootCAs = roots @@ -122,3 +118,43 @@ func devTLSConfigFromFiles(caFile, certFile, return cfg, nil } + +// verifyServerCertMatchesURI is used on tls connections dialled to a connect +// server to ensure that the certificate it presented has the correct identity. +func verifyServerCertMatchesURI(certs []*x509.Certificate, + expected connect.CertURI) error { + expectedStr := expected.URI().String() + + if len(certs) < 1 { + return errors.New("peer certificate mismatch") + } + + // Only check the first cert assuming this is the only leaf. It's not clear if + // services might ever legitimately present multiple leaf certificates or if + // the slice is just to allow presenting the whole chain of intermediates. 
+ cert := certs[0] + + // Our certs will only ever have a single URI for now so only check that + if len(cert.URIs) < 1 { + return errors.New("peer certificate mismatch") + } + // We may want to do better than string matching later in some special + // cases and/or encapsulate the "match" logic inside the CertURI + // implementation but for now this is all we need. + if cert.URIs[0].String() == expectedStr { + return nil + } + return errors.New("peer certificate mismatch") +} + +// serverVerifyCerts is the verifyFunc for use on Connect servers. +func serverVerifyCerts(rawCerts [][]byte, chains [][]*x509.Certificate) error { + // TODO(banks): implement me + return nil +} + +// clientVerifyCerts is the verifyFunc for use on Connect clients. +func clientVerifyCerts(rawCerts [][]byte, chains [][]*x509.Certificate) error { + // TODO(banks): implement me + return nil +} diff --git a/connect/tls_test.go b/connect/tls_test.go index 0c99df3ad..3605f22db 100644 --- a/connect/tls_test.go +++ b/connect/tls_test.go @@ -1,45 +1,103 @@ package connect import ( - "crypto/tls" + "crypto/x509" "testing" + "github.com/hashicorp/consul/agent/connect" "github.com/stretchr/testify/require" ) func TestReloadableTLSConfig(t *testing.T) { - base := TestTLSConfig(t, "ca1", "web") + require := require.New(t) + verify, _ := testVerifier(t, nil) + base := defaultTLSConfig(verify) c := NewReloadableTLSConfig(base) - a := &TestAuther{ - Return: nil, - } + // The dynamic config should be the one we loaded (with some different hooks) + got := c.TLSConfig() + expect := *base + // Equal and even cmp.Diff fail on tls.Config due to unexported fields in + // each. Compare a few things to prove it's returning the bits we + // specifically set. + require.Equal(expect.Certificates, got.Certificates) + require.Equal(expect.RootCAs, got.RootCAs) + require.Equal(expect.ClientCAs, got.ClientCAs) + require.Equal(expect.InsecureSkipVerify, got.InsecureSkipVerify) + require.Equal(expect.MinVersion, got.MinVersion) + require.Equal(expect.CipherSuites, got.CipherSuites) + require.NotNil(got.GetClientCertificate) + require.NotNil(got.GetConfigForClient) + require.Contains(got.NextProtos, "h2") - // The dynamic config should be the one we loaded, but with the passed auther - expect := base - expect.VerifyPeerCertificate = a.Auth - require.Equal(t, base, c.TLSConfig(a)) + ca := connect.TestCA(t, nil) - // The server config should return same too for new connections - serverCfg := c.ServerTLSConfig() - require.NotNil(t, serverCfg.GetConfigForClient) - got, err := serverCfg.GetConfigForClient(&tls.ClientHelloInfo{}) - require.Nil(t, err) - require.Equal(t, base, got) + // Now change the config as if we just loaded certs from Consul + new := TestTLSConfig(t, "web", ca) + err := c.SetTLSConfig(new) + require.Nil(err) - // Now change the config as if we just rotated to a new CA - new := TestTLSConfig(t, "ca2", "web") - err = c.SetTLSConfig(new) - require.Nil(t, err) + // Change the passed config to ensure SetTLSConfig made a copy otherwise this + // is racey. 
+ expect = *new + new.Certificates = nil - // The dynamic config should be the one we loaded (with same auther due to nil) - require.Equal(t, new, c.TLSConfig(nil)) - - // The server config should return same too for new connections - serverCfg = c.ServerTLSConfig() - require.NotNil(t, serverCfg.GetConfigForClient) - got, err = serverCfg.GetConfigForClient(&tls.ClientHelloInfo{}) - require.Nil(t, err) - require.Equal(t, new, got) + // The dynamic config should be the one we loaded (with some different hooks) + got = c.TLSConfig() + require.Equal(expect.Certificates, got.Certificates) + require.Equal(expect.RootCAs, got.RootCAs) + require.Equal(expect.ClientCAs, got.ClientCAs) + require.Equal(expect.InsecureSkipVerify, got.InsecureSkipVerify) + require.Equal(expect.MinVersion, got.MinVersion) + require.Equal(expect.CipherSuites, got.CipherSuites) + require.NotNil(got.GetClientCertificate) + require.NotNil(got.GetConfigForClient) + require.Contains(got.NextProtos, "h2") +} + +func Test_verifyServerCertMatchesURI(t *testing.T) { + ca1 := connect.TestCA(t, nil) + + tests := []struct { + name string + certs []*x509.Certificate + expected connect.CertURI + wantErr bool + }{ + { + name: "simple match", + certs: TestPeerCertificates(t, "web", ca1), + expected: connect.TestSpiffeIDService(t, "web"), + wantErr: false, + }, + { + name: "mismatch", + certs: TestPeerCertificates(t, "web", ca1), + expected: connect.TestSpiffeIDService(t, "db"), + wantErr: true, + }, + { + name: "no certs", + certs: []*x509.Certificate{}, + expected: connect.TestSpiffeIDService(t, "db"), + wantErr: true, + }, + { + name: "nil certs", + certs: nil, + expected: connect.TestSpiffeIDService(t, "db"), + wantErr: true, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + err := verifyServerCertMatchesURI(tt.certs, tt.expected) + if tt.wantErr { + require.NotNil(t, err) + } else { + require.Nil(t, err) + } + }) + } } From 67669abf82ef8e21dfa3dc9db0bf0d405b4a61a6 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Tue, 3 Apr 2018 12:55:50 +0100 Subject: [PATCH 111/539] Remove old connect client and proxy implementation --- connect/client.go | 256 ------------------------- connect/client_test.go | 148 --------------- connect/testing.go | 61 ------ proxy/config.go | 111 ----------- proxy/config_test.go | 46 ----- proxy/conn.go | 48 ----- proxy/conn_test.go | 119 ------------ proxy/manager.go | 140 -------------- proxy/manager_test.go | 76 -------- proxy/proxier.go | 32 ---- proxy/proxy.go | 112 ----------- proxy/public_listener.go | 119 ------------ proxy/public_listener_test.go | 38 ---- proxy/runner.go | 118 ------------ proxy/testdata/config-kitchensink.hcl | 36 ---- proxy/testing.go | 170 ----------------- proxy/upstream.go | 261 -------------------------- proxy/upstream_test.go | 75 -------- 18 files changed, 1966 deletions(-) delete mode 100644 connect/client.go delete mode 100644 connect/client_test.go delete mode 100644 proxy/config.go delete mode 100644 proxy/config_test.go delete mode 100644 proxy/conn.go delete mode 100644 proxy/conn_test.go delete mode 100644 proxy/manager.go delete mode 100644 proxy/manager_test.go delete mode 100644 proxy/proxier.go delete mode 100644 proxy/proxy.go delete mode 100644 proxy/public_listener.go delete mode 100644 proxy/public_listener_test.go delete mode 100644 proxy/runner.go delete mode 100644 proxy/testdata/config-kitchensink.hcl delete mode 100644 proxy/testing.go delete mode 100644 proxy/upstream.go delete mode 100644 proxy/upstream_test.go diff --git 
a/connect/client.go b/connect/client.go deleted file mode 100644 index 18e43f4cb..000000000 --- a/connect/client.go +++ /dev/null @@ -1,256 +0,0 @@ -package connect - -// import ( -// "context" -// "crypto/tls" -// "fmt" -// "math/rand" -// "net" - -// "github.com/hashicorp/consul/api" -// ) - -// // CertStatus indicates whether the Client currently has valid certificates for -// // incoming and outgoing connections. -// type CertStatus int - -// const ( -// // CertStatusUnknown is the zero value for CertStatus which may be returned -// // when a watch channel is closed on shutdown. It has no other meaning. -// CertStatusUnknown CertStatus = iota - -// // CertStatusOK indicates the client has valid certificates and trust roots to -// // Authenticate incoming and outgoing connections. -// CertStatusOK - -// // CertStatusPending indicates the client is waiting to be issued initial -// // certificates, or that it's certificates have expired and it's waiting to be -// // issued new ones. In this state all incoming and outgoing connections will -// // fail. -// CertStatusPending -// ) - -// func (s CertStatus) String() string { -// switch s { -// case CertStatusOK: -// return "OK" -// case CertStatusPending: -// return "pending" -// case CertStatusUnknown: -// fallthrough -// default: -// return "unknown" -// } -// } - -// // Client is the interface a basic client implementation must support. -// type Client interface { -// // TODO(banks): build this and test it -// // CertStatus returns the current status of the client's certificates. It can -// // be used to determine if the Client is able to service requests at the -// // current time. -// //CertStatus() CertStatus - -// // TODO(banks): build this and test it -// // WatchCertStatus returns a channel that is notified on all status changes. -// // Note that a message on the channel isn't guaranteed to be different so it's -// // value should be inspected. During Client shutdown the channel will be -// // closed returning a zero type which is equivalent to CertStatusUnknown. -// //WatchCertStatus() <-chan CertStatus - -// // ServerTLSConfig returns the *tls.Config to be used when creating a TCP -// // listener that should accept Connect connections. It is likely that at -// // startup the tlsCfg returned will not be immediately usable since -// // certificates are typically fetched from the agent asynchronously. In this -// // case it's still safe to listen with the provided config, but auth failures -// // will occur until initial certificate discovery is complete. In general at -// // any time it is possible for certificates to expire before new replacements -// // have been issued due to local network errors so the server may not actually -// // have a working certificate configuration at any time, however as soon as -// // valid certs can be issued it will automatically start working again so -// // should take no action. -// ServerTLSConfig() (*tls.Config, error) - -// // DialService opens a new connection to the named service registered in -// // Consul. It will perform service discovery to find healthy instances. If -// // there is an error during connection it is returned and the caller may call -// // again. The client implementation makes a best effort to make consecutive -// // Dials against different instances either by randomising the list and/or -// // maintaining a local memory of which instances recently failed. If the -// // context passed times out before connection is established and verified an -// // error is returned. 
-// DialService(ctx context.Context, namespace, name string) (net.Conn, error) - -// // DialPreparedQuery opens a new connection by executing the named Prepared -// // Query against the local Consul agent, and picking one of the returned -// // instances to connect to. It will perform service discovery with the same -// // semantics as DialService. -// DialPreparedQuery(ctx context.Context, namespace, name string) (net.Conn, error) -// } - -// /* - -// Maybe also convenience wrappers for: -// - listening TLS conn with right config -// - http.ListenAndServeTLS equivalent - -// */ - -// // AgentClient is the primary implementation of a connect.Client which -// // communicates with the local Consul agent. -// type AgentClient struct { -// agent *api.Client -// tlsCfg *ReloadableTLSConfig -// } - -// // NewClient returns an AgentClient to allow consuming and providing -// // Connect-enabled network services. -// func NewClient(agent *api.Client) Client { -// // TODO(banks): hook up fetching certs from Agent and updating tlsCfg on cert -// // delivery/change. Perhaps need to make -// return &AgentClient{ -// agent: agent, -// tlsCfg: NewReloadableTLSConfig(defaultTLSConfig()), -// } -// } - -// // NewInsecureDevClientWithLocalCerts returns an AgentClient that will still do -// // service discovery via the local agent but will use externally provided -// // certificates and skip authorization. This is intended just for development -// // and must not be used ever in production. -// func NewInsecureDevClientWithLocalCerts(agent *api.Client, caFile, certFile, -// keyFile string) (Client, error) { - -// cfg, err := devTLSConfigFromFiles(caFile, certFile, keyFile) -// if err != nil { -// return nil, err -// } - -// return &AgentClient{ -// agent: agent, -// tlsCfg: NewReloadableTLSConfig(cfg), -// }, nil -// } - -// // ServerTLSConfig implements Client -// func (c *AgentClient) ServerTLSConfig() (*tls.Config, error) { -// return c.tlsCfg.ServerTLSConfig(), nil -// } - -// // DialService implements Client -// func (c *AgentClient) DialService(ctx context.Context, namespace, -// name string) (net.Conn, error) { -// return c.dial(ctx, "service", namespace, name) -// } - -// // DialPreparedQuery implements Client -// func (c *AgentClient) DialPreparedQuery(ctx context.Context, namespace, -// name string) (net.Conn, error) { -// return c.dial(ctx, "prepared_query", namespace, name) -// } - -// func (c *AgentClient) dial(ctx context.Context, discoveryType, namespace, -// name string) (net.Conn, error) { - -// svcs, err := c.discoverInstances(ctx, discoveryType, namespace, name) -// if err != nil { -// return nil, err -// } - -// svc, err := c.pickInstance(svcs) -// if err != nil { -// return nil, err -// } -// if svc == nil { -// return nil, fmt.Errorf("no healthy services discovered") -// } - -// // OK we have a service we can dial! We need a ClientAuther that will validate -// // the connection is legit. - -// // TODO(banks): implement ClientAuther properly to actually verify connected -// // cert matches the expected service/cluster etc. based on svc. -// auther := &ClientAuther{} -// tlsConfig := c.tlsCfg.TLSConfig(auther) - -// // Resolve address TODO(banks): I expected this to happen magically in the -// // agent at registration time if I register with no explicit address but -// // apparently doesn't. This is a quick hack to make it work for now, need to -// // see if there is a better shared code path for doing this. 
-// addr := svc.Service.Address -// if addr == "" { -// addr = svc.Node.Address -// } -// var dialer net.Dialer -// tcpConn, err := dialer.DialContext(ctx, "tcp", -// fmt.Sprintf("%s:%d", addr, svc.Service.Port)) -// if err != nil { -// return nil, err -// } - -// tlsConn := tls.Client(tcpConn, tlsConfig) -// err = tlsConn.Handshake() -// if err != nil { -// tlsConn.Close() -// return nil, err -// } - -// return tlsConn, nil -// } - -// // pickInstance returns an instance from the given list to try to connect to. It -// // may be made pluggable later, for now it just picks a random one regardless of -// // whether the list is already shuffled. -// func (c *AgentClient) pickInstance(svcs []*api.ServiceEntry) (*api.ServiceEntry, error) { -// if len(svcs) < 1 { -// return nil, nil -// } -// idx := rand.Intn(len(svcs)) -// return svcs[idx], nil -// } - -// // discoverInstances returns all instances for the given discoveryType, -// // namespace and name. The returned service entries may or may not be shuffled -// func (c *AgentClient) discoverInstances(ctx context.Context, discoverType, -// namespace, name string) ([]*api.ServiceEntry, error) { - -// q := &api.QueryOptions{ -// // TODO(banks): make this configurable? -// AllowStale: true, -// } -// q = q.WithContext(ctx) - -// switch discoverType { -// case "service": -// svcs, _, err := c.agent.Health().Connect(name, "", true, q) -// if err != nil { -// return nil, err -// } -// return svcs, err - -// case "prepared_query": -// // TODO(banks): it's not super clear to me how this should work eventually. -// // How do we distinguise between a PreparedQuery for the actual services and -// // one that should return the connect proxies where that differs? If we -// // can't then we end up with a janky UX where user specifies a reasonable -// // prepared query but we try to connect to non-connect services and fail -// // with a confusing TLS error. Maybe just a way to filter PreparedQuery -// // results by connect-enabled would be sufficient (or even metadata to do -// // that ourselves in the response although less efficient). -// resp, _, err := c.agent.PreparedQuery().Execute(name, q) -// if err != nil { -// return nil, err -// } - -// // Awkward, we have a slice of api.ServiceEntry here but want a slice of -// // *api.ServiceEntry for compat with Connect/Service APIs. Have to convert -// // them to keep things type-happy. 
-// svcs := make([]*api.ServiceEntry, len(resp.Nodes)) -// for idx, se := range resp.Nodes { -// svcs[idx] = &se -// } -// return svcs, err -// default: -// return nil, fmt.Errorf("unsupported discovery type: %s", discoverType) -// } -// } diff --git a/connect/client_test.go b/connect/client_test.go deleted file mode 100644 index 045bc8fd6..000000000 --- a/connect/client_test.go +++ /dev/null @@ -1,148 +0,0 @@ -package connect - -// import ( -// "context" -// "crypto/x509" -// "crypto/x509/pkix" -// "encoding/asn1" -// "io/ioutil" -// "net" -// "net/http" -// "net/http/httptest" -// "net/url" -// "strconv" -// "testing" - -// "github.com/hashicorp/consul/api" -// "github.com/hashicorp/consul/testutil" -// "github.com/stretchr/testify/require" -// ) - -// func TestNewInsecureDevClientWithLocalCerts(t *testing.T) { - -// agent, err := api.NewClient(api.DefaultConfig()) -// require.Nil(t, err) - -// got, err := NewInsecureDevClientWithLocalCerts(agent, -// "testdata/ca1-ca-consul-internal.cert.pem", -// "testdata/ca1-svc-web.cert.pem", -// "testdata/ca1-svc-web.key.pem", -// ) -// require.Nil(t, err) - -// // Sanity check correct certs were loaded -// serverCfg, err := got.ServerTLSConfig() -// require.Nil(t, err) -// caSubjects := serverCfg.RootCAs.Subjects() -// require.Len(t, caSubjects, 1) -// caSubject, err := testNameFromRawDN(caSubjects[0]) -// require.Nil(t, err) -// require.Equal(t, "Consul Internal", caSubject.CommonName) - -// require.Len(t, serverCfg.Certificates, 1) -// cert, err := x509.ParseCertificate(serverCfg.Certificates[0].Certificate[0]) -// require.Nil(t, err) -// require.Equal(t, "web", cert.Subject.CommonName) -// } - -// func testNameFromRawDN(raw []byte) (*pkix.Name, error) { -// var seq pkix.RDNSequence -// if _, err := asn1.Unmarshal(raw, &seq); err != nil { -// return nil, err -// } - -// var name pkix.Name -// name.FillFromRDNSequence(&seq) -// return &name, nil -// } - -// func testAgent(t *testing.T) (*testutil.TestServer, *api.Client) { -// t.Helper() - -// // Make client config -// conf := api.DefaultConfig() - -// // Create server -// server, err := testutil.NewTestServerConfigT(t, nil) -// require.Nil(t, err) - -// conf.Address = server.HTTPAddr - -// // Create client -// agent, err := api.NewClient(conf) -// require.Nil(t, err) - -// return server, agent -// } - -// func testService(t *testing.T, ca, name string, client *api.Client) *httptest.Server { -// t.Helper() - -// // Run a test service to discover -// server := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { -// w.Write([]byte("svc: " + name)) -// })) -// server.TLS = TestTLSConfig(t, ca, name) -// server.StartTLS() - -// u, err := url.Parse(server.URL) -// require.Nil(t, err) - -// port, err := strconv.Atoi(u.Port()) -// require.Nil(t, err) - -// // If client is passed, register the test service instance -// if client != nil { -// svc := &api.AgentServiceRegistration{ -// // TODO(banks): we don't really have a good way to represent -// // connect-native apps yet so we have to pretend out little server is a -// // proxy for now. 
-// Kind: api.ServiceKindConnectProxy, -// ProxyDestination: name, -// Name: name + "-proxy", -// Address: u.Hostname(), -// Port: port, -// } -// err := client.Agent().ServiceRegister(svc) -// require.Nil(t, err) -// } - -// return server -// } - -// func TestDialService(t *testing.T) { -// consulServer, agent := testAgent(t) -// defer consulServer.Stop() - -// svc := testService(t, "ca1", "web", agent) -// defer svc.Close() - -// c, err := NewInsecureDevClientWithLocalCerts(agent, -// "testdata/ca1-ca-consul-internal.cert.pem", -// "testdata/ca1-svc-web.cert.pem", -// "testdata/ca1-svc-web.key.pem", -// ) -// require.Nil(t, err) - -// conn, err := c.DialService(context.Background(), "default", "web") -// require.Nilf(t, err, "err: %s", err) - -// // Inject our conn into http.Transport -// httpClient := &http.Client{ -// Transport: &http.Transport{ -// DialTLS: func(network, addr string) (net.Conn, error) { -// return conn, nil -// }, -// }, -// } - -// // Don't be fooled the hostname here is ignored since we did the dialling -// // ourselves -// resp, err := httpClient.Get("https://web.connect.consul/") -// require.Nil(t, err) -// defer resp.Body.Close() -// body, err := ioutil.ReadAll(resp.Body) -// require.Nil(t, err) - -// require.Equal(t, "svc: web", string(body)) -// } diff --git a/connect/testing.go b/connect/testing.go index 7e1b9cdac..f6fa438cf 100644 --- a/connect/testing.go +++ b/connect/testing.go @@ -151,64 +151,3 @@ func (s *TestService) Close() { close(s.stopChan) } } - -/* -// TestCAPool returns an *x509.CertPool containing the named CA certs from the -// testdata dir. -func TestCAPool(t testing.T, caNames ...string) *x509.CertPool { - t.Helper() - pool := x509.NewCertPool() - for _, name := range caNames { - certs, err := filepath.Glob(testDataDir() + "/" + name + "-ca-*.cert.pem") - require.Nil(t, err) - for _, cert := range certs { - caPem, err := ioutil.ReadFile(cert) - require.Nil(t, err) - pool.AppendCertsFromPEM(caPem) - } - } - return pool -} - -// TestSvcKeyPair returns an tls.Certificate containing both cert and private -// key for a given service under a given CA from the testdata dir. -func TestSvcKeyPair(t testing.T, ca, name string) tls.Certificate { - t.Helper() - prefix := fmt.Sprintf(testDataDir()+"/%s-svc-%s", ca, name) - cert, err := tls.LoadX509KeyPair(prefix+".cert.pem", prefix+".key.pem") - require.Nil(t, err) - return cert -} - -// TestTLSConfig returns a *tls.Config suitable for use during tests. -func TestTLSConfig(t testing.T, ca, svc string) *tls.Config { - t.Helper() - return &tls.Config{ - Certificates: []tls.Certificate{TestSvcKeyPair(t, ca, svc)}, - MinVersion: tls.VersionTLS12, - RootCAs: TestCAPool(t, ca), - ClientCAs: TestCAPool(t, ca), - ClientAuth: tls.RequireAndVerifyClientCert, - // In real life we'll need to do this too since otherwise Go will attempt to - // verify DNS names match DNS SAN/CN which we don't want, but we'll hook - // VerifyPeerCertificates and do our own x509 path validation as well as - // AuthZ upcall. For now we are just testing the basic proxy mechanism so - // this is fine. - InsecureSkipVerify: true, - } -} - -// TestAuther is a simple Auther implementation that does nothing but what you -// tell it to! -type TestAuther struct { - // Return is the value returned from an Auth() call. Set it to nil to have all - // certificates unconditionally accepted or a value to have them fail. 
- Return error -} - -// Auth implements Auther -func (a *TestAuther) Auth(rawCerts [][]byte, - verifiedChains [][]*x509.Certificate) error { - return a.Return -} -*/ diff --git a/proxy/config.go b/proxy/config.go deleted file mode 100644 index a5958135a..000000000 --- a/proxy/config.go +++ /dev/null @@ -1,111 +0,0 @@ -package proxy - -import ( - "io/ioutil" - - "github.com/hashicorp/consul/api" - "github.com/hashicorp/hcl" -) - -// Config is the publicly configurable state for an entire proxy instance. It's -// mostly used as the format for the local-file config mode which is mostly for -// dev/testing. In normal use, different parts of this config are pulled from -// different locations (e.g. command line, agent config endpoint, agent -// certificate endpoints). -type Config struct { - // ProxyID is the identifier for this proxy as registered in Consul. It's only - // guaranteed to be unique per agent. - ProxyID string `json:"proxy_id" hcl:"proxy_id"` - - // Token is the authentication token provided for queries to the local agent. - Token string `json:"token" hcl:"token"` - - // ProxiedServiceName is the name of the service this proxy is representing. - ProxiedServiceName string `json:"proxied_service_name" hcl:"proxied_service_name"` - - // ProxiedServiceNamespace is the namespace of the service this proxy is - // representing. - ProxiedServiceNamespace string `json:"proxied_service_namespace" hcl:"proxied_service_namespace"` - - // PublicListener configures the mTLS listener. - PublicListener PublicListenerConfig `json:"public_listener" hcl:"public_listener"` - - // Upstreams configures outgoing proxies for remote connect services. - Upstreams []UpstreamConfig `json:"upstreams" hcl:"upstreams"` - - // DevCAFile allows passing the file path to PEM encoded root certificate - // bundle to be used in development instead of the ones supplied by Connect. - DevCAFile string `json:"dev_ca_file" hcl:"dev_ca_file"` - - // DevServiceCertFile allows passing the file path to PEM encoded service - // certificate (client and server) to be used in development instead of the - // ones supplied by Connect. - DevServiceCertFile string `json:"dev_service_cert_file" hcl:"dev_service_cert_file"` - - // DevServiceKeyFile allows passing the file path to PEM encoded service - // private key to be used in development instead of the ones supplied by - // Connect. - DevServiceKeyFile string `json:"dev_service_key_file" hcl:"dev_service_key_file"` -} - -// ConfigWatcher is a simple interface to allow dynamic configurations from -// plugggable sources. -type ConfigWatcher interface { - // Watch returns a channel that will deliver new Configs if something external - // provokes it. - Watch() <-chan *Config -} - -// StaticConfigWatcher is a simple ConfigWatcher that delivers a static Config -// once and then never changes it. -type StaticConfigWatcher struct { - ch chan *Config -} - -// NewStaticConfigWatcher returns a ConfigWatcher for a config that never -// changes. It assumes only one "watcher" will ever call Watch. The config is -// delivered on the first call but will never be delivered again to allow -// callers to call repeatedly (e.g. select in a loop). -func NewStaticConfigWatcher(cfg *Config) *StaticConfigWatcher { - sc := &StaticConfigWatcher{ - // Buffer it so we can queue up the config for first delivery. - ch: make(chan *Config, 1), - } - sc.ch <- cfg - return sc -} - -// Watch implements ConfigWatcher on a static configuration for compatibility. 
-// It returns itself on the channel once and then leaves it open. -func (sc *StaticConfigWatcher) Watch() <-chan *Config { - return sc.ch -} - -// ParseConfigFile parses proxy configuration form a file for local dev. -func ParseConfigFile(filename string) (*Config, error) { - bs, err := ioutil.ReadFile(filename) - if err != nil { - return nil, err - } - - var cfg Config - - err = hcl.Unmarshal(bs, &cfg) - if err != nil { - return nil, err - } - - return &cfg, nil -} - -// AgentConfigWatcher watches the local Consul agent for proxy config changes. -type AgentConfigWatcher struct { - client *api.Client -} - -// Watch implements ConfigWatcher. -func (w *AgentConfigWatcher) Watch() <-chan *Config { - watch := make(chan *Config) - // TODO implement me - return watch -} diff --git a/proxy/config_test.go b/proxy/config_test.go deleted file mode 100644 index 89287d573..000000000 --- a/proxy/config_test.go +++ /dev/null @@ -1,46 +0,0 @@ -package proxy - -import ( - "testing" - - "github.com/stretchr/testify/require" -) - -func TestParseConfigFile(t *testing.T) { - cfg, err := ParseConfigFile("testdata/config-kitchensink.hcl") - require.Nil(t, err) - - expect := &Config{ - ProxyID: "foo", - Token: "11111111-2222-3333-4444-555555555555", - ProxiedServiceName: "web", - ProxiedServiceNamespace: "default", - PublicListener: PublicListenerConfig{ - BindAddress: ":9999", - LocalServiceAddress: "127.0.0.1:5000", - LocalConnectTimeoutMs: 1000, - HandshakeTimeoutMs: 5000, - }, - Upstreams: []UpstreamConfig{ - { - LocalBindAddress: "127.0.0.1:6000", - DestinationName: "db", - DestinationNamespace: "default", - DestinationType: "service", - ConnectTimeoutMs: 10000, - }, - { - LocalBindAddress: "127.0.0.1:6001", - DestinationName: "geo-cache", - DestinationNamespace: "default", - DestinationType: "prepared_query", - ConnectTimeoutMs: 10000, - }, - }, - DevCAFile: "connect/testdata/ca1-ca-consul-internal.cert.pem", - DevServiceCertFile: "connect/testdata/ca1-svc-web.cert.pem", - DevServiceKeyFile: "connect/testdata/ca1-svc-web.key.pem", - } - - require.Equal(t, expect, cfg) -} diff --git a/proxy/conn.go b/proxy/conn.go deleted file mode 100644 index dfad81db7..000000000 --- a/proxy/conn.go +++ /dev/null @@ -1,48 +0,0 @@ -package proxy - -import ( - "io" - "net" - "sync/atomic" -) - -// Conn represents a single proxied TCP connection. -type Conn struct { - src, dst net.Conn - stopping int32 -} - -// NewConn returns a conn joining the two given net.Conn -func NewConn(src, dst net.Conn) *Conn { - return &Conn{ - src: src, - dst: dst, - stopping: 0, - } -} - -// Close closes both connections. -func (c *Conn) Close() { - atomic.StoreInt32(&c.stopping, 1) - c.src.Close() - c.dst.Close() -} - -// CopyBytes will continuously copy bytes in both directions between src and dst -// until either connection is closed. -func (c *Conn) CopyBytes() error { - defer c.Close() - - go func() { - // Need this since Copy is only guaranteed to stop when it's source reader - // (second arg) hits EOF or error but either conn might close first possibly - // causing this goroutine to exit but not the outer one. 
See TestSc - //defer c.Close() - io.Copy(c.dst, c.src) - }() - _, err := io.Copy(c.src, c.dst) - if atomic.LoadInt32(&c.stopping) == 1 { - return nil - } - return err -} diff --git a/proxy/conn_test.go b/proxy/conn_test.go deleted file mode 100644 index ac907238d..000000000 --- a/proxy/conn_test.go +++ /dev/null @@ -1,119 +0,0 @@ -package proxy - -import ( - "bufio" - "net" - "testing" - - "github.com/stretchr/testify/require" -) - -// testConnSetup listens on a random TCP port and passes the accepted net.Conn -// back to test code on returned channel. It then creates a source and -// destination Conn. And a cleanup func -func testConnSetup(t *testing.T) (net.Conn, net.Conn, func()) { - t.Helper() - - l, err := net.Listen("tcp", "localhost:0") - require.Nil(t, err) - - ch := make(chan net.Conn, 1) - go func(ch chan net.Conn) { - src, err := l.Accept() - require.Nil(t, err) - ch <- src - }(ch) - - dst, err := net.Dial("tcp", l.Addr().String()) - require.Nil(t, err) - - src := <-ch - - stopper := func() { - l.Close() - src.Close() - dst.Close() - } - - return src, dst, stopper -} - -func TestConn(t *testing.T) { - src, dst, stop := testConnSetup(t) - defer stop() - - c := NewConn(src, dst) - - retCh := make(chan error, 1) - go func() { - retCh <- c.CopyBytes() - }() - - srcR := bufio.NewReader(src) - dstR := bufio.NewReader(dst) - - _, err := src.Write([]byte("ping 1\n")) - require.Nil(t, err) - _, err = dst.Write([]byte("ping 2\n")) - require.Nil(t, err) - - got, err := dstR.ReadString('\n') - require.Equal(t, "ping 1\n", got) - - got, err = srcR.ReadString('\n') - require.Equal(t, "ping 2\n", got) - - _, err = src.Write([]byte("pong 1\n")) - require.Nil(t, err) - _, err = dst.Write([]byte("pong 2\n")) - require.Nil(t, err) - - got, err = dstR.ReadString('\n') - require.Equal(t, "pong 1\n", got) - - got, err = srcR.ReadString('\n') - require.Equal(t, "pong 2\n", got) - - c.Close() - - ret := <-retCh - require.Nil(t, ret, "Close() should not cause error return") -} - -func TestConnSrcClosing(t *testing.T) { - src, dst, stop := testConnSetup(t) - defer stop() - - c := NewConn(src, dst) - retCh := make(chan error, 1) - go func() { - retCh <- c.CopyBytes() - }() - - // If we close the src conn, we expect CopyBytes to return and src to be - // closed too. No good way to assert that the conn is closed really other than - // assume the retCh receive will hand unless CopyBytes exits and that - // CopyBytes defers Closing both. i.e. if this test doesn't time out it's - // good! - src.Close() - <-retCh -} - -func TestConnDstClosing(t *testing.T) { - src, dst, stop := testConnSetup(t) - defer stop() - - c := NewConn(src, dst) - retCh := make(chan error, 1) - go func() { - retCh <- c.CopyBytes() - }() - - // If we close the dst conn, we expect CopyBytes to return and src to be - // closed too. No good way to assert that the conn is closed really other than - // assume the retCh receive will hand unless CopyBytes exits and that - // CopyBytes defers Closing both. i.e. if this test doesn't time out it's - // good! - dst.Close() - <-retCh -} diff --git a/proxy/manager.go b/proxy/manager.go deleted file mode 100644 index c22a1b7ff..000000000 --- a/proxy/manager.go +++ /dev/null @@ -1,140 +0,0 @@ -package proxy - -import ( - "errors" - "log" - "os" -) - -var ( - // ErrExists is the error returned when adding a proxy that exists already. - ErrExists = errors.New("proxy with that name already exists") - // ErrNotExist is the error returned when removing a proxy that doesn't exist. 
- ErrNotExist = errors.New("proxy with that name doesn't exist") -) - -// Manager implements the logic for configuring and running a set of proxiers. -// Typically it's used to run one PublicListener and zero or more Upstreams. -type Manager struct { - ch chan managerCmd - - // stopped is used to signal the caller of StopAll when the run loop exits - // after stopping all runners. It's only closed. - stopped chan struct{} - - // runners holds the currently running instances. It should only by accessed - // from within the `run` goroutine. - runners map[string]*Runner - - logger *log.Logger -} - -type managerCmd struct { - name string - p Proxier - errCh chan error -} - -// NewManager creates a manager of proxier instances. -func NewManager() *Manager { - return NewManagerWithLogger(log.New(os.Stdout, "", log.LstdFlags)) -} - -// NewManagerWithLogger creates a manager of proxier instances with the -// specified logger. -func NewManagerWithLogger(logger *log.Logger) *Manager { - m := &Manager{ - ch: make(chan managerCmd), - stopped: make(chan struct{}), - runners: make(map[string]*Runner), - logger: logger, - } - go m.run() - return m -} - -// RunProxier starts a new Proxier instance in the manager. It is safe to call -// from separate goroutines. If there is already a running proxy with the same -// name it returns ErrExists. -func (m *Manager) RunProxier(name string, p Proxier) error { - cmd := managerCmd{ - name: name, - p: p, - errCh: make(chan error), - } - m.ch <- cmd - return <-cmd.errCh -} - -// StopProxier stops a Proxier instance by name. It is safe to call from -// separate goroutines. If the instance with that name doesn't exist it returns -// ErrNotExist. -func (m *Manager) StopProxier(name string) error { - cmd := managerCmd{ - name: name, - p: nil, - errCh: make(chan error), - } - m.ch <- cmd - return <-cmd.errCh -} - -// StopAll shuts down the manager instance and stops all running proxies. It is -// safe to call from any goroutine but must only be called once. -func (m *Manager) StopAll() error { - close(m.ch) - <-m.stopped - return nil -} - -// run is the main manager processing loop. It keeps all actions in a single -// goroutine triggered by channel commands to keep it simple to reason about -// lifecycle events for each proxy. -func (m *Manager) run() { - defer close(m.stopped) - - // range over channel blocks and loops on each message received until channel - // is closed. - for cmd := range m.ch { - if cmd.p == nil { - m.remove(&cmd) - } else { - m.add(&cmd) - } - } - - // Shutting down, Stop all the runners - for _, r := range m.runners { - r.Stop() - } -} - -// add the named proxier instance and stop it. Should only be called from the -// run loop. -func (m *Manager) add(cmd *managerCmd) { - // Check existing - if _, ok := m.runners[cmd.name]; ok { - cmd.errCh <- ErrExists - return - } - - // Start new runner - r := NewRunnerWithLogger(cmd.name, cmd.p, m.logger) - m.runners[cmd.name] = r - go r.Listen() - cmd.errCh <- nil -} - -// remove the named proxier instance and stop it. Should only be called from the -// run loop. 
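The Manager above owns one Runner per named Proxier and serializes add, remove, and stop through its command channel. A minimal sketch of the intended call pattern, assuming it sits in this package and using the package's own TestProxier as a stand-in Proxier (the name and address are placeholders):

    func exampleManagerLifecycle() error {
        m := NewManager()
        // Start a proxier; a second RunProxier with the same name returns ErrExists.
        if err := m.RunProxier("public_listener", &TestProxier{Addr: "127.0.0.1:9999"}); err != nil {
            return err
        }
        // ... connections are served until the proxier is stopped ...
        if err := m.StopProxier("public_listener"); err != nil {
            return err
        }
        // StopAll shuts down the run loop and any remaining runners; call it only once.
        return m.StopAll()
    }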
-func (m *Manager) remove(cmd *managerCmd) { - // Fetch proxier by name - r, ok := m.runners[cmd.name] - if !ok { - cmd.errCh <- ErrNotExist - return - } - err := r.Stop() - delete(m.runners, cmd.name) - cmd.errCh <- err -} diff --git a/proxy/manager_test.go b/proxy/manager_test.go deleted file mode 100644 index d4fa8c5b4..000000000 --- a/proxy/manager_test.go +++ /dev/null @@ -1,76 +0,0 @@ -package proxy - -import ( - "fmt" - "net" - "testing" - - "github.com/stretchr/testify/require" -) - -func TestManager(t *testing.T) { - m := NewManager() - - addrs := TestLocalBindAddrs(t, 3) - - for i := 0; i < len(addrs); i++ { - name := fmt.Sprintf("proxier-%d", i) - // Run proxy - err := m.RunProxier(name, &TestProxier{ - Addr: addrs[i], - Prefix: name + ": ", - }) - require.Nil(t, err) - } - - // Make sure each one is echoing correctly now all are running - for i := 0; i < len(addrs); i++ { - conn, err := net.Dial("tcp", addrs[i]) - require.Nil(t, err) - TestEchoConn(t, conn, fmt.Sprintf("proxier-%d: ", i)) - conn.Close() - } - - // Stop first proxier - err := m.StopProxier("proxier-0") - require.Nil(t, err) - - // We should fail to dial it now. Note that Runner.Stop is synchronous so - // there should be a strong guarantee that it's stopped listening by now. - _, err = net.Dial("tcp", addrs[0]) - require.NotNil(t, err) - - // Rest of proxiers should still be running - for i := 1; i < len(addrs); i++ { - conn, err := net.Dial("tcp", addrs[i]) - require.Nil(t, err) - TestEchoConn(t, conn, fmt.Sprintf("proxier-%d: ", i)) - conn.Close() - } - - // Stop non-existent proxier should fail - err = m.StopProxier("foo") - require.Equal(t, ErrNotExist, err) - - // Add already-running proxier should fail - err = m.RunProxier("proxier-1", &TestProxier{}) - require.Equal(t, ErrExists, err) - - // But rest should stay running - for i := 1; i < len(addrs); i++ { - conn, err := net.Dial("tcp", addrs[i]) - require.Nil(t, err) - TestEchoConn(t, conn, fmt.Sprintf("proxier-%d: ", i)) - conn.Close() - } - - // StopAll should stop everything - err = m.StopAll() - require.Nil(t, err) - - // Verify failures - for i := 0; i < len(addrs); i++ { - _, err = net.Dial("tcp", addrs[i]) - require.NotNilf(t, err, "proxier-%d should not be running", i) - } -} diff --git a/proxy/proxier.go b/proxy/proxier.go deleted file mode 100644 index 23940c6ad..000000000 --- a/proxy/proxier.go +++ /dev/null @@ -1,32 +0,0 @@ -package proxy - -import ( - "errors" - "net" -) - -// ErrStopped is returned for operations on a proxy that is stopped -var ErrStopped = errors.New("stopped") - -// ErrStopping is returned for operations on a proxy that is stopping -var ErrStopping = errors.New("stopping") - -// Proxier is an interface for managing different proxy implementations in a -// standard way. We have at least two different types of Proxier implementations -// needed: one for the incoming mTLS -> local proxy and another for each -// "upstream" service the app needs to talk out to (which listens locally and -// performs service discovery to find a suitable remote service). -type Proxier interface { - // Listener returns a net.Listener that is open and ready for use, the Proxy - // manager will arrange accepting new connections from it and passing them to - // the handler method. - Listener() (net.Listener, error) - - // HandleConn is called for each incoming connection accepted by the listener. - // It is called in it's own goroutine and should run until it hits an error. 
- // When stopping the Proxier, the manager will simply close the conn provided - // and expects an error to be eventually returned. Any time spent not blocked - // on the passed conn (for example doing service discovery) should therefore - // be time-bound so that shutdown can't stall forever. - HandleConn(conn net.Conn) error -} diff --git a/proxy/proxy.go b/proxy/proxy.go deleted file mode 100644 index a293466b8..000000000 --- a/proxy/proxy.go +++ /dev/null @@ -1,112 +0,0 @@ -package proxy - -import ( - "context" - "log" - - "github.com/hashicorp/consul/api" - "github.com/hashicorp/consul/connect" -) - -// Proxy implements the built-in connect proxy. -type Proxy struct { - proxyID, token string - - connect connect.Client - manager *Manager - cfgWatch ConfigWatcher - cfg *Config - - logger *log.Logger -} - -// NewFromConfigFile returns a Proxy instance configured just from a local file. -// This is intended mostly for development and bypasses the normal mechanisms -// for fetching config and certificates from the local agent. -func NewFromConfigFile(client *api.Client, filename string, - logger *log.Logger) (*Proxy, error) { - cfg, err := ParseConfigFile(filename) - if err != nil { - return nil, err - } - - connect, err := connect.NewInsecureDevClientWithLocalCerts(client, - cfg.DevCAFile, cfg.DevServiceCertFile, cfg.DevServiceKeyFile) - if err != nil { - return nil, err - } - - p := &Proxy{ - proxyID: cfg.ProxyID, - connect: connect, - manager: NewManagerWithLogger(logger), - cfgWatch: NewStaticConfigWatcher(cfg), - logger: logger, - } - return p, nil -} - -// New returns a Proxy with the given id, consuming the provided (configured) -// agent. It is ready to Run(). -func New(client *api.Client, proxyID string, logger *log.Logger) (*Proxy, error) { - p := &Proxy{ - proxyID: proxyID, - connect: connect.NewClient(client), - manager: NewManagerWithLogger(logger), - cfgWatch: &AgentConfigWatcher{client: client}, - logger: logger, - } - return p, nil -} - -// Run the proxy instance until a fatal error occurs or ctx is cancelled. 
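Since the file-based dev mode above bypasses the agent's config and certificate endpoints, here is a short hedged sketch of how a caller might drive it, assuming this package; the agent client construction, logger, and error handling are illustrative only and not the command's actual wiring:

    func exampleRunFromFile(ctx context.Context, filename string) error {
        client, err := api.NewClient(api.DefaultConfig())
        if err != nil {
            return err
        }
        logger := log.New(os.Stdout, "", log.LstdFlags)
        p, err := NewFromConfigFile(client, filename, logger)
        if err != nil {
            return err
        }
        // Run blocks until a fatal error occurs or ctx is cancelled.
        return p.Run(ctx)
    }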
-func (p *Proxy) Run(ctx context.Context) error { - defer p.manager.StopAll() - - // Watch for config changes (initial setup happens on first "change") - for { - select { - case newCfg := <-p.cfgWatch.Watch(): - p.logger.Printf("[DEBUG] got new config") - if p.cfg == nil { - // Initial setup - err := p.startPublicListener(ctx, newCfg.PublicListener) - if err != nil { - return err - } - } - - // TODO add/remove upstreams properly based on a diff with current - for _, uc := range newCfg.Upstreams { - uc.Client = p.connect - uc.logger = p.logger - err := p.manager.RunProxier(uc.String(), NewUpstream(uc)) - if err == ErrExists { - continue - } - if err != nil { - p.logger.Printf("[ERR] failed to start upstream %s: %s", uc.String(), - err) - } - } - p.cfg = newCfg - - case <-ctx.Done(): - return nil - } - } -} - -func (p *Proxy) startPublicListener(ctx context.Context, - cfg PublicListenerConfig) error { - - // Get TLS creds - tlsCfg, err := p.connect.ServerTLSConfig() - if err != nil { - return err - } - cfg.TLSConfig = tlsCfg - - cfg.logger = p.logger - return p.manager.RunProxier("public_listener", NewPublicListener(cfg)) -} diff --git a/proxy/public_listener.go b/proxy/public_listener.go deleted file mode 100644 index 1942992cf..000000000 --- a/proxy/public_listener.go +++ /dev/null @@ -1,119 +0,0 @@ -package proxy - -import ( - "crypto/tls" - "fmt" - "log" - "net" - "os" - "time" -) - -// PublicListener provides an implementation of Proxier that listens for inbound -// mTLS connections, authenticates them with the local agent, and if successful -// forwards them to the locally configured app. -type PublicListener struct { - cfg *PublicListenerConfig -} - -// PublicListenerConfig contains the most basic parameters needed to start the -// proxy. -// -// Note that the tls.Configs here are expected to be "dynamic" in the sense that -// they are expected to use `GetConfigForClient` (added in go 1.8) to return -// dynamic config per connection if required. -type PublicListenerConfig struct { - // BindAddress is the host:port the public mTLS listener will bind to. - BindAddress string `json:"bind_address" hcl:"bind_address"` - - // LocalServiceAddress is the host:port for the proxied application. This - // should be on loopback or otherwise protected as it's plain TCP. - LocalServiceAddress string `json:"local_service_address" hcl:"local_service_address"` - - // TLSConfig config is used for the mTLS listener. - TLSConfig *tls.Config - - // LocalConnectTimeout is the timeout for establishing connections with the - // local backend. Defaults to 1000 (1s). - LocalConnectTimeoutMs int `json:"local_connect_timeout_ms" hcl:"local_connect_timeout_ms"` - - // HandshakeTimeout is the timeout for incoming mTLS clients to complete a - // handshake. Setting this low avoids DOS by malicious clients holding - // resources open. Defaults to 10000 (10s). - HandshakeTimeoutMs int `json:"handshake_timeout_ms" hcl:"handshake_timeout_ms"` - - logger *log.Logger -} - -func (plc *PublicListenerConfig) applyDefaults() { - if plc.LocalConnectTimeoutMs == 0 { - plc.LocalConnectTimeoutMs = 1000 - } - if plc.HandshakeTimeoutMs == 0 { - plc.HandshakeTimeoutMs = 10000 - } - if plc.logger == nil { - plc.logger = log.New(os.Stdout, "", log.LstdFlags) - } -} - -// NewPublicListener returns a proxy instance with the given config. 
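For reference, a sketch of a hand-built PublicListenerConfig as described above; the addresses are placeholders, the zero timeout fields are later filled by applyDefaults (1s local connect, 10s handshake), and the TLS config would come from the Connect client in real use:

    func examplePublicListenerConfig(tlsCfg *tls.Config) PublicListenerConfig {
        return PublicListenerConfig{
            BindAddress:         ":8443",          // public mTLS listener
            LocalServiceAddress: "127.0.0.1:8080", // plain TCP to the local app
            TLSConfig:           tlsCfg,
            // LocalConnectTimeoutMs and HandshakeTimeoutMs intentionally left
            // zero so applyDefaults picks the documented defaults.
        }
    }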
-func NewPublicListener(cfg PublicListenerConfig) *PublicListener { - p := &PublicListener{ - cfg: &cfg, - } - p.cfg.applyDefaults() - return p -} - -// Listener implements Proxier -func (p *PublicListener) Listener() (net.Listener, error) { - l, err := net.Listen("tcp", p.cfg.BindAddress) - if err != nil { - return nil, err - } - - return tls.NewListener(l, p.cfg.TLSConfig), nil -} - -// HandleConn implements Proxier -func (p *PublicListener) HandleConn(conn net.Conn) error { - defer conn.Close() - tlsConn, ok := conn.(*tls.Conn) - if !ok { - return fmt.Errorf("non-TLS conn") - } - - // Setup Handshake timer - to := time.Duration(p.cfg.HandshakeTimeoutMs) * time.Millisecond - err := tlsConn.SetDeadline(time.Now().Add(to)) - if err != nil { - return err - } - - // Force TLS handshake so that abusive clients can't hold resources open - err = tlsConn.Handshake() - if err != nil { - return err - } - - // Handshake OK, clear the deadline - err = tlsConn.SetDeadline(time.Time{}) - if err != nil { - return err - } - - // Huzzah, open a connection to the backend and let them talk - // TODO maybe add a connection pool here? - to = time.Duration(p.cfg.LocalConnectTimeoutMs) * time.Millisecond - dst, err := net.DialTimeout("tcp", p.cfg.LocalServiceAddress, to) - if err != nil { - return fmt.Errorf("failed dialling local app: %s", err) - } - - p.cfg.logger.Printf("[DEBUG] accepted connection from %s", conn.RemoteAddr()) - - // Hand conn and dst over to Conn to manage the byte copying. - c := NewConn(conn, dst) - return c.CopyBytes() -} diff --git a/proxy/public_listener_test.go b/proxy/public_listener_test.go deleted file mode 100644 index 83e84d658..000000000 --- a/proxy/public_listener_test.go +++ /dev/null @@ -1,38 +0,0 @@ -package proxy - -import ( - "crypto/tls" - "testing" - - "github.com/hashicorp/consul/connect" - "github.com/stretchr/testify/require" -) - -func TestPublicListener(t *testing.T) { - addrs := TestLocalBindAddrs(t, 2) - - cfg := PublicListenerConfig{ - BindAddress: addrs[0], - LocalServiceAddress: addrs[1], - HandshakeTimeoutMs: 100, - LocalConnectTimeoutMs: 100, - TLSConfig: connect.TestTLSConfig(t, "ca1", "web"), - } - - testApp, err := NewTestTCPServer(t, cfg.LocalServiceAddress) - require.Nil(t, err) - defer testApp.Close() - - p := NewPublicListener(cfg) - - // Run proxy - r := NewRunner("test", p) - go r.Listen() - defer r.Stop() - - // Proxy and backend are running, play the part of a TLS client using same - // cert for now. - conn, err := tls.Dial("tcp", cfg.BindAddress, connect.TestTLSConfig(t, "ca1", "web")) - require.Nil(t, err) - TestEchoConn(t, conn, "") -} diff --git a/proxy/runner.go b/proxy/runner.go deleted file mode 100644 index b559b22b7..000000000 --- a/proxy/runner.go +++ /dev/null @@ -1,118 +0,0 @@ -package proxy - -import ( - "log" - "net" - "os" - "sync" - "sync/atomic" -) - -// Runner manages the lifecycle of one Proxier. -type Runner struct { - name string - p Proxier - - // Stopping is if a flag that is updated and read atomically - stopping int32 - stopCh chan struct{} - // wg is used to signal back to Stop when all goroutines have stopped - wg sync.WaitGroup - - logger *log.Logger -} - -// NewRunner returns a Runner ready to Listen. -func NewRunner(name string, p Proxier) *Runner { - return NewRunnerWithLogger(name, p, log.New(os.Stdout, "", log.LstdFlags)) -} - -// NewRunnerWithLogger returns a Runner ready to Listen using the specified -// log.Logger. 
-func NewRunnerWithLogger(name string, p Proxier, logger *log.Logger) *Runner { - return &Runner{ - name: name, - p: p, - stopCh: make(chan struct{}), - logger: logger, - } -} - -// Listen starts the proxier instance. It blocks until a fatal error occurs or -// Stop() is called. -func (r *Runner) Listen() error { - if atomic.LoadInt32(&r.stopping) == 1 { - return ErrStopped - } - - l, err := r.p.Listener() - if err != nil { - return err - } - r.logger.Printf("[INFO] proxy: %s listening on %s", r.name, l.Addr().String()) - - // Run goroutine that will close listener on stop - go func() { - <-r.stopCh - l.Close() - r.logger.Printf("[INFO] proxy: %s shutdown", r.name) - }() - - // Add one for the accept loop - r.wg.Add(1) - defer r.wg.Done() - - for { - conn, err := l.Accept() - if err != nil { - if atomic.LoadInt32(&r.stopping) == 1 { - return nil - } - return err - } - - go r.handle(conn) - } - - return nil -} - -func (r *Runner) handle(conn net.Conn) { - r.wg.Add(1) - defer r.wg.Done() - - // Start a goroutine that will watch for the Runner stopping and close the - // conn, or watch for the Proxier closing (e.g. because other end hung up) and - // stop the goroutine to avoid leaks - doneCh := make(chan struct{}) - defer close(doneCh) - - go func() { - select { - case <-r.stopCh: - r.logger.Printf("[DEBUG] proxy: %s: terminating conn", r.name) - conn.Close() - return - case <-doneCh: - // Connection is already closed, this goroutine not needed any more - return - } - }() - - err := r.p.HandleConn(conn) - if err != nil { - r.logger.Printf("[DEBUG] proxy: %s: connection terminated: %s", r.name, err) - } else { - r.logger.Printf("[DEBUG] proxy: %s: connection terminated", r.name) - } -} - -// Stop stops the Listener and closes any active connections immediately. -func (r *Runner) Stop() error { - old := atomic.SwapInt32(&r.stopping, 1) - if old == 0 { - close(r.stopCh) - } - r.wg.Wait() - return nil -} diff --git a/proxy/testdata/config-kitchensink.hcl b/proxy/testdata/config-kitchensink.hcl deleted file mode 100644 index 766928353..000000000 --- a/proxy/testdata/config-kitchensink.hcl +++ /dev/null @@ -1,36 +0,0 @@ -# Example proxy config with everything specified - -proxy_id = "foo" -token = "11111111-2222-3333-4444-555555555555" - -proxied_service_name = "web" -proxied_service_namespace = "default" - -# Assumes running consul in dev mode from the repo root... 
-dev_ca_file = "connect/testdata/ca1-ca-consul-internal.cert.pem" -dev_service_cert_file = "connect/testdata/ca1-svc-web.cert.pem" -dev_service_key_file = "connect/testdata/ca1-svc-web.key.pem" - -public_listener { - bind_address = ":9999" - local_service_address = "127.0.0.1:5000" - local_connect_timeout_ms = 1000 - handshake_timeout_ms = 5000 -} - -upstreams = [ - { - local_bind_address = "127.0.0.1:6000" - destination_name = "db" - destination_namespace = "default" - destination_type = "service" - connect_timeout_ms = 10000 - }, - { - local_bind_address = "127.0.0.1:6001" - destination_name = "geo-cache" - destination_namespace = "default" - destination_type = "prepared_query" - connect_timeout_ms = 10000 - } -] diff --git a/proxy/testing.go b/proxy/testing.go deleted file mode 100644 index bd132b77f..000000000 --- a/proxy/testing.go +++ /dev/null @@ -1,170 +0,0 @@ -package proxy - -import ( - "context" - "crypto/tls" - "fmt" - "io" - "log" - "net" - "sync/atomic" - - "github.com/hashicorp/consul/lib/freeport" - "github.com/mitchellh/go-testing-interface" - "github.com/stretchr/testify/require" -) - -// TestLocalBindAddrs returns n localhost address:port strings with free ports -// for binding test listeners to. -func TestLocalBindAddrs(t testing.T, n int) []string { - ports := freeport.GetT(t, n) - addrs := make([]string, n) - for i, p := range ports { - addrs[i] = fmt.Sprintf("localhost:%d", p) - } - return addrs -} - -// TestTCPServer is a simple TCP echo server for use during tests. -type TestTCPServer struct { - l net.Listener - stopped int32 - accepted, closed, active int32 -} - -// NewTestTCPServer opens as a listening socket on the given address and returns -// a TestTCPServer serving requests to it. The server is already started and can -// be stopped by calling Close(). -func NewTestTCPServer(t testing.T, addr string) (*TestTCPServer, error) { - l, err := net.Listen("tcp", addr) - if err != nil { - return nil, err - } - log.Printf("test tcp server listening on %s", addr) - s := &TestTCPServer{ - l: l, - } - go s.accept() - return s, nil -} - -// Close stops the server -func (s *TestTCPServer) Close() { - atomic.StoreInt32(&s.stopped, 1) - if s.l != nil { - s.l.Close() - } -} - -func (s *TestTCPServer) accept() error { - for { - conn, err := s.l.Accept() - if err != nil { - if atomic.LoadInt32(&s.stopped) == 1 { - log.Printf("test tcp echo server %s stopped", s.l.Addr()) - return nil - } - log.Printf("test tcp echo server %s failed: %s", s.l.Addr(), err) - return err - } - - atomic.AddInt32(&s.accepted, 1) - atomic.AddInt32(&s.active, 1) - - go func(c net.Conn) { - io.Copy(c, c) - atomic.AddInt32(&s.closed, 1) - atomic.AddInt32(&s.active, -1) - }(conn) - } -} - -// TestEchoConn attempts to write some bytes to conn and expects to read them -// back within a short timeout (10ms). If prefix is not empty we expect it to be -// poresent at the start of all echoed responses (for example to distinguish -// between multiple echo server instances). -func TestEchoConn(t testing.T, conn net.Conn, prefix string) { - t.Helper() - - // Write some bytes and read them back - n, err := conn.Write([]byte("Hello World")) - require.Equal(t, 11, n) - require.Nil(t, err) - - expectLen := 11 + len(prefix) - - buf := make([]byte, expectLen) - // read until our buffer is full - it might be separate packets if prefix is - // in use. 
- got := 0 - for got < expectLen { - n, err = conn.Read(buf[got:]) - require.Nil(t, err) - got += n - } - require.Equal(t, expectLen, got) - require.Equal(t, prefix+"Hello World", string(buf[:])) -} - -// TestConnectClient is a testing mock that implements connect.Client but -// stubs the methods to make testing simpler. -type TestConnectClient struct { - Server *TestTCPServer - TLSConfig *tls.Config - Calls []callTuple -} -type callTuple struct { - typ, ns, name string -} - -// ServerTLSConfig implements connect.Client -func (c *TestConnectClient) ServerTLSConfig() (*tls.Config, error) { - return c.TLSConfig, nil -} - -// DialService implements connect.Client -func (c *TestConnectClient) DialService(ctx context.Context, namespace, - name string) (net.Conn, error) { - - c.Calls = append(c.Calls, callTuple{"service", namespace, name}) - - // Actually returning a vanilla TCP conn not a TLS one but the caller - // shouldn't care for tests since this interface should hide all the TLS - // config and verification. - return net.Dial("tcp", c.Server.l.Addr().String()) -} - -// DialPreparedQuery implements connect.Client -func (c *TestConnectClient) DialPreparedQuery(ctx context.Context, namespace, - name string) (net.Conn, error) { - - c.Calls = append(c.Calls, callTuple{"prepared_query", namespace, name}) - - // Actually returning a vanilla TCP conn not a TLS one but the caller - // shouldn't care for tests since this interface should hide all the TLS - // config and verification. - return net.Dial("tcp", c.Server.l.Addr().String()) -} - -// TestProxier is a simple Proxier instance that can be used in tests. -type TestProxier struct { - // Addr to listen on - Addr string - // Prefix to write first before echoing on new connections - Prefix string -} - -// Listener implements Proxier -func (p *TestProxier) Listener() (net.Listener, error) { - return net.Listen("tcp", p.Addr) -} - -// HandleConn implements Proxier -func (p *TestProxier) HandleConn(conn net.Conn) error { - _, err := conn.Write([]byte(p.Prefix)) - if err != nil { - return err - } - _, err = io.Copy(conn, conn) - return err -} diff --git a/proxy/upstream.go b/proxy/upstream.go deleted file mode 100644 index 1101624be..000000000 --- a/proxy/upstream.go +++ /dev/null @@ -1,261 +0,0 @@ -package proxy - -import ( - "context" - "fmt" - "log" - "net" - "os" - "time" - - "github.com/hashicorp/consul/connect" -) - -// Upstream provides an implementation of Proxier that listens for inbound TCP -// connections on the private network shared with the proxied application -// (typically localhost). For each accepted connection from the app, it uses the -// connect.Client to discover an instance and connect over mTLS. -type Upstream struct { - cfg *UpstreamConfig -} - -// UpstreamConfig configures the upstream -type UpstreamConfig struct { - // Client is the connect client to perform discovery with - Client connect.Client - - // LocalAddress is the host:port to listen on for local app connections. - LocalBindAddress string `json:"local_bind_address" hcl:"local_bind_address,attr"` - - // DestinationName is the service name of the destination. - DestinationName string `json:"destination_name" hcl:"destination_name,attr"` - - // DestinationNamespace is the namespace of the destination. - DestinationNamespace string `json:"destination_namespace" hcl:"destination_namespace,attr"` - - // DestinationType determines which service discovery method is used to find a - // candidate instance to connect to. 
- DestinationType string `json:"destination_type" hcl:"destination_type,attr"` - - // ConnectTimeout is the timeout for establishing connections with the remote - // service instance. Defaults to 10,000 (10s). - ConnectTimeoutMs int `json:"connect_timeout_ms" hcl:"connect_timeout_ms,attr"` - - logger *log.Logger -} - -func (uc *UpstreamConfig) applyDefaults() { - if uc.ConnectTimeoutMs == 0 { - uc.ConnectTimeoutMs = 10000 - } - if uc.logger == nil { - uc.logger = log.New(os.Stdout, "", log.LstdFlags) - } -} - -// String returns a string that uniquely identifies the Upstream. Used for -// identifying the upstream in log output and map keys. -func (uc *UpstreamConfig) String() string { - return fmt.Sprintf("%s->%s:%s/%s", uc.LocalBindAddress, uc.DestinationType, - uc.DestinationNamespace, uc.DestinationName) -} - -// NewUpstream returns an outgoing proxy instance with the given config. -func NewUpstream(cfg UpstreamConfig) *Upstream { - u := &Upstream{ - cfg: &cfg, - } - u.cfg.applyDefaults() - return u -} - -// String returns a string that uniquely identifies the Upstream. Used for -// identifying the upstream in log output and map keys. -func (u *Upstream) String() string { - return u.cfg.String() -} - -// Listener implements Proxier -func (u *Upstream) Listener() (net.Listener, error) { - return net.Listen("tcp", u.cfg.LocalBindAddress) -} - -// HandleConn implements Proxier -func (u *Upstream) HandleConn(conn net.Conn) error { - defer conn.Close() - - // Discover destination instance - dst, err := u.discoverAndDial() - if err != nil { - return err - } - - // Hand conn and dst over to Conn to manage the byte copying. - c := NewConn(conn, dst) - return c.CopyBytes() -} - -func (u *Upstream) discoverAndDial() (net.Conn, error) { - to := time.Duration(u.cfg.ConnectTimeoutMs) * time.Millisecond - ctx, cancel := context.WithTimeout(context.Background(), to) - defer cancel() - - switch u.cfg.DestinationType { - case "service": - return u.cfg.Client.DialService(ctx, u.cfg.DestinationNamespace, - u.cfg.DestinationName) - - case "prepared_query": - return u.cfg.Client.DialPreparedQuery(ctx, u.cfg.DestinationNamespace, - u.cfg.DestinationName) - - default: - return nil, fmt.Errorf("invalid destination type %s", u.cfg.DestinationType) - } -} - -/* -// Upstream represents a service that the proxied application needs to connect -// out to. It provides a dedication local TCP listener (usually listening only -// on loopback) and forwards incoming connections to the proxy to handle. -type Upstream struct { - cfg *UpstreamConfig - wg sync.WaitGroup - - proxy *Proxy - fatalErr error -} - -// NewUpstream creates an upstream ready to attach to a proxy instance with -// Proxy.AddUpstream. An Upstream can only be attached to single Proxy instance -// at once. -func NewUpstream(p *Proxy, cfg *UpstreamConfig) *Upstream { - return &Upstream{ - cfg: cfg, - proxy: p, - shutdown: make(chan struct{}), - } -} - -// UpstreamConfig configures the upstream -type UpstreamConfig struct { - // LocalAddress is the host:port to listen on for local app connections. - LocalAddress string - - // DestinationName is the service name of the destination. - DestinationName string - - // DestinationNamespace is the namespace of the destination. - DestinationNamespace string - - // DestinationType determines which service discovery method is used to find a - // candidate instance to connect to. - DestinationType string -} - -// String returns a string representation for the upstream for debugging or -// use as a unique key. 
-func (uc *UpstreamConfig) String() string { - return fmt.Sprintf("%s->%s:%s/%s", uc.LocalAddress, uc.DestinationType, - uc.DestinationNamespace, uc.DestinationName) -} - -func (u *Upstream) listen() error { - l, err := net.Listen("tcp", u.cfg.LocalAddress) - if err != nil { - u.fatal(err) - return - } - - for { - conn, err := l.Accept() - if err != nil { - return err - } - - go u.discoverAndConnect(conn) - } -} - -func (u *Upstream) discoverAndConnect(src net.Conn) { - // First, we need an upstream instance from Consul to connect to - dstAddrs, err := u.discoverInstances() - if err != nil { - u.fatal(fmt.Errorf("failed to discover upstream instances: %s", err)) - return - } - - if len(dstAddrs) < 1 { - log.Printf("[INFO] no instances found for %s", len(dstAddrs), u) - } - - // Attempt connection to first one that works - // TODO: configurable number/deadline? - for idx, addr := range dstAddrs { - err := u.proxy.startProxyingConn(src, addr, false) - if err != nil { - log.Printf("[INFO] failed to connect to %s: %s (%d of %d)", addr, err, - idx+1, len(dstAddrs)) - continue - } - return - } - - log.Printf("[INFO] failed to connect to all %d instances for %s", - len(dstAddrs), u) -} - -func (u *Upstream) discoverInstances() ([]string, error) { - switch u.cfg.DestinationType { - case "service": - svcs, _, err := u.cfg.Consul.Health().Service(u.cfg.DestinationName, "", - true, nil) - if err != nil { - return nil, err - } - - addrs := make([]string, len(svcs)) - - // Shuffle order as we go since health endpoint doesn't - perm := rand.Perm(len(addrs)) - for i, se := range svcs { - // Pick location in output array based on next permutation position - j := perm[i] - addrs[j] = fmt.Sprintf("%s:%d", se.Service.Address, se.Service.Port) - } - - return addrs, nil - - case "prepared_query": - pqr, _, err := u.cfg.Consul.PreparedQuery().Execute(u.cfg.DestinationName, - nil) - if err != nil { - return nil, err - } - - addrs := make([]string, 0, len(svcs)) - for _, se := range pqr.Nodes { - addrs = append(addrs, fmt.Sprintf("%s:%d", se.Service.Address, - se.Service.Port)) - } - - // PreparedQuery execution already shuffles the result - return addrs, nil - - default: - u.fatal(fmt.Errorf("invalid destination type %s", u.cfg.DestinationType)) - } -} - -func (u *Upstream) fatal(err Error) { - log.Printf("[ERROR] upstream %s stopping on error: %s", u.cfg.LocalAddress, - err) - u.fatalErr = err -} - -// String returns a string representation for the upstream for debugging or -// use as a unique key. 
-func (u *Upstream) String() string { - return u.cfg.String() -} -*/ diff --git a/proxy/upstream_test.go b/proxy/upstream_test.go deleted file mode 100644 index 79bca0136..000000000 --- a/proxy/upstream_test.go +++ /dev/null @@ -1,75 +0,0 @@ -package proxy - -import ( - "net" - "testing" - - "github.com/hashicorp/consul/connect" - "github.com/stretchr/testify/require" -) - -func TestUpstream(t *testing.T) { - tests := []struct { - name string - cfg UpstreamConfig - }{ - { - name: "service", - cfg: UpstreamConfig{ - DestinationType: "service", - DestinationNamespace: "default", - DestinationName: "db", - ConnectTimeoutMs: 100, - }, - }, - { - name: "prepared_query", - cfg: UpstreamConfig{ - DestinationType: "prepared_query", - DestinationNamespace: "default", - DestinationName: "geo-db", - ConnectTimeoutMs: 100, - }, - }, - } - - for _, tt := range tests { - t.Run(tt.name, func(t *testing.T) { - addrs := TestLocalBindAddrs(t, 2) - - testApp, err := NewTestTCPServer(t, addrs[0]) - require.Nil(t, err) - defer testApp.Close() - - // Create mock client that will "discover" our test tcp server as a target and - // skip TLS altogether. - client := &TestConnectClient{ - Server: testApp, - TLSConfig: connect.TestTLSConfig(t, "ca1", "web"), - } - - // Override cfg params - tt.cfg.LocalBindAddress = addrs[1] - tt.cfg.Client = client - - u := NewUpstream(tt.cfg) - - // Run proxy - r := NewRunner("test", u) - go r.Listen() - defer r.Stop() - - // Proxy and fake remote service are running, play the part of the app - // connecting to a remote connect service over TCP. - conn, err := net.Dial("tcp", tt.cfg.LocalBindAddress) - require.Nil(t, err) - TestEchoConn(t, conn, "") - - // Validate that discovery actually was called as we expected - require.Len(t, client.Calls, 1) - require.Equal(t, tt.cfg.DestinationType, client.Calls[0].typ) - require.Equal(t, tt.cfg.DestinationNamespace, client.Calls[0].ns) - require.Equal(t, tt.cfg.DestinationName, client.Calls[0].name) - }) - } -} From 51b1bc028d2055389c9ea483873185ab151787dc Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Tue, 3 Apr 2018 19:10:59 +0100 Subject: [PATCH 112/539] Rework connect/proxy and command/connect/proxy. 
End to end demo working again --- agent/connect/testing_ca.go | 2 +- command/connect/proxy/proxy.go | 37 ++- command/connect/proxy/proxy_test.go | 1 - connect/certgen/certgen.go | 2 +- connect/proxy/config.go | 223 ++++++++++++++++++ connect/proxy/config_test.go | 108 +++++++++ connect/proxy/conn.go | 61 +++++ connect/proxy/conn_test.go | 185 +++++++++++++++ connect/proxy/listener.go | 116 +++++++++ connect/proxy/listener_test.go | 91 +++++++ connect/proxy/proxy.go | 134 +++++++++++ connect/proxy/testdata/config-kitchensink.hcl | 32 +++ connect/proxy/testing.go | 105 +++++++++ connect/resolver.go | 19 +- connect/resolver_test.go | 1 - connect/service.go | 71 ++++-- connect/service_test.go | 84 ++++++- connect/testing.go | 85 +++++-- connect/tls.go | 14 +- connect/tls_test.go | 5 +- 20 files changed, 1279 insertions(+), 97 deletions(-) delete mode 100644 command/connect/proxy/proxy_test.go create mode 100644 connect/proxy/config.go create mode 100644 connect/proxy/config_test.go create mode 100644 connect/proxy/conn.go create mode 100644 connect/proxy/conn_test.go create mode 100644 connect/proxy/listener.go create mode 100644 connect/proxy/listener_test.go create mode 100644 connect/proxy/proxy.go create mode 100644 connect/proxy/testdata/config-kitchensink.hcl create mode 100644 connect/proxy/testing.go diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index 3fbcf2e02..e12372589 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -157,7 +157,7 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) (string, string t.Fatalf("error generating serial number: %s", err) } - // Genereate fresh private key + // Generate fresh private key pkSigner, pkPEM := testPrivateKey(t) // Cert template for generation diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index 237f4b7e2..362e70459 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -1,17 +1,15 @@ package proxy import ( - "context" "flag" "fmt" "io" "log" "net/http" - // Expose pprof if configured - _ "net/http/pprof" + _ "net/http/pprof" // Expose pprof if configured "github.com/hashicorp/consul/command/flags" - proxyImpl "github.com/hashicorp/consul/proxy" + proxyImpl "github.com/hashicorp/consul/connect/proxy" "github.com/hashicorp/consul/logger" "github.com/hashicorp/logutils" @@ -46,13 +44,14 @@ type cmd struct { func (c *cmd) init() { c.flags = flag.NewFlagSet("", flag.ContinueOnError) - c.flags.StringVar(&c.cfgFile, "insecure-dev-config", "", + c.flags.StringVar(&c.cfgFile, "dev-config", "", "If set, proxy config is read on startup from this file (in HCL or JSON"+ "format). If a config file is given, the proxy will use that instead of "+ "querying the local agent for it's configuration. It will not reload it "+ "except on startup. In this mode the proxy WILL NOT authorize incoming "+ "connections with the local agent which is totally insecure. 
This is "+ - "ONLY for development and testing.") + "ONLY for internal development and testing and will probably be removed "+ + "once proxy implementation is more complete..") c.flags.StringVar(&c.proxyID, "proxy-id", "", "The proxy's ID on the local agent.") @@ -121,31 +120,23 @@ func (c *cmd) Run(args []string) int { } } - ctx, cancel := context.WithCancel(context.Background()) + // Hook the shutdownCh up to close the proxy go func() { - err := p.Run(ctx) - if err != nil { - c.UI.Error(fmt.Sprintf("Failed running proxy: %s", err)) - } - // If we exited early due to a fatal error, need to unblock the main - // routine. But we can't close shutdownCh since it might already be closed - // by a signal and there is no way to tell. We also can't send on it to - // unblock main routine since it's typed as receive only. So the best thing - // we can do is cancel the context and have the main routine select on both. - cancel() + <-c.shutdownCh + p.Close() }() - c.UI.Output("Consul Connect proxy running!") + c.UI.Output("Consul Connect proxy starting") c.UI.Output("Log data will now stream in as it occurs:\n") logGate.Flush() - // Wait for shutdown or context cancel (see Run() goroutine above) - select { - case <-c.shutdownCh: - cancel() - case <-ctx.Done(): + // Run the proxy + err = p.Serve() + if err != nil { + c.UI.Error(fmt.Sprintf("Failed running proxy: %s", err)) } + c.UI.Output("Consul Connect proxy shutdown") return 0 } diff --git a/command/connect/proxy/proxy_test.go b/command/connect/proxy/proxy_test.go deleted file mode 100644 index 943b369ff..000000000 --- a/command/connect/proxy/proxy_test.go +++ /dev/null @@ -1 +0,0 @@ -package proxy diff --git a/connect/certgen/certgen.go b/connect/certgen/certgen.go index 6fecf6ae1..89c424576 100644 --- a/connect/certgen/certgen.go +++ b/connect/certgen/certgen.go @@ -27,6 +27,7 @@ // NOTE: THIS IS A QUIRK OF OPENSSL; in Connect we distribute the roots alone // and stable intermediates like the XC cert to the _leaf_. package main // import "github.com/hashicorp/consul/connect/certgen" + import ( "flag" "fmt" @@ -42,7 +43,6 @@ import ( func main() { var numCAs = 2 var services = []string{"web", "db", "cache"} - //var slugRe = regexp.MustCompile("[^a-zA-Z0-9]+") var outDir string flag.StringVar(&outDir, "out-dir", "", diff --git a/connect/proxy/config.go b/connect/proxy/config.go new file mode 100644 index 000000000..a8f83d22c --- /dev/null +++ b/connect/proxy/config.go @@ -0,0 +1,223 @@ +package proxy + +import ( + "fmt" + "io/ioutil" + "log" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/connect" + "github.com/hashicorp/hcl" +) + +// Config is the publicly configurable state for an entire proxy instance. It's +// mostly used as the format for the local-file config mode which is mostly for +// dev/testing. In normal use, different parts of this config are pulled from +// different locations (e.g. command line, agent config endpoint, agent +// certificate endpoints). +type Config struct { + // ProxyID is the identifier for this proxy as registered in Consul. It's only + // guaranteed to be unique per agent. + ProxyID string `json:"proxy_id" hcl:"proxy_id"` + + // Token is the authentication token provided for queries to the local agent. + Token string `json:"token" hcl:"token"` + + // ProxiedServiceID is the identifier of the service this proxy is representing. 
+ ProxiedServiceID string `json:"proxied_service_id" hcl:"proxied_service_id"` + + // ProxiedServiceNamespace is the namespace of the service this proxy is + // representing. + ProxiedServiceNamespace string `json:"proxied_service_namespace" hcl:"proxied_service_namespace"` + + // PublicListener configures the mTLS listener. + PublicListener PublicListenerConfig `json:"public_listener" hcl:"public_listener"` + + // Upstreams configures outgoing proxies for remote connect services. + Upstreams []UpstreamConfig `json:"upstreams" hcl:"upstreams"` + + // DevCAFile allows passing the file path to PEM encoded root certificate + // bundle to be used in development instead of the ones supplied by Connect. + DevCAFile string `json:"dev_ca_file" hcl:"dev_ca_file"` + + // DevServiceCertFile allows passing the file path to PEM encoded service + // certificate (client and server) to be used in development instead of the + // ones supplied by Connect. + DevServiceCertFile string `json:"dev_service_cert_file" hcl:"dev_service_cert_file"` + + // DevServiceKeyFile allows passing the file path to PEM encoded service + // private key to be used in development instead of the ones supplied by + // Connect. + DevServiceKeyFile string `json:"dev_service_key_file" hcl:"dev_service_key_file"` + + // service is a connect.Service instance representing the proxied service. It + // is created internally by the code responsible for setting up config as it + // may depend on other external dependencies + service *connect.Service +} + +// PublicListenerConfig contains the parameters needed for the incoming mTLS +// listener. +type PublicListenerConfig struct { + // BindAddress is the host:port the public mTLS listener will bind to. + BindAddress string `json:"bind_address" hcl:"bind_address"` + + // LocalServiceAddress is the host:port for the proxied application. This + // should be on loopback or otherwise protected as it's plain TCP. + LocalServiceAddress string `json:"local_service_address" hcl:"local_service_address"` + + // LocalConnectTimeout is the timeout for establishing connections with the + // local backend. Defaults to 1000 (1s). + LocalConnectTimeoutMs int `json:"local_connect_timeout_ms" hcl:"local_connect_timeout_ms"` + + // HandshakeTimeout is the timeout for incoming mTLS clients to complete a + // handshake. Setting this low avoids DOS by malicious clients holding + // resources open. Defaults to 10000 (10s). + HandshakeTimeoutMs int `json:"handshake_timeout_ms" hcl:"handshake_timeout_ms"` +} + +// applyDefaults sets zero-valued params to a sane default. +func (plc *PublicListenerConfig) applyDefaults() { + if plc.LocalConnectTimeoutMs == 0 { + plc.LocalConnectTimeoutMs = 1000 + } + if plc.HandshakeTimeoutMs == 0 { + plc.HandshakeTimeoutMs = 10000 + } +} + +// UpstreamConfig configures an upstream (outgoing) listener. +type UpstreamConfig struct { + // LocalAddress is the host:port to listen on for local app connections. + LocalBindAddress string `json:"local_bind_address" hcl:"local_bind_address,attr"` + + // DestinationName is the service name of the destination. + DestinationName string `json:"destination_name" hcl:"destination_name,attr"` + + // DestinationNamespace is the namespace of the destination. + DestinationNamespace string `json:"destination_namespace" hcl:"destination_namespace,attr"` + + // DestinationType determines which service discovery method is used to find a + // candidate instance to connect to. 
+ DestinationType string `json:"destination_type" hcl:"destination_type,attr"` + + // DestinationDatacenter is the datacenter the destination is in. If empty, + // defaults to discovery within the same datacenter. + DestinationDatacenter string `json:"destination_datacenter" hcl:"destination_datacenter,attr"` + + // ConnectTimeout is the timeout for establishing connections with the remote + // service instance. Defaults to 10,000 (10s). + ConnectTimeoutMs int `json:"connect_timeout_ms" hcl:"connect_timeout_ms,attr"` + + // resolver is used to plug in the service discover mechanism. It can be used + // in tests to bypass discovery. In real usage it is used to inject the + // api.Client dependency from the remainder of the config struct parsed from + // the user JSON using the UpstreamResolverFromClient helper. + resolver connect.Resolver +} + +// applyDefaults sets zero-valued params to a sane default. +func (uc *UpstreamConfig) applyDefaults() { + if uc.ConnectTimeoutMs == 0 { + uc.ConnectTimeoutMs = 10000 + } +} + +// String returns a string that uniquely identifies the Upstream. Used for +// identifying the upstream in log output and map keys. +func (uc *UpstreamConfig) String() string { + return fmt.Sprintf("%s->%s:%s/%s", uc.LocalBindAddress, uc.DestinationType, + uc.DestinationNamespace, uc.DestinationName) +} + +// UpstreamResolverFromClient returns a ConsulResolver that can resolve the +// given UpstreamConfig using the provided api.Client dependency. +func UpstreamResolverFromClient(client *api.Client, + cfg UpstreamConfig) connect.Resolver { + + // For now default to service as it has the most natural meaning and the error + // that the service doesn't exist is probably reasonable if misconfigured. We + // should probably handle actual configs that have invalid types at a higher + // level anyway (like when parsing). + typ := connect.ConsulResolverTypeService + if cfg.DestinationType == "prepared_query" { + typ = connect.ConsulResolverTypePreparedQuery + } + return &connect.ConsulResolver{ + Client: client, + Namespace: cfg.DestinationNamespace, + Name: cfg.DestinationName, + Type: typ, + Datacenter: cfg.DestinationDatacenter, + } +} + +// ConfigWatcher is a simple interface to allow dynamic configurations from +// plugggable sources. +type ConfigWatcher interface { + // Watch returns a channel that will deliver new Configs if something external + // provokes it. + Watch() <-chan *Config +} + +// StaticConfigWatcher is a simple ConfigWatcher that delivers a static Config +// once and then never changes it. +type StaticConfigWatcher struct { + ch chan *Config +} + +// NewStaticConfigWatcher returns a ConfigWatcher for a config that never +// changes. It assumes only one "watcher" will ever call Watch. The config is +// delivered on the first call but will never be delivered again to allow +// callers to call repeatedly (e.g. select in a loop). +func NewStaticConfigWatcher(cfg *Config) *StaticConfigWatcher { + sc := &StaticConfigWatcher{ + // Buffer it so we can queue up the config for first delivery. + ch: make(chan *Config, 1), + } + sc.ch <- cfg + return sc +} + +// Watch implements ConfigWatcher on a static configuration for compatibility. +// It returns itself on the channel once and then leaves it open. +func (sc *StaticConfigWatcher) Watch() <-chan *Config { + return sc.ch +} + +// ParseConfigFile parses proxy configuration from a file for local dev. 
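ParseConfigFile only fills in the fields supplied by the user plus defaults; a hedged sketch of how the rest of this package might then attach a Consul-backed resolver to each upstream via UpstreamResolverFromClient (the actual proxy wiring may differ; the default client config and error handling are placeholders):

    func exampleLoadDevConfig(path string) (*Config, error) {
        cfg, err := ParseConfigFile(path)
        if err != nil {
            return nil, err
        }
        client, err := api.NewClient(api.DefaultConfig())
        if err != nil {
            return nil, err
        }
        // resolver is unexported, so this wiring has to happen inside the package.
        for i := range cfg.Upstreams {
            cfg.Upstreams[i].resolver = UpstreamResolverFromClient(client, cfg.Upstreams[i])
        }
        return cfg, nil
    }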
+func ParseConfigFile(filename string) (*Config, error) { + bs, err := ioutil.ReadFile(filename) + if err != nil { + return nil, err + } + + var cfg Config + + err = hcl.Unmarshal(bs, &cfg) + if err != nil { + return nil, err + } + + cfg.PublicListener.applyDefaults() + for idx := range cfg.Upstreams { + cfg.Upstreams[idx].applyDefaults() + } + + return &cfg, nil +} + +// AgentConfigWatcher watches the local Consul agent for proxy config changes. +type AgentConfigWatcher struct { + client *api.Client + proxyID string + logger *log.Logger +} + +// Watch implements ConfigWatcher. +func (w *AgentConfigWatcher) Watch() <-chan *Config { + watch := make(chan *Config) + // TODO implement me, note we need to discover the Service instance to use and + // set it on the Config we return. + return watch +} diff --git a/connect/proxy/config_test.go b/connect/proxy/config_test.go new file mode 100644 index 000000000..96782b12e --- /dev/null +++ b/connect/proxy/config_test.go @@ -0,0 +1,108 @@ +package proxy + +import ( + "testing" + + "github.com/hashicorp/consul/connect" + "github.com/stretchr/testify/require" +) + +func TestParseConfigFile(t *testing.T) { + cfg, err := ParseConfigFile("testdata/config-kitchensink.hcl") + require.Nil(t, err) + + expect := &Config{ + ProxyID: "foo", + Token: "11111111-2222-3333-4444-555555555555", + ProxiedServiceID: "web", + ProxiedServiceNamespace: "default", + PublicListener: PublicListenerConfig{ + BindAddress: ":9999", + LocalServiceAddress: "127.0.0.1:5000", + LocalConnectTimeoutMs: 1000, + HandshakeTimeoutMs: 10000, // From defaults + }, + Upstreams: []UpstreamConfig{ + { + LocalBindAddress: "127.0.0.1:6000", + DestinationName: "db", + DestinationNamespace: "default", + DestinationType: "service", + ConnectTimeoutMs: 10000, + }, + { + LocalBindAddress: "127.0.0.1:6001", + DestinationName: "geo-cache", + DestinationNamespace: "default", + DestinationType: "prepared_query", + ConnectTimeoutMs: 10000, + }, + }, + DevCAFile: "connect/testdata/ca1-ca-consul-internal.cert.pem", + DevServiceCertFile: "connect/testdata/ca1-svc-web.cert.pem", + DevServiceKeyFile: "connect/testdata/ca1-svc-web.key.pem", + } + + require.Equal(t, expect, cfg) +} + +func TestUpstreamResolverFromClient(t *testing.T) { + tests := []struct { + name string + cfg UpstreamConfig + want *connect.ConsulResolver + }{ + { + name: "service", + cfg: UpstreamConfig{ + DestinationNamespace: "foo", + DestinationName: "web", + DestinationDatacenter: "ny1", + DestinationType: "service", + }, + want: &connect.ConsulResolver{ + Namespace: "foo", + Name: "web", + Datacenter: "ny1", + Type: connect.ConsulResolverTypeService, + }, + }, + { + name: "prepared_query", + cfg: UpstreamConfig{ + DestinationNamespace: "foo", + DestinationName: "web", + DestinationDatacenter: "ny1", + DestinationType: "prepared_query", + }, + want: &connect.ConsulResolver{ + Namespace: "foo", + Name: "web", + Datacenter: "ny1", + Type: connect.ConsulResolverTypePreparedQuery, + }, + }, + { + name: "unknown behaves like service", + cfg: UpstreamConfig{ + DestinationNamespace: "foo", + DestinationName: "web", + DestinationDatacenter: "ny1", + DestinationType: "junk", + }, + want: &connect.ConsulResolver{ + Namespace: "foo", + Name: "web", + Datacenter: "ny1", + Type: connect.ConsulResolverTypeService, + }, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + // Client doesn't really matter as long as it's passed through. 
+ got := UpstreamResolverFromClient(nil, tt.cfg) + require.Equal(t, tt.want, got) + }) + } +} diff --git a/connect/proxy/conn.go b/connect/proxy/conn.go new file mode 100644 index 000000000..70019e55c --- /dev/null +++ b/connect/proxy/conn.go @@ -0,0 +1,61 @@ +package proxy + +import ( + "io" + "net" + "sync/atomic" +) + +// Conn represents a single proxied TCP connection. +type Conn struct { + src, dst net.Conn + stopping int32 +} + +// NewConn returns a conn joining the two given net.Conn +func NewConn(src, dst net.Conn) *Conn { + return &Conn{ + src: src, + dst: dst, + stopping: 0, + } +} + +// Close closes both connections. +func (c *Conn) Close() error { + // Note that net.Conn.Close can be called multiple times and atomic store is + // idempotent so no need to ensure we only do this once. + // + // Also note that we don't wait for CopyBytes to return here since we are + // closing the conns which is the only externally visible sideeffect of that + // goroutine running and there should be no way for it to hang or leak once + // the conns are closed so we can save the extra coordination. + atomic.StoreInt32(&c.stopping, 1) + c.src.Close() + c.dst.Close() + return nil +} + +// CopyBytes will continuously copy bytes in both directions between src and dst +// until either connection is closed. +func (c *Conn) CopyBytes() error { + defer c.Close() + + go func() { + // Need this since Copy is only guaranteed to stop when it's source reader + // (second arg) hits EOF or error but either conn might close first possibly + // causing this goroutine to exit but not the outer one. See + // TestConnSrcClosing which will fail if you comment the defer below. + defer c.Close() + io.Copy(c.dst, c.src) + }() + + _, err := io.Copy(c.src, c.dst) + // Note that we don't wait for the other goroutine to finish because it either + // already has due to it's src conn closing, or it will once our defer fires + // and closes the source conn. No need for the extra coordination. + if atomic.LoadInt32(&c.stopping) == 1 { + return nil + } + return err +} diff --git a/connect/proxy/conn_test.go b/connect/proxy/conn_test.go new file mode 100644 index 000000000..a37720ea0 --- /dev/null +++ b/connect/proxy/conn_test.go @@ -0,0 +1,185 @@ +package proxy + +import ( + "bufio" + "io" + "net" + "testing" + "time" + + "github.com/stretchr/testify/require" +) + +// Assert io.Closer implementation +var _ io.Closer = new(Conn) + +// testConnPairSetup creates a TCP connection by listening on a random port, and +// returns both ends. Ready to have data sent down them. It also returns a +// closer function that will close both conns and the listener. +func testConnPairSetup(t *testing.T) (net.Conn, net.Conn, func()) { + t.Helper() + + l, err := net.Listen("tcp", "localhost:0") + require.Nil(t, err) + + ch := make(chan net.Conn, 1) + go func() { + src, err := l.Accept() + require.Nil(t, err) + ch <- src + }() + + dst, err := net.Dial("tcp", l.Addr().String()) + require.Nil(t, err) + + src := <-ch + + stopper := func() { + l.Close() + src.Close() + dst.Close() + } + + return src, dst, stopper +} + +// testConnPipelineSetup creates a pipeline consiting of two TCP connection +// pairs and a Conn that copies bytes between them. Data flow looks like this: +// +// src1 <---> dst1 <== Conn.CopyBytes ==> src2 <---> dst2 +// +// The returned values are the src1 and dst2 which should be able to send and +// receive to each other via the Conn, the Conn itself (not running), and a +// stopper func to close everything. 
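Putting Conn to work, a minimal sketch (assuming this package) of splicing an accepted client connection onto a freshly dialed backend, much as the Listener added later in this change does; the backend address and error handling are placeholders:

    func exampleSplice(src net.Conn, backendAddr string) error {
        dst, err := net.Dial("tcp", backendAddr)
        if err != nil {
            src.Close()
            return err
        }
        // CopyBytes blocks until either side closes and closes both conns
        // before returning.
        return NewConn(src, dst).CopyBytes()
    }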
+func testConnPipelineSetup(t *testing.T) (net.Conn, net.Conn, *Conn, func()) { + src1, dst1, stop1 := testConnPairSetup(t) + src2, dst2, stop2 := testConnPairSetup(t) + c := NewConn(dst1, src2) + return src1, dst2, c, func() { + c.Close() + stop1() + stop2() + } +} + +func TestConn(t *testing.T) { + src, dst, c, stop := testConnPipelineSetup(t) + defer stop() + + retCh := make(chan error, 1) + go func() { + retCh <- c.CopyBytes() + }() + + // Now write/read into the other ends of the pipes (src1, dst2) + srcR := bufio.NewReader(src) + dstR := bufio.NewReader(dst) + + _, err := src.Write([]byte("ping 1\n")) + require.Nil(t, err) + _, err = dst.Write([]byte("ping 2\n")) + require.Nil(t, err) + + got, err := dstR.ReadString('\n') + require.Nil(t, err) + require.Equal(t, "ping 1\n", got) + + got, err = srcR.ReadString('\n') + require.Nil(t, err) + require.Equal(t, "ping 2\n", got) + + _, err = src.Write([]byte("pong 1\n")) + require.Nil(t, err) + _, err = dst.Write([]byte("pong 2\n")) + require.Nil(t, err) + + got, err = dstR.ReadString('\n') + require.Nil(t, err) + require.Equal(t, "pong 1\n", got) + + got, err = srcR.ReadString('\n') + require.Nil(t, err) + require.Equal(t, "pong 2\n", got) + + c.Close() + + ret := <-retCh + require.Nil(t, ret, "Close() should not cause error return") +} + +func TestConnSrcClosing(t *testing.T) { + src, dst, c, stop := testConnPipelineSetup(t) + defer stop() + + retCh := make(chan error, 1) + go func() { + retCh <- c.CopyBytes() + }() + + // Wait until we can actually get some bytes through both ways so we know that + // the copy goroutines are running. + srcR := bufio.NewReader(src) + dstR := bufio.NewReader(dst) + + _, err := src.Write([]byte("ping 1\n")) + require.Nil(t, err) + _, err = dst.Write([]byte("ping 2\n")) + require.Nil(t, err) + + got, err := dstR.ReadString('\n') + require.Nil(t, err) + require.Equal(t, "ping 1\n", got) + got, err = srcR.ReadString('\n') + require.Nil(t, err) + require.Equal(t, "ping 2\n", got) + + // If we close the src conn, we expect CopyBytes to return and dst to be + // closed too. No good way to assert that the conn is closed really other than + // assume the retCh receive will hang unless CopyBytes exits and that + // CopyBytes defers Closing both. + testTimer := time.AfterFunc(3*time.Second, func() { + panic("test timeout") + }) + src.Close() + <-retCh + testTimer.Stop() +} + +func TestConnDstClosing(t *testing.T) { + src, dst, c, stop := testConnPipelineSetup(t) + defer stop() + + retCh := make(chan error, 1) + go func() { + retCh <- c.CopyBytes() + }() + + // Wait until we can actually get some bytes through both ways so we know that + // the copy goroutines are running. + srcR := bufio.NewReader(src) + dstR := bufio.NewReader(dst) + + _, err := src.Write([]byte("ping 1\n")) + require.Nil(t, err) + _, err = dst.Write([]byte("ping 2\n")) + require.Nil(t, err) + + got, err := dstR.ReadString('\n') + require.Nil(t, err) + require.Equal(t, "ping 1\n", got) + got, err = srcR.ReadString('\n') + require.Nil(t, err) + require.Equal(t, "ping 2\n", got) + + // If we close the dst conn, we expect CopyBytes to return and src to be + // closed too. No good way to assert that the conn is closed really other than + // assume the retCh receive will hang unless CopyBytes exits and that + // CopyBytes defers Closing both. i.e. if this test doesn't time out it's + // good! 
+ testTimer := time.AfterFunc(3*time.Second, func() { + panic("test timeout") + }) + src.Close() + <-retCh + testTimer.Stop() +} diff --git a/connect/proxy/listener.go b/connect/proxy/listener.go new file mode 100644 index 000000000..c003cb19c --- /dev/null +++ b/connect/proxy/listener.go @@ -0,0 +1,116 @@ +package proxy + +import ( + "context" + "crypto/tls" + "errors" + "log" + "net" + "sync/atomic" + "time" + + "github.com/hashicorp/consul/connect" +) + +// Listener is the implementation of a specific proxy listener. It has pluggable +// Listen and Dial methods to suit public mTLS vs upstream semantics. It handles +// the lifecycle of the listener and all connections opened through it +type Listener struct { + // Service is the connect service instance to use. + Service *connect.Service + + listenFunc func() (net.Listener, error) + dialFunc func() (net.Conn, error) + + stopFlag int32 + stopChan chan struct{} + + logger *log.Logger +} + +// NewPublicListener returns a Listener setup to listen for public mTLS +// connections and proxy them to the configured local application over TCP. +func NewPublicListener(svc *connect.Service, cfg PublicListenerConfig, + logger *log.Logger) *Listener { + return &Listener{ + Service: svc, + listenFunc: func() (net.Listener, error) { + return tls.Listen("tcp", cfg.BindAddress, svc.ServerTLSConfig()) + }, + dialFunc: func() (net.Conn, error) { + return net.DialTimeout("tcp", cfg.LocalServiceAddress, + time.Duration(cfg.LocalConnectTimeoutMs)*time.Millisecond) + }, + stopChan: make(chan struct{}), + logger: logger, + } +} + +// NewUpstreamListener returns a Listener setup to listen locally for TCP +// connections that are proxied to a discovered Connect service instance. +func NewUpstreamListener(svc *connect.Service, cfg UpstreamConfig, + logger *log.Logger) *Listener { + return &Listener{ + Service: svc, + listenFunc: func() (net.Listener, error) { + return net.Listen("tcp", cfg.LocalBindAddress) + }, + dialFunc: func() (net.Conn, error) { + if cfg.resolver == nil { + return nil, errors.New("no resolver provided") + } + ctx, cancel := context.WithTimeout(context.Background(), + time.Duration(cfg.ConnectTimeoutMs)*time.Millisecond) + defer cancel() + return svc.Dial(ctx, cfg.resolver) + }, + stopChan: make(chan struct{}), + logger: logger, + } +} + +// Serve runs the listener until it is stopped. +func (l *Listener) Serve() error { + listen, err := l.listenFunc() + if err != nil { + return err + } + + for { + conn, err := listen.Accept() + if err != nil { + if atomic.LoadInt32(&l.stopFlag) == 1 { + return nil + } + return err + } + + go l.handleConn(conn) + } + return nil +} + +// handleConn is the internal connection handler goroutine. +func (l *Listener) handleConn(src net.Conn) { + defer src.Close() + + dst, err := l.dialFunc() + if err != nil { + l.logger.Printf("[ERR] failed to dial: %s", err) + return + } + // Note no need to defer dst.Close() since conn handles that for us. + conn := NewConn(src, dst) + defer conn.Close() + + err = conn.CopyBytes() + if err != nil { + l.logger.Printf("[ERR] connection failed: %s", err) + return + } +} + +// Close terminates the listener and all active connections. 
+func (l *Listener) Close() error { + return nil +} diff --git a/connect/proxy/listener_test.go b/connect/proxy/listener_test.go new file mode 100644 index 000000000..ce41c81e5 --- /dev/null +++ b/connect/proxy/listener_test.go @@ -0,0 +1,91 @@ +package proxy + +import ( + "context" + "log" + "net" + "os" + "testing" + + agConnect "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/connect" + "github.com/stretchr/testify/require" +) + +func TestPublicListener(t *testing.T) { + ca := agConnect.TestCA(t, nil) + addrs := TestLocalBindAddrs(t, 2) + + cfg := PublicListenerConfig{ + BindAddress: addrs[0], + LocalServiceAddress: addrs[1], + HandshakeTimeoutMs: 100, + LocalConnectTimeoutMs: 100, + } + + testApp, err := NewTestTCPServer(t, cfg.LocalServiceAddress) + require.Nil(t, err) + defer testApp.Close() + + svc := connect.TestService(t, "db", ca) + + l := NewPublicListener(svc, cfg, log.New(os.Stderr, "", log.LstdFlags)) + + // Run proxy + go func() { + err := l.Serve() + require.Nil(t, err) + }() + defer l.Close() + + // Proxy and backend are running, play the part of a TLS client using same + // cert for now. + conn, err := svc.Dial(context.Background(), &connect.StaticResolver{ + Addr: addrs[0], + CertURI: agConnect.TestSpiffeIDService(t, "db"), + }) + require.Nilf(t, err, "unexpected err: %s", err) + TestEchoConn(t, conn, "") +} + +func TestUpstreamListener(t *testing.T) { + ca := agConnect.TestCA(t, nil) + addrs := TestLocalBindAddrs(t, 1) + + // Run a test server that we can dial. + testSvr := connect.NewTestServer(t, "db", ca) + go func() { + err := testSvr.Serve() + require.Nil(t, err) + }() + defer testSvr.Close() + + cfg := UpstreamConfig{ + DestinationType: "service", + DestinationNamespace: "default", + DestinationName: "db", + ConnectTimeoutMs: 100, + LocalBindAddress: addrs[0], + resolver: &connect.StaticResolver{ + Addr: testSvr.Addr, + CertURI: agConnect.TestSpiffeIDService(t, "db"), + }, + } + + svc := connect.TestService(t, "web", ca) + + l := NewUpstreamListener(svc, cfg, log.New(os.Stderr, "", log.LstdFlags)) + + // Run proxy + go func() { + err := l.Serve() + require.Nil(t, err) + }() + defer l.Close() + + // Proxy and fake remote service are running, play the part of the app + // connecting to a remote connect service over TCP. + conn, err := net.Dial("tcp", cfg.LocalBindAddress) + require.Nilf(t, err, "unexpected err: %s", err) + TestEchoConn(t, conn, "") +} diff --git a/connect/proxy/proxy.go b/connect/proxy/proxy.go new file mode 100644 index 000000000..bda6f3afb --- /dev/null +++ b/connect/proxy/proxy.go @@ -0,0 +1,134 @@ +package proxy + +import ( + "log" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/connect" +) + +// Proxy implements the built-in connect proxy. +type Proxy struct { + proxyID string + client *api.Client + cfgWatcher ConfigWatcher + stopChan chan struct{} + logger *log.Logger +} + +// NewFromConfigFile returns a Proxy instance configured just from a local file. +// This is intended mostly for development and bypasses the normal mechanisms +// for fetching config and certificates from the local agent. 
+func NewFromConfigFile(client *api.Client, filename string, + logger *log.Logger) (*Proxy, error) { + cfg, err := ParseConfigFile(filename) + if err != nil { + return nil, err + } + + service, err := connect.NewDevServiceFromCertFiles(cfg.ProxiedServiceID, + client, logger, cfg.DevCAFile, cfg.DevServiceCertFile, + cfg.DevServiceKeyFile) + if err != nil { + return nil, err + } + cfg.service = service + + p := &Proxy{ + proxyID: cfg.ProxyID, + client: client, + cfgWatcher: NewStaticConfigWatcher(cfg), + stopChan: make(chan struct{}), + logger: logger, + } + return p, nil +} + +// New returns a Proxy with the given id, consuming the provided (configured) +// agent. It is ready to Run(). +func New(client *api.Client, proxyID string, logger *log.Logger) (*Proxy, error) { + p := &Proxy{ + proxyID: proxyID, + client: client, + cfgWatcher: &AgentConfigWatcher{ + client: client, + proxyID: proxyID, + logger: logger, + }, + stopChan: make(chan struct{}), + logger: logger, + } + return p, nil +} + +// Serve the proxy instance until a fatal error occurs or proxy is closed. +func (p *Proxy) Serve() error { + + var cfg *Config + + // Watch for config changes (initial setup happens on first "change") + for { + select { + case newCfg := <-p.cfgWatcher.Watch(): + p.logger.Printf("[DEBUG] got new config") + if newCfg.service == nil { + p.logger.Printf("[ERR] new config has nil service") + continue + } + if cfg == nil { + // Initial setup + + newCfg.PublicListener.applyDefaults() + l := NewPublicListener(newCfg.service, newCfg.PublicListener, p.logger) + err := p.startListener("public listener", l) + if err != nil { + return err + } + } + + // TODO(banks) update/remove upstreams properly based on a diff with current. Can + // store a map of uc.String() to Listener here and then use it to only + // start one of each and stop/modify if changes occur. + for _, uc := range newCfg.Upstreams { + uc.applyDefaults() + uc.resolver = UpstreamResolverFromClient(p.client, uc) + + l := NewUpstreamListener(newCfg.service, uc, p.logger) + err := p.startListener(uc.String(), l) + if err != nil { + p.logger.Printf("[ERR] failed to start upstream %s: %s", uc.String(), + err) + } + } + cfg = newCfg + + case <-p.stopChan: + return nil + } + } +} + +// startPublicListener is run from the internal state machine loop +func (p *Proxy) startListener(name string, l *Listener) error { + go func() { + err := l.Serve() + if err != nil { + p.logger.Printf("[ERR] %s stopped with error: %s", name, err) + return + } + p.logger.Printf("[INFO] %s stopped", name) + }() + + go func() { + <-p.stopChan + l.Close() + }() + + return nil +} + +// Close stops the proxy and terminates all active connections. It must be +// called only once. +func (p *Proxy) Close() { + close(p.stopChan) +} diff --git a/connect/proxy/testdata/config-kitchensink.hcl b/connect/proxy/testdata/config-kitchensink.hcl new file mode 100644 index 000000000..2bda99791 --- /dev/null +++ b/connect/proxy/testdata/config-kitchensink.hcl @@ -0,0 +1,32 @@ +# Example proxy config with everything specified + +proxy_id = "foo" +token = "11111111-2222-3333-4444-555555555555" + +proxied_service_id = "web" +proxied_service_namespace = "default" + +# Assumes running consul in dev mode from the repo root... 
+dev_ca_file = "connect/testdata/ca1-ca-consul-internal.cert.pem" +dev_service_cert_file = "connect/testdata/ca1-svc-web.cert.pem" +dev_service_key_file = "connect/testdata/ca1-svc-web.key.pem" + +public_listener { + bind_address = ":9999" + local_service_address = "127.0.0.1:5000" +} + +upstreams = [ + { + local_bind_address = "127.0.0.1:6000" + destination_name = "db" + destination_namespace = "default" + destination_type = "service" + }, + { + local_bind_address = "127.0.0.1:6001" + destination_name = "geo-cache" + destination_namespace = "default" + destination_type = "prepared_query" + } +] diff --git a/connect/proxy/testing.go b/connect/proxy/testing.go new file mode 100644 index 000000000..9ed8c41c4 --- /dev/null +++ b/connect/proxy/testing.go @@ -0,0 +1,105 @@ +package proxy + +import ( + "fmt" + "io" + "log" + "net" + "sync/atomic" + + "github.com/hashicorp/consul/lib/freeport" + "github.com/mitchellh/go-testing-interface" + "github.com/stretchr/testify/require" +) + +// TestLocalBindAddrs returns n localhost address:port strings with free ports +// for binding test listeners to. +func TestLocalBindAddrs(t testing.T, n int) []string { + ports := freeport.GetT(t, n) + addrs := make([]string, n) + for i, p := range ports { + addrs[i] = fmt.Sprintf("localhost:%d", p) + } + return addrs +} + +// TestTCPServer is a simple TCP echo server for use during tests. +type TestTCPServer struct { + l net.Listener + stopped int32 + accepted, closed, active int32 +} + +// NewTestTCPServer opens as a listening socket on the given address and returns +// a TestTCPServer serving requests to it. The server is already started and can +// be stopped by calling Close(). +func NewTestTCPServer(t testing.T, addr string) (*TestTCPServer, error) { + l, err := net.Listen("tcp", addr) + if err != nil { + return nil, err + } + log.Printf("test tcp server listening on %s", addr) + s := &TestTCPServer{ + l: l, + } + go s.accept() + return s, nil +} + +// Close stops the server +func (s *TestTCPServer) Close() { + atomic.StoreInt32(&s.stopped, 1) + if s.l != nil { + s.l.Close() + } +} + +func (s *TestTCPServer) accept() error { + for { + conn, err := s.l.Accept() + if err != nil { + if atomic.LoadInt32(&s.stopped) == 1 { + log.Printf("test tcp echo server %s stopped", s.l.Addr()) + return nil + } + log.Printf("test tcp echo server %s failed: %s", s.l.Addr(), err) + return err + } + + atomic.AddInt32(&s.accepted, 1) + atomic.AddInt32(&s.active, 1) + + go func(c net.Conn) { + io.Copy(c, c) + atomic.AddInt32(&s.closed, 1) + atomic.AddInt32(&s.active, -1) + }(conn) + } +} + +// TestEchoConn attempts to write some bytes to conn and expects to read them +// back within a short timeout (10ms). If prefix is not empty we expect it to be +// poresent at the start of all echoed responses (for example to distinguish +// between multiple echo server instances). +func TestEchoConn(t testing.T, conn net.Conn, prefix string) { + t.Helper() + + // Write some bytes and read them back + n, err := conn.Write([]byte("Hello World")) + require.Equal(t, 11, n) + require.Nil(t, err) + + expectLen := 11 + len(prefix) + + buf := make([]byte, expectLen) + // read until our buffer is full - it might be separate packets if prefix is + // in use. 
+ got := 0 + for got < expectLen { + n, err = conn.Read(buf[got:]) + require.Nilf(t, err, "err: %s", err) + got += n + } + require.Equal(t, expectLen, got) + require.Equal(t, prefix+"Hello World", string(buf[:])) +} diff --git a/connect/resolver.go b/connect/resolver.go index 41dc70e82..9873fcdf1 100644 --- a/connect/resolver.go +++ b/connect/resolver.go @@ -10,7 +10,9 @@ import ( testing "github.com/mitchellh/go-testing-interface" ) -// Resolver is the interface implemented by a service discovery mechanism. +// Resolver is the interface implemented by a service discovery mechanism to get +// the address and identity of an instance to connect to via Connect as a +// client. type Resolver interface { // Resolve returns a single service instance to connect to. Implementations // may attempt to ensure the instance returned is currently available. It is @@ -19,7 +21,10 @@ type Resolver interface { // increases reliability. The context passed can be used to impose timeouts // which may or may not be respected by implementations that make network // calls to resolve the service. The addr returned is a string in any valid - // form for passing directly to `net.Dial("tcp", addr)`. + // form for passing directly to `net.Dial("tcp", addr)`. The certURI + // represents the identity of the service instance. It will be matched against + // the TLS certificate URI SAN presented by the server and the connection + // rejected if they don't match. Resolve(ctx context.Context) (addr string, certURI connect.CertURI, err error) } @@ -33,7 +38,8 @@ type StaticResolver struct { Addr string // CertURL is the _identity_ we expect the server to present in it's TLS - // certificate. It must be an exact match or the connection will be rejected. + // certificate. It must be an exact URI string match or the connection will be + // rejected. CertURI connect.CertURI } @@ -56,13 +62,14 @@ type ConsulResolver struct { // panic. Client *api.Client - // Namespace of the query target + // Namespace of the query target. Namespace string - // Name of the query target + // Name of the query target. Name string - // Type of the query target, + // Type of the query target. Should be one of the defined ConsulResolverType* + // constants. Currently defaults to ConsulResolverTypeService. Type int // Datacenter to resolve in, empty indicates agent's local DC. diff --git a/connect/resolver_test.go b/connect/resolver_test.go index 29a40e3d3..3ab439add 100644 --- a/connect/resolver_test.go +++ b/connect/resolver_test.go @@ -41,7 +41,6 @@ func TestStaticResolver_Resolve(t *testing.T) { } func TestConsulResolver_Resolve(t *testing.T) { - // Setup a local test agent to query agent := agent.NewTestAgent("test-consul", "") defer agent.Shutdown() diff --git a/connect/service.go b/connect/service.go index db83ce5aa..6bbda0807 100644 --- a/connect/service.go +++ b/connect/service.go @@ -3,6 +3,7 @@ package connect import ( "context" "crypto/tls" + "errors" "log" "net" "net/http" @@ -10,6 +11,7 @@ import ( "time" "github.com/hashicorp/consul/api" + "golang.org/x/net/http2" ) // Service represents a Consul service that accepts and/or connects via Connect. @@ -41,10 +43,17 @@ type Service struct { client *api.Client // serverTLSCfg is the (reloadable) TLS config we use for serving. - serverTLSCfg *ReloadableTLSConfig + serverTLSCfg *reloadableTLSConfig // clientTLSCfg is the (reloadable) TLS config we use for dialling. 
- clientTLSCfg *ReloadableTLSConfig + clientTLSCfg *reloadableTLSConfig + + // httpResolverFromAddr is a function that returns a Resolver from a string + // address for HTTP clients. It's privately pluggable to make testing easier + // but will default to a simple method to parse the host as a Consul DNS host. + // + // TODO(banks): write the proper implementation + httpResolverFromAddr func(addr string) (Resolver, error) logger *log.Logger } @@ -65,8 +74,8 @@ func NewServiceWithLogger(serviceID string, client *api.Client, client: client, logger: logger, } - s.serverTLSCfg = NewReloadableTLSConfig(defaultTLSConfig(serverVerifyCerts)) - s.clientTLSCfg = NewReloadableTLSConfig(defaultTLSConfig(clientVerifyCerts)) + s.serverTLSCfg = newReloadableTLSConfig(defaultTLSConfig(serverVerifyCerts)) + s.clientTLSCfg = newReloadableTLSConfig(defaultTLSConfig(clientVerifyCerts)) // TODO(banks) run the background certificate sync return s, nil @@ -86,12 +95,12 @@ func NewDevServiceFromCertFiles(serviceID string, client *api.Client, return nil, err } - // Note that NewReloadableTLSConfig makes a copy so we can re-use the same + // Note that newReloadableTLSConfig makes a copy so we can re-use the same // base for both client and server with swapped verifiers. tlsCfg.VerifyPeerCertificate = serverVerifyCerts - s.serverTLSCfg = NewReloadableTLSConfig(tlsCfg) + s.serverTLSCfg = newReloadableTLSConfig(tlsCfg) tlsCfg.VerifyPeerCertificate = clientVerifyCerts - s.clientTLSCfg = NewReloadableTLSConfig(tlsCfg) + s.clientTLSCfg = newReloadableTLSConfig(tlsCfg) return s, nil } @@ -121,6 +130,8 @@ func (s *Service) Dial(ctx context.Context, resolver Resolver) (net.Conn, error) if err != nil { return nil, err } + s.logger.Printf("[DEBUG] resolved service instance: %s (%s)", addr, + certURI.URI()) var dialer net.Dialer tcpConn, err := dialer.DialContext(ctx, "tcp", addr) if err != nil { @@ -133,8 +144,8 @@ func (s *Service) Dial(ctx context.Context, resolver Resolver) (net.Conn, error) if ok { tlsConn.SetDeadline(deadline) } - err = tlsConn.Handshake() - if err != nil { + // Perform handshake + if err = tlsConn.Handshake(); err != nil { tlsConn.Close() return nil, err } @@ -149,20 +160,27 @@ func (s *Service) Dial(ctx context.Context, resolver Resolver) (net.Conn, error) tlsConn.Close() return nil, err } - + s.logger.Printf("[DEBUG] successfully connected to %s (%s)", addr, + certURI.URI()) return tlsConn, nil } -// HTTPDialContext is compatible with http.Transport.DialContext. It expects the -// addr hostname to be specified using Consul DNS query syntax, e.g. +// HTTPDialTLS is compatible with http.Transport.DialTLS. It expects the addr +// hostname to be specified using Consul DNS query syntax, e.g. // "web.service.consul". It converts that into the equivalent ConsulResolver and // then call s.Dial with the resolver. This is low level, clients should // typically use HTTPClient directly. -func (s *Service) HTTPDialContext(ctx context.Context, network, +func (s *Service) HTTPDialTLS(network, addr string) (net.Conn, error) { - var r ConsulResolver - // TODO(banks): parse addr into ConsulResolver - return s.Dial(ctx, &r) + if s.httpResolverFromAddr == nil { + return nil, errors.New("no http resolver configured") + } + r, err := s.httpResolverFromAddr(addr) + if err != nil { + return nil, err + } + // TODO(banks): figure out how to do timeouts better. 
+ return s.Dial(context.Background(), r) } // HTTPClient returns an *http.Client configured to dial remote Consul Connect @@ -172,14 +190,27 @@ func (s *Service) HTTPDialContext(ctx context.Context, network, // API rather than just relying on Consul DNS. Hostnames that are not valid // Consul DNS queries will fail. func (s *Service) HTTPClient() *http.Client { + t := &http.Transport{ + // Sadly we can't use DialContext hook since that is expected to return a + // plain TCP connection an http.Client tries to start a TLS handshake over + // it. We need to control the handshake to be able to do our validation. + // So we have to use the older DialTLS which means no context/timeout + // support. + // + // TODO(banks): figure out how users can configure a timeout when using + // this and/or compatibility with http.Request.WithContext. + DialTLS: s.HTTPDialTLS, + } + // Need to manually re-enable http2 support since we set custom DialTLS. + // See https://golang.org/src/net/http/transport.go?s=8692:9036#L228 + http2.ConfigureTransport(t) return &http.Client{ - Transport: &http.Transport{ - DialContext: s.HTTPDialContext, - }, + Transport: t, } } // Close stops the service and frees resources. -func (s *Service) Close() { +func (s *Service) Close() error { // TODO(banks): stop background activity if started + return nil } diff --git a/connect/service_test.go b/connect/service_test.go index a2adfe7f1..7bc4c97f2 100644 --- a/connect/service_test.go +++ b/connect/service_test.go @@ -2,14 +2,22 @@ package connect import ( "context" + "crypto/tls" "fmt" + "io" + "io/ioutil" + "net/http" "testing" "time" "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/testutil/retry" "github.com/stretchr/testify/require" ) +// Assert io.Closer implementation +var _ io.Closer = new(Service) + func TestService_Dial(t *testing.T) { ca := connect.TestCA(t, nil) @@ -53,30 +61,26 @@ func TestService_Dial(t *testing.T) { t.Run(tt.name, func(t *testing.T) { require := require.New(t) - s, err := NewService("web", nil) - require.Nil(err) - - // Force TLSConfig - s.clientTLSCfg = NewReloadableTLSConfig(TestTLSConfig(t, "web", ca)) + s := TestService(t, "web", ca) ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond) defer cancel() - testSvc := NewTestService(t, tt.presentService, ca) - testSvc.TimeoutHandshake = !tt.handshake + testSvr := NewTestServer(t, tt.presentService, ca) + testSvr.TimeoutHandshake = !tt.handshake if tt.accept { go func() { - err := testSvc.Serve() + err := testSvr.Serve() require.Nil(err) }() - defer testSvc.Close() + defer testSvr.Close() } // Always expect to be connecting to a "DB" resolver := &StaticResolver{ - Addr: testSvc.Addr, + Addr: testSvr.Addr, CertURI: connect.TestSpiffeIDService(t, "db"), } @@ -92,6 +96,7 @@ func TestService_Dial(t *testing.T) { if tt.wantErr == "" { require.Nil(err) + require.IsType(&tls.Conn{}, conn) } else { require.NotNil(err) require.Contains(err.Error(), tt.wantErr) @@ -103,3 +108,62 @@ func TestService_Dial(t *testing.T) { }) } } + +func TestService_ServerTLSConfig(t *testing.T) { + // TODO(banks): it's mostly meaningless to test this now since we directly set + // the tlsCfg in our TestService helper which is all we'd be asserting on here + // not the actual implementation. Once agent tls fetching is built, it becomes + // more meaningful to actually verify it's returning the correct config. 
+} + +func TestService_HTTPClient(t *testing.T) { + require := require.New(t) + ca := connect.TestCA(t, nil) + + s := TestService(t, "web", ca) + + // Run a test HTTP server + testSvr := NewTestServer(t, "backend", ca) + defer testSvr.Close() + go func() { + err := testSvr.ServeHTTPS(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + w.Write([]byte("Hello, I am Backend")) + })) + require.Nil(t, err) + }() + + // TODO(banks): this will talk http2 on both client and server. I hit some + // compatibility issues when testing though need to make sure that the http + // server with our TLSConfig can actually support HTTP/1.1 as well. Could make + // this a table test with all 4 permutations of client/server http version + // support. + + // Still get connection refused some times so retry on those + retry.Run(t, func(r *retry.R) { + // Hook the service resolver to avoid needing full agent setup. + s.httpResolverFromAddr = func(addr string) (Resolver, error) { + // Require in this goroutine seems to block causing a timeout on the Get. + //require.Equal("https://backend.service.consul:443", addr) + return &StaticResolver{ + Addr: testSvr.Addr, + CertURI: connect.TestSpiffeIDService(t, "backend"), + }, nil + } + + client := s.HTTPClient() + client.Timeout = 1 * time.Second + + resp, err := client.Get("https://backend.service.consul/foo") + r.Check(err) + defer resp.Body.Close() + + bodyBytes, err := ioutil.ReadAll(resp.Body) + r.Check(err) + + got := string(bodyBytes) + want := "Hello, I am Backend" + if got != want { + r.Fatalf("got %s, want %s", got, want) + } + }) +} diff --git a/connect/testing.go b/connect/testing.go index f6fa438cf..235ff6001 100644 --- a/connect/testing.go +++ b/connect/testing.go @@ -5,26 +5,33 @@ import ( "crypto/x509" "fmt" "io" + "log" "net" + "net/http" "sync/atomic" "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/lib/freeport" testing "github.com/mitchellh/go-testing-interface" - "github.com/stretchr/testify/require" ) -// testVerifier creates a helper verifyFunc that can be set in a tls.Config and -// records calls made, passing back the certificates presented via the returned -// channel. The channel is buffered so up to 128 verification calls can be made -// without reading the chan before verification blocks. -func testVerifier(t testing.T, returnErr error) (verifyFunc, chan [][]byte) { - ch := make(chan [][]byte, 128) - return func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error { - ch <- rawCerts - return returnErr - }, ch +// TestService returns a Service instance based on a static TLS Config. +func TestService(t testing.T, service string, ca *structs.CARoot) *Service { + t.Helper() + + // Don't need to talk to client since we are setting TLSConfig locally + svc, err := NewService(service, nil) + if err != nil { + t.Fatal(err) + } + + svc.serverTLSCfg = newReloadableTLSConfig( + TestTLSConfigWithVerifier(t, service, ca, serverVerifyCerts)) + svc.clientTLSCfg = newReloadableTLSConfig( + TestTLSConfigWithVerifier(t, service, ca, clientVerifyCerts)) + + return svc } // TestTLSConfig returns a *tls.Config suitable for use during tests. 
@@ -32,7 +39,16 @@ func TestTLSConfig(t testing.T, service string, ca *structs.CARoot) *tls.Config t.Helper() // Insecure default (nil verifier) - cfg := defaultTLSConfig(nil) + return TestTLSConfigWithVerifier(t, service, ca, nil) +} + +// TestTLSConfigWithVerifier returns a *tls.Config suitable for use during +// tests, it will use the given verifyFunc to verify tls certificates. +func TestTLSConfigWithVerifier(t testing.T, service string, ca *structs.CARoot, + verifier verifyFunc) *tls.Config { + t.Helper() + + cfg := defaultTLSConfig(verifier) cfg.Certificates = []tls.Certificate{TestSvcKeyPair(t, service, ca)} cfg.RootCAs = TestCAPool(t, ca) cfg.ClientCAs = TestCAPool(t, ca) @@ -55,7 +71,9 @@ func TestSvcKeyPair(t testing.T, service string, ca *structs.CARoot) tls.Certifi t.Helper() certPEM, keyPEM := connect.TestLeaf(t, service, ca) cert, err := tls.X509KeyPair([]byte(certPEM), []byte(keyPEM)) - require.Nil(t, err) + if err != nil { + t.Fatal(err) + } return cert } @@ -65,13 +83,15 @@ func TestPeerCertificates(t testing.T, service string, ca *structs.CARoot) []*x5 t.Helper() certPEM, _ := connect.TestLeaf(t, service, ca) cert, err := connect.ParseCert(certPEM) - require.Nil(t, err) + if err != nil { + t.Fatal(err) + } return []*x509.Certificate{cert} } -// TestService runs a service listener that can be used to test clients. It's +// TestServer runs a service listener that can be used to test clients. It's // behaviour can be controlled by the struct members. -type TestService struct { +type TestServer struct { // The service name to serve. Service string // The (test) CA to use for generating certs. @@ -91,11 +111,11 @@ type TestService struct { stopChan chan struct{} } -// NewTestService returns a TestService. It should be closed when test is +// NewTestServer returns a TestServer. It should be closed when test is // complete. -func NewTestService(t testing.T, service string, ca *structs.CARoot) *TestService { +func NewTestServer(t testing.T, service string, ca *structs.CARoot) *TestServer { ports := freeport.GetT(t, 1) - return &TestService{ + return &TestServer{ Service: service, CA: ca, stopChan: make(chan struct{}), @@ -104,14 +124,16 @@ func NewTestService(t testing.T, service string, ca *structs.CARoot) *TestServic } } -// Serve runs a TestService and blocks until it is closed or errors. -func (s *TestService) Serve() error { +// Serve runs a tcp echo server and blocks until it is closed or errors. If +// TimeoutHandshake is set it won't start TLS handshake on new connections. +func (s *TestServer) Serve() error { // Just accept TCP conn but so we can control timing of accept/handshake l, err := net.Listen("tcp", s.Addr) if err != nil { return err } s.l = l + log.Printf("test connect service listening on %s", s.Addr) for { conn, err := s.l.Accept() @@ -122,12 +144,14 @@ func (s *TestService) Serve() error { return err } - // Ignore the conn if we are not actively ha + // Ignore the conn if we are not actively handshaking if !s.TimeoutHandshake { // Upgrade conn to TLS conn = tls.Server(conn, s.TLSCfg) // Run an echo service + log.Printf("test connect service accepted conn from %s, "+ + " running echo service", conn.RemoteAddr()) go io.Copy(conn, conn) } @@ -141,8 +165,20 @@ func (s *TestService) Serve() error { return nil } -// Close stops a TestService -func (s *TestService) Close() { +// ServeHTTPS runs an HTTPS server with the given config. It invokes the passed +// Handler for all requests. 
+func (s *TestServer) ServeHTTPS(h http.Handler) error { + srv := http.Server{ + Addr: s.Addr, + TLSConfig: s.TLSCfg, + Handler: h, + } + log.Printf("starting test connect HTTPS server on %s", s.Addr) + return srv.ListenAndServeTLS("", "") +} + +// Close stops a TestServer +func (s *TestServer) Close() error { old := atomic.SwapInt32(&s.stopFlag, 1) if old == 0 { if s.l != nil { @@ -150,4 +186,5 @@ func (s *TestService) Close() { } close(s.stopChan) } + return nil } diff --git a/connect/tls.go b/connect/tls.go index 8d3bc3a94..89d5ccb54 100644 --- a/connect/tls.go +++ b/connect/tls.go @@ -42,27 +42,27 @@ func defaultTLSConfig(verify verifyFunc) *tls.Config { } } -// ReloadableTLSConfig exposes a tls.Config that can have it's certificates +// reloadableTLSConfig exposes a tls.Config that can have it's certificates // reloaded. On a server, this uses GetConfigForClient to pass the current // tls.Config or client certificate for each acceptted connection. On a client, // this uses GetClientCertificate to provide the current client certificate. -type ReloadableTLSConfig struct { +type reloadableTLSConfig struct { mu sync.Mutex // cfg is the current config to use for new connections cfg *tls.Config } -// NewReloadableTLSConfig returns a reloadable config currently set to base. -func NewReloadableTLSConfig(base *tls.Config) *ReloadableTLSConfig { - c := &ReloadableTLSConfig{} +// newReloadableTLSConfig returns a reloadable config currently set to base. +func newReloadableTLSConfig(base *tls.Config) *reloadableTLSConfig { + c := &reloadableTLSConfig{} c.SetTLSConfig(base) return c } // TLSConfig returns a *tls.Config that will dynamically load certs. It's // suitable for use in either a client or server. -func (c *ReloadableTLSConfig) TLSConfig() *tls.Config { +func (c *reloadableTLSConfig) TLSConfig() *tls.Config { c.mu.Lock() cfgCopy := c.cfg c.mu.Unlock() @@ -71,7 +71,7 @@ func (c *ReloadableTLSConfig) TLSConfig() *tls.Config { // SetTLSConfig sets the config used for future connections. It is safe to call // from any goroutine. -func (c *ReloadableTLSConfig) SetTLSConfig(cfg *tls.Config) error { +func (c *reloadableTLSConfig) SetTLSConfig(cfg *tls.Config) error { copy := cfg.Clone() copy.GetClientCertificate = func(*tls.CertificateRequestInfo) (*tls.Certificate, error) { current := c.TLSConfig() diff --git a/connect/tls_test.go b/connect/tls_test.go index 3605f22db..64c473c1e 100644 --- a/connect/tls_test.go +++ b/connect/tls_test.go @@ -10,10 +10,9 @@ import ( func TestReloadableTLSConfig(t *testing.T) { require := require.New(t) - verify, _ := testVerifier(t, nil) - base := defaultTLSConfig(verify) + base := defaultTLSConfig(nil) - c := NewReloadableTLSConfig(base) + c := newReloadableTLSConfig(base) // The dynamic config should be the one we loaded (with some different hooks) got := c.TLSConfig() From adc5589329a8b3b1809208d18ce6e0b8c002641c Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 5 Apr 2018 12:41:49 +0100 Subject: [PATCH 113/539] Allow duplicate source or destination, but enforce uniqueness across all four. 
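In short: the `source` and `destination` memdb indexes on the intentions table become non-unique, a new unique compound `source_destination` index covers the full (SourceNS, SourceName, DestinationNS, DestinationName) tuple, `intentionSetTxn` rejects writes that collide on that tuple, and precedence sorting gains a deterministic lexicographic tie-break for rules of equal precedence.

The following is a rough, self-contained illustration of the uniqueness rule only. It uses a plain map and a hypothetical `key` helper rather than the memdb index and `tx.First` lookup the patch actually adds, so treat it as a sketch of the invariant, not the implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// intention is a trimmed stand-in for structs.Intention carrying only the
// four fields that participate in the uniqueness check.
type intention struct {
	SourceNS, SourceName, DestinationNS, DestinationName string
}

// key builds the canonical 4-tuple key. Values are lowercased, mirroring the
// Lowercase: true setting on the compound StringFieldIndex in the schema.
func key(ixn intention) string {
	return strings.ToLower(strings.Join([]string{
		ixn.SourceNS, ixn.SourceName, ixn.DestinationNS, ixn.DestinationName,
	}, "/"))
}

func main() {
	seen := map[string]bool{}
	for _, ixn := range []intention{
		{"default", "*", "default", "web"},   // wildcard source to web
		{"default", "api", "default", "web"}, // same destination, different source: allowed
		{"default", "*", "default", "web"},   // exact duplicate 4-tuple: rejected
	} {
		k := key(ixn)
		if seen[k] {
			fmt.Println("duplicate intention found:", k)
			continue
		}
		seen[k] = true
		fmt.Println("stored:", k)
	}
}
```

With the compound index in place the equivalent check in `intentionSetTxn` is a single `tx.First(intentionsTableName, "source_destination", ...)` lookup, as shown in the diff below.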
--- agent/consul/state/intention.go | 44 ++++++++++++- agent/consul/state/intention_test.go | 96 +++++++++++++++++++++++++--- agent/structs/intention.go | 23 ++++++- agent/structs/intention_test.go | 24 +++++++ 4 files changed, 174 insertions(+), 13 deletions(-) diff --git a/agent/consul/state/intention.go b/agent/consul/state/intention.go index bc8bb0213..907bdf1ab 100644 --- a/agent/consul/state/intention.go +++ b/agent/consul/state/intention.go @@ -29,7 +29,9 @@ func intentionsTableSchema() *memdb.TableSchema { "destination": &memdb.IndexSchema{ Name: "destination", AllowMissing: true, - Unique: true, + // This index is not unique since we need uniqueness across the whole + // 4-tuple. + Unique: false, Indexer: &memdb.CompoundIndex{ Indexes: []memdb.Indexer{ &memdb.StringFieldIndex{ @@ -46,6 +48,25 @@ func intentionsTableSchema() *memdb.TableSchema { "source": &memdb.IndexSchema{ Name: "source", AllowMissing: true, + // This index is not unique since we need uniqueness across the whole + // 4-tuple. + Unique: false, + Indexer: &memdb.CompoundIndex{ + Indexes: []memdb.Indexer{ + &memdb.StringFieldIndex{ + Field: "SourceNS", + Lowercase: true, + }, + &memdb.StringFieldIndex{ + Field: "SourceName", + Lowercase: true, + }, + }, + }, + }, + "source_destination": &memdb.IndexSchema{ + Name: "source_destination", + AllowMissing: true, Unique: true, Indexer: &memdb.CompoundIndex{ Indexes: []memdb.Indexer{ @@ -57,6 +78,14 @@ func intentionsTableSchema() *memdb.TableSchema { Field: "SourceName", Lowercase: true, }, + &memdb.StringFieldIndex{ + Field: "DestinationNS", + Lowercase: true, + }, + &memdb.StringFieldIndex{ + Field: "DestinationName", + Lowercase: true, + }, }, }, }, @@ -142,7 +171,7 @@ func (s *Store) intentionSetTxn(tx *memdb.Txn, idx uint64, ixn *structs.Intentio // Check for an existing intention existing, err := tx.First(intentionsTableName, "id", ixn.ID) if err != nil { - return fmt.Errorf("failed intention looup: %s", err) + return fmt.Errorf("failed intention lookup: %s", err) } if existing != nil { oldIxn := existing.(*structs.Intention) @@ -153,6 +182,17 @@ func (s *Store) intentionSetTxn(tx *memdb.Txn, idx uint64, ixn *structs.Intentio } ixn.ModifyIndex = idx + // Check for duplicates on the 4-tuple. + duplicate, err := tx.First(intentionsTableName, "source_destination", + ixn.SourceNS, ixn.SourceName, ixn.DestinationNS, ixn.DestinationName) + if err != nil { + return fmt.Errorf("failed intention lookup: %s", err) + } + if duplicate != nil { + dupIxn := duplicate.(*structs.Intention) + return fmt.Errorf("duplicate intention found: %s", dupIxn.String()) + } + // We always force meta to be non-nil so that we its an empty map. // This makes it easy for API responses to not nil-check this everywhere. 
if ixn.Meta == nil { diff --git a/agent/consul/state/intention_test.go b/agent/consul/state/intention_test.go index d4c63647a..743f698af 100644 --- a/agent/consul/state/intention_test.go +++ b/agent/consul/state/intention_test.go @@ -7,6 +7,7 @@ import ( "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-memdb" "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" ) func TestStore_IntentionGet_none(t *testing.T) { @@ -32,21 +33,29 @@ func TestStore_IntentionSetGet_basic(t *testing.T) { // Build a valid intention ixn := &structs.Intention{ - ID: testUUID(), - Meta: map[string]string{}, + ID: testUUID(), + SourceNS: "default", + SourceName: "*", + DestinationNS: "default", + DestinationName: "web", + Meta: map[string]string{}, } // Inserting a with empty ID is disallowed. assert.Nil(s.IntentionSet(1, ixn)) // Make sure the index got updated. - assert.Equal(s.maxIndex(intentionsTableName), uint64(1)) + assert.Equal(uint64(1), s.maxIndex(intentionsTableName)) assert.True(watchFired(ws), "watch fired") // Read it back out and verify it. expected := &structs.Intention{ - ID: ixn.ID, - Meta: map[string]string{}, + ID: ixn.ID, + SourceNS: "default", + SourceName: "*", + DestinationNS: "default", + DestinationName: "web", + Meta: map[string]string{}, RaftIndex: structs.RaftIndex{ CreateIndex: 1, ModifyIndex: 1, @@ -64,7 +73,7 @@ func TestStore_IntentionSetGet_basic(t *testing.T) { assert.Nil(s.IntentionSet(2, ixn)) // Make sure the index got updated. - assert.Equal(s.maxIndex(intentionsTableName), uint64(2)) + assert.Equal(uint64(2), s.maxIndex(intentionsTableName)) assert.True(watchFired(ws), "watch fired") // Read it back and verify the data was updated @@ -75,6 +84,24 @@ func TestStore_IntentionSetGet_basic(t *testing.T) { assert.Nil(err) assert.Equal(expected.ModifyIndex, idx) assert.Equal(expected, actual) + + // Attempt to insert another intention with duplicate 4-tuple + ixn = &structs.Intention{ + ID: testUUID(), + SourceNS: "default", + SourceName: "*", + DestinationNS: "default", + DestinationName: "web", + Meta: map[string]string{}, + } + + // Duplicate 4-tuple should cause an error + ws = memdb.NewWatchSet() + assert.NotNil(s.IntentionSet(3, ixn)) + + // Make sure the index did NOT get updated. + assert.Equal(uint64(2), s.maxIndex(intentionsTableName)) + assert.False(watchFired(ws), "watch not fired") } func TestStore_IntentionSet_emptyId(t *testing.T) { @@ -305,6 +332,31 @@ func TestStore_IntentionMatch_table(t *testing.T) { }, }, }, + + { + "single exact namespace/name with duplicate destinations", + [][]string{ + // 4-tuple specifies src and destination to test duplicate destinations + // with different sources. We flip them around to test in both + // directions. The first pair are the ones searched on in both cases so + // the duplicates need to be there. + {"foo", "bar", "foo", "*"}, + {"foo", "bar", "bar", "*"}, + {"*", "*", "*", "*"}, + }, + [][]string{ + {"foo", "bar"}, + }, + [][][]string{ + { + // Note the first two have the same precedence so we rely on arbitrary + // lexicographical tie-break behaviour. 
+ {"foo", "bar", "bar", "*"}, + {"foo", "bar", "foo", "*"}, + {"*", "*", "*", "*"}, + }, + }, + }, } // testRunner implements the test for a single case, but can be @@ -321,9 +373,17 @@ func TestStore_IntentionMatch_table(t *testing.T) { case structs.IntentionMatchDestination: ixn.DestinationNS = v[0] ixn.DestinationName = v[1] + if len(v) == 4 { + ixn.SourceNS = v[2] + ixn.SourceName = v[3] + } case structs.IntentionMatchSource: ixn.SourceNS = v[0] ixn.SourceName = v[1] + if len(v) == 4 { + ixn.DestinationNS = v[2] + ixn.DestinationName = v[3] + } } assert.Nil(s.IntentionSet(idx, ixn)) @@ -345,7 +405,7 @@ func TestStore_IntentionMatch_table(t *testing.T) { assert.Nil(err) // Should have equal lengths - assert.Len(matches, len(tc.Expected)) + require.Len(t, matches, len(tc.Expected)) // Verify matches for i, expected := range tc.Expected { @@ -353,9 +413,27 @@ func TestStore_IntentionMatch_table(t *testing.T) { for _, ixn := range matches[i] { switch typ { case structs.IntentionMatchDestination: - actual = append(actual, []string{ixn.DestinationNS, ixn.DestinationName}) + if len(expected) > 1 && len(expected[0]) == 4 { + actual = append(actual, []string{ + ixn.DestinationNS, + ixn.DestinationName, + ixn.SourceNS, + ixn.SourceName, + }) + } else { + actual = append(actual, []string{ixn.DestinationNS, ixn.DestinationName}) + } case structs.IntentionMatchSource: - actual = append(actual, []string{ixn.SourceNS, ixn.SourceName}) + if len(expected) > 1 && len(expected[0]) == 4 { + actual = append(actual, []string{ + ixn.SourceNS, + ixn.SourceName, + ixn.DestinationNS, + ixn.DestinationName, + }) + } else { + actual = append(actual, []string{ixn.SourceNS, ixn.SourceName}) + } } } diff --git a/agent/structs/intention.go b/agent/structs/intention.go index d801635c9..316c9632b 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -166,7 +166,7 @@ func (x *Intention) GetACLPrefix() (string, bool) { // String returns a human-friendly string for this intention. func (x *Intention) String() string { - return fmt.Sprintf("%s %s/%s => %s/%s (ID: %s", + return fmt.Sprintf("%s %s/%s => %s/%s (ID: %s)", strings.ToUpper(string(x.Action)), x.SourceNS, x.SourceName, x.DestinationNS, x.DestinationName, @@ -305,7 +305,26 @@ func (s IntentionPrecedenceSorter) Less(i, j int) bool { // Next test the # of exact values in source aExact = s.countExact(a.SourceNS, a.SourceName) bExact = s.countExact(b.SourceNS, b.SourceName) - return aExact > bExact + if aExact != bExact { + return aExact > bExact + } + + // Tie break on lexicographic order of the 4-tuple in canonical form (SrcNS, + // Src, DstNS, Dst). This is arbitrary but it keeps sorting deterministic + // which is a nice property for consistency. It is arguably open to abuse if + // implementations rely on this however by definition the order among + // same-precedence rules is arbitrary and doesn't affect whether an allow or + // deny rule is acted on since all applicable rules are checked. 
+ if a.SourceNS != b.SourceNS { + return a.SourceNS < b.SourceNS + } + if a.SourceName != b.SourceName { + return a.SourceName < b.SourceName + } + if a.DestinationNS != b.DestinationNS { + return a.DestinationNS < b.DestinationNS + } + return a.DestinationName < b.DestinationName } // countExact counts the number of exact values (not wildcards) in diff --git a/agent/structs/intention_test.go b/agent/structs/intention_test.go index 948ae920e..cda88632f 100644 --- a/agent/structs/intention_test.go +++ b/agent/structs/intention_test.go @@ -192,6 +192,30 @@ func TestIntentionPrecedenceSorter(t *testing.T) { {"*", "*", "*", "*"}, }, }, + { + "tiebreak deterministically", + [][]string{ + {"a", "*", "a", "b"}, + {"a", "*", "a", "a"}, + {"b", "a", "a", "a"}, + {"a", "b", "a", "a"}, + {"a", "a", "b", "a"}, + {"a", "a", "a", "b"}, + {"a", "a", "a", "a"}, + }, + [][]string{ + // Exact matches first in lexicographical order (arbitrary but + // deterministic) + {"a", "a", "a", "a"}, + {"a", "a", "a", "b"}, + {"a", "a", "b", "a"}, + {"a", "b", "a", "a"}, + {"b", "a", "a", "a"}, + // Wildcards next, lexicographical + {"a", "*", "a", "a"}, + {"a", "*", "a", "b"}, + }, + }, } for _, tc := range cases { From 280382c25fa13632626e6e798f4c9054150cadea Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 5 Apr 2018 12:53:42 +0100 Subject: [PATCH 114/539] Add tests all the way up through the endpoints to ensure duplicate src/destination is supported and so ultimately deny/allow nesting works. Also adds a sanity check test for `api.Agent().ConnectAuthorize()` and a fix for a trivial bug in it. --- agent/agent_endpoint_test.go | 78 +++++++++++++++++++++++++ agent/consul/intention_endpoint_test.go | 35 +++++++---- agent/intentions_endpoint_test.go | 33 +++++++---- api/agent.go | 2 +- api/agent_test.go | 23 ++++++++ 5 files changed, 148 insertions(+), 23 deletions(-) diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 1b017fa78..1c6f7d830 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2293,6 +2293,84 @@ func TestAgentConnectAuthorize_deny(t *testing.T) { assert.Contains(obj.Reason, "Matched") } +func TestAgentConnectAuthorize_denyWildcard(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + target := "db" + + // Create some intentions + { + // Deny wildcard to DB + req := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + req.Intention.SourceNS = structs.IntentionDefaultNamespace + req.Intention.SourceName = "*" + req.Intention.DestinationNS = structs.IntentionDefaultNamespace + req.Intention.DestinationName = target + req.Intention.Action = structs.IntentionActionDeny + + var reply string + assert.Nil(a.RPC("Intention.Apply", &req, &reply)) + } + { + // Allow web to DB + req := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + req.Intention.SourceNS = structs.IntentionDefaultNamespace + req.Intention.SourceName = "web" + req.Intention.DestinationNS = structs.IntentionDefaultNamespace + req.Intention.DestinationName = target + req.Intention.Action = structs.IntentionActionAllow + + var reply string + assert.Nil(a.RPC("Intention.Apply", &req, &reply)) + } + + // Web should be allowed + { + args := &structs.ConnectAuthorizeRequest{ + Target: target, + ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), + } + req, _ := 
http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) + resp := httptest.NewRecorder() + respRaw, err := a.srv.AgentConnectAuthorize(resp, req) + assert.Nil(err) + assert.Equal(200, resp.Code) + + obj := respRaw.(*connectAuthorizeResp) + assert.True(obj.Authorized) + assert.Contains(obj.Reason, "Matched") + } + + // API should be denied + { + args := &structs.ConnectAuthorizeRequest{ + Target: target, + ClientCertURI: connect.TestSpiffeIDService(t, "api").URI().String(), + } + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) + resp := httptest.NewRecorder() + respRaw, err := a.srv.AgentConnectAuthorize(resp, req) + assert.Nil(err) + assert.Equal(200, resp.Code) + + obj := respRaw.(*connectAuthorizeResp) + assert.False(obj.Authorized) + assert.Contains(obj.Reason, "Matched") + } +} + // Test that authorize fails without service:write for the target service. func TestAgentConnectAuthorize_serviceWrite(t *testing.T) { t.Parallel() diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index a1e1ae751..dfac4fc45 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -830,12 +830,13 @@ func TestIntentionMatch_good(t *testing.T) { // Create some records { insert := [][]string{ - {"foo", "*"}, - {"foo", "bar"}, - {"foo", "baz"}, // shouldn't match - {"bar", "bar"}, // shouldn't match - {"bar", "*"}, // shouldn't match - {"*", "*"}, + {"foo", "*", "foo", "*"}, + {"foo", "*", "foo", "bar"}, + {"foo", "*", "foo", "baz"}, // shouldn't match + {"foo", "*", "bar", "bar"}, // shouldn't match + {"foo", "*", "bar", "*"}, // shouldn't match + {"foo", "*", "*", "*"}, + {"bar", "*", "foo", "bar"}, // duplicate destination different source } for _, v := range insert { @@ -843,10 +844,10 @@ func TestIntentionMatch_good(t *testing.T) { Datacenter: "dc1", Op: structs.IntentionOpCreate, Intention: &structs.Intention{ - SourceNS: "default", - SourceName: "test", - DestinationNS: v[0], - DestinationName: v[1], + SourceNS: v[0], + SourceName: v[1], + DestinationNS: v[2], + DestinationName: v[3], Action: structs.IntentionActionAllow, }, } @@ -874,10 +875,20 @@ func TestIntentionMatch_good(t *testing.T) { assert.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Match", req, &resp)) assert.Len(resp.Matches, 1) - expected := [][]string{{"foo", "bar"}, {"foo", "*"}, {"*", "*"}} + expected := [][]string{ + {"bar", "*", "foo", "bar"}, + {"foo", "*", "foo", "bar"}, + {"foo", "*", "foo", "*"}, + {"foo", "*", "*", "*"}, + } var actual [][]string for _, ixn := range resp.Matches[0] { - actual = append(actual, []string{ixn.DestinationNS, ixn.DestinationName}) + actual = append(actual, []string{ + ixn.SourceNS, + ixn.SourceName, + ixn.DestinationNS, + ixn.DestinationName, + }) } assert.Equal(expected, actual) } diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go index 4df0bf312..d4d68f26c 100644 --- a/agent/intentions_endpoint_test.go +++ b/agent/intentions_endpoint_test.go @@ -74,12 +74,13 @@ func TestIntentionsMatch_basic(t *testing.T) { // Create some intentions { insert := [][]string{ - {"foo", "*"}, - {"foo", "bar"}, - {"foo", "baz"}, // shouldn't match - {"bar", "bar"}, // shouldn't match - {"bar", "*"}, // shouldn't match - {"*", "*"}, + {"foo", "*", "foo", "*"}, + {"foo", "*", "foo", "bar"}, + {"foo", "*", "foo", "baz"}, // shouldn't match + {"foo", "*", "bar", "bar"}, // shouldn't match + {"foo", "*", "bar", "*"}, // shouldn't match + {"foo", "*", "*", 
"*"}, + {"bar", "*", "foo", "bar"}, // duplicate destination different source } for _, v := range insert { @@ -88,8 +89,10 @@ func TestIntentionsMatch_basic(t *testing.T) { Op: structs.IntentionOpCreate, Intention: structs.TestIntention(t), } - ixn.Intention.DestinationNS = v[0] - ixn.Intention.DestinationName = v[1] + ixn.Intention.SourceNS = v[0] + ixn.Intention.SourceName = v[1] + ixn.Intention.DestinationNS = v[2] + ixn.Intention.DestinationName = v[3] // Create var reply string @@ -108,9 +111,19 @@ func TestIntentionsMatch_basic(t *testing.T) { assert.Len(value, 1) var actual [][]string - expected := [][]string{{"foo", "bar"}, {"foo", "*"}, {"*", "*"}} + expected := [][]string{ + {"bar", "*", "foo", "bar"}, + {"foo", "*", "foo", "bar"}, + {"foo", "*", "foo", "*"}, + {"foo", "*", "*", "*"}, + } for _, ixn := range value["foo/bar"] { - actual = append(actual, []string{ixn.DestinationNS, ixn.DestinationName}) + actual = append(actual, []string{ + ixn.SourceNS, + ixn.SourceName, + ixn.DestinationNS, + ixn.DestinationName, + }) } assert.Equal(expected, actual) diff --git a/api/agent.go b/api/agent.go index 50d334d71..6b662fa2c 100644 --- a/api/agent.go +++ b/api/agent.go @@ -530,7 +530,7 @@ func (a *Agent) ConnectAuthorize(auth *AgentAuthorizeParams) (*AgentAuthorize, e if err != nil { return nil, err } - resp.Body.Close() + defer resp.Body.Close() var out AgentAuthorize if err := decodeBody(resp, &out); err != nil { diff --git a/api/agent_test.go b/api/agent_test.go index 653512be9..6186bffe3 100644 --- a/api/agent_test.go +++ b/api/agent_test.go @@ -996,3 +996,26 @@ func TestAPI_AgentConnectCARoots_empty(t *testing.T) { require.Equal(uint64(0), meta.LastIndex) require.Len(list.Roots, 0) } + +// TODO(banks): once we have CA stuff setup properly we can probably make this +// much more complete. This is just a sanity check that the agent code basically +// works. +func TestAPI_AgentConnectAuthorize(t *testing.T) { + t.Parallel() + + require := require.New(t) + c, s := makeClient(t) + defer s.Stop() + + agent := c.Agent() + params := &AgentAuthorizeParams{ + Target: "foo", + ClientCertSerial: "fake", + // Importing connect.TestSpiffeIDService creates an import cycle + ClientCertURI: "spiffe://123.consul/ns/default/dc/ny1/svc/web", + } + auth, err := agent.ConnectAuthorize(params) + require.Nil(err) + require.True(auth.Authorized) + require.Equal(auth.Reason, "ACLs disabled, access is allowed by default") +} From 78e48fd547a5e4b4abffd6e665de29c4ae492a0c Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Mon, 16 Apr 2018 16:00:20 +0100 Subject: [PATCH 115/539] Added connect proxy config and local agent state setup on boot. 
--- agent/agent.go | 79 +++++++++++++++ agent/agent_test.go | 102 +++++++++++++++++++ agent/config/builder.go | 82 +++++++++++++++ agent/config/config.go | 43 ++++++++ agent/config/runtime.go | 35 +++++++ agent/config/runtime_test.go | 79 ++++++++++++++- agent/local/state.go | 181 ++++++++++++++++++++++++++++++++-- agent/local/state_test.go | 129 ++++++++++++++++++++++++ agent/structs/connect.go | 76 ++++++++++++++ agent/structs/connect_test.go | 115 +++++++++++++++++++++ 10 files changed, 911 insertions(+), 10 deletions(-) create mode 100644 agent/structs/connect_test.go diff --git a/agent/agent.go b/agent/agent.go index 4410ff293..b988029ce 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -246,6 +246,8 @@ func LocalConfig(cfg *config.RuntimeConfig) local.Config { NodeID: cfg.NodeID, NodeName: cfg.NodeName, TaggedAddresses: map[string]string{}, + ProxyBindMinPort: cfg.ConnectProxyBindMinPort, + ProxyBindMaxPort: cfg.ConnectProxyBindMaxPort, } for k, v := range cfg.TaggedAddresses { lc.TaggedAddresses[k] = v @@ -328,6 +330,9 @@ func (a *Agent) Start() error { if err := a.loadServices(c); err != nil { return err } + if err := a.loadProxies(c); err != nil { + return err + } if err := a.loadChecks(c); err != nil { return err } @@ -1973,6 +1978,58 @@ func (a *Agent) RemoveCheck(checkID types.CheckID, persist bool) error { return nil } +// AddProxy adds a new local Connect Proxy instance to be managed by the agent. +// +// It REQUIRES that the service that is being proxied is already present in the +// local state. Note that this is only used for agent-managed proxies so we can +// ensure that we always make this true. For externally managed and registered +// proxies we explicitly allow the proxy to be registered first to make +// bootstrap ordering of a new service simpler but the same is not true here +// since this is only ever called when setting up a _managed_ proxy which was +// registered as part of a service registration either from config or HTTP API +// call. +func (a *Agent) AddProxy(proxy *structs.ConnectManagedProxy, persist bool) error { + // Lookup the target service token in state if there is one. + token := a.State.ServiceToken(proxy.TargetServiceID) + + // Add the proxy to local state first since we may need to assign a port which + // needs to be coordinate under state lock. AddProxy will generate the + // NodeService for the proxy populated with the allocated (or configured) port + // and an ID, but it doesn't add it to the agent directly since that could + // deadlock and we may need to coordinate adding it and persisting etc. + proxyService, err := a.State.AddProxy(proxy, token) + if err != nil { + return err + } + + // TODO(banks): register proxy health checks. + err = a.AddService(proxyService, nil, persist, token) + if err != nil { + // Remove the state too + a.State.RemoveProxy(proxyService.ID) + return err + } + + // TODO(banks): persist some of the local proxy state (not the _proxy_ token). + return nil +} + +// RemoveProxy stops and removes a local proxy instance. 
+func (a *Agent) RemoveProxy(proxyID string, persist bool) error { + // Validate proxyID + if proxyID == "" { + return fmt.Errorf("proxyID missing") + } + + if err := a.State.RemoveProxy(proxyID); err != nil { + return err + } + + // TODO(banks): unpersist proxy + + return nil +} + func (a *Agent) cancelCheckMonitors(checkID types.CheckID) { // Stop any monitors delete(a.checkReapAfter, checkID) @@ -2366,6 +2423,25 @@ func (a *Agent) unloadChecks() error { return nil } +// loadProxies will load connect proxy definitions from configuration and +// persisted definitions on disk, and load them into the local agent. +func (a *Agent) loadProxies(conf *config.RuntimeConfig) error { + for _, proxy := range conf.ConnectProxies { + if err := a.AddProxy(proxy, false); err != nil { + return fmt.Errorf("failed adding proxy: %s", err) + } + } + + // TODO(banks): persist proxy state and re-load it here? + return nil +} + +// unloadProxies will deregister all proxies known to the local agent. +func (a *Agent) unloadProxies() error { + // TODO(banks): implement me + return nil +} + // snapshotCheckState is used to snapshot the current state of the health // checks. This is done before we reload our checks, so that we can properly // restore into the same state. @@ -2514,6 +2590,9 @@ func (a *Agent) ReloadConfig(newCfg *config.RuntimeConfig) error { if err := a.loadServices(newCfg); err != nil { return fmt.Errorf("Failed reloading services: %s", err) } + if err := a.loadProxies(newCfg); err != nil { + return fmt.Errorf("Failed reloading proxies: %s", err) + } if err := a.loadChecks(newCfg); err != nil { return fmt.Errorf("Failed reloading checks: %s", err) } diff --git a/agent/agent_test.go b/agent/agent_test.go index df1593bd9..2ee42d7db 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -15,6 +15,8 @@ import ( "testing" "time" + "github.com/stretchr/testify/require" + "github.com/hashicorp/consul/agent/checks" "github.com/hashicorp/consul/agent/consul" "github.com/hashicorp/consul/agent/structs" @@ -2235,3 +2237,103 @@ func TestAgent_reloadWatchesHTTPS(t *testing.T) { t.Fatalf("bad: %s", err) } } + +func TestAgent_AddProxy(t *testing.T) { + t.Parallel() + a := NewTestAgent(t.Name(), ` + node_name = "node1" + `) + defer a.Shutdown() + + // Register a target service we can use + reg := &structs.NodeService{ + Service: "web", + Port: 8080, + } + require.NoError(t, a.AddService(reg, nil, false, "")) + + tests := []struct { + desc string + proxy *structs.ConnectManagedProxy + wantErr bool + }{ + { + desc: "basic proxy adding, unregistered service", + proxy: &structs.ConnectManagedProxy{ + ExecMode: structs.ProxyExecModeDaemon, + Command: "consul connect proxy", + Config: map[string]interface{}{ + "foo": "bar", + }, + TargetServiceID: "db", // non-existent service. + }, + // Target service must be registered. + wantErr: true, + }, + { + desc: "basic proxy adding, registered service", + proxy: &structs.ConnectManagedProxy{ + ExecMode: structs.ProxyExecModeDaemon, + Command: "consul connect proxy", + Config: map[string]interface{}{ + "foo": "bar", + }, + TargetServiceID: "web", + }, + wantErr: false, + }, + } + + for _, tt := range tests { + t.Run(tt.desc, func(t *testing.T) { + require := require.New(t) + + err := a.AddProxy(tt.proxy, false) + if tt.wantErr { + require.Error(err) + return + } + require.NoError(err) + + // Test the ID was created as we expect.
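+ // (State.AddProxy names the proxy service "<target service ID>-proxy",
+ // hence "web-proxy" for the "web" service registered above.)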
+ got := a.State.Proxy("web-proxy") + require.Equal(tt.proxy, got) + }) + } +} + +func TestAgent_RemoveProxy(t *testing.T) { + t.Parallel() + a := NewTestAgent(t.Name(), ` + node_name = "node1" + `) + defer a.Shutdown() + require := require.New(t) + + // Register a target service we can use + reg := &structs.NodeService{ + Service: "web", + Port: 8080, + } + require.NoError(a.AddService(reg, nil, false, "")) + + // Add a proxy for web + pReg := &structs.ConnectManagedProxy{ + TargetServiceID: "web", + } + require.NoError(a.AddProxy(pReg, false)) + + // Test the ID was created as we expect. + gotProxy := a.State.Proxy("web-proxy") + require.Equal(pReg, gotProxy) + + err := a.RemoveProxy("web-proxy", false) + require.NoError(err) + + gotProxy = a.State.Proxy("web-proxy") + require.Nil(gotProxy) + + // Removing invalid proxy should be an error + err = a.RemoveProxy("foobar", false) + require.Error(err) +} diff --git a/agent/config/builder.go b/agent/config/builder.go index 6048dab92..a6338ae14 100644 --- a/agent/config/builder.go +++ b/agent/config/builder.go @@ -322,8 +322,15 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) { } var services []*structs.ServiceDefinition + var proxies []*structs.ConnectManagedProxy for _, service := range c.Services { services = append(services, b.serviceVal(&service)) + // Register any connect proxies requested + if proxy := b.connectManagedProxyVal(&service); proxy != nil { + proxies = append(proxies, proxy) + } + // TODO(banks): support connect-native registrations (v.Connect.Enabled == + // true) } if c.Service != nil { services = append(services, b.serviceVal(c.Service)) @@ -520,6 +527,9 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) { consulRaftHeartbeatTimeout := b.durationVal("consul.raft.heartbeat_timeout", c.Consul.Raft.HeartbeatTimeout) * time.Duration(performanceRaftMultiplier) consulRaftLeaderLeaseTimeout := b.durationVal("consul.raft.leader_lease_timeout", c.Consul.Raft.LeaderLeaseTimeout) * time.Duration(performanceRaftMultiplier) + // Connect proxy defaults. 
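+ // (connectProxyPortRange falls back to the default range 20000..20256 when
+ // no connect.proxy_defaults bind_min_port/bind_max_port are configured.)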
+ proxyBindMinPort, proxyBindMaxPort := b.connectProxyPortRange(c.Connect) + // ---------------------------------------------------------------- // build runtime config // @@ -638,6 +648,9 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) { CheckUpdateInterval: b.durationVal("check_update_interval", c.CheckUpdateInterval), Checks: checks, ClientAddrs: clientAddrs, + ConnectProxies: proxies, + ConnectProxyBindMinPort: proxyBindMinPort, + ConnectProxyBindMaxPort: proxyBindMaxPort, DataDir: b.stringVal(c.DataDir), Datacenter: strings.ToLower(b.stringVal(c.Datacenter)), DevMode: b.boolVal(b.Flags.DevMode), @@ -1010,6 +1023,75 @@ func (b *Builder) serviceVal(v *ServiceDefinition) *structs.ServiceDefinition { } } +func (b *Builder) connectManagedProxyVal(v *ServiceDefinition) *structs.ConnectManagedProxy { + if v.Connect == nil || v.Connect.Proxy == nil { + return nil + } + + p := v.Connect.Proxy + + targetID := b.stringVal(v.ID) + if targetID == "" { + targetID = b.stringVal(v.Name) + } + + execMode := structs.ProxyExecModeDaemon + if p.ExecMode != nil { + switch *p.ExecMode { + case "daemon": + execMode = structs.ProxyExecModeDaemon + case "script": + execMode = structs.ProxyExecModeScript + default: + b.err = multierror.Append(fmt.Errorf( + "service[%s]: invalid connect proxy exec_mode: %s", targetID, + *p.ExecMode)) + return nil + } + } + + return &structs.ConnectManagedProxy{ + ExecMode: execMode, + Command: b.stringVal(p.Command), + Config: p.Config, + // ProxyService will be setup when the agent registers the configured + // proxies and starts them etc. We could do it here but we may need to do + // things like probe the OS for a free port etc. And we have enough info to + // resolve all this later. + ProxyService: nil, + TargetServiceID: targetID, + } +} + +func (b *Builder) connectProxyPortRange(v *Connect) (int, int) { + // Choose this default range just because. There are zero "safe" ranges that + // don't have something somewhere that uses them which is why this is + // configurable. We rely on the host not having any of these ports for non + // agent managed proxies. I went with 20k because I know of at least one + // super-common server memcached that defaults to the 10k range. + start := 20000 + end := 20256 // 256 proxies on a host is enough for anyone ;) + + if v == nil || v.ProxyDefaults == nil { + return start, end + } + + min, max := v.ProxyDefaults.BindMinPort, v.ProxyDefaults.BindMaxPort + if min == nil && max == nil { + return start, end + } + + // If either was set show a warning if the overall range was invalid + if min == nil || max == nil || *max < *min { + b.warn("Connect proxy_defaults bind_min_port and bind_max_port must both "+ + "be set with max >= min. To disable automatic port allocation set both "+ + "to 0. 
Using default range %d..%d.", start, end) + return start, end + } + + return *min, *max +} + func (b *Builder) boolVal(v *bool) bool { if v == nil { return false diff --git a/agent/config/config.go b/agent/config/config.go index 79d274d0d..f652c9076 100644 --- a/agent/config/config.go +++ b/agent/config/config.go @@ -159,6 +159,7 @@ type Config struct { CheckUpdateInterval *string `json:"check_update_interval,omitempty" hcl:"check_update_interval" mapstructure:"check_update_interval"` Checks []CheckDefinition `json:"checks,omitempty" hcl:"checks" mapstructure:"checks"` ClientAddr *string `json:"client_addr,omitempty" hcl:"client_addr" mapstructure:"client_addr"` + Connect *Connect `json:"connect,omitempty" hcl:"connect" mapstructure:"connect"` DNS DNS `json:"dns_config,omitempty" hcl:"dns_config" mapstructure:"dns_config"` DNSDomain *string `json:"domain,omitempty" hcl:"domain" mapstructure:"domain"` DNSRecursors []string `json:"recursors,omitempty" hcl:"recursors" mapstructure:"recursors"` @@ -324,6 +325,7 @@ type ServiceDefinition struct { Checks []CheckDefinition `json:"checks,omitempty" hcl:"checks" mapstructure:"checks"` Token *string `json:"token,omitempty" hcl:"token" mapstructure:"token"` EnableTagOverride *bool `json:"enable_tag_override,omitempty" hcl:"enable_tag_override" mapstructure:"enable_tag_override"` + Connect *ServiceConnect `json:"connect,omitempty" hcl:"connect" mapstructure:"connect"` } type CheckDefinition struct { @@ -349,6 +351,47 @@ type CheckDefinition struct { DeregisterCriticalServiceAfter *string `json:"deregister_critical_service_after,omitempty" hcl:"deregister_critical_service_after" mapstructure:"deregister_critical_service_after"` } +// ServiceConnect is the connect block within a service registration +type ServiceConnect struct { + // TODO(banks) add way to specify that the app is connect-native + // Proxy configures a connect proxy instance for the service + Proxy *ServiceConnectProxy `json:"proxy,omitempty" hcl:"proxy" mapstructure:"proxy"` +} + +type ServiceConnectProxy struct { + Command *string `json:"command,omitempty" hcl:"command" mapstructure:"command"` + ExecMode *string `json:"exec_mode,omitempty" hcl:"exec_mode" mapstructure:"exec_mode"` + Config map[string]interface{} `json:"config,omitempty" hcl:"config" mapstructure:"config"` +} + +// Connect is the agent-global connect configuration. +type Connect struct { + // Enabled opts the agent into connect. It should be set on all clients and + // servers in a cluster for correct connect operation. TODO(banks) review that. + Enabled bool `json:"enabled,omitempty" hcl:"enabled" mapstructure:"enabled"` + ProxyDefaults *ConnectProxyDefaults `json:"proxy_defaults,omitempty" hcl:"proxy_defaults" mapstructure:"proxy_defaults"` +} + +// ConnectProxyDefaults is the agent-global connect proxy configuration. +type ConnectProxyDefaults struct { + // BindMinPort, BindMaxPort are the inclusive lower and upper bounds on the + // port range allocated to the agent to assign to connect proxies that have no + // bind_port specified. + BindMinPort *int `json:"bind_min_port,omitempty" hcl:"bind_min_port" mapstructure:"bind_min_port"` + BindMaxPort *int `json:"bind_max_port,omitempty" hcl:"bind_max_port" mapstructure:"bind_max_port"` + // ExecMode is used where a registration doesn't include an exec_mode. + // Defaults to daemon. 
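+ // Valid values are expected to mirror a registration's exec_mode:
+ // "daemon" or "script".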
+ ExecMode *string `json:"exec_mode,omitempty" hcl:"exec_mode" mapstructure:"exec_mode"` + // DaemonCommand is used to start proxy in exec_mode = daemon if not specified + // at registration time. + DaemonCommand *string `json:"daemon_command,omitempty" hcl:"daemon_command" mapstructure:"daemon_command"` + // ScriptCommand is used to start proxy in exec_mode = script if not specified + // at registration time. + ScriptCommand *string `json:"script_command,omitempty" hcl:"script_command" mapstructure:"script_command"` + // Config is merged into an Config specified at registration time. + Config map[string]interface{} `json:"config,omitempty" hcl:"config" mapstructure:"config"` +} + type DNS struct { AllowStale *bool `json:"allow_stale,omitempty" hcl:"allow_stale" mapstructure:"allow_stale"` ARecordLimit *int `json:"a_record_limit,omitempty" hcl:"a_record_limit" mapstructure:"a_record_limit"` diff --git a/agent/config/runtime.go b/agent/config/runtime.go index 66e7e79e7..55c15d14e 100644 --- a/agent/config/runtime.go +++ b/agent/config/runtime.go @@ -616,6 +616,41 @@ type RuntimeConfig struct { // flag: -client string ClientAddrs []*net.IPAddr + // ConnectEnabled opts the agent into connect. It should be set on all clients + // and servers in a cluster for correct connect operation. TODO(banks) review + // that. + ConnectEnabled bool + + // ConnectProxies is a list of configured proxies taken from the "connect" + // block of service registrations. + ConnectProxies []*structs.ConnectManagedProxy + + // ConnectProxyBindMinPort is the inclusive start of the range of ports + // allocated to the agent for starting proxy listeners on where no explicit + // port is specified. + ConnectProxyBindMinPort int + + // ConnectProxyBindMaxPort is the inclusive end of the range of ports + // allocated to the agent for starting proxy listeners on where no explicit + // port is specified. + ConnectProxyBindMaxPort int + + // ConnectProxyDefaultExecMode is used where a registration doesn't include an + // exec_mode. Defaults to daemon. + ConnectProxyDefaultExecMode *string + + // ConnectProxyDefaultDaemonCommand is used to start proxy in exec_mode = + // daemon if not specified at registration time. + ConnectProxyDefaultDaemonCommand *string + + // ConnectProxyDefaultScriptCommand is used to start proxy in exec_mode = + // script if not specified at registration time. + ConnectProxyDefaultScriptCommand *string + + // ConnectProxyDefaultConfig is merged with any config specified at + // registration time to allow global control of defaults. + ConnectProxyDefaultConfig map[string]interface{} + // DNSAddrs contains the list of TCP and UDP addresses the DNS server will // bind to. If the DNS endpoint is disabled (ports.dns <= 0) the list is // empty. 
diff --git a/agent/config/runtime_test.go b/agent/config/runtime_test.go index 060215c35..e990f0689 100644 --- a/agent/config/runtime_test.go +++ b/agent/config/runtime_test.go @@ -2353,6 +2353,21 @@ func TestFullConfig(t *testing.T) { ], "check_update_interval": "16507s", "client_addr": "93.83.18.19", + "connect": { + "enabled": true, + "proxy_defaults": { + "bind_min_port": 2000, + "bind_max_port": 3000, + "exec_mode": "script", + "daemon_command": "consul connect proxy", + "script_command": "proxyctl.sh", + "config": { + "foo": "bar", + "connect_timeout_ms": 1000, + "pedantic_mode": true + } + } + }, "data_dir": "` + dataDir + `", "datacenter": "rzo029wg", "disable_anonymous_signature": true, @@ -2613,7 +2628,16 @@ func TestFullConfig(t *testing.T) { "ttl": "11222s", "deregister_critical_service_after": "68482s" } - ] + ], + "connect": { + "proxy": { + "exec_mode": "daemon", + "command": "awesome-proxy", + "config": { + "foo": "qux" + } + } + } } ], "session_ttl_min": "26627s", @@ -2786,6 +2810,21 @@ func TestFullConfig(t *testing.T) { ] check_update_interval = "16507s" client_addr = "93.83.18.19" + connect { + enabled = true + proxy_defaults { + bind_min_port = 2000 + bind_max_port = 3000 + exec_mode = "script" + daemon_command = "consul connect proxy" + script_command = "proxyctl.sh" + config = { + foo = "bar" + connect_timeout_ms = 1000 + pedantic_mode = true + } + } + } data_dir = "` + dataDir + `" datacenter = "rzo029wg" disable_anonymous_signature = true @@ -3047,6 +3086,15 @@ func TestFullConfig(t *testing.T) { deregister_critical_service_after = "68482s" } ] + connect { + proxy { + exec_mode = "daemon" + command = "awesome-proxy" + config = { + foo = "qux" + } + } + } } ] session_ttl_min = "26627s" @@ -3355,8 +3403,23 @@ func TestFullConfig(t *testing.T) { DeregisterCriticalServiceAfter: 13209 * time.Second, }, }, - CheckUpdateInterval: 16507 * time.Second, - ClientAddrs: []*net.IPAddr{ipAddr("93.83.18.19")}, + CheckUpdateInterval: 16507 * time.Second, + ClientAddrs: []*net.IPAddr{ipAddr("93.83.18.19")}, + ConnectProxies: []*structs.ConnectManagedProxy{ + { + ExecMode: structs.ProxyExecModeDaemon, + Command: "awesome-proxy", + Config: map[string]interface{}{ + "foo": "qux", // Overriden by service + // Note globals are not merged here but on rendering to the proxy + // endpoint. That's because proxies can be added later too so merging + // at config time is redundant if we have to do it later anyway. 
+ }, + TargetServiceID: "MRHVMZuD", + }, + }, + ConnectProxyBindMinPort: 2000, + ConnectProxyBindMaxPort: 3000, DNSAddrs: []net.Addr{tcpAddr("93.95.95.81:7001"), udpAddr("93.95.95.81:7001")}, DNSARecordLimit: 29907, DNSAllowStale: true, @@ -4018,6 +4081,14 @@ func TestSanitize(t *testing.T) { } ], "ClientAddrs": [], + "ConnectEnabled": false, + "ConnectProxies": [], + "ConnectProxyBindMaxPort": 0, + "ConnectProxyBindMinPort": 0, + "ConnectProxyDefaultConfig": {}, + "ConnectProxyDefaultDaemonCommand": null, + "ConnectProxyDefaultExecMode": null, + "ConnectProxyDefaultScriptCommand": null, "ConsulCoordinateUpdateBatchSize": 0, "ConsulCoordinateUpdateMaxBatches": 0, "ConsulCoordinateUpdatePeriod": "15s", @@ -4150,9 +4221,11 @@ func TestSanitize(t *testing.T) { "Checks": [], "EnableTagOverride": false, "ID": "", + "Kind": "", "Meta": {}, "Name": "foo", "Port": 0, + "ProxyDestination": "", "Tags": [], "Token": "hidden" } diff --git a/agent/local/state.go b/agent/local/state.go index f19e88a76..47a006943 100644 --- a/agent/local/state.go +++ b/agent/local/state.go @@ -3,6 +3,7 @@ package local import ( "fmt" "log" + "math/rand" "reflect" "strconv" "strings" @@ -10,6 +11,8 @@ import ( "sync/atomic" "time" + "github.com/hashicorp/go-uuid" + "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/agent/token" @@ -27,6 +30,8 @@ type Config struct { NodeID types.NodeID NodeName string TaggedAddresses map[string]string + ProxyBindMinPort int + ProxyBindMaxPort int } // ServiceState describes the state of a service record. @@ -107,6 +112,21 @@ type rpc interface { RPC(method string, args interface{}, reply interface{}) error } +// ManagedProxy represents the local state for a registered proxy instance. +type ManagedProxy struct { + Proxy *structs.ConnectManagedProxy + + // ProxyToken is a special local-only security token that grants the bearer + // access to the proxy's config as well as allowing it to request certificates + // on behalf of the TargetService. Certain connect endpoints will validate + // against this token and if it matches will then use the TargetService.Token + // to actually authenticate the upstream RPC on behalf of the service. This + // token is passed securely to the proxy process via ENV vars and should never + // be exposed any other way. Unmanaged proxies will never see this and need to + // use service-scoped ACL tokens distributed externally. + ProxyToken string +} + // State is used to represent the node's services, // and checks. We use it to perform anti-entropy with the // catalog representation @@ -150,17 +170,28 @@ type State struct { // tokens contains the ACL tokens tokens *token.Store + + // managedProxies is a map of all manged connect proxies registered locally on + // this agent. This is NOT kept in sync with servers since it's agent-local + // config only. Proxy instances have separate service registrations in the + // services map above which are kept in sync via anti-entropy. Un-managed + // proxies (that registered themselves separately from the service + // registration) do not appear here as the agent doesn't need to manage their + // process nor config. The _do_ still exist in services above though as + // services with Kind == connect-proxy. + managedProxies map[string]*ManagedProxy } -// NewLocalState creates a new local state for the agent. +// NewState creates a new local state for the agent. 
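+// It also initialises the map that tracks locally managed Connect proxies.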
func NewState(c Config, lg *log.Logger, tokens *token.Store) *State { l := &State{ - config: c, - logger: lg, - services: make(map[string]*ServiceState), - checks: make(map[types.CheckID]*CheckState), - metadata: make(map[string]string), - tokens: tokens, + config: c, + logger: lg, + services: make(map[string]*ServiceState), + checks: make(map[types.CheckID]*CheckState), + metadata: make(map[string]string), + tokens: tokens, + managedProxies: make(map[string]*ManagedProxy), } l.SetDiscardCheckOutput(c.DiscardCheckOutput) return l @@ -529,6 +560,142 @@ func (l *State) CriticalCheckStates() map[types.CheckID]*CheckState { return m } +// AddProxy is used to add a connect proxy entry to the local state. This +// assumes the proxy's NodeService is already registered via Agent.AddService +// (since that has to do other book keeping). The token passed here is the ACL +// token the service used to register itself so must have write on service +// record. +func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*structs.NodeService, error) { + if proxy == nil { + return nil, fmt.Errorf("no proxy") + } + + // Lookup the local service + target := l.Service(proxy.TargetServiceID) + if target == nil { + return nil, fmt.Errorf("target service ID %s not registered", + proxy.TargetServiceID) + } + + // Get bind info from config + cfg, err := proxy.ParseConfig() + if err != nil { + return nil, err + } + + // Construct almost all of the NodeService that needs to be registered by the + // caller outside of the lock. + svc := &structs.NodeService{ + Kind: structs.ServiceKindConnectProxy, + ID: target.ID + "-proxy", + Service: target.ID + "-proxy", + ProxyDestination: target.Service, + Address: cfg.BindAddress, + Port: cfg.BindPort, + } + + pToken, err := uuid.GenerateUUID() + if err != nil { + return nil, err + } + + // Lock now. We can't lock earlier as l.Service would deadlock and shouldn't + // anyway to minimise the critical section. + l.Lock() + defer l.Unlock() + + // Allocate port if needed (min and max inclusive) + rangeLen := l.config.ProxyBindMaxPort - l.config.ProxyBindMinPort + 1 + if svc.Port < 1 && l.config.ProxyBindMinPort > 0 && rangeLen > 0 { + // This should be a really short list so don't bother optimising lookup yet. + OUTER: + for _, offset := range rand.Perm(rangeLen) { + p := l.config.ProxyBindMinPort + offset + // See if this port was already allocated to another proxy + for _, other := range l.managedProxies { + if other.Proxy.ProxyService.Port == p { + // allready taken, skip to next random pick in the range + continue OUTER + } + } + // We made it through all existing proxies without a match so claim this one + svc.Port = p + break + } + } + // If no ports left (or auto ports disabled) fail + if svc.Port < 1 { + return nil, fmt.Errorf("no port provided for proxy bind_port and none "+ + " left in the allocated range [%d, %d]", l.config.ProxyBindMinPort, + l.config.ProxyBindMaxPort) + } + + proxy.ProxyService = svc + + // All set, add the proxy and return the service + l.managedProxies[svc.ID] = &ManagedProxy{ + Proxy: proxy, + ProxyToken: pToken, + } + + // No need to trigger sync as proxy state is local only. + return svc, nil +} + +// RemoveProxy is used to remove a proxy entry from the local state. 
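+// Any automatically allocated bind port becomes free for reuse by a later
+// AddProxy call.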
+func (l *State) RemoveProxy(id string) error { + l.Lock() + defer l.Unlock() + + p := l.managedProxies[id] + if p == nil { + return fmt.Errorf("Proxy %s does not exist", id) + } + delete(l.managedProxies, id) + + // No need to trigger sync as proxy state is local only. + return nil +} + +// Proxy returns the local proxy state. +func (l *State) Proxy(id string) *structs.ConnectManagedProxy { + l.RLock() + defer l.RUnlock() + + p := l.managedProxies[id] + if p == nil { + return nil + } + return p.Proxy +} + +// Proxies returns the locally registered proxies. +func (l *State) Proxies() map[string]*structs.ConnectManagedProxy { + l.RLock() + defer l.RUnlock() + + m := make(map[string]*structs.ConnectManagedProxy) + for id, p := range l.managedProxies { + m[id] = p.Proxy + } + return m +} + +// ProxyToken returns the local proxy token for a given proxy. Note this is not +// an ACL token so it won't fallback to using the agent-configured default ACL +// token. If the proxy doesn't exist an error is returned, otherwise the token +// is guaranteed to exist. +func (l *State) ProxyToken(id string) (string, error) { + l.RLock() + defer l.RUnlock() + + p := l.managedProxies[id] + if p == nil { + return "", fmt.Errorf("proxy %s not registered", id) + } + return p.ProxyToken, nil +} + // Metadata returns the local node metadata fields that the // agent is aware of and are being kept in sync with the server func (l *State) Metadata() map[string]string { diff --git a/agent/local/state_test.go b/agent/local/state_test.go index d0c006a95..6950cd477 100644 --- a/agent/local/state_test.go +++ b/agent/local/state_test.go @@ -3,10 +3,14 @@ package local_test import ( "errors" "fmt" + "log" + "os" "reflect" "testing" "time" + "github.com/stretchr/testify/require" + "github.com/hashicorp/consul/agent" "github.com/hashicorp/consul/agent/config" "github.com/hashicorp/consul/agent/local" @@ -1664,3 +1668,128 @@ func checksInSync(state *local.State, wantChecks int) error { } return nil } + +func TestStateProxyManagement(t *testing.T) { + t.Parallel() + + state := local.NewState(local.Config{ + ProxyPortRangeStart: 20000, + ProxyPortRangeEnd: 20002, + }, log.New(os.Stderr, "", log.LstdFlags), &token.Store{}) + + // Stub state syncing + state.TriggerSyncChanges = func() {} + + p1 := structs.ConnectManagedProxy{ + ExecMode: structs.ProxyExecModeDaemon, + Command: "consul connect proxy", + TargetServiceID: "web", + } + + require := require.New(t) + assert := assert.New(t) + + _, err := state.AddProxy(&p1, "fake-token") + require.Error(err, "should fail as the target service isn't registered") + + // Sanity check done, lets add a couple of target services to the state + err = state.AddService(&structs.NodeService{ + Service: "web", + }, "fake-token-web") + require.NoError(err) + err = state.AddService(&structs.NodeService{ + Service: "cache", + }, "fake-token-cache") + require.NoError(err) + require.NoError(err) + err = state.AddService(&structs.NodeService{ + Service: "db", + }, "fake-token-db") + require.NoError(err) + + // Should work now + svc, err := state.AddProxy(&p1, "fake-token") + require.NoError(err) + + assert.Equal("web-proxy", svc.ID) + assert.Equal("web-proxy", svc.Service) + assert.Equal(structs.ServiceKindConnectProxy, svc.Kind) + assert.Equal("web", svc.ProxyDestination) + assert.Equal("", svc.Address, "should have empty address by default") + // Port is non-deterministic but could be either of 20000 or 20001 + assert.Contains([]int{20000, 20001}, svc.Port) + + // Second proxy should claim other port + 
p2 := p1 + p2.TargetServiceID = "cache" + svc2, err := state.AddProxy(&p2, "fake-token") + require.NoError(err) + assert.Contains([]int{20000, 20001}, svc2.Port) + assert.NotEqual(svc.Port, svc2.Port) + + // Just saving this for later... + p2Token, err := state.ProxyToken(svc2.ID) + require.NoError(err) + + // Third proxy should fail as all ports are used + p3 := p1 + p3.TargetServiceID = "db" + _, err = state.AddProxy(&p3, "fake-token") + require.Error(err) + + // But if we set a port explicitly it should be OK + p3.Config = map[string]interface{}{ + "bind_port": 1234, + "bind_address": "0.0.0.0", + } + svc3, err := state.AddProxy(&p3, "fake-token") + require.NoError(err) + require.Equal("0.0.0.0", svc3.Address) + require.Equal(1234, svc3.Port) + + // Remove one of the auto-assigned proxies + err = state.RemoveProxy(svc2.ID) + require.NoError(err) + + // Should be able to create a new proxy for that service with the port (it + // should have been "freed"). + p4 := p2 + svc4, err := state.AddProxy(&p4, "fake-token") + require.NoError(err) + assert.Contains([]int{20000, 20001}, svc2.Port) + assert.Equal(svc4.Port, svc2.Port, "should get the same port back that we freed") + + // Remove a proxy that doesn't exist should error + err = state.RemoveProxy("nope") + require.Error(err) + + assert.Equal(&p4, state.Proxy(p4.ProxyService.ID), + "should fetch the right proxy details") + assert.Nil(state.Proxy("nope")) + + proxies := state.Proxies() + assert.Len(proxies, 3) + assert.Equal(&p1, proxies[svc.ID]) + assert.Equal(&p4, proxies[svc4.ID]) + assert.Equal(&p3, proxies[svc3.ID]) + + tokens := make([]string, 4) + tokens[0], err = state.ProxyToken(svc.ID) + require.NoError(err) + // p2 not registered anymore but lets make sure p4 got a new token when it + // re-registered with same ID. + tokens[1] = p2Token + tokens[2], err = state.ProxyToken(svc3.ID) + require.NoError(err) + tokens[3], err = state.ProxyToken(svc4.ID) + require.NoError(err) + + // Quick check all are distinct + for i := 0; i < len(tokens)-1; i++ { + assert.Len(tokens[i], 36) // Sanity check for UUIDish thing. + for j := i + 1; j < len(tokens); j++ { + assert.NotEqual(tokens[i], tokens[j], "tokens for proxy %d and %d match", + i+1, j+1) + } + } +} diff --git a/agent/structs/connect.go b/agent/structs/connect.go index 7f08615d3..6f11c5fe3 100644 --- a/agent/structs/connect.go +++ b/agent/structs/connect.go @@ -1,5 +1,9 @@ package structs +import ( + "github.com/mitchellh/mapstructure" +) + // ConnectAuthorizeRequest is the structure of a request to authorize // a connection. type ConnectAuthorizeRequest struct { @@ -15,3 +19,75 @@ type ConnectAuthorizeRequest struct { ClientCertURI string ClientCertSerial string } + +// ProxyExecMode encodes the mode for running a managed connect proxy. +type ProxyExecMode int + +const ( + // ProxyExecModeDaemon executes a proxy process as a supervised daemon. + ProxyExecModeDaemon ProxyExecMode = iota + + // ProxyExecModeScript executes a proxy config script on each change to it's + // config. + ProxyExecModeScript +) + +// ConnectManagedProxy represents the agent-local state for a configured proxy +// instance. This is never stored or sent to the servers and is only used to +// store the config for the proxy that the agent needs to track. For now it's +// really generic with only the fields the agent needs to act on defined while +// the rest of the proxy config is passed as opaque bag of attributes to support +// arbitrary config params for third-party proxy integrations. 
"External" +// proxies by definition register themselves and manage their own config +// externally so are never represented in agent state. +type ConnectManagedProxy struct { + // ExecMode is one of daemon or script. + ExecMode ProxyExecMode + + // Command is the command to execute. Empty defaults to self-invoking the same + // consul binary with proxy subcomand for ProxyExecModeDaemon and is an error + // for ProxyExecModeScript. + Command string + + // Config is the arbitrary configuration data provided with the registration. + Config map[string]interface{} + + // ProxyService is a pointer to the local proxy's service record for + // convenience. The proxies ID and name etc. can be read from there. It may be + // nil if the agent is starting up and hasn't registered the service yet. + ProxyService *NodeService + + // TargetServiceID is the ID of the target service on the localhost. It may + // not exist yet since bootstrapping is allowed to happen in either order. + TargetServiceID string +} + +// ConnectManagedProxyConfig represents the parts of the proxy config the agent +// needs to understand. It's bad UX to make the user specify these separately +// just to make parsing simpler for us so this encapsulates the fields in +// ConnectManagedProxy.Config that we care about. They are all optoinal anyway +// and this is used to decode them with mapstructure. +type ConnectManagedProxyConfig struct { + BindAddress string `mapstructure:"bind_address"` + BindPort int `mapstructure:"bind_port"` +} + +// ParseConfig attempts to read the fields we care about from the otherwise +// opaque config map. They are all optional but it may fail if one is specified +// but an invalid value. +func (p *ConnectManagedProxy) ParseConfig() (*ConnectManagedProxyConfig, error) { + var cfg ConnectManagedProxyConfig + d, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{ + ErrorUnused: false, + WeaklyTypedInput: true, // allow string port etc. 
+ Result: &cfg, + }) + if err != nil { + return nil, err + } + err = d.Decode(p.Config) + if err != nil { + return nil, err + } + return &cfg, nil +} diff --git a/agent/structs/connect_test.go b/agent/structs/connect_test.go new file mode 100644 index 000000000..905ae09ef --- /dev/null +++ b/agent/structs/connect_test.go @@ -0,0 +1,115 @@ +package structs + +import ( + "reflect" + "testing" +) + +func TestConnectManagedProxy_ParseConfig(t *testing.T) { + tests := []struct { + name string + config map[string]interface{} + want *ConnectManagedProxyConfig + wantErr bool + }{ + { + name: "empty", + config: nil, + want: &ConnectManagedProxyConfig{}, + wantErr: false, + }, + { + name: "specified", + config: map[string]interface{}{ + "bind_address": "127.0.0.1", + "bind_port": 1234, + }, + want: &ConnectManagedProxyConfig{ + BindAddress: "127.0.0.1", + BindPort: 1234, + }, + wantErr: false, + }, + { + name: "stringy port", + config: map[string]interface{}{ + "bind_address": "127.0.0.1", + "bind_port": "1234", + }, + want: &ConnectManagedProxyConfig{ + BindAddress: "127.0.0.1", + BindPort: 1234, + }, + wantErr: false, + }, + { + name: "empty addr", + config: map[string]interface{}{ + "bind_address": "", + "bind_port": "1234", + }, + want: &ConnectManagedProxyConfig{ + BindAddress: "", + BindPort: 1234, + }, + wantErr: false, + }, + { + name: "empty port", + config: map[string]interface{}{ + "bind_address": "127.0.0.1", + "bind_port": "", + }, + want: nil, + wantErr: true, + }, + { + name: "junk address", + config: map[string]interface{}{ + "bind_address": 42, + "bind_port": "", + }, + want: nil, + wantErr: true, + }, + { + name: "zero port, missing addr", + config: map[string]interface{}{ + "bind_port": 0, + }, + want: &ConnectManagedProxyConfig{ + BindPort: 0, + }, + wantErr: false, + }, + { + name: "extra fields present", + config: map[string]interface{}{ + "bind_port": 1234, + "flamingos": true, + "upstream": []map[string]interface{}{ + {"foo": "bar"}, + }, + }, + want: &ConnectManagedProxyConfig{ + BindPort: 1234, + }, + wantErr: false, + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + p := &ConnectManagedProxy{ + Config: tt.config, + } + got, err := p.ParseConfig() + if (err != nil) != tt.wantErr { + t.Errorf("ConnectManagedProxy.ParseConfig() error = %v, wantErr %v", err, tt.wantErr) + return + } + if !reflect.DeepEqual(got, tt.want) { + t.Errorf("ConnectManagedProxy.ParseConfig() = %v, want %v", got, tt.want) + } + }) + } +} From c2266b134ae03f29ef2ebac156359e58823b2900 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Tue, 17 Apr 2018 13:29:02 +0100 Subject: [PATCH 116/539] HTTP agent registration allows proxy to be defined. --- agent/agent.go | 12 +++-- agent/agent_endpoint.go | 14 +++++ agent/agent_endpoint_test.go | 79 +++++++++++++++++++++++++++-- agent/config/builder.go | 50 ++++-------------- agent/config/runtime.go | 4 -- agent/config/runtime_test.go | 28 +++++----- agent/structs/service_definition.go | 62 ++++++++++++++++++++++ 7 files changed, 182 insertions(+), 67 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index b988029ce..03f7677d0 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2426,9 +2426,15 @@ func (a *Agent) unloadChecks() error { // loadProxies will load connect proxy definitions from configuration and // persisted definitions on disk, and load them into the local agent. 
func (a *Agent) loadProxies(conf *config.RuntimeConfig) error { - for _, proxy := range conf.ConnectProxies { - if err := a.AddProxy(proxy, false); err != nil { - return fmt.Errorf("failed adding proxy: %s", err) + for _, svc := range conf.Services { + if svc.Connect != nil { + proxy, err := svc.ConnectManagedProxy() + if err != nil { + return fmt.Errorf("failed adding proxy: %s", err) + } + if err := a.AddProxy(proxy, false); err != nil { + return fmt.Errorf("failed adding proxy: %s", err) + } } } diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 722909467..43013785f 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -589,10 +589,24 @@ func (s *HTTPServer) AgentRegisterService(resp http.ResponseWriter, req *http.Re return nil, err } + // Get any proxy registrations + proxy, err := args.ConnectManagedProxy() + if err != nil { + resp.WriteHeader(http.StatusBadRequest) + fmt.Fprintf(resp, err.Error()) + return nil, nil + } + // Add the service. if err := s.agent.AddService(ns, chkTypes, true, token); err != nil { return nil, err } + // Add proxy (which will add proxy service so do it before we trigger sync) + if proxy != nil { + if err := s.agent.AddProxy(proxy, true); err != nil { + return nil, err + } + } s.syncChanges() return nil, nil } diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 1c6f7d830..9d8591126 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -26,6 +26,7 @@ import ( "github.com/hashicorp/serf/serf" "github.com/pascaldekloe/goe/verify" "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" ) func makeReadOnlyAgentACL(t *testing.T, srv *HTTPServer) string { @@ -1369,10 +1370,78 @@ func TestAgent_RegisterService_InvalidAddress(t *testing.T) { } } -// This tests local agent service registration of a connect proxy. This -// verifies that it is put in the local state store properly for syncing -// later. -func TestAgent_RegisterService_ConnectProxy(t *testing.T) { +// This tests local agent service registration with a managed proxy. +func TestAgent_RegisterService_ManagedConnectProxy(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + require := require.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Register a proxy. Note that the destination doesn't exist here on + // this agent or in the catalog at all. This is intended and part + // of the design. + args := &structs.ServiceDefinition{ + Name: "web", + Port: 8000, + // This is needed just because empty check struct (not pointer) get json + // encoded as object with zero values and then decoded back to object with + // zero values _except that the header map is an empty map not a nil map_. + // So our check to see if s.Check.Empty() returns false since DeepEqual + // considers empty maps and nil maps to be different types. Then the request + // fails validation because the Check definition isn't valid... This is jank + // we should fix but it's another yak I don't want to shave right now. 
+ Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{ + ExecMode: "script", + Command: "proxy.sh", + Config: map[string]interface{}{ + "foo": "bar", + }, + }, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=abc123", jsonReader(args)) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentRegisterService(resp, req) + assert.NoError(err) + assert.Nil(obj) + require.Equal(200, resp.Code, "request failed with body: %s", + resp.Body.String()) + + // Ensure the target service + _, ok := a.State.Services()["web"] + assert.True(ok, "has service") + + // Ensure the proxy service was registered + proxySvc, ok := a.State.Services()["web-proxy"] + require.True(ok, "has proxy service") + assert.Equal(structs.ServiceKindConnectProxy, proxySvc.Kind) + assert.Equal("web", proxySvc.ProxyDestination) + assert.NotEmpty(proxySvc.Port, "a port should have been assigned") + + // Ensure proxy itself was registered + proxy := a.State.Proxy("web-proxy") + require.NotNil(proxy) + assert.Equal(structs.ProxyExecModeScript, proxy.ExecMode) + assert.Equal("proxy.sh", proxy.Command) + assert.Equal(args.Connect.Proxy.Config, proxy.Config) + + // Ensure the token was configured + assert.Equal("abc123", a.State.ServiceToken("web")) + assert.Equal("abc123", a.State.ServiceToken("web-proxy")) +} + +// This tests local agent service registration of a unmanaged connect proxy. +// This verifies that it is put in the local state store properly for syncing +// later. Note that _managed_ connect proxies are registered as part of the +// target service's registration. +func TestAgent_RegisterService_UnmanagedConnectProxy(t *testing.T) { t.Parallel() assert := assert.New(t) @@ -1411,7 +1480,7 @@ func TestAgent_RegisterService_ConnectProxy(t *testing.T) { // This tests that connect proxy validation is done for local agent // registration. This doesn't need to test validation exhaustively since // that is done via a table test in the structs package. 
-func TestAgent_RegisterService_ConnectProxyInvalid(t *testing.T) { +func TestAgent_RegisterService_UnmanagedConnectProxyInvalid(t *testing.T) { t.Parallel() assert := assert.New(t) diff --git a/agent/config/builder.go b/agent/config/builder.go index a6338ae14..ec36e9ab0 100644 --- a/agent/config/builder.go +++ b/agent/config/builder.go @@ -322,15 +322,8 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) { } var services []*structs.ServiceDefinition - var proxies []*structs.ConnectManagedProxy for _, service := range c.Services { services = append(services, b.serviceVal(&service)) - // Register any connect proxies requested - if proxy := b.connectManagedProxyVal(&service); proxy != nil { - proxies = append(proxies, proxy) - } - // TODO(banks): support connect-native registrations (v.Connect.Enabled == - // true) } if c.Service != nil { services = append(services, b.serviceVal(c.Service)) @@ -648,7 +641,6 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) { CheckUpdateInterval: b.durationVal("check_update_interval", c.CheckUpdateInterval), Checks: checks, ClientAddrs: clientAddrs, - ConnectProxies: proxies, ConnectProxyBindMinPort: proxyBindMinPort, ConnectProxyBindMaxPort: proxyBindMaxPort, DataDir: b.stringVal(c.DataDir), @@ -1020,46 +1012,26 @@ func (b *Builder) serviceVal(v *ServiceDefinition) *structs.ServiceDefinition { Token: b.stringVal(v.Token), EnableTagOverride: b.boolVal(v.EnableTagOverride), Checks: checks, + Connect: b.serviceConnectVal(v.Connect), } } -func (b *Builder) connectManagedProxyVal(v *ServiceDefinition) *structs.ConnectManagedProxy { - if v.Connect == nil || v.Connect.Proxy == nil { +func (b *Builder) serviceConnectVal(v *ServiceConnect) *structs.ServiceDefinitionConnect { + if v == nil { return nil } - p := v.Connect.Proxy - - targetID := b.stringVal(v.ID) - if targetID == "" { - targetID = b.stringVal(v.Name) - } - - execMode := structs.ProxyExecModeDaemon - if p.ExecMode != nil { - switch *p.ExecMode { - case "daemon": - execMode = structs.ProxyExecModeDaemon - case "script": - execMode = structs.ProxyExecModeScript - default: - b.err = multierror.Append(fmt.Errorf( - "service[%s]: invalid connect proxy exec_mode: %s", targetID, - *p.ExecMode)) - return nil + var proxy *structs.ServiceDefinitionConnectProxy + if v.Proxy != nil { + proxy = &structs.ServiceDefinitionConnectProxy{ + ExecMode: b.stringVal(v.Proxy.ExecMode), + Command: b.stringVal(v.Proxy.Command), + Config: v.Proxy.Config, } } - return &structs.ConnectManagedProxy{ - ExecMode: execMode, - Command: b.stringVal(p.Command), - Config: p.Config, - // ProxyService will be setup when the agent registers the configured - // proxies and starts them etc. We could do it here but we may need to do - // things like probe the OS for a free port etc. And we have enough info to - // resolve all this later. - ProxyService: nil, - TargetServiceID: targetID, + return &structs.ServiceDefinitionConnect{ + Proxy: proxy, } } diff --git a/agent/config/runtime.go b/agent/config/runtime.go index 55c15d14e..b31630d27 100644 --- a/agent/config/runtime.go +++ b/agent/config/runtime.go @@ -621,10 +621,6 @@ type RuntimeConfig struct { // that. ConnectEnabled bool - // ConnectProxies is a list of configured proxies taken from the "connect" - // block of service registrations. - ConnectProxies []*structs.ConnectManagedProxy - // ConnectProxyBindMinPort is the inclusive start of the range of ports // allocated to the agent for starting proxy listeners on where no explicit // port is specified. 
diff --git a/agent/config/runtime_test.go b/agent/config/runtime_test.go index e990f0689..773b7a036 100644 --- a/agent/config/runtime_test.go +++ b/agent/config/runtime_test.go @@ -3403,21 +3403,8 @@ func TestFullConfig(t *testing.T) { DeregisterCriticalServiceAfter: 13209 * time.Second, }, }, - CheckUpdateInterval: 16507 * time.Second, - ClientAddrs: []*net.IPAddr{ipAddr("93.83.18.19")}, - ConnectProxies: []*structs.ConnectManagedProxy{ - { - ExecMode: structs.ProxyExecModeDaemon, - Command: "awesome-proxy", - Config: map[string]interface{}{ - "foo": "qux", // Overriden by service - // Note globals are not merged here but on rendering to the proxy - // endpoint. That's because proxies can be added later too so merging - // at config time is redundant if we have to do it later anyway. - }, - TargetServiceID: "MRHVMZuD", - }, - }, + CheckUpdateInterval: 16507 * time.Second, + ClientAddrs: []*net.IPAddr{ipAddr("93.83.18.19")}, ConnectProxyBindMinPort: 2000, ConnectProxyBindMaxPort: 3000, DNSAddrs: []net.Addr{tcpAddr("93.95.95.81:7001"), udpAddr("93.95.95.81:7001")}, @@ -3592,6 +3579,15 @@ func TestFullConfig(t *testing.T) { DeregisterCriticalServiceAfter: 68482 * time.Second, }, }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{ + ExecMode: "daemon", + Command: "awesome-proxy", + Config: map[string]interface{}{ + "foo": "qux", + }, + }, + }, }, { ID: "dLOXpSCI", @@ -4082,7 +4078,6 @@ func TestSanitize(t *testing.T) { ], "ClientAddrs": [], "ConnectEnabled": false, - "ConnectProxies": [], "ConnectProxyBindMaxPort": 0, "ConnectProxyBindMinPort": 0, "ConnectProxyDefaultConfig": {}, @@ -4219,6 +4214,7 @@ func TestSanitize(t *testing.T) { "Timeout": "0s" }, "Checks": [], + "Connect": null, "EnableTagOverride": false, "ID": "", "Kind": "", diff --git a/agent/structs/service_definition.go b/agent/structs/service_definition.go index a10f1527f..ad77d8e3b 100644 --- a/agent/structs/service_definition.go +++ b/agent/structs/service_definition.go @@ -1,5 +1,9 @@ package structs +import ( + "fmt" +) + // ServiceDefinition is used to JSON decode the Service definitions. For // documentation on specific fields see NodeService which is better documented. type ServiceDefinition struct { @@ -15,6 +19,7 @@ type ServiceDefinition struct { Token string EnableTagOverride bool ProxyDestination string + Connect *ServiceDefinitionConnect } func (s *ServiceDefinition) NodeService() *NodeService { @@ -35,6 +40,45 @@ func (s *ServiceDefinition) NodeService() *NodeService { return ns } +// ConnectManagedProxy returns a ConnectManagedProxy from the ServiceDefinition +// if one is configured validly. Note that is may return nil if no proxy is +// configured and will also return nil error in this case too as it's an +// expected case. The error returned indicates that there was an attempt to +// configure a proxy made but that it was invalid input, e.g. invalid +// "exec_mode". +func (s *ServiceDefinition) ConnectManagedProxy() (*ConnectManagedProxy, error) { + if s.Connect == nil || s.Connect.Proxy == nil { + return nil, nil + } + + // NodeService performs some simple normalization like copying ID from Name + // which we shouldn't hard code ourselves here... 
+ ns := s.NodeService() + + execMode := ProxyExecModeDaemon + switch s.Connect.Proxy.ExecMode { + case "": + execMode = ProxyExecModeDaemon + case "daemon": + execMode = ProxyExecModeDaemon + case "script": + execMode = ProxyExecModeScript + default: + return nil, fmt.Errorf("invalid exec mode: %s", s.Connect.Proxy.ExecMode) + } + + p := &ConnectManagedProxy{ + ExecMode: execMode, + Command: s.Connect.Proxy.Command, + Config: s.Connect.Proxy.Config, + // ProxyService will be setup when the agent registers the configured + // proxies and starts them etc. + TargetServiceID: ns.ID, + } + + return p, nil +} + func (s *ServiceDefinition) CheckTypes() (checks CheckTypes, err error) { if !s.Check.Empty() { err := s.Check.Validate() @@ -51,3 +95,21 @@ func (s *ServiceDefinition) CheckTypes() (checks CheckTypes, err error) { } return checks, nil } + +// ServiceDefinitionConnect is the connect block within a service registration. +// Note this is duplicated in config.ServiceConnect and needs to be kept in +// sync. +type ServiceDefinitionConnect struct { + // TODO(banks) add way to specify that the app is connect-native + // Proxy configures a connect proxy instance for the service + Proxy *ServiceDefinitionConnectProxy `json:"proxy,omitempty" hcl:"proxy" mapstructure:"proxy"` +} + +// ServiceDefinitionConnectProxy is the connect proxy config within a service +// registration. Note this is duplicated in config.ServiceConnectProxy and needs +// to be kept in sync. +type ServiceDefinitionConnectProxy struct { + Command string `json:"command,omitempty" hcl:"command" mapstructure:"command"` + ExecMode string `json:"exec_mode,omitempty" hcl:"exec_mode" mapstructure:"exec_mode"` + Config map[string]interface{} `json:"config,omitempty" hcl:"config" mapstructure:"config"` +} From 44afb5c69906856a02b414258e359535876e1a19 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Wed, 18 Apr 2018 21:05:30 +0100 Subject: [PATCH 117/539] Agent Connect Proxy config endpoint with hash-based blocking --- agent/agent_endpoint.go | 119 +++++++++++++++++++++++++ agent/agent_endpoint_test.go | 168 ++++++++++++++++++++++++++++++++++- agent/local/state.go | 47 +++++----- agent/local/state_test.go | 72 +++++++++++---- agent/structs/connect.go | 25 ++++++ 5 files changed, 386 insertions(+), 45 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 43013785f..e7cec596a 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -7,6 +7,7 @@ import ( "net/url" "strconv" "strings" + "time" "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/checks" @@ -26,6 +27,7 @@ import ( // NOTE(mitcehllh): This is temporary while certs are stubbed out. "github.com/mitchellh/go-testing-interface" + "github.com/mitchellh/hashstructure" ) type Self struct { @@ -896,6 +898,123 @@ func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http. return &reply, nil } +// GET /v1/agent/connect/proxy/:proxy_service_id +// +// Returns the local proxy config for the identified proxy. Requires token= +// param with the correct local ProxyToken (not ACL token). +func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Get the proxy ID. Note that this is the ID of a proxy's service instance. 
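+ // (For agent-managed proxies the ID is "<target service ID>-proxy", e.g.
+ // "web-proxy". Callers may pass ?hash=<previous ContentHash> and ?wait= to
+ // block until the config changes.)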
+ id := strings.TrimPrefix(req.URL.Path, "/v1/agent/connect/proxy/") + + // Maybe block + var queryOpts structs.QueryOptions + if parseWait(resp, req, &queryOpts) { + // parseWait returns an error itself + return nil, nil + } + + // Parse hash specially since it's only this endpoint that uses it currently. + // Eventually this should happen in parseWait and end up in QueryOptions but I + // didn't want to make very general changes right away. + hash := req.URL.Query().Get("hash") + + return s.agentLocalBlockingQuery(hash, &queryOpts, + func(updateCh chan struct{}) (string, interface{}, error) { + // Retrieve the proxy specified + proxy := s.agent.State.Proxy(id) + if proxy == nil { + resp.WriteHeader(http.StatusNotFound) + fmt.Fprintf(resp, "unknown proxy service ID: %s", id) + return "", nil, nil + } + + // Lookup the target service as a convenience + target := s.agent.State.Service(proxy.Proxy.TargetServiceID) + if target == nil { + // Not found since this endpoint is only useful for agent-managed proxies so + // service missing means the service was deregistered racily with this call. + resp.WriteHeader(http.StatusNotFound) + fmt.Fprintf(resp, "unknown target service ID: %s", proxy.Proxy.TargetServiceID) + return "", nil, nil + } + + // Setup "watch" on the proxy being modified and respond on chan if it is. + go func() { + select { + case <-updateCh: + // blocking query timedout or was cancelled. Abort + return + case <-proxy.WatchCh: + // Proxy was updated or removed, report it + updateCh <- struct{}{} + } + }() + + hash, err := hashstructure.Hash(proxy.Proxy, nil) + if err != nil { + return "", nil, err + } + contentHash := fmt.Sprintf("%x", hash) + + reply := &structs.ConnectManageProxyResponse{ + ProxyServiceID: proxy.Proxy.ProxyService.ID, + TargetServiceID: target.ID, + TargetServiceName: target.Service, + ContentHash: contentHash, + ExecMode: proxy.Proxy.ExecMode.String(), + Command: proxy.Proxy.Command, + Config: proxy.Proxy.Config, + } + return contentHash, reply, nil + }) + return nil, nil +} + +type agentLocalBlockingFunc func(updateCh chan struct{}) (string, interface{}, error) + +func (s *HTTPServer) agentLocalBlockingQuery(hash string, + queryOpts *structs.QueryOptions, fn agentLocalBlockingFunc) (interface{}, error) { + + var timer *time.Timer + + if hash != "" { + // TODO(banks) at least define these defaults somewhere in a const. Would be + // nice not to duplicate the ones in consul/rpc.go too... + wait := queryOpts.MaxQueryTime + if wait == 0 { + wait = 5 * time.Minute + } + if wait > 10*time.Minute { + wait = 10 * time.Minute + } + // Apply a small amount of jitter to the request. + wait += lib.RandomStagger(wait / 16) + timer = time.NewTimer(wait) + } + + ch := make(chan struct{}) + + for { + curHash, curResp, err := fn(ch) + if err != nil { + return curResp, err + } + // Hash was passed and matches current one, wait for update or timeout. 
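+ // (A missing or stale hash skips the wait below and returns the current
+ // value immediately.)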
+ if timer != nil && hash == curHash { + select { + case <-ch: + // Update happened, loop to fetch a new value + continue + case <-timer.C: + // Timeout, stop the watcher goroutine and return what we have + close(ch) + break + } + } + return curResp, err + } +} + // AgentConnectAuthorize // // POST /v1/agent/connect/authorize diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 9d8591126..4e73556ec 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -24,6 +24,7 @@ import ( "github.com/hashicorp/consul/testutil/retry" "github.com/hashicorp/consul/types" "github.com/hashicorp/serf/serf" + "github.com/mitchellh/copystructure" "github.com/pascaldekloe/goe/verify" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" @@ -1428,9 +1429,9 @@ func TestAgent_RegisterService_ManagedConnectProxy(t *testing.T) { // Ensure proxy itself was registered proxy := a.State.Proxy("web-proxy") require.NotNil(proxy) - assert.Equal(structs.ProxyExecModeScript, proxy.ExecMode) - assert.Equal("proxy.sh", proxy.Command) - assert.Equal(args.Connect.Proxy.Config, proxy.Config) + assert.Equal(structs.ProxyExecModeScript, proxy.Proxy.ExecMode) + assert.Equal("proxy.sh", proxy.Proxy.Command) + assert.Equal(args.Connect.Proxy.Config, proxy.Proxy.Config) // Ensure the token was configured assert.Equal("abc123", a.State.ServiceToken("web")) @@ -2200,6 +2201,167 @@ func TestAgentConnectCALeafCert_good(t *testing.T) { // TODO(mitchellh): verify the private key matches the cert } +func TestAgentConnectProxy(t *testing.T) { + t.Parallel() + + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Define a local service with a managed proxy. It's registered in the test + // loop to make sure agent state is predictable whatever order tests execute + // since some alter this service config. 
+ reg := &structs.ServiceDefinition{ + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{ + Config: map[string]interface{}{ + "bind_port": 1234, + "connect_timeout_ms": 500, + "upstreams": []map[string]interface{}{ + { + "destination_name": "db", + "local_port": 3131, + }, + }, + }, + }, + }, + } + + expectedResponse := &structs.ConnectManageProxyResponse{ + ProxyServiceID: "test-proxy", + TargetServiceID: "test", + TargetServiceName: "test", + ContentHash: "a15dccb216d38a6e", + ExecMode: "daemon", + Command: "", + Config: map[string]interface{}{ + "upstreams": []interface{}{ + map[string]interface{}{ + "destination_name": "db", + "local_port": float64(3131), + }, + }, + "bind_port": float64(1234), + "connect_timeout_ms": float64(500), + }, + } + + ur, err := copystructure.Copy(expectedResponse) + require.NoError(t, err) + updatedResponse := ur.(*structs.ConnectManageProxyResponse) + updatedResponse.ContentHash = "22bc9233a52c08fd" + upstreams := updatedResponse.Config["upstreams"].([]interface{}) + upstreams = append(upstreams, + map[string]interface{}{ + "destination_name": "cache", + "local_port": float64(4242), + }) + updatedResponse.Config["upstreams"] = upstreams + + tests := []struct { + name string + url string + updateFunc func() + wantWait time.Duration + wantCode int + wantErr bool + wantResp *structs.ConnectManageProxyResponse + }{ + { + name: "simple fetch", + url: "/v1/agent/connect/proxy/test-proxy", + wantCode: 200, + wantErr: false, + wantResp: expectedResponse, + }, + { + name: "blocking fetch timeout, no change", + url: "/v1/agent/connect/proxy/test-proxy?hash=a15dccb216d38a6e&wait=100ms", + wantWait: 100 * time.Millisecond, + wantCode: 200, + wantErr: false, + wantResp: expectedResponse, + }, + { + name: "blocking fetch old hash should return immediately", + url: "/v1/agent/connect/proxy/test-proxy?hash=123456789abcd&wait=10m", + wantCode: 200, + wantErr: false, + wantResp: expectedResponse, + }, + { + name: "blocking fetch returns change", + url: "/v1/agent/connect/proxy/test-proxy?hash=a15dccb216d38a6e", + updateFunc: func() { + time.Sleep(100 * time.Millisecond) + // Re-register with new proxy config + r2, err := copystructure.Copy(reg) + require.NoError(t, err) + reg2 := r2.(*structs.ServiceDefinition) + reg2.Connect.Proxy.Config = updatedResponse.Config + req, _ := http.NewRequest("PUT", "/v1/agent/service/register", jsonReader(r2)) + resp := httptest.NewRecorder() + _, err = a.srv.AgentRegisterService(resp, req) + require.NoError(t, err) + require.Equal(t, 200, resp.Code, "body: %s", resp.Body.String()) + }, + wantWait: 100 * time.Millisecond, + wantCode: 200, + wantErr: false, + wantResp: updatedResponse, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + assert := assert.New(t) + require := require.New(t) + + // Register the basic service to ensure it's in a known state to start. 
+ { + req, _ := http.NewRequest("PUT", "/v1/agent/service/register", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + req, _ := http.NewRequest("GET", tt.url, nil) + resp := httptest.NewRecorder() + if tt.updateFunc != nil { + go tt.updateFunc() + } + start := time.Now() + obj, err := a.srv.AgentConnectProxyConfig(resp, req) + elapsed := time.Now().Sub(start) + + if tt.wantErr { + require.Error(err) + } else { + require.NoError(err) + } + if tt.wantCode != 0 { + require.Equal(tt.wantCode, resp.Code, "body: %s", resp.Body.String()) + } + if tt.wantWait != 0 { + assert.True(elapsed >= tt.wantWait, "should have waited at least %s, "+ + "took %s", tt.wantWait, elapsed) + } else { + assert.True(elapsed < 10*time.Millisecond, "should not have waited, "+ + "took %s", elapsed) + } + + assert.Equal(tt.wantResp, obj) + }) + } +} + func TestAgentConnectAuthorize_badBody(t *testing.T) { t.Parallel() diff --git a/agent/local/state.go b/agent/local/state.go index 47a006943..839b3cdb2 100644 --- a/agent/local/state.go +++ b/agent/local/state.go @@ -125,6 +125,10 @@ type ManagedProxy struct { // be exposed any other way. Unmanaged proxies will never see this and need to // use service-scoped ACL tokens distributed externally. ProxyToken string + + // WatchCh is a close-only chan that is closed when the proxy is removed or + // updated. + WatchCh chan struct{} } // State is used to represent the node's services, @@ -171,7 +175,7 @@ type State struct { // tokens contains the ACL tokens tokens *token.Store - // managedProxies is a map of all manged connect proxies registered locally on + // managedProxies is a map of all managed connect proxies registered locally on // this agent. This is NOT kept in sync with servers since it's agent-local // config only. Proxy instances have separate service registrations in the // services map above which are kept in sync via anti-entropy. Un-managed @@ -633,9 +637,17 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*str proxy.ProxyService = svc // All set, add the proxy and return the service + if old, ok := l.managedProxies[svc.ID]; ok { + // Notify watchers of the existing proxy config that it's changing. Note + // this is safe here even before the map is updated since we still hold the + // state lock and the watcher can't re-read the new config until we return + // anyway. + close(old.WatchCh) + } l.managedProxies[svc.ID] = &ManagedProxy{ Proxy: proxy, ProxyToken: pToken, + WatchCh: make(chan struct{}), } // No need to trigger sync as proxy state is local only. @@ -653,49 +665,32 @@ func (l *State) RemoveProxy(id string) error { } delete(l.managedProxies, id) + // Notify watchers of the existing proxy config that it's changed. + close(p.WatchCh) + // No need to trigger sync as proxy state is local only. return nil } // Proxy returns the local proxy state. -func (l *State) Proxy(id string) *structs.ConnectManagedProxy { +func (l *State) Proxy(id string) *ManagedProxy { l.RLock() defer l.RUnlock() - - p := l.managedProxies[id] - if p == nil { - return nil - } - return p.Proxy + return l.managedProxies[id] } // Proxies returns the locally registered proxies. 
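The `WatchCh` introduced above is a close-only channel, so closing it acts as a broadcast: every goroutine blocked on it wakes at once, and any later receive returns immediately. A minimal standalone sketch of that property:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	watchCh := make(chan struct{}) // close-only: never sent on, only closed

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			<-watchCh // blocks until the channel is closed
			fmt.Printf("watcher %d woke up\n", id)
		}(i)
	}

	// Closing the channel wakes every watcher exactly once; a plain send
	// would only wake one of them.
	close(watchCh)
	wg.Wait()

	// Receives on a closed channel never block, so late watchers are fine too.
	<-watchCh
	fmt.Println("late watcher returned immediately")
}
```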
-func (l *State) Proxies() map[string]*structs.ConnectManagedProxy { +func (l *State) Proxies() map[string]*ManagedProxy { l.RLock() defer l.RUnlock() - m := make(map[string]*structs.ConnectManagedProxy) + m := make(map[string]*ManagedProxy) for id, p := range l.managedProxies { - m[id] = p.Proxy + m[id] = p } return m } -// ProxyToken returns the local proxy token for a given proxy. Note this is not -// an ACL token so it won't fallback to using the agent-configured default ACL -// token. If the proxy doesn't exist an error is returned, otherwise the token -// is guaranteed to exist. -func (l *State) ProxyToken(id string) (string, error) { - l.RLock() - defer l.RUnlock() - - p := l.managedProxies[id] - if p == nil { - return "", fmt.Errorf("proxy %s not registered", id) - } - return p.ProxyToken, nil -} - // Metadata returns the local node metadata fields that the // agent is aware of and are being kept in sync with the server func (l *State) Metadata() map[string]string { diff --git a/agent/local/state_test.go b/agent/local/state_test.go index 6950cd477..a8890a540 100644 --- a/agent/local/state_test.go +++ b/agent/local/state_test.go @@ -6,6 +6,7 @@ import ( "log" "os" "reflect" + "sync" "testing" "time" @@ -1673,8 +1674,8 @@ func TestStateProxyManagement(t *testing.T) { t.Parallel() state := local.NewState(local.Config{ - ProxyPortRangeStart: 20000, - ProxyPortRangeEnd: 20002, + ProxyBindMinPort: 20000, + ProxyBindMaxPort: 20001, }, log.New(os.Stderr, "", log.LstdFlags), &token.Store{}) // Stub state syncing @@ -1707,6 +1708,20 @@ func TestStateProxyManagement(t *testing.T) { }, "fake-token-db") require.NoError(err) + // Record initial local modify index + lastModifyIndex := state.LocalModifyIndex() + assertModIndexUpdate := func(id string) { + t.Helper() + nowIndex := state.LocalModifyIndex() + assert.True(lastModifyIndex < nowIndex) + if id != "" { + p := state.Proxy(id) + require.NotNil(p) + assert.True(lastModifyIndex < p.ModifyIndex) + } + lastModifyIndex = nowIndex + } + // Should work now svc, err := state.AddProxy(&p1, "fake-token") require.NoError(err) @@ -1718,6 +1733,7 @@ func TestStateProxyManagement(t *testing.T) { assert.Equal("", svc.Address, "should have empty address by default") // Port is non-deterministic but could be either of 20000 or 20001 assert.Contains([]int{20000, 20001}, svc.Port) + assertModIndexUpdate(svc.ID) // Second proxy should claim other port p2 := p1 @@ -1726,10 +1742,10 @@ func TestStateProxyManagement(t *testing.T) { require.NoError(err) assert.Contains([]int{20000, 20001}, svc2.Port) assert.NotEqual(svc.Port, svc2.Port) + assertModIndexUpdate(svc2.ID) - // Just saving this for later... 
- p2Token, err := state.ProxyToken(svc2.ID) - require.NoError(err) + // Store this for later + p2token := state.Proxy(svc2.ID).ProxyToken // Third proxy should fail as all ports are used p3 := p1 @@ -1746,6 +1762,32 @@ func TestStateProxyManagement(t *testing.T) { require.NoError(err) require.Equal("0.0.0.0", svc3.Address) require.Equal(1234, svc3.Port) + assertModIndexUpdate(svc3.ID) + + // Update config of an already registered proxy should work + p3updated := p3 + p3updated.Config["foo"] = "bar" + // Setup multiple watchers who should all witness the change + gotP3 := state.Proxy(svc3.ID) + require.NotNil(gotP3) + var watchWg sync.WaitGroup + for i := 0; i < 3; i++ { + watchWg.Add(1) + go func() { + <-gotP3.WatchCh + watchWg.Done() + }() + } + svc3, err = state.AddProxy(&p3updated, "fake-token") + require.NoError(err) + require.Equal("0.0.0.0", svc3.Address) + require.Equal(1234, svc3.Port) + gotProxy3 := state.Proxy(svc3.ID) + require.NotNil(gotProxy3) + require.Equal(p3updated.Config, gotProxy3.Proxy.Config) + assertModIndexUpdate(svc3.ID) // update must change mod index + // All watchers should have fired so this should not hang the test! + watchWg.Wait() // Remove one of the auto-assigned proxies err = state.RemoveProxy(svc2.ID) @@ -1758,31 +1800,29 @@ func TestStateProxyManagement(t *testing.T) { require.NoError(err) assert.Contains([]int{20000, 20001}, svc2.Port) assert.Equal(svc4.Port, svc2.Port, "should get the same port back that we freed") + assertModIndexUpdate(svc4.ID) // Remove a proxy that doesn't exist should error err = state.RemoveProxy("nope") require.Error(err) - assert.Equal(&p4, state.Proxy(p4.ProxyService.ID), + assert.Equal(&p4, state.Proxy(p4.ProxyService.ID).Proxy, "should fetch the right proxy details") assert.Nil(state.Proxy("nope")) proxies := state.Proxies() assert.Len(proxies, 3) - assert.Equal(&p1, proxies[svc.ID]) - assert.Equal(&p4, proxies[svc4.ID]) - assert.Equal(&p3, proxies[svc3.ID]) + assert.Equal(&p1, proxies[svc.ID].Proxy) + assert.Equal(&p4, proxies[svc4.ID].Proxy) + assert.Equal(&p3, proxies[svc3.ID].Proxy) tokens := make([]string, 4) - tokens[0], err = state.ProxyToken(svc.ID) - require.NoError(err) + tokens[0] = state.Proxy(svc.ID).ProxyToken // p2 not registered anymore but lets make sure p4 got a new token when it // re-registered with same ID. - tokens[1] = p2Token - tokens[2], err = state.ProxyToken(svc3.ID) - require.NoError(err) - tokens[3], err = state.ProxyToken(svc4.ID) - require.NoError(err) + tokens[1] = p2token + tokens[2] = state.Proxy(svc2.ID).ProxyToken + tokens[3] = state.Proxy(svc3.ID).ProxyToken // Quick check all are distinct for i := 0; i < len(tokens)-1; i++ { diff --git a/agent/structs/connect.go b/agent/structs/connect.go index 6f11c5fe3..d879718b2 100644 --- a/agent/structs/connect.go +++ b/agent/structs/connect.go @@ -32,6 +32,18 @@ const ( ProxyExecModeScript ) +// String implements Stringer +func (m ProxyExecMode) String() string { + switch m { + case ProxyExecModeDaemon: + return "daemon" + case ProxyExecModeScript: + return "script" + default: + return "unknown" + } +} + // ConnectManagedProxy represents the agent-local state for a configured proxy // instance. This is never stored or sent to the servers and is only used to // store the config for the proxy that the agent needs to track. 
For now it's @@ -91,3 +103,16 @@ func (p *ConnectManagedProxy) ParseConfig() (*ConnectManagedProxyConfig, error) } return &cfg, nil } + +// ConnectManageProxyResponse is the public response object we return for +// queries on local proxy config state. It's similar to ConnectManagedProxy but +// with some fields re-arranged. +type ConnectManageProxyResponse struct { + ProxyServiceID string + TargetServiceID string + TargetServiceName string + ContentHash string + ExecMode string + Command string + Config map[string]interface{} +} From cbd860665120fd9b2faeddbe78aaeba99be33ece Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Wed, 18 Apr 2018 21:48:58 +0100 Subject: [PATCH 118/539] Add X-Consul-ContentHash header; implement removing all proxies; add load/unload test. --- agent/agent.go | 6 +++++- agent/agent_endpoint.go | 10 ++++++++-- agent/agent_endpoint_test.go | 2 ++ 3 files changed, 15 insertions(+), 3 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 03f7677d0..ce6a26d0c 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2444,7 +2444,11 @@ func (a *Agent) loadProxies(conf *config.RuntimeConfig) error { // unloadProxies will deregister all proxies known to the local agent. func (a *Agent) unloadProxies() error { - // TODO(banks): implement me + for id := range a.State.Proxies() { + if err := a.RemoveProxy(id, false); err != nil { + return fmt.Errorf("Failed deregistering proxy '%s': %s", id, err) + } + } return nil } diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index e7cec596a..b3e3741a8 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -918,7 +918,7 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http // didn't want to make very general changes right away. hash := req.URL.Query().Get("hash") - return s.agentLocalBlockingQuery(hash, &queryOpts, + return s.agentLocalBlockingQuery(resp, hash, &queryOpts, func(updateCh chan struct{}) (string, interface{}, error) { // Retrieve the proxy specified proxy := s.agent.State.Proxy(id) @@ -972,7 +972,11 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http type agentLocalBlockingFunc func(updateCh chan struct{}) (string, interface{}, error) -func (s *HTTPServer) agentLocalBlockingQuery(hash string, +// agentLocalBlockingQuery performs a blocking query in a generic way against +// local agent state that has no RPC or raft to back it. It uses `hash` paramter +// instead of an `index`. The resp is needed to write the `X-Consul-ContentHash` +// header back on return no Status nor body content is ever written to it. 
+func (s *HTTPServer) agentLocalBlockingQuery(resp http.ResponseWriter, hash string, queryOpts *structs.QueryOptions, fn agentLocalBlockingFunc) (interface{}, error) { var timer *time.Timer @@ -1011,6 +1015,8 @@ func (s *HTTPServer) agentLocalBlockingQuery(hash string, break } } + + resp.Header().Set("X-Consul-ContentHash", curHash) return curResp, err } } diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 4e73556ec..b34ac508a 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2358,6 +2358,8 @@ func TestAgentConnectProxy(t *testing.T) { } assert.Equal(tt.wantResp, obj) + + assert.Equal(tt.wantResp.ContentHash, resp.Header().Get("X-Consul-ContentHash")) }) } } From aed5e5b03e6041fe27d4762a9b8e99740b179a7d Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Wed, 18 Apr 2018 22:03:51 +0100 Subject: [PATCH 119/539] Super ugly hack to get TeamCity build to work for this PR without adding a vendor that is being added elsewhere and will conflict... --- GNUmakefile | 2 ++ agent/agent.go | 3 ++ agent/agent_test.go | 69 +++++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 74 insertions(+) diff --git a/GNUmakefile b/GNUmakefile index bebe8bce5..030fa003d 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -40,6 +40,8 @@ bin: tools dev: changelogfmt vendorfmt dev-build dev-build: + @echo "--> TEMPORARY HACK: installing hashstructure to make CI pass until we vendor it upstream" + go get github.com/mitchellh/hashstructure @echo "--> Building consul" mkdir -p pkg/$(GOOS)_$(GOARCH)/ bin/ go install -ldflags '$(GOLDFLAGS)' -tags '$(GOTAGS)' diff --git a/agent/agent.go b/agent/agent.go index ce6a26d0c..277bdd046 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2588,6 +2588,9 @@ func (a *Agent) ReloadConfig(newCfg *config.RuntimeConfig) error { // First unload all checks, services, and metadata. This lets us begin the reload // with a clean slate. 
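The `hashstructure` dependency pulled in above is what produces the `ContentHash` values seen in the tests (for example `a15dccb216d38a6e`): it hashes an arbitrary Go value, so structurally equal proxy configs produce the same hash and any change produces a new one. A small illustrative sketch using only the library's exported `Hash` function (the struct here is a stand-in, not the real type):

```go
package main

import (
	"fmt"

	"github.com/mitchellh/hashstructure"
)

type proxyConfig struct {
	ExecMode string
	Command  string
	Config   map[string]interface{}
}

func main() {
	a := proxyConfig{ExecMode: "daemon", Config: map[string]interface{}{"bind_port": 1234}}
	b := proxyConfig{ExecMode: "daemon", Config: map[string]interface{}{"bind_port": 1234}}
	c := proxyConfig{ExecMode: "daemon", Config: map[string]interface{}{"bind_port": 4321}}

	ha, _ := hashstructure.Hash(a, nil)
	hb, _ := hashstructure.Hash(b, nil)
	hc, _ := hashstructure.Hash(c, nil)

	// Equal values hash equal; any config change yields a different hash,
	// which is what lets the endpoint detect "changed since last fetch".
	fmt.Printf("%x %x equal=%v\n", ha, hb, ha == hb)
	fmt.Printf("%x changed=%v\n", hc, ha != hc)
}
```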
+ if err := a.unloadProxies(); err != nil { + return fmt.Errorf("Failed unloading proxies: %s", err) + } if err := a.unloadServices(); err != nil { return fmt.Errorf("Failed unloading services: %s", err) } diff --git a/agent/agent_test.go b/agent/agent_test.go index 2ee42d7db..bf24425bc 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -1640,6 +1640,75 @@ func TestAgent_unloadServices(t *testing.T) { } } +func TestAgent_loadProxies(t *testing.T) { + t.Parallel() + a := NewTestAgent(t.Name(), ` + service = { + id = "rabbitmq" + name = "rabbitmq" + port = 5672 + token = "abc123" + connect { + proxy { + config { + bind_port = 1234 + } + } + } + } + `) + defer a.Shutdown() + + services := a.State.Services() + if _, ok := services["rabbitmq"]; !ok { + t.Fatalf("missing service") + } + if token := a.State.ServiceToken("rabbitmq"); token != "abc123" { + t.Fatalf("bad: %s", token) + } + if _, ok := services["rabbitmq-proxy"]; !ok { + t.Fatalf("missing proxy service") + } + if token := a.State.ServiceToken("rabbitmq-proxy"); token != "abc123" { + t.Fatalf("bad: %s", token) + } + proxies := a.State.Proxies() + if _, ok := proxies["rabbitmq-proxy"]; !ok { + t.Fatalf("missing proxy") + } +} + +func TestAgent_unloadProxies(t *testing.T) { + t.Parallel() + a := NewTestAgent(t.Name(), ` + service = { + id = "rabbitmq" + name = "rabbitmq" + port = 5672 + token = "abc123" + connect { + proxy { + config { + bind_port = 1234 + } + } + } + } + `) + defer a.Shutdown() + + // Sanity check it's there + require.NotNil(t, a.State.Proxy("rabbitmq-proxy")) + + // Unload all proxies + if err := a.unloadProxies(); err != nil { + t.Fatalf("err: %s", err) + } + if len(a.State.Proxies()) != 0 { + t.Fatalf("should have unloaded proxies") + } +} + func TestAgent_Service_MaintenanceMode(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), "") From 8a4410b549c36f1fde582830cf9c48988bead671 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 19 Apr 2018 11:15:32 +0100 Subject: [PATCH 120/539] Refactor localBlockingQuery to use memdb.WatchSet. Much simpler and correct as a bonus! --- agent/agent_endpoint.go | 57 +++++++++++++++++------------------------ 1 file changed, 24 insertions(+), 33 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index b3e3741a8..24685ee92 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -9,6 +9,9 @@ import ( "strings" "time" + "github.com/hashicorp/go-memdb" + "github.com/mitchellh/hashstructure" + "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/checks" "github.com/hashicorp/consul/agent/config" @@ -27,7 +30,6 @@ import ( // NOTE(mitcehllh): This is temporary while certs are stubbed out. "github.com/mitchellh/go-testing-interface" - "github.com/mitchellh/hashstructure" ) type Self struct { @@ -919,7 +921,7 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http hash := req.URL.Query().Get("hash") return s.agentLocalBlockingQuery(resp, hash, &queryOpts, - func(updateCh chan struct{}) (string, interface{}, error) { + func(ws memdb.WatchSet) (string, interface{}, error) { // Retrieve the proxy specified proxy := s.agent.State.Proxy(id) if proxy == nil { @@ -938,17 +940,8 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http return "", nil, nil } - // Setup "watch" on the proxy being modified and respond on chan if it is. - go func() { - select { - case <-updateCh: - // blocking query timedout or was cancelled. 
Abort - return - case <-proxy.WatchCh: - // Proxy was updated or removed, report it - updateCh <- struct{}{} - } - }() + // Watch the proxy for changes + ws.Add(proxy.WatchCh) hash, err := hashstructure.Hash(proxy.Proxy, nil) if err != nil { @@ -970,7 +963,7 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http return nil, nil } -type agentLocalBlockingFunc func(updateCh chan struct{}) (string, interface{}, error) +type agentLocalBlockingFunc func(ws memdb.WatchSet) (string, interface{}, error) // agentLocalBlockingQuery performs a blocking query in a generic way against // local agent state that has no RPC or raft to back it. It uses `hash` paramter @@ -979,7 +972,10 @@ type agentLocalBlockingFunc func(updateCh chan struct{}) (string, interface{}, e func (s *HTTPServer) agentLocalBlockingQuery(resp http.ResponseWriter, hash string, queryOpts *structs.QueryOptions, fn agentLocalBlockingFunc) (interface{}, error) { - var timer *time.Timer + // If we are not blocking we can skip tracking and allocating - nil WatchSet + // is still valid to call Add on and will just be a no op. + var ws memdb.WatchSet + var timeout *time.Timer if hash != "" { // TODO(banks) at least define these defaults somewhere in a const. Would be @@ -993,31 +989,26 @@ func (s *HTTPServer) agentLocalBlockingQuery(resp http.ResponseWriter, hash stri } // Apply a small amount of jitter to the request. wait += lib.RandomStagger(wait / 16) - timer = time.NewTimer(wait) + timeout = time.NewTimer(wait) + ws = memdb.NewWatchSet() } - ch := make(chan struct{}) - for { - curHash, curResp, err := fn(ch) + curHash, curResp, err := fn(ws) if err != nil { return curResp, err } - // Hash was passed and matches current one, wait for update or timeout. - if timer != nil && hash == curHash { - select { - case <-ch: - // Update happened, loop to fetch a new value - continue - case <-timer.C: - // Timeout, stop the watcher goroutine and return what we have - close(ch) - break - } + // Return immediately if there is no timeout, the hash is different or the + // Watch returns true (indicating timeout fired). Note that Watch on a nil + // WatchSet immediately returns false which would incorrectly cause this to + // loop and repeat again, however we rely on the invariant that ws == nil + // IFF timeout == nil in which case the Watch call is never invoked. + if timeout == nil || hash != curHash || ws.Watch(timeout.C) { + resp.Header().Set("X-Consul-ContentHash", curHash) + return curResp, err } - - resp.Header().Set("X-Consul-ContentHash", curHash) - return curResp, err + // Watch returned false indicating a change was detected, loop and repeat + // the callback to load the new value. } } From 9d11cd9bf4c226e29edc44c178ac48c8c29c15e7 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 19 Apr 2018 12:06:32 +0100 Subject: [PATCH 121/539] Fix various test failures and vet warnings. Intention de-duplication in previously merged PR actualy failed some tests that were not caught be me or CI. I ran the test files for state changes but they happened not to trigger this case so I made sure they did first and then fixed. That fixed some upstream intention endpoint tests that I'd not run as part of testing the previous fix. 
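As an aside on the `WatchSet`-based blocking introduced in the previous commit: `Add` registers any number of change channels and `Watch(timeoutCh)` blocks until either one of them fires (returning false) or the timeout channel fires (returning true). A standalone sketch of that behaviour, assuming only the go-memdb API used above:

```go
package main

import (
	"fmt"
	"time"

	"github.com/hashicorp/go-memdb"
)

func main() {
	// Case 1: a watched channel fires before the timeout.
	ws := memdb.NewWatchSet()
	changed := make(chan struct{})
	ws.Add(changed)

	go func() {
		time.Sleep(50 * time.Millisecond)
		close(changed) // simulate the proxy config being updated
	}()

	timedOut := ws.Watch(time.After(1 * time.Second))
	fmt.Println("timed out?", timedOut) // false: the change fired first

	// Case 2: nothing fires, so Watch returns true when the timeout fires.
	ws = memdb.NewWatchSet()
	ws.Add(make(chan struct{}))
	fmt.Println("timed out?", ws.Watch(time.After(100*time.Millisecond))) // true
}
```

This is the same watch primitive the state store's server-side blocking queries build on, which is presumably why reusing it here ends up simpler than the hand-rolled channel plumbing it replaces.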
--- agent/agent_endpoint.go | 1 - agent/agent_test.go | 4 +- agent/consul/state/intention.go | 5 ++- agent/consul/state/intention_test.go | 62 +++++++++++++++------------- agent/local/state_test.go | 36 +++------------- connect/proxy/listener.go | 1 - connect/testing.go | 2 - connect/tls_test.go | 4 +- 8 files changed, 48 insertions(+), 67 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 24685ee92..c19b776ac 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -960,7 +960,6 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http } return contentHash, reply, nil }) - return nil, nil } type agentLocalBlockingFunc func(ws memdb.WatchSet) (string, interface{}, error) diff --git a/agent/agent_test.go b/agent/agent_test.go index bf24425bc..c22ce56ba 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -2366,7 +2366,7 @@ func TestAgent_AddProxy(t *testing.T) { // Test the ID was created as we expect. got := a.State.Proxy("web-proxy") - require.Equal(tt.proxy, got) + require.Equal(tt.proxy, got.Proxy) }) } } @@ -2394,7 +2394,7 @@ func TestAgent_RemoveProxy(t *testing.T) { // Test the ID was created as we expect. gotProxy := a.State.Proxy("web-proxy") - require.Equal(pReg, gotProxy) + require.Equal(pReg, gotProxy.Proxy) err := a.RemoveProxy("web-proxy", false) require.NoError(err) diff --git a/agent/consul/state/intention.go b/agent/consul/state/intention.go index 907bdf1ab..91a61ffe1 100644 --- a/agent/consul/state/intention.go +++ b/agent/consul/state/intention.go @@ -190,7 +190,10 @@ func (s *Store) intentionSetTxn(tx *memdb.Txn, idx uint64, ixn *structs.Intentio } if duplicate != nil { dupIxn := duplicate.(*structs.Intention) - return fmt.Errorf("duplicate intention found: %s", dupIxn.String()) + // Same ID is OK - this is an update + if dupIxn.ID != ixn.ID { + return fmt.Errorf("duplicate intention found: %s", dupIxn.String()) + } } // We always force meta to be non-nil so that we its an empty map. diff --git a/agent/consul/state/intention_test.go b/agent/consul/state/intention_test.go index 743f698af..fbf43c19b 100644 --- a/agent/consul/state/intention_test.go +++ b/agent/consul/state/intention_test.go @@ -42,7 +42,7 @@ func TestStore_IntentionSetGet_basic(t *testing.T) { } // Inserting a with empty ID is disallowed. - assert.Nil(s.IntentionSet(1, ixn)) + assert.NoError(s.IntentionSet(1, ixn)) // Make sure the index got updated. assert.Equal(uint64(1), s.maxIndex(intentionsTableName)) @@ -64,13 +64,18 @@ func TestStore_IntentionSetGet_basic(t *testing.T) { ws = memdb.NewWatchSet() idx, actual, err := s.IntentionGet(ws, ixn.ID) - assert.Nil(err) + assert.NoError(err) assert.Equal(expected.CreateIndex, idx) assert.Equal(expected, actual) // Change a value and test updating ixn.SourceNS = "foo" - assert.Nil(s.IntentionSet(2, ixn)) + assert.NoError(s.IntentionSet(2, ixn)) + + // Change a value that isn't in the unique 4 tuple and check we don't + // incorrectly consider this a duplicate when updating. + ixn.Action = structs.IntentionActionDeny + assert.NoError(s.IntentionSet(2, ixn)) // Make sure the index got updated. 
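Much of the churn in this commit swaps `assert.Nil(err)` for `assert.NoError(err)` (and `NotNil` for `Error`). The two styles pass and fail in the same situations for error values, but the error-specific helpers include the error text in the failure message, which is far more useful in CI logs. A deliberately failing sketch to show the difference (the quoted messages are approximate):

```go
package example

import (
	"errors"
	"testing"

	"github.com/stretchr/testify/assert"
)

// TestAssertStyles fails on purpose so both failure messages are visible.
func TestAssertStyles(t *testing.T) {
	err := errors.New("duplicate intention found")

	// Fails with roughly: "Expected nil, but got: &errors.errorString{...}"
	assert.Nil(t, err)

	// Fails with roughly: "Received unexpected error: duplicate intention found",
	// which reads much better when scanning test output.
	assert.NoError(t, err)
}
```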
assert.Equal(uint64(2), s.maxIndex(intentionsTableName)) @@ -78,10 +83,11 @@ func TestStore_IntentionSetGet_basic(t *testing.T) { // Read it back and verify the data was updated expected.SourceNS = ixn.SourceNS + expected.Action = structs.IntentionActionDeny expected.ModifyIndex = 2 ws = memdb.NewWatchSet() idx, actual, err = s.IntentionGet(ws, ixn.ID) - assert.Nil(err) + assert.NoError(err) assert.Equal(expected.ModifyIndex, idx) assert.Equal(expected, actual) @@ -97,7 +103,7 @@ func TestStore_IntentionSetGet_basic(t *testing.T) { // Duplicate 4-tuple should cause an error ws = memdb.NewWatchSet() - assert.NotNil(s.IntentionSet(3, ixn)) + assert.Error(s.IntentionSet(3, ixn)) // Make sure the index did NOT get updated. assert.Equal(uint64(2), s.maxIndex(intentionsTableName)) @@ -110,11 +116,11 @@ func TestStore_IntentionSet_emptyId(t *testing.T) { ws := memdb.NewWatchSet() _, _, err := s.IntentionGet(ws, testUUID()) - assert.Nil(err) + assert.NoError(err) // Inserting a with empty ID is disallowed. err = s.IntentionSet(1, &structs.Intention{}) - assert.NotNil(err) + assert.Error(err) assert.Contains(err.Error(), ErrMissingIntentionID.Error()) // Index is not updated if nothing is saved. @@ -134,16 +140,16 @@ func TestStore_IntentionSet_updateCreatedAt(t *testing.T) { } // Insert - assert.Nil(s.IntentionSet(1, &ixn)) + assert.NoError(s.IntentionSet(1, &ixn)) // Change a value and test updating ixnUpdate := ixn ixnUpdate.CreatedAt = now.Add(10 * time.Second) - assert.Nil(s.IntentionSet(2, &ixnUpdate)) + assert.NoError(s.IntentionSet(2, &ixnUpdate)) // Read it back and verify _, actual, err := s.IntentionGet(nil, ixn.ID) - assert.Nil(err) + assert.NoError(err) assert.Equal(now, actual.CreatedAt) } @@ -157,11 +163,11 @@ func TestStore_IntentionSet_metaNil(t *testing.T) { } // Insert - assert.Nil(s.IntentionSet(1, &ixn)) + assert.NoError(s.IntentionSet(1, &ixn)) // Read it back and verify _, actual, err := s.IntentionGet(nil, ixn.ID) - assert.Nil(err) + assert.NoError(err) assert.NotNil(actual.Meta) } @@ -176,11 +182,11 @@ func TestStore_IntentionSet_metaSet(t *testing.T) { } // Insert - assert.Nil(s.IntentionSet(1, &ixn)) + assert.NoError(s.IntentionSet(1, &ixn)) // Read it back and verify _, actual, err := s.IntentionGet(nil, ixn.ID) - assert.Nil(err) + assert.NoError(err) assert.Equal(ixn.Meta, actual.Meta) } @@ -191,18 +197,18 @@ func TestStore_IntentionDelete(t *testing.T) { // Call Get to populate the watch set ws := memdb.NewWatchSet() _, _, err := s.IntentionGet(ws, testUUID()) - assert.Nil(err) + assert.NoError(err) // Create ixn := &structs.Intention{ID: testUUID()} - assert.Nil(s.IntentionSet(1, ixn)) + assert.NoError(s.IntentionSet(1, ixn)) // Make sure the index got updated. assert.Equal(s.maxIndex(intentionsTableName), uint64(1)) assert.True(watchFired(ws), "watch fired") // Delete - assert.Nil(s.IntentionDelete(2, ixn.ID)) + assert.NoError(s.IntentionDelete(2, ixn.ID)) // Make sure the index got updated. assert.Equal(s.maxIndex(intentionsTableName), uint64(2)) @@ -210,7 +216,7 @@ func TestStore_IntentionDelete(t *testing.T) { // Sanity check to make sure it's not there. idx, actual, err := s.IntentionGet(nil, ixn.ID) - assert.Nil(err) + assert.NoError(err) assert.Equal(idx, uint64(2)) assert.Nil(actual) } @@ -222,7 +228,7 @@ func TestStore_IntentionsList(t *testing.T) { // Querying with no results returns nil. 
ws := memdb.NewWatchSet() idx, res, err := s.Intentions(ws) - assert.Nil(err) + assert.NoError(err) assert.Nil(res) assert.Equal(idx, uint64(0)) @@ -244,7 +250,7 @@ func TestStore_IntentionsList(t *testing.T) { // Create for i, ixn := range ixns { - assert.Nil(s.IntentionSet(uint64(1+i), ixn)) + assert.NoError(s.IntentionSet(uint64(1+i), ixn)) } assert.True(watchFired(ws), "watch fired") @@ -268,7 +274,7 @@ func TestStore_IntentionsList(t *testing.T) { }, } idx, actual, err := s.Intentions(nil) - assert.Nil(err) + assert.NoError(err) assert.Equal(idx, uint64(2)) assert.Equal(expected, actual) } @@ -386,7 +392,7 @@ func TestStore_IntentionMatch_table(t *testing.T) { } } - assert.Nil(s.IntentionSet(idx, ixn)) + assert.NoError(s.IntentionSet(idx, ixn)) idx++ } @@ -402,7 +408,7 @@ func TestStore_IntentionMatch_table(t *testing.T) { // Match _, matches, err := s.IntentionMatch(nil, args) - assert.Nil(err) + assert.NoError(err) // Should have equal lengths require.Len(t, matches, len(tc.Expected)) @@ -478,7 +484,7 @@ func TestStore_Intention_Snapshot_Restore(t *testing.T) { // Now create for i, ixn := range ixns { - assert.Nil(s.IntentionSet(uint64(4+i), ixn)) + assert.NoError(s.IntentionSet(uint64(4+i), ixn)) } // Snapshot the queries. @@ -486,7 +492,7 @@ func TestStore_Intention_Snapshot_Restore(t *testing.T) { defer snap.Close() // Alter the real state store. - assert.Nil(s.IntentionDelete(7, ixns[0].ID)) + assert.NoError(s.IntentionDelete(7, ixns[0].ID)) // Verify the snapshot. assert.Equal(snap.LastIndex(), uint64(6)) @@ -520,7 +526,7 @@ func TestStore_Intention_Snapshot_Restore(t *testing.T) { }, } dump, err := snap.Intentions() - assert.Nil(err) + assert.NoError(err) assert.Equal(expected, dump) // Restore the values into a new state store. @@ -528,13 +534,13 @@ func TestStore_Intention_Snapshot_Restore(t *testing.T) { s := testStateStore(t) restore := s.Restore() for _, ixn := range dump { - assert.Nil(restore.Intention(ixn)) + assert.NoError(restore.Intention(ixn)) } restore.Commit() // Read the restored values back out and verify that they match. 
idx, actual, err := s.Intentions(nil) - assert.Nil(err) + assert.NoError(err) assert.Equal(idx, uint64(6)) assert.Equal(expected, actual) }() diff --git a/agent/local/state_test.go b/agent/local/state_test.go index a8890a540..16975d963 100644 --- a/agent/local/state_test.go +++ b/agent/local/state_test.go @@ -6,10 +6,11 @@ import ( "log" "os" "reflect" - "sync" "testing" "time" + "github.com/hashicorp/go-memdb" + "github.com/stretchr/testify/require" "github.com/hashicorp/consul/agent" @@ -1708,20 +1709,6 @@ func TestStateProxyManagement(t *testing.T) { }, "fake-token-db") require.NoError(err) - // Record initial local modify index - lastModifyIndex := state.LocalModifyIndex() - assertModIndexUpdate := func(id string) { - t.Helper() - nowIndex := state.LocalModifyIndex() - assert.True(lastModifyIndex < nowIndex) - if id != "" { - p := state.Proxy(id) - require.NotNil(p) - assert.True(lastModifyIndex < p.ModifyIndex) - } - lastModifyIndex = nowIndex - } - // Should work now svc, err := state.AddProxy(&p1, "fake-token") require.NoError(err) @@ -1733,7 +1720,6 @@ func TestStateProxyManagement(t *testing.T) { assert.Equal("", svc.Address, "should have empty address by default") // Port is non-deterministic but could be either of 20000 or 20001 assert.Contains([]int{20000, 20001}, svc.Port) - assertModIndexUpdate(svc.ID) // Second proxy should claim other port p2 := p1 @@ -1742,7 +1728,6 @@ func TestStateProxyManagement(t *testing.T) { require.NoError(err) assert.Contains([]int{20000, 20001}, svc2.Port) assert.NotEqual(svc.Port, svc2.Port) - assertModIndexUpdate(svc2.ID) // Store this for later p2token := state.Proxy(svc2.ID).ProxyToken @@ -1762,7 +1747,6 @@ func TestStateProxyManagement(t *testing.T) { require.NoError(err) require.Equal("0.0.0.0", svc3.Address) require.Equal(1234, svc3.Port) - assertModIndexUpdate(svc3.ID) // Update config of an already registered proxy should work p3updated := p3 @@ -1770,14 +1754,8 @@ func TestStateProxyManagement(t *testing.T) { // Setup multiple watchers who should all witness the change gotP3 := state.Proxy(svc3.ID) require.NotNil(gotP3) - var watchWg sync.WaitGroup - for i := 0; i < 3; i++ { - watchWg.Add(1) - go func() { - <-gotP3.WatchCh - watchWg.Done() - }() - } + var ws memdb.WatchSet + ws.Add(gotP3.WatchCh) svc3, err = state.AddProxy(&p3updated, "fake-token") require.NoError(err) require.Equal("0.0.0.0", svc3.Address) @@ -1785,9 +1763,8 @@ func TestStateProxyManagement(t *testing.T) { gotProxy3 := state.Proxy(svc3.ID) require.NotNil(gotProxy3) require.Equal(p3updated.Config, gotProxy3.Proxy.Config) - assertModIndexUpdate(svc3.ID) // update must change mod index - // All watchers should have fired so this should not hang the test! 
- watchWg.Wait() + assert.False(ws.Watch(time.After(500*time.Millisecond)), + "watch should have fired so ws.Watch should not timeout") // Remove one of the auto-assigned proxies err = state.RemoveProxy(svc2.ID) @@ -1800,7 +1777,6 @@ func TestStateProxyManagement(t *testing.T) { require.NoError(err) assert.Contains([]int{20000, 20001}, svc2.Port) assert.Equal(svc4.Port, svc2.Port, "should get the same port back that we freed") - assertModIndexUpdate(svc4.ID) // Remove a proxy that doesn't exist should error err = state.RemoveProxy("nope") diff --git a/connect/proxy/listener.go b/connect/proxy/listener.go index c003cb19c..51ab761ca 100644 --- a/connect/proxy/listener.go +++ b/connect/proxy/listener.go @@ -87,7 +87,6 @@ func (l *Listener) Serve() error { go l.handleConn(conn) } - return nil } // handleConn is the internal connection handler goroutine. diff --git a/connect/testing.go b/connect/testing.go index 235ff6001..9f6e4f781 100644 --- a/connect/testing.go +++ b/connect/testing.go @@ -161,8 +161,6 @@ func (s *TestServer) Serve() error { c.Close() }(conn) } - - return nil } // ServeHTTPS runs an HTTPS server with the given config. It invokes the passed diff --git a/connect/tls_test.go b/connect/tls_test.go index 64c473c1e..d13b78661 100644 --- a/connect/tls_test.go +++ b/connect/tls_test.go @@ -16,7 +16,7 @@ func TestReloadableTLSConfig(t *testing.T) { // The dynamic config should be the one we loaded (with some different hooks) got := c.TLSConfig() - expect := *base + expect := base.Clone() // Equal and even cmp.Diff fail on tls.Config due to unexported fields in // each. Compare a few things to prove it's returning the bits we // specifically set. @@ -39,7 +39,7 @@ func TestReloadableTLSConfig(t *testing.T) { // Change the passed config to ensure SetTLSConfig made a copy otherwise this // is racey. - expect = *new + expect = new.Clone() new.Certificates = nil // The dynamic config should be the one we loaded (with some different hooks) From d8ac823ab16fdc6c06a6a499dc5d6f0d0fc6ad19 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 19 Apr 2018 13:01:20 +0100 Subject: [PATCH 122/539] Make test output more useful now we uses testify with multi-line error messages --- GNUmakefile | 10 +++++++--- 1 file changed, 7 insertions(+), 3 deletions(-) diff --git a/GNUmakefile b/GNUmakefile index 030fa003d..d77342892 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -80,10 +80,14 @@ test: other-consul dev-build vet @# _something_ to stop them terminating us due to inactivity... { go test $(GOTEST_FLAGS) -tags '$(GOTAGS)' -timeout 5m $(GOTEST_PKGS) 2>&1 ; echo $$? > exit-code ; } | tee test.log | egrep '^(ok|FAIL)\s*github.com/hashicorp/consul' @echo "Exit code: $$(cat exit-code)" >> test.log - @grep -A5 'DATA RACE' test.log || true + @# This prints all the race report between ====== lines + @awk '/^WARNING: DATA RACE/ {do_print=1; print "=================="} do_print==1 {print} /^={10,}/ {do_print=0}' test.log || true @grep -A10 'panic: test timed out' test.log || true - @grep -A1 -- '--- SKIP:' test.log || true - @grep -A1 -- '--- FAIL:' test.log || true + @# Prints all the failure output until the next non-indented line - testify + @# helpers often output multiple lines for readability but useless if we can't + @# see them. 
+ @awk '/--- SKIP/ {do_print=1} /^[^[:space:]]/ {do_print=0} do_print==1 {print}' test.log || true + @awk '/--- FAIL/ {do_print=1} /^[^[:space:]]/ {do_print=0} do_print==1 {print}' test.log || true @grep '^FAIL' test.log || true @if [ "$$(cat exit-code)" == "0" ] ; then echo "PASS" ; exit 0 ; else exit 1 ; fi From a90f69faa4aaae8badf64c87e7a255d6889d04ca Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Fri, 20 Apr 2018 14:24:24 +0100 Subject: [PATCH 123/539] Adds `api` client code and tests for new Proxy Config endpoint, registering with proxy and seeing proxy config in /agent/services list. --- GNUmakefile | 12 ++-- agent/agent_endpoint.go | 57 +++++++++++++------ agent/agent_endpoint_test.go | 64 +++++++++++---------- agent/structs/connect.go | 13 ----- agent/structs/service_definition.go | 8 +-- api/agent.go | 72 +++++++++++++++++++++++- api/agent_test.go | 87 ++++++++++++++++++++++++++++- api/api.go | 9 +++ 8 files changed, 251 insertions(+), 71 deletions(-) diff --git a/GNUmakefile b/GNUmakefile index d77342892..2c412d9e5 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -82,12 +82,14 @@ test: other-consul dev-build vet @echo "Exit code: $$(cat exit-code)" >> test.log @# This prints all the race report between ====== lines @awk '/^WARNING: DATA RACE/ {do_print=1; print "=================="} do_print==1 {print} /^={10,}/ {do_print=0}' test.log || true - @grep -A10 'panic: test timed out' test.log || true + @grep -A10 'panic: ' test.log || true @# Prints all the failure output until the next non-indented line - testify - @# helpers often output multiple lines for readability but useless if we can't - @# see them. - @awk '/--- SKIP/ {do_print=1} /^[^[:space:]]/ {do_print=0} do_print==1 {print}' test.log || true - @awk '/--- FAIL/ {do_print=1} /^[^[:space:]]/ {do_print=0} do_print==1 {print}' test.log || true + @# helpers often output multiple lines for readability but useless if we can't + @# see them. Un-intuitive order of matches is necessary. No || true because + @# awk always returns true even if there is no match and it breaks non-bash + @# shells locally. + @awk '/^[^[:space:]]/ {do_print=0} /--- SKIP/ {do_print=1} do_print==1 {print}' test.log + @awk '/^[^[:space:]]/ {do_print=0} /--- FAIL/ {do_print=1} do_print==1 {print}' test.log @grep '^FAIL' test.log || true @if [ "$$(cat exit-code)" == "0" ] ; then echo "PASS" ; exit 0 ; else exit 1 ; fi diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index c19b776ac..c1bf6fbe1 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -162,25 +162,48 @@ func (s *HTTPServer) AgentServices(resp http.ResponseWriter, req *http.Request) return nil, err } + proxies := s.agent.State.Proxies() + + // Convert into api.AgentService since that includes Connect config but so far + // NodeService doesn't need to internally. They are otherwise identical since + // that is the struct used in client for reading the one we output here + // anyway. 
+ agentSvcs := make(map[string]*api.AgentService) + // Use empty list instead of nil for id, s := range services { - if s.Tags == nil || s.Meta == nil { - clone := *s - if s.Tags == nil { - clone.Tags = make([]string, 0) - } else { - clone.Tags = s.Tags - } - if s.Meta == nil { - clone.Meta = make(map[string]string) - } else { - clone.Meta = s.Meta - } - services[id] = &clone + as := &api.AgentService{ + Kind: api.ServiceKind(s.Kind), + ID: s.ID, + Service: s.Service, + Tags: s.Tags, + Port: s.Port, + Address: s.Address, + EnableTagOverride: s.EnableTagOverride, + CreateIndex: s.CreateIndex, + ModifyIndex: s.ModifyIndex, + ProxyDestination: s.ProxyDestination, } + if as.Tags == nil { + as.Tags = []string{} + } + if as.Meta == nil { + as.Meta = map[string]string{} + } + // Attach Connect configs if the exist + if proxy, ok := proxies[id+"-proxy"]; ok { + as.Connect = &api.AgentServiceConnect{ + Proxy: &api.AgentServiceConnectProxy{ + ExecMode: api.ProxyExecMode(proxy.Proxy.ExecMode.String()), + Command: proxy.Proxy.Command, + Config: proxy.Proxy.Config, + }, + } + } + agentSvcs[id] = as } - return services, nil + return agentSvcs, nil } func (s *HTTPServer) AgentChecks(resp http.ResponseWriter, req *http.Request) (interface{}, error) { @@ -904,7 +927,7 @@ func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http. // // Returns the local proxy config for the identified proxy. Requires token= // param with the correct local ProxyToken (not ACL token). -func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http.Request) (interface{}, error) { +func (s *HTTPServer) ConnectProxyConfig(resp http.ResponseWriter, req *http.Request) (interface{}, error) { // Get the proxy ID. Note that this is the ID of a proxy's service instance. 
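With the `/v1/agent/services` conversion above, managed proxy details are now reported inline with each service, so an `api` package consumer can discover a service's proxy in a single call. A short usage sketch against a local agent, assuming the fields added in this commit:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}

	svcs, err := client.Agent().Services()
	if err != nil {
		panic(err)
	}

	for id, svc := range svcs {
		if svc.Connect == nil || svc.Connect.Proxy == nil {
			continue // not a Connect service with a managed proxy
		}
		fmt.Printf("%s: exec=%s command=%q config=%v\n",
			id, svc.Connect.Proxy.ExecMode, svc.Connect.Proxy.Command,
			svc.Connect.Proxy.Config)
	}
}
```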
id := strings.TrimPrefix(req.URL.Path, "/v1/agent/connect/proxy/") @@ -949,12 +972,12 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http } contentHash := fmt.Sprintf("%x", hash) - reply := &structs.ConnectManageProxyResponse{ + reply := &api.ConnectProxyConfig{ ProxyServiceID: proxy.Proxy.ProxyService.ID, TargetServiceID: target.ID, TargetServiceName: target.Service, ContentHash: contentHash, - ExecMode: proxy.Proxy.ExecMode.String(), + ExecMode: api.ProxyExecMode(proxy.Proxy.ExecMode.String()), Command: proxy.Proxy.Command, Config: proxy.Proxy.Config, } diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index b34ac508a..32cb6ab98 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -57,25 +57,39 @@ func TestAgent_Services(t *testing.T) { Tags: []string{"master"}, Port: 5000, } - a.State.AddService(srv1, "") + require.NoError(t, a.State.AddService(srv1, "")) + + // Add a managed proxy for that service + prxy1 := &structs.ConnectManagedProxy{ + ExecMode: structs.ProxyExecModeScript, + Command: "proxy.sh", + Config: map[string]interface{}{ + "bind_port": 1234, + "foo": "bar", + }, + TargetServiceID: "mysql", + } + _, err := a.State.AddProxy(prxy1, "") + require.NoError(t, err) req, _ := http.NewRequest("GET", "/v1/agent/services", nil) obj, err := a.srv.AgentServices(nil, req) if err != nil { t.Fatalf("Err: %v", err) } - val := obj.(map[string]*structs.NodeService) - if len(val) != 1 { - t.Fatalf("bad services: %v", obj) - } - if val["mysql"].Port != 5000 { - t.Fatalf("bad service: %v", obj) - } + val := obj.(map[string]*api.AgentService) + assert.Lenf(t, val, 1, "bad services: %v", obj) + assert.Equal(t, 5000, val["mysql"].Port) + assert.NotNil(t, val["mysql"].Connect) + assert.NotNil(t, val["mysql"].Connect.Proxy) + assert.Equal(t, prxy1.ExecMode.String(), string(val["mysql"].Connect.Proxy.ExecMode)) + assert.Equal(t, prxy1.Command, val["mysql"].Connect.Proxy.Command) + assert.Equal(t, prxy1.Config, val["mysql"].Connect.Proxy.Config) } // This tests that the agent services endpoint (/v1/agent/services) returns // Connect proxies. -func TestAgent_Services_ConnectProxy(t *testing.T) { +func TestAgent_Services_ExternalConnectProxy(t *testing.T) { t.Parallel() assert := assert.New(t) @@ -94,10 +108,10 @@ func TestAgent_Services_ConnectProxy(t *testing.T) { req, _ := http.NewRequest("GET", "/v1/agent/services", nil) obj, err := a.srv.AgentServices(nil, req) assert.Nil(err) - val := obj.(map[string]*structs.NodeService) + val := obj.(map[string]*api.AgentService) assert.Len(val, 1) actual := val["db-proxy"] - assert.Equal(structs.ServiceKindConnectProxy, actual.Kind) + assert.Equal(api.ServiceKindConnectProxy, actual.Kind) assert.Equal("db", actual.ProxyDestination) } @@ -120,7 +134,7 @@ func TestAgent_Services_ACLFilter(t *testing.T) { if err != nil { t.Fatalf("Err: %v", err) } - val := obj.(map[string]*structs.NodeService) + val := obj.(map[string]*api.AgentService) if len(val) != 0 { t.Fatalf("bad: %v", obj) } @@ -132,7 +146,7 @@ func TestAgent_Services_ACLFilter(t *testing.T) { if err != nil { t.Fatalf("Err: %v", err) } - val := obj.(map[string]*structs.NodeService) + val := obj.(map[string]*api.AgentService) if len(val) != 1 { t.Fatalf("bad: %v", obj) } @@ -1383,21 +1397,11 @@ func TestAgent_RegisterService_ManagedConnectProxy(t *testing.T) { // Register a proxy. Note that the destination doesn't exist here on // this agent or in the catalog at all. This is intended and part // of the design. 
- args := &structs.ServiceDefinition{ + args := &api.AgentServiceRegistration{ Name: "web", Port: 8000, - // This is needed just because empty check struct (not pointer) get json - // encoded as object with zero values and then decoded back to object with - // zero values _except that the header map is an empty map not a nil map_. - // So our check to see if s.Check.Empty() returns false since DeepEqual - // considers empty maps and nil maps to be different types. Then the request - // fails validation because the Check definition isn't valid... This is jank - // we should fix but it's another yak I don't want to shave right now. - Check: structs.CheckType{ - TTL: 15 * time.Second, - }, - Connect: &structs.ServiceDefinitionConnect{ - Proxy: &structs.ServiceDefinitionConnectProxy{ + Connect: &api.AgentServiceConnect{ + Proxy: &api.AgentServiceConnectProxy{ ExecMode: "script", Command: "proxy.sh", Config: map[string]interface{}{ @@ -2233,7 +2237,7 @@ func TestAgentConnectProxy(t *testing.T) { }, } - expectedResponse := &structs.ConnectManageProxyResponse{ + expectedResponse := &api.ConnectProxyConfig{ ProxyServiceID: "test-proxy", TargetServiceID: "test", TargetServiceName: "test", @@ -2254,7 +2258,7 @@ func TestAgentConnectProxy(t *testing.T) { ur, err := copystructure.Copy(expectedResponse) require.NoError(t, err) - updatedResponse := ur.(*structs.ConnectManageProxyResponse) + updatedResponse := ur.(*api.ConnectProxyConfig) updatedResponse.ContentHash = "22bc9233a52c08fd" upstreams := updatedResponse.Config["upstreams"].([]interface{}) upstreams = append(upstreams, @@ -2271,7 +2275,7 @@ func TestAgentConnectProxy(t *testing.T) { wantWait time.Duration wantCode int wantErr bool - wantResp *structs.ConnectManageProxyResponse + wantResp *api.ConnectProxyConfig }{ { name: "simple fetch", @@ -2338,7 +2342,7 @@ func TestAgentConnectProxy(t *testing.T) { go tt.updateFunc() } start := time.Now() - obj, err := a.srv.AgentConnectProxyConfig(resp, req) + obj, err := a.srv.ConnectProxyConfig(resp, req) elapsed := time.Now().Sub(start) if tt.wantErr { diff --git a/agent/structs/connect.go b/agent/structs/connect.go index d879718b2..5f907c1ab 100644 --- a/agent/structs/connect.go +++ b/agent/structs/connect.go @@ -103,16 +103,3 @@ func (p *ConnectManagedProxy) ParseConfig() (*ConnectManagedProxyConfig, error) } return &cfg, nil } - -// ConnectManageProxyResponse is the public response object we return for -// queries on local proxy config state. It's similar to ConnectManagedProxy but -// with some fields re-arranged. -type ConnectManageProxyResponse struct { - ProxyServiceID string - TargetServiceID string - TargetServiceName string - ContentHash string - ExecMode string - Command string - Config map[string]interface{} -} diff --git a/agent/structs/service_definition.go b/agent/structs/service_definition.go index ad77d8e3b..2ed424178 100644 --- a/agent/structs/service_definition.go +++ b/agent/structs/service_definition.go @@ -102,14 +102,14 @@ func (s *ServiceDefinition) CheckTypes() (checks CheckTypes, err error) { type ServiceDefinitionConnect struct { // TODO(banks) add way to specify that the app is connect-native // Proxy configures a connect proxy instance for the service - Proxy *ServiceDefinitionConnectProxy `json:"proxy,omitempty" hcl:"proxy" mapstructure:"proxy"` + Proxy *ServiceDefinitionConnectProxy } // ServiceDefinitionConnectProxy is the connect proxy config within a service // registration. Note this is duplicated in config.ServiceConnectProxy and needs // to be kept in sync. 
type ServiceDefinitionConnectProxy struct { - Command string `json:"command,omitempty" hcl:"command" mapstructure:"command"` - ExecMode string `json:"exec_mode,omitempty" hcl:"exec_mode" mapstructure:"exec_mode"` - Config map[string]interface{} `json:"config,omitempty" hcl:"config" mapstructure:"config"` + Command string + ExecMode string + Config map[string]interface{} } diff --git a/api/agent.go b/api/agent.go index 6b662fa2c..a81fd96f8 100644 --- a/api/agent.go +++ b/api/agent.go @@ -21,6 +21,23 @@ const ( ServiceKindConnectProxy ServiceKind = "connect-proxy" ) +// ProxyExecMode is the execution mode for a managed Connect proxy. +type ProxyExecMode string + +const ( + // ProxyExecModeDaemon indicates that the proxy command should be long-running + // and should be started and supervised by the agent until it's target service + // is deregistered. + ProxyExecModeDaemon ProxyExecMode = "daemon" + + // ProxyExecModeScript indicates that the proxy command should be invoke to + // completion on each change to the configuration of lifecycle event. The + // script typically fetches the config and certificates from the agent API and + // then configures an externally managed daemon, perhaps starting and stopping + // it if necessary. + ProxyExecModeScript ProxyExecMode = "script" +) + // AgentCheck represents a check known to the agent type AgentCheck struct { Node string @@ -47,6 +64,20 @@ type AgentService struct { CreateIndex uint64 ModifyIndex uint64 ProxyDestination string + Connect *AgentServiceConnect +} + +// AgentServiceConnect represents the Connect configuration of a service. +type AgentServiceConnect struct { + Proxy *AgentServiceConnectProxy +} + +// AgentServiceConnectProxy represents the Connect Proxy configuration of a +// service. +type AgentServiceConnectProxy struct { + ExecMode ProxyExecMode + Command string + Config map[string]interface{} } // AgentMember represents a cluster member known to the agent @@ -89,7 +120,8 @@ type AgentServiceRegistration struct { Meta map[string]string `json:",omitempty"` Check *AgentServiceCheck Checks AgentServiceChecks - ProxyDestination string `json:",omitempty"` + ProxyDestination string `json:",omitempty"` + Connect *AgentServiceConnect `json:",omitempty"` } // AgentCheckRegistration is used to register a new check @@ -185,6 +217,18 @@ type AgentAuthorize struct { Reason string } +// ConnectProxyConfig is the response structure for agent-local proxy +// configuration. +type ConnectProxyConfig struct { + ProxyServiceID string + TargetServiceID string + TargetServiceName string + ContentHash string + ExecMode ProxyExecMode + Command string + Config map[string]interface{} +} + // Agent can be used to query the Agent endpoints type Agent struct { c *Client @@ -286,6 +330,7 @@ func (a *Agent) Services() (map[string]*AgentService, error) { if err := decodeBody(resp, &out); err != nil { return nil, err } + return out, nil } @@ -587,6 +632,31 @@ func (a *Agent) ConnectCALeaf(serviceID string, q *QueryOptions) (*LeafCert, *Qu return &out, qm, nil } +// ConnectProxyConfig gets the configuration for a local managed proxy instance. +// +// Note that this uses an unconventional blocking mechanism since it's +// agent-local state. That means there is no persistent raft index so we block +// based on object hash instead. 
+func (a *Agent) ConnectProxyConfig(proxyServiceID string, q *QueryOptions) (*ConnectProxyConfig, *QueryMeta, error) { + r := a.c.newRequest("GET", "/v1/agent/connect/proxy/"+proxyServiceID) + r.setQueryOptions(q) + rtt, resp, err := requireOK(a.c.doRequest(r)) + if err != nil { + return nil, nil, err + } + defer resp.Body.Close() + + qm := &QueryMeta{} + parseQueryMeta(resp, qm) + qm.RequestTime = rtt + + var out ConnectProxyConfig + if err := decodeBody(resp, &out); err != nil { + return nil, nil, err + } + return &out, qm, nil +} + // EnableServiceMaintenance toggles service maintenance mode on // for the given service ID. func (a *Agent) EnableServiceMaintenance(serviceID, reason string) error { diff --git a/api/agent_test.go b/api/agent_test.go index 6186bffe3..01d35ae15 100644 --- a/api/agent_test.go +++ b/api/agent_test.go @@ -186,7 +186,64 @@ func TestAPI_AgentServices(t *testing.T) { } } -func TestAPI_AgentServices_ConnectProxy(t *testing.T) { +func TestAPI_AgentServices_ManagedConnectProxy(t *testing.T) { + t.Parallel() + c, s := makeClient(t) + defer s.Stop() + + agent := c.Agent() + + reg := &AgentServiceRegistration{ + Name: "foo", + Tags: []string{"bar", "baz"}, + Port: 8000, + Check: &AgentServiceCheck{ + TTL: "15s", + }, + Connect: &AgentServiceConnect{ + Proxy: &AgentServiceConnectProxy{ + ExecMode: ProxyExecModeScript, + Command: "foo.rb", + Config: map[string]interface{}{ + "foo": "bar", + }, + }, + }, + } + if err := agent.ServiceRegister(reg); err != nil { + t.Fatalf("err: %v", err) + } + + services, err := agent.Services() + if err != nil { + t.Fatalf("err: %v", err) + } + if _, ok := services["foo"]; !ok { + t.Fatalf("missing service: %v", services) + } + checks, err := agent.Checks() + if err != nil { + t.Fatalf("err: %v", err) + } + chk, ok := checks["service:foo"] + if !ok { + t.Fatalf("missing check: %v", checks) + } + + // Checks should default to critical + if chk.Status != HealthCritical { + t.Fatalf("Bad: %#v", chk) + } + + // Proxy config should be present in response + require.Equal(t, reg.Connect, services["foo"].Connect) + + if err := agent.ServiceDeregister("foo"); err != nil { + t.Fatalf("err: %v", err) + } +} + +func TestAPI_AgentServices_ExternalConnectProxy(t *testing.T) { t.Parallel() c, s := makeClient(t) defer s.Stop() @@ -1019,3 +1076,31 @@ func TestAPI_AgentConnectAuthorize(t *testing.T) { require.True(auth.Authorized) require.Equal(auth.Reason, "ACLs disabled, access is allowed by default") } + +func TestAPI_AgentConnectProxyConfig(t *testing.T) { + t.Parallel() + c, s := makeClient(t) + defer s.Stop() + + agent := c.Agent() + reg := &AgentServiceRegistration{ + Name: "foo", + Tags: []string{"bar", "baz"}, + Port: 8000, + Check: &AgentServiceCheck{ + CheckID: "foo-ttl", + TTL: "15s", + }, + } + if err := agent.ServiceRegister(reg); err != nil { + t.Fatalf("err: %v", err) + } + + checks, err := agent.Checks() + if err != nil { + t.Fatalf("err: %v", err) + } + if _, ok := checks["foo-ttl"]; !ok { + t.Fatalf("missing check: %v", checks) + } +} diff --git a/api/api.go b/api/api.go index 1cdc21e33..6f3034d90 100644 --- a/api/api.go +++ b/api/api.go @@ -82,6 +82,12 @@ type QueryOptions struct { // until the timeout or the next index is reached WaitIndex uint64 + // WaitHash is used by some endpoints instead of WaitIndex to perform blocking + // on state based on a hash of the response rather than a monotonic index. + // This is required when the state being blocked on is not stored in Raft, for + // example agent-local proxy configuration. 
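The `WaitHash` query option introduced in this commit, together with the `ConnectProxyConfig` method above, lets a managed proxy process run a blocking watch loop through the `api` package instead of raw HTTP. A rough usage sketch (the proxy ID and the re-render step are placeholders):

```go
package main

import (
	"fmt"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}
	agent := client.Agent()

	lastHash := ""
	for {
		cfg, _, err := agent.ConnectProxyConfig("web-proxy", &api.QueryOptions{
			WaitHash: lastHash, // empty on the first pass, so it returns immediately
			WaitTime: 10 * time.Minute,
		})
		if err != nil {
			fmt.Println("fetch failed, retrying:", err)
			time.Sleep(time.Second)
			continue
		}
		if cfg.ContentHash != lastHash {
			fmt.Printf("config changed (%s): %v\n", cfg.ContentHash, cfg.Config)
			lastHash = cfg.ContentHash
			// ...re-render the proxy configuration here...
		}
	}
}
```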
+ WaitHash string + // WaitTime is used to bound the duration of a wait. // Defaults to that of the Config, but can be overridden. WaitTime time.Duration @@ -533,6 +539,9 @@ func (r *request) setQueryOptions(q *QueryOptions) { if q.WaitTime != 0 { r.params.Set("wait", durToMsec(q.WaitTime)) } + if q.WaitHash != "" { + r.params.Set("hash", q.WaitHash) + } if q.Token != "" { r.header.Set("X-Consul-Token", q.Token) } From f7ff16669fe466c217d245fd3c6d3d80ef6643a1 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 6 Apr 2018 17:13:22 -0700 Subject: [PATCH 124/539] Add the Connect CA config to the state store --- agent/consul/state/connect_ca.go | 124 ++++++++++++++++++++++- agent/consul/state/connect_ca_test.go | 137 ++++++++++++++++++++++++++ 2 files changed, 260 insertions(+), 1 deletion(-) diff --git a/agent/consul/state/connect_ca.go b/agent/consul/state/connect_ca.go index 95e763b8b..f5962b084 100644 --- a/agent/consul/state/connect_ca.go +++ b/agent/consul/state/connect_ca.go @@ -8,9 +8,28 @@ import ( ) const ( - caRootTableName = "connect-ca-roots" + caConfigTableName = "connect-ca-config" + caRootTableName = "connect-ca-roots" ) +// caConfigTableSchema returns a new table schema used for storing +// the CA config for Connect. +func caConfigTableSchema() *memdb.TableSchema { + return &memdb.TableSchema{ + Name: caConfigTableName, + Indexes: map[string]*memdb.IndexSchema{ + "id": &memdb.IndexSchema{ + Name: "id", + AllowMissing: true, + Unique: true, + Indexer: &memdb.ConditionalIndex{ + Conditional: func(obj interface{}) (bool, error) { return true, nil }, + }, + }, + }, + } +} + // caRootTableSchema returns a new table schema used for storing // CA roots for Connect. func caRootTableSchema() *memdb.TableSchema { @@ -30,9 +49,112 @@ func caRootTableSchema() *memdb.TableSchema { } func init() { + registerSchema(caConfigTableSchema) registerSchema(caRootTableSchema) } +// CAConfig is used to pull the CA config from the snapshot. +func (s *Snapshot) CAConfig() (*structs.CAConfiguration, error) { + c, err := s.tx.First("connect-ca-config", "id") + if err != nil { + return nil, err + } + + config, ok := c.(*structs.CAConfiguration) + if !ok { + return nil, nil + } + + return config, nil +} + +// CAConfig is used when restoring from a snapshot. +func (s *Restore) CAConfig(config *structs.CAConfiguration) error { + if err := s.tx.Insert("connect-ca-config", config); err != nil { + return fmt.Errorf("failed restoring CA config: %s", err) + } + + return nil +} + +// CAConfig is used to get the current Autopilot configuration. +func (s *Store) CAConfig() (uint64, *structs.CAConfiguration, error) { + tx := s.db.Txn(false) + defer tx.Abort() + + // Get the autopilot config + c, err := tx.First("connect-ca-config", "id") + if err != nil { + return 0, nil, fmt.Errorf("failed CA config lookup: %s", err) + } + + config, ok := c.(*structs.CAConfiguration) + if !ok { + return 0, nil, nil + } + + return config.ModifyIndex, config, nil +} + +// CASetConfig is used to set the current Autopilot configuration. +func (s *Store) CASetConfig(idx uint64, config *structs.CAConfiguration) error { + tx := s.db.Txn(true) + defer tx.Abort() + + s.caSetConfigTxn(idx, tx, config) + + tx.Commit() + return nil +} + +// CACheckAndSetConfig is used to try updating the CA configuration with a +// given Raft index. 
If the CAS index specified is not equal to the last observed index +// for the config, then the call is a noop, +func (s *Store) CACheckAndSetConfig(idx, cidx uint64, config *structs.CAConfiguration) (bool, error) { + tx := s.db.Txn(true) + defer tx.Abort() + + // Check for an existing config + existing, err := tx.First("connect-ca-config", "id") + if err != nil { + return false, fmt.Errorf("failed CA config lookup: %s", err) + } + + // If the existing index does not match the provided CAS + // index arg, then we shouldn't update anything and can safely + // return early here. + e, ok := existing.(*structs.CAConfiguration) + if !ok || e.ModifyIndex != cidx { + return false, nil + } + + s.caSetConfigTxn(idx, tx, config) + + tx.Commit() + return true, nil +} + +func (s *Store) caSetConfigTxn(idx uint64, tx *memdb.Txn, config *structs.CAConfiguration) error { + // Check for an existing config + existing, err := tx.First("connect-ca-config", "id") + if err != nil { + return fmt.Errorf("failed CA config lookup: %s", err) + } + + // Set the indexes. + if existing != nil { + config.CreateIndex = existing.(*structs.CAConfiguration).CreateIndex + } else { + config.CreateIndex = idx + } + config.ModifyIndex = idx + + if err := tx.Insert("connect-ca-config", config); err != nil { + return fmt.Errorf("failed updating CA config: %s", err) + } + return nil +} + // CARoots is used to pull all the CA roots for the snapshot. func (s *Snapshot) CARoots() (structs.CARoots, error) { ixns, err := s.tx.Get(caRootTableName, "id") diff --git a/agent/consul/state/connect_ca_test.go b/agent/consul/state/connect_ca_test.go index cd77eac7c..cd37f526b 100644 --- a/agent/consul/state/connect_ca_test.go +++ b/agent/consul/state/connect_ca_test.go @@ -1,14 +1,151 @@ package state import ( + "reflect" "testing" + "time" "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-memdb" + "github.com/pascaldekloe/goe/verify" "github.com/stretchr/testify/assert" ) +func TestStore_CAConfig(t *testing.T) { + s := testStateStore(t) + + expected := &structs.CAConfiguration{ + Provider: "consul", + Config: map[string]interface{}{ + "PrivateKey": "asdf", + "RootCert": "qwer", + "RotationPeriod": 90 * 24 * time.Hour, + }, + } + + if err := s.CASetConfig(0, expected); err != nil { + t.Fatal(err) + } + + idx, config, err := s.CAConfig() + if err != nil { + t.Fatal(err) + } + if idx != 0 { + t.Fatalf("bad: %d", idx) + } + if !reflect.DeepEqual(expected, config) { + t.Fatalf("bad: %#v, %#v", expected, config) + } +} + +func TestStore_CAConfigCAS(t *testing.T) { + s := testStateStore(t) + + expected := &structs.CAConfiguration{ + Provider: "consul", + } + + if err := s.CASetConfig(0, expected); err != nil { + t.Fatal(err) + } + // Do an extra operation to move the index up by 1 for the + // check-and-set operation after this + if err := s.CASetConfig(1, expected); err != nil { + t.Fatal(err) + } + + // Do a CAS with an index lower than the entry + ok, err := s.CACheckAndSetConfig(2, 0, &structs.CAConfiguration{ + Provider: "static", + }) + if ok || err != nil { + t.Fatalf("expected (false, nil), got: (%v, %#v)", ok, err) + } + + // Check that the index is untouched and the entry + // has not been updated. 
+ idx, config, err := s.CAConfig() + if err != nil { + t.Fatal(err) + } + if idx != 1 { + t.Fatalf("bad: %d", idx) + } + if config.Provider != "consul" { + t.Fatalf("bad: %#v", config) + } + + // Do another CAS, this time with the correct index + ok, err = s.CACheckAndSetConfig(2, 1, &structs.CAConfiguration{ + Provider: "static", + }) + if !ok || err != nil { + t.Fatalf("expected (true, nil), got: (%v, %#v)", ok, err) + } + + // Make sure the config was updated + idx, config, err = s.CAConfig() + if err != nil { + t.Fatal(err) + } + if idx != 2 { + t.Fatalf("bad: %d", idx) + } + if config.Provider != "static" { + t.Fatalf("bad: %#v", config) + } +} + +func TestStore_CAConfig_Snapshot_Restore(t *testing.T) { + s := testStateStore(t) + before := &structs.CAConfiguration{ + Provider: "consul", + Config: map[string]interface{}{ + "PrivateKey": "asdf", + "RootCert": "qwer", + "RotationPeriod": 90 * 24 * time.Hour, + }, + } + if err := s.CASetConfig(99, before); err != nil { + t.Fatal(err) + } + + snap := s.Snapshot() + defer snap.Close() + + after := &structs.CAConfiguration{ + Provider: "static", + Config: map[string]interface{}{}, + } + if err := s.CASetConfig(100, after); err != nil { + t.Fatal(err) + } + + snapped, err := snap.CAConfig() + if err != nil { + t.Fatalf("err: %s", err) + } + verify.Values(t, "", before, snapped) + + s2 := testStateStore(t) + restore := s2.Restore() + if err := restore.CAConfig(snapped); err != nil { + t.Fatalf("err: %s", err) + } + restore.Commit() + + idx, res, err := s2.CAConfig() + if err != nil { + t.Fatalf("err: %s", err) + } + if idx != 99 { + t.Fatalf("bad index: %d", idx) + } + verify.Values(t, "", before, res) +} + func TestStore_CARootSetList(t *testing.T) { assert := assert.New(t) s := testStateStore(t) From ebdda17a301c035e5d2baae3e46a9fdd84f85819 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 6 Apr 2018 17:58:45 -0700 Subject: [PATCH 125/539] Add CA config set to fsm operations --- agent/consul/fsm/commands_oss.go | 13 ++++- agent/consul/fsm/commands_oss_test.go | 70 ++++++++++++++++++++++++++- 2 files changed, 81 insertions(+), 2 deletions(-) diff --git a/agent/consul/fsm/commands_oss.go b/agent/consul/fsm/commands_oss.go index 2d2627748..a5ef33efc 100644 --- a/agent/consul/fsm/commands_oss.go +++ b/agent/consul/fsm/commands_oss.go @@ -283,7 +283,18 @@ func (c *FSM) applyConnectCAOperation(buf []byte, index uint64) interface{} { defer metrics.MeasureSinceWithLabels([]string{"fsm", "ca"}, time.Now(), []metrics.Label{{Name: "op", Value: string(req.Op)}}) switch req.Op { - case structs.CAOpSet: + case structs.CAOpSetConfig: + if req.Config.ModifyIndex != 0 { + act, err := c.state.CACheckAndSetConfig(index, req.Config.ModifyIndex, req.Config) + if err != nil { + return err + } + + return act + } + + return c.state.CASetConfig(index, req.Config) + case structs.CAOpSetRoots: act, err := c.state.CARootSetCAS(index, req.Index, req.Roots) if err != nil { return err diff --git a/agent/consul/fsm/commands_oss_test.go b/agent/consul/fsm/commands_oss_test.go index 81852a9c4..a6552240c 100644 --- a/agent/consul/fsm/commands_oss_test.go +++ b/agent/consul/fsm/commands_oss_test.go @@ -15,6 +15,7 @@ import ( "github.com/hashicorp/consul/types" "github.com/hashicorp/go-uuid" "github.com/hashicorp/serf/coordinate" + "github.com/mitchellh/mapstructure" "github.com/pascaldekloe/goe/verify" "github.com/stretchr/testify/assert" ) @@ -1219,6 +1220,73 @@ func TestFSM_Intention_CRUD(t *testing.T) { } } +func TestFSM_CAConfig(t *testing.T) { + t.Parallel() + + 
assert := assert.New(t) + fsm, err := New(nil, os.Stderr) + assert.Nil(err) + + // Set the autopilot config using a request. + req := structs.CARequest{ + Op: structs.CAOpSetConfig, + Config: &structs.CAConfiguration{ + Provider: "consul", + Config: map[string]interface{}{ + "PrivateKey": "asdf", + "RootCert": "qwer", + "RotationPeriod": 90 * 24 * time.Hour, + }, + }, + } + buf, err := structs.Encode(structs.ConnectCARequestType, req) + assert.Nil(err) + resp := fsm.Apply(makeLog(buf)) + if _, ok := resp.(error); ok { + t.Fatalf("bad: %v", resp) + } + + // Verify key is set directly in the state store. + _, config, err := fsm.state.CAConfig() + if err != nil { + t.Fatalf("err: %v", err) + } + var conf *connect.ConsulCAProviderConfig + if err := mapstructure.WeakDecode(config.Config, &conf); err != nil { + t.Fatalf("error decoding config: %s, %v", err, config.Config) + } + if got, want := config.Provider, req.Config.Provider; got != want { + t.Fatalf("got %v, want %v", got, want) + } + if got, want := conf.PrivateKey, "asdf"; got != want { + t.Fatalf("got %v, want %v", got, want) + } + if got, want := conf.RootCert, "qwer"; got != want { + t.Fatalf("got %v, want %v", got, want) + } + if got, want := conf.RotationPeriod, 90*24*time.Hour; got != want { + t.Fatalf("got %v, want %v", got, want) + } + + // Now use CAS and provide an old index + req.Config.Provider = "static" + req.Config.ModifyIndex = config.ModifyIndex - 1 + buf, err = structs.Encode(structs.ConnectCARequestType, req) + if err != nil { + t.Fatalf("err: %v", err) + } + resp = fsm.Apply(makeLog(buf)) + if _, ok := resp.(error); ok { + t.Fatalf("bad: %v", resp) + } + + _, config, err = fsm.state.CAConfig() + assert.Nil(err) + if config.Provider != "static" { + t.Fatalf("bad: %v", config.Provider) + } +} + func TestFSM_CARoots(t *testing.T) { t.Parallel() @@ -1233,7 +1301,7 @@ func TestFSM_CARoots(t *testing.T) { // Create a new request. req := structs.CARequest{ - Op: structs.CAOpSet, + Op: structs.CAOpSetRoots, Roots: []*structs.CARoot{ca1, ca2}, } From 4d0713d5bb8c0580a13e49e6564a392c13bf18ee Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Sun, 8 Apr 2018 21:56:11 -0700 Subject: [PATCH 126/539] Add the CA provider interface and built-in provider --- agent/connect/ca_provider.go | 278 +++++++++++++++++++++++++++++++++++ 1 file changed, 278 insertions(+) create mode 100644 agent/connect/ca_provider.go diff --git a/agent/connect/ca_provider.go b/agent/connect/ca_provider.go new file mode 100644 index 000000000..2aa1881f8 --- /dev/null +++ b/agent/connect/ca_provider.go @@ -0,0 +1,278 @@ +package connect + +import ( + "bytes" + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/x509" + "crypto/x509/pkix" + "encoding/pem" + "fmt" + "math/big" + "net/url" + "sync" + "sync/atomic" + "time" + + "github.com/hashicorp/consul/agent/structs" + uuid "github.com/hashicorp/go-uuid" + "github.com/mitchellh/mapstructure" +) + +// CAProvider is the interface for Consul to interact with +// an external CA that provides leaf certificate signing for +// given SpiffeIDServices. 
+type CAProvider interface { + SetConfiguration(raw map[string]interface{}) error + ActiveRoot() (*structs.CARoot, error) + ActiveIntermediate() (*structs.CARoot, error) + RotateIntermediate() error + Sign(*SpiffeIDService, *x509.CertificateRequest) (*structs.IssuedCert, error) +} + +type ConsulCAProviderConfig struct { + PrivateKey string + RootCert string + RotationPeriod time.Duration +} + +type ConsulCAProvider struct { + config *ConsulCAProviderConfig + + // todo(kyhavlov): store these directly in the state store + // and pass a reference to the state to this provider instead of + // having these values here + privateKey string + caRoot *structs.CARoot + caIndex uint64 + sync.RWMutex +} + +func NewConsulCAProvider(rawConfig map[string]interface{}) (*ConsulCAProvider, error) { + provider := &ConsulCAProvider{} + provider.SetConfiguration(rawConfig) + + return provider, nil +} + +func (c *ConsulCAProvider) SetConfiguration(raw map[string]interface{}) error { + conf, err := decodeConfig(raw) + if err != nil { + return err + } + + c.config = conf + return nil +} + +func decodeConfig(raw map[string]interface{}) (*ConsulCAProviderConfig, error) { + var config *ConsulCAProviderConfig + if err := mapstructure.WeakDecode(raw, &config); err != nil { + return nil, fmt.Errorf("error decoding config: %s", err) + } + + return config, nil +} + +func (c *ConsulCAProvider) ActiveRoot() (*structs.CARoot, error) { + if c.privateKey == "" { + pk, err := generatePrivateKey() + if err != nil { + return nil, err + } + c.privateKey = pk + } + + if c.caRoot == nil { + ca, err := c.generateCA() + if err != nil { + return nil, err + } + c.caRoot = ca + } + + return c.caRoot, nil +} + +func (c *ConsulCAProvider) ActiveIntermediate() (*structs.CARoot, error) { + return c.ActiveRoot() +} + +func (c *ConsulCAProvider) RotateIntermediate() error { + ca, err := c.generateCA() + if err != nil { + return err + } + c.caRoot = ca + + return nil +} + +// Sign returns a new certificate valid for the given SpiffeIDService +// using the current CA. +func (c *ConsulCAProvider) Sign(serviceId *SpiffeIDService, csr *x509.CertificateRequest) (*structs.IssuedCert, error) { + // The serial number for the cert. + // todo(kyhavlov): increment this based on raft index once the provider uses + // the state store directly + sn, err := rand.Int(rand.Reader, (&big.Int{}).Exp(big.NewInt(2), big.NewInt(159), nil)) + if err != nil { + return nil, fmt.Errorf("error generating serial number: %s", err) + } + + // Create the keyId for the cert from the signing public key. 
+ signer, err := ParseSigner(c.privateKey) + if err != nil { + return nil, err + } + if signer == nil { + return nil, fmt.Errorf("error signing cert: Consul CA not initialized yet") + } + keyId, err := KeyId(signer.Public()) + if err != nil { + return nil, err + } + + // Parse the CA cert + caCert, err := ParseCert(c.caRoot.RootCert) + if err != nil { + return nil, fmt.Errorf("error parsing CA cert: %s", err) + } + + // Cert template for generation + template := x509.Certificate{ + SerialNumber: sn, + Subject: pkix.Name{CommonName: serviceId.Service}, + URIs: csr.URIs, + Signature: csr.Signature, + SignatureAlgorithm: csr.SignatureAlgorithm, + PublicKeyAlgorithm: csr.PublicKeyAlgorithm, + PublicKey: csr.PublicKey, + BasicConstraintsValid: true, + KeyUsage: x509.KeyUsageDataEncipherment | + x509.KeyUsageKeyAgreement | + x509.KeyUsageDigitalSignature | + x509.KeyUsageKeyEncipherment, + ExtKeyUsage: []x509.ExtKeyUsage{ + x509.ExtKeyUsageClientAuth, + x509.ExtKeyUsageServerAuth, + }, + NotAfter: time.Now().Add(3 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: keyId, + SubjectKeyId: keyId, + } + + // Create the certificate, PEM encode it and return that value. + var buf bytes.Buffer + bs, err := x509.CreateCertificate( + rand.Reader, &template, caCert, signer.Public(), signer) + if err != nil { + return nil, fmt.Errorf("error generating certificate: %s", err) + } + err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) + if err != nil { + return nil, fmt.Errorf("error encoding private key: %s", err) + } + + // Set the response + return &structs.IssuedCert{ + SerialNumber: HexString(template.SerialNumber.Bytes()), + CertPEM: buf.String(), + Service: serviceId.Service, + ServiceURI: template.URIs[0].String(), + ValidAfter: template.NotBefore, + ValidBefore: template.NotAfter, + }, nil +} + +// generatePrivateKey returns a new private key +func generatePrivateKey() (string, error) { + var pk *ecdsa.PrivateKey + + // If we have no key, then create a new one. 
+ pk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + if err != nil { + return "", fmt.Errorf("error generating private key: %s", err) + } + + bs, err := x509.MarshalECPrivateKey(pk) + if err != nil { + return "", fmt.Errorf("error generating private key: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: bs}) + if err != nil { + return "", fmt.Errorf("error encoding private key: %s", err) + } + + return buf.String(), nil +} + +// generateCA makes a new root CA using the given private key +func (c *ConsulCAProvider) generateCA() (*structs.CARoot, error) { + privKey, err := ParseSigner(c.privateKey) + if err != nil { + return nil, err + } + + name := fmt.Sprintf("Consul CA %d", atomic.AddUint64(&c.caIndex, 1)) + + // The serial number for the cert + sn, err := testSerialNumber() + if err != nil { + return nil, err + } + + // The URI (SPIFFE compatible) for the cert + id := &SpiffeIDSigning{ClusterID: testClusterID, Domain: "consul"} + keyId, err := KeyId(privKey.Public()) + if err != nil { + return nil, err + } + + // Create the CA cert + template := x509.Certificate{ + SerialNumber: sn, + Subject: pkix.Name{CommonName: name}, + URIs: []*url.URL{id.URI()}, + PermittedDNSDomainsCritical: true, + PermittedDNSDomains: []string{id.URI().Hostname()}, + BasicConstraintsValid: true, + KeyUsage: x509.KeyUsageCertSign | + x509.KeyUsageCRLSign | + x509.KeyUsageDigitalSignature, + IsCA: true, + NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: keyId, + SubjectKeyId: keyId, + } + + bs, err := x509.CreateCertificate( + rand.Reader, &template, &template, privKey.Public(), privKey) + if err != nil { + return nil, fmt.Errorf("error generating CA certificate: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) + if err != nil { + return nil, fmt.Errorf("error encoding private key: %s", err) + } + + // Generate an ID for the new intermediate + rootId, err := uuid.GenerateUUID() + if err != nil { + return nil, err + } + + return &structs.CARoot{ + ID: rootId, + Name: name, + RootCert: buf.String(), + SigningKey: c.privateKey, + Active: true, + }, nil +} From e26819ed9c00aaa426b85a800fe6d45f2119caec Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Sun, 8 Apr 2018 21:56:46 -0700 Subject: [PATCH 127/539] Add the bootstrap config for the CA --- agent/consul/config.go | 14 ++++++++++++++ 1 file changed, 14 insertions(+) diff --git a/agent/consul/config.go b/agent/consul/config.go index 6966b5628..df4e55e42 100644 --- a/agent/consul/config.go +++ b/agent/consul/config.go @@ -8,6 +8,7 @@ import ( "time" "github.com/hashicorp/consul/agent/consul/autopilot" + "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/lib" "github.com/hashicorp/consul/tlsutil" "github.com/hashicorp/consul/types" @@ -346,6 +347,10 @@ type Config struct { // autopilot tasks, such as promoting eligible non-voters and removing // dead servers. AutopilotInterval time.Duration + + // CAConfig is used to apply the initial Connect CA configuration when + // bootstrapping. + CAConfig *structs.CAConfiguration } // CheckProtocolVersion validates the protocol version. 
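As a point of reference for the `CAConfig` field added above, the provider-specific `Config` map is what the built-in provider decodes with `mapstructure.WeakDecode` (see `decodeConfig` in the previous commit). The following standalone sketch is illustrative only: the local struct simply mirrors `ConsulCAProviderConfig` so the example compiles on its own, and the map values are shaped like the defaults that `DefaultConfig` sets in the next hunk.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/mitchellh/mapstructure"
)

// Local mirror of the provider's typed config (ConsulCAProviderConfig),
// redeclared here only so this sketch runs on its own.
type consulCAProviderConfig struct {
	PrivateKey     string
	RootCert       string
	RotationPeriod time.Duration
}

func main() {
	// Shaped like the built-in provider defaults for CAConfig.
	raw := map[string]interface{}{
		"PrivateKey":     "",
		"RootCert":       "",
		"RotationPeriod": 90 * 24 * time.Hour,
	}

	var conf consulCAProviderConfig
	if err := mapstructure.WeakDecode(raw, &conf); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("decoded rotation period: %s\n", conf.RotationPeriod)
}
```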
@@ -427,6 +432,15 @@ func DefaultConfig() *Config { ServerHealthInterval: 2 * time.Second, AutopilotInterval: 10 * time.Second, + + CAConfig: &structs.CAConfiguration{ + Provider: "consul", + Config: map[string]interface{}{ + "PrivateKey": "", + "RootCert": "", + "RotationPeriod": 90 * 24 * time.Hour, + }, + }, } // Increase our reap interval to 3 days instead of 24h. From a40db26ffeea869ee9d9e6d6472e47caff28315c Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Sun, 8 Apr 2018 21:57:32 -0700 Subject: [PATCH 128/539] Add CA bootstrapping on establishing leadership --- agent/consul/leader.go | 107 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 107 insertions(+) diff --git a/agent/consul/leader.go b/agent/consul/leader.go index d950d71ba..516201262 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -1,12 +1,15 @@ package consul import ( + "crypto/x509" "fmt" "net" "strconv" "sync" "time" + "github.com/hashicorp/consul/agent/connect" + "github.com/armon/go-metrics" "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/consul/autopilot" @@ -210,6 +213,10 @@ func (s *Server) establishLeadership() error { s.getOrCreateAutopilotConfig() s.autopilot.Start() + + // todo(kyhavlov): start a goroutine here for handling periodic CA rotation + s.bootstrapCA() + s.setConsistentReadReady() return nil } @@ -359,6 +366,106 @@ func (s *Server) getOrCreateAutopilotConfig() *autopilot.Config { return config } +// getOrCreateCAConfig is used to get the CA config, initializing it if necessary +func (s *Server) getOrCreateCAConfig() (*structs.CAConfiguration, error) { + state := s.fsm.State() + _, config, err := state.CAConfig() + if err != nil { + return nil, err + } + if config != nil { + return config, nil + } + + config = s.config.CAConfig + req := structs.CARequest{ + Op: structs.CAOpSetConfig, + Config: config, + } + if _, err = s.raftApply(structs.ConnectCARequestType, req); err != nil { + return nil, err + } + + return config, nil +} + +// bootstrapCA handles the initialization of a new CA provider +func (s *Server) bootstrapCA() error { + conf, err := s.getOrCreateCAConfig() + if err != nil { + return err + } + + // Initialize the right provider based on the config + var provider connect.CAProvider + switch conf.Provider { + case structs.ConsulCAProvider: + provider, err = connect.NewConsulCAProvider(conf.Config) + if err != nil { + return err + } + default: + return fmt.Errorf("unknown CA provider %q", conf.Provider) + } + + s.caProviderLock.Lock() + s.caProvider = provider + s.caProviderLock.Unlock() + + // Get the intermediate cert from the CA + trustedCA, err := provider.ActiveIntermediate() + if err != nil { + return fmt.Errorf("error getting intermediate cert: %v", err) + } + + // Check if this CA is already initialized + state := s.fsm.State() + _, root, err := state.CARootActive(nil) + if err != nil { + return err + } + // Exit early if the root is already in the state store. 
+ if root != nil && root.ID == trustedCA.ID { + return nil + } + + // Get the highest index + idx, _, err := state.CARoots(nil) + if err != nil { + return err + } + + // Store the intermediate in raft + resp, err := s.raftApply(structs.ConnectCARequestType, &structs.CARequest{ + Op: structs.CAOpSetRoots, + Index: idx, + Roots: []*structs.CARoot{trustedCA}, + }) + if err != nil { + s.logger.Printf("[ERR] connect: Apply failed %v", err) + return err + } + if respErr, ok := resp.(error); ok { + return respErr + } + + s.logger.Printf("[INFO] connect: initialized CA with provider %q", conf.Provider) + + return nil +} + +// signConnectCert signs a cert for a service using the currently configured CA provider +func (s *Server) signConnectCert(service *connect.SpiffeIDService, csr *x509.CertificateRequest) (*structs.IssuedCert, error) { + s.caProviderLock.RLock() + defer s.caProviderLock.RUnlock() + + cert, err := s.caProvider.Sign(service, csr) + if err != nil { + return nil, err + } + return cert, nil +} + // reconcileReaped is used to reconcile nodes that have failed and been reaped // from Serf but remain in the catalog. This is done by looking for unknown nodes with serfHealth checks registered. // We generate a "reap" event to cause the node to be cleaned up. From fc9ef9741b267b502a48f533ebf5f1e069236b20 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Sun, 8 Apr 2018 21:58:31 -0700 Subject: [PATCH 129/539] Hook the CA RPC endpoint into the provider interface --- agent/consul/connect_ca_endpoint.go | 208 +++++++--------------------- agent/consul/server.go | 6 + agent/structs/connect_ca.go | 26 +++- 3 files changed, 78 insertions(+), 162 deletions(-) diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 4efdafc06..84cffc85d 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -1,22 +1,13 @@ package consul import ( - "bytes" - "crypto/rand" - "crypto/x509" - "crypto/x509/pkix" - "encoding/pem" "fmt" - "math/big" - "time" + "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/consul/state" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-memdb" - "github.com/hashicorp/go-uuid" - "github.com/mitchellh/go-testing-interface" - "github.com/mitchellh/mapstructure" ) // ConnectCA manages the Connect CA. @@ -25,81 +16,54 @@ type ConnectCA struct { srv *Server } -// ConfigurationSet updates the configuration for the CA. -// -// NOTE(mitchellh): This whole implementation is temporary until the real -// CA plugin work comes in. For now, this is only used to configure a single -// static CA root. -func (s *ConnectCA) ConfigurationSet( - args *structs.CAConfiguration, - reply *interface{}) error { - // NOTE(mitchellh): This is the temporary hardcoding of a static CA - // provider. This will allow us to test agent implementations and so on - // with an incomplete CA for now. 
- if args.Provider != "static" { - return fmt.Errorf("The CA provider can only be 'static' for now") - } - - // Config is the configuration allowed for our static provider - var config struct { - Name string - CertPEM string - PrivateKeyPEM string - Generate bool - } - if err := mapstructure.Decode(args.Config, &config); err != nil { - return fmt.Errorf("error decoding config: %s", err) - } - - // Basic validation so demos aren't super jank - if config.Name == "" { - return fmt.Errorf("Name must be set") - } - if config.CertPEM == "" || config.PrivateKeyPEM == "" { - if !config.Generate { - return fmt.Errorf( - "CertPEM and PrivateKeyPEM must be set, or Generate must be true") - } - } - - // Convenience to auto-generate the cert - if config.Generate { - ca := connect.TestCA(&testing.RuntimeT{}, nil) - config.CertPEM = ca.RootCert - config.PrivateKeyPEM = ca.SigningKey - } - - // TODO(mitchellh): verify that the private key is valid for the cert - - // Generate an ID for this - id, err := uuid.GenerateUUID() - if err != nil { +// ConfigurationGet returns the configuration for the CA. +func (s *ConnectCA) ConfigurationGet( + args *structs.DCSpecificRequest, + reply *structs.CAConfiguration) error { + if done, err := s.srv.forward("ConnectCA.ConfigurationGet", args, args, reply); done { return err } - // Get the highest index + // This action requires operator read access. + rule, err := s.srv.resolveToken(args.Token) + if err != nil { + return err + } + if rule != nil && !rule.OperatorRead() { + return acl.ErrPermissionDenied + } + state := s.srv.fsm.State() - idx, _, err := state.CARoots(nil) + _, config, err := state.CAConfig() if err != nil { return err } + *reply = *config + + return nil +} + +// ConfigurationSet updates the configuration for the CA. +func (s *ConnectCA) ConfigurationSet( + args *structs.CARequest, + reply *interface{}) error { + if done, err := s.srv.forward("ConnectCA.ConfigurationSet", args, args, reply); done { + return err + } + + // This action requires operator read access. + rule, err := s.srv.resolveToken(args.Token) + if err != nil { + return err + } + if rule != nil && !rule.OperatorWrite() { + return acl.ErrPermissionDenied + } // Commit - resp, err := s.srv.raftApply(structs.ConnectCARequestType, &structs.CARequest{ - Op: structs.CAOpSet, - Index: idx, - Roots: []*structs.CARoot{ - &structs.CARoot{ - ID: id, - Name: config.Name, - RootCert: config.CertPEM, - SigningKey: config.PrivateKeyPEM, - Active: true, - }, - }, - }) + args.Op = structs.CAOpSetConfig + resp, err := s.srv.raftApply(structs.ConnectCARequestType, args) if err != nil { - s.srv.logger.Printf("[ERR] consul.test: Apply failed %v", err) return err } if respErr, ok := resp.(error); ok { @@ -157,13 +121,13 @@ func (s *ConnectCA) Roots( } // Sign signs a certificate for a service. -// -// NOTE(mitchellh): There is a LOT missing from this. I do next to zero -// validation of the incoming CSR, the way the cert is signed probably -// isn't right, we're not using enough of the CSR fields, etc. 
func (s *ConnectCA) Sign( args *structs.CASignRequest, reply *structs.IssuedCert) error { + if done, err := s.srv.forward("ConnectCA.Sign", args, args, reply); done { + return err + } + // Parse the CSR csr, err := connect.ParseCSR(args.CSR) if err != nil { @@ -180,93 +144,15 @@ func (s *ConnectCA) Sign( return fmt.Errorf("SPIFFE ID in CSR must be a service ID") } - // Get the currently active root - state := s.srv.fsm.State() - _, root, err := state.CARootActive(nil) + // todo(kyhavlov): more validation on the CSR before signing + + cert, err := s.srv.signConnectCert(serviceId, csr) if err != nil { return err } - if root == nil { - return fmt.Errorf("no active CA found") - } - - // Determine the signing certificate. It is the set signing cert - // unless that is empty, in which case it is identically to the public - // cert. - certPem := root.SigningCert - if certPem == "" { - certPem = root.RootCert - } - - // Parse the CA cert and signing key from the root - caCert, err := connect.ParseCert(certPem) - if err != nil { - return fmt.Errorf("error parsing CA cert: %s", err) - } - signer, err := connect.ParseSigner(root.SigningKey) - if err != nil { - return fmt.Errorf("error parsing signing key: %s", err) - } - - // The serial number for the cert. NOTE(mitchellh): in the final - // implementation this should be monotonically increasing based on - // some raft state. - sn, err := rand.Int(rand.Reader, (&big.Int{}).Exp(big.NewInt(2), big.NewInt(159), nil)) - if err != nil { - return fmt.Errorf("error generating serial number: %s", err) - } - - // Create the keyId for the cert from the signing public key. - keyId, err := connect.KeyId(signer.Public()) - if err != nil { - return err - } - - // Cert template for generation - template := x509.Certificate{ - SerialNumber: sn, - Subject: pkix.Name{CommonName: serviceId.Service}, - URIs: csr.URIs, - Signature: csr.Signature, - SignatureAlgorithm: csr.SignatureAlgorithm, - PublicKeyAlgorithm: csr.PublicKeyAlgorithm, - PublicKey: csr.PublicKey, - BasicConstraintsValid: true, - KeyUsage: x509.KeyUsageDataEncipherment | - x509.KeyUsageKeyAgreement | - x509.KeyUsageDigitalSignature | - x509.KeyUsageKeyEncipherment, - ExtKeyUsage: []x509.ExtKeyUsage{ - x509.ExtKeyUsageClientAuth, - x509.ExtKeyUsageServerAuth, - }, - NotAfter: time.Now().Add(3 * 24 * time.Hour), - NotBefore: time.Now(), - AuthorityKeyId: keyId, - SubjectKeyId: keyId, - } - - // Create the certificate, PEM encode it and return that value. 
- var buf bytes.Buffer - bs, err := x509.CreateCertificate( - rand.Reader, &template, caCert, signer.Public(), signer) - if err != nil { - return fmt.Errorf("error generating certificate: %s", err) - } - err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) - if err != nil { - return fmt.Errorf("error encoding private key: %s", err) - } // Set the response - *reply = structs.IssuedCert{ - SerialNumber: connect.HexString(template.SerialNumber.Bytes()), - CertPEM: buf.String(), - Service: serviceId.Service, - ServiceURI: template.URIs[0].String(), - ValidAfter: template.NotBefore, - ValidBefore: template.NotAfter, - } + *reply = *cert return nil } diff --git a/agent/consul/server.go b/agent/consul/server.go index 23fbf337c..fef016829 100644 --- a/agent/consul/server.go +++ b/agent/consul/server.go @@ -17,6 +17,8 @@ import ( "sync/atomic" "time" + "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/consul/autopilot" "github.com/hashicorp/consul/agent/consul/fsm" @@ -96,6 +98,10 @@ type Server struct { // autopilotWaitGroup is used to block until Autopilot shuts down. autopilotWaitGroup sync.WaitGroup + // caProvider is the current CA provider in use for Connect. + caProvider connect.CAProvider + caProviderLock sync.RWMutex + // Consul configuration config *Config diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 5ac8a0fc2..af8f82653 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -96,7 +96,8 @@ type IssuedCert struct { type CAOp string const ( - CAOpSet CAOp = "set" + CAOpSetRoots CAOp = "set-roots" + CAOpSetConfig CAOp = "set-config" ) // CARequest is used to modify connect CA data. This is used by the @@ -106,14 +107,33 @@ type CARequest struct { // other fields are required. Op CAOp + // Datacenter is the target for this request. + Datacenter string + // Index is used by CAOpSet for a CAS operation. Index uint64 // Roots is a list of roots. This is used for CAOpSet. One root must // always be active. Roots []*CARoot + + // Config is the configuration for the current CA plugin. + Config *CAConfiguration + + // WriteRequest is a common struct containing ACL tokens and other + // write-related common elements for requests. + WriteRequest } +// RequestDatacenter returns the datacenter for a given request. +func (q *CARequest) RequestDatacenter() string { + return q.Datacenter +} + +const ( + ConsulCAProvider = "consul" +) + // CAConfiguration is the configuration for the current CA plugin. type CAConfiguration struct { // Provider is the CA provider implementation to use. @@ -123,4 +143,8 @@ type CAConfiguration struct { // should only contain primitive values and containers (such as lists // and maps). Config map[string]interface{} + + // CreateIndex/ModifyIndex store the create/modify indexes of this configuration. 
+ CreateIndex uint64 + ModifyIndex uint64 } From 9fefac745ea41bf11fe1c6d2d89b7df11305fad2 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Sun, 8 Apr 2018 21:59:08 -0700 Subject: [PATCH 130/539] Update the CA config endpoint to enable GETs --- agent/connect_ca_endpoint.go | 22 ++++++++++++++++++++-- agent/http_oss.go | 2 +- 2 files changed, 21 insertions(+), 3 deletions(-) diff --git a/agent/connect_ca_endpoint.go b/agent/connect_ca_endpoint.go index 43eeb8644..979005df1 100644 --- a/agent/connect_ca_endpoint.go +++ b/agent/connect_ca_endpoint.go @@ -26,6 +26,9 @@ func (s *HTTPServer) ConnectCARoots(resp http.ResponseWriter, req *http.Request) // /v1/connect/ca/configuration func (s *HTTPServer) ConnectCAConfiguration(resp http.ResponseWriter, req *http.Request) (interface{}, error) { switch req.Method { + case "GET": + return s.ConnectCAConfigurationGet(resp, req) + case "PUT": return s.ConnectCAConfigurationSet(resp, req) @@ -34,12 +37,27 @@ func (s *HTTPServer) ConnectCAConfiguration(resp http.ResponseWriter, req *http. } } +// GEt /v1/connect/ca/configuration +func (s *HTTPServer) ConnectCAConfigurationGet(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Method is tested in ConnectCAConfiguration + var args structs.DCSpecificRequest + if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { + return nil, nil + } + + var reply structs.CAConfiguration + err := s.agent.RPC("ConnectCA.ConfigurationGet", &args, &reply) + return reply, err +} + // PUT /v1/connect/ca/configuration func (s *HTTPServer) ConnectCAConfigurationSet(resp http.ResponseWriter, req *http.Request) (interface{}, error) { // Method is tested in ConnectCAConfiguration - var args structs.CAConfiguration - if err := decodeBody(req, &args, nil); err != nil { + var args structs.CARequest + s.parseDC(req, &args.Datacenter) + s.parseToken(req, &args.Token) + if err := decodeBody(req, &args.Config, nil); err != nil { resp.WriteHeader(http.StatusBadRequest) fmt.Fprintf(resp, "Request decode failed: %v", err) return nil, nil diff --git a/agent/http_oss.go b/agent/http_oss.go index 774388ad3..124a26875 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -43,7 +43,7 @@ func init() { registerEndpoint("/v1/catalog/services", []string{"GET"}, (*HTTPServer).CatalogServices) registerEndpoint("/v1/catalog/service/", []string{"GET"}, (*HTTPServer).CatalogServiceNodes) registerEndpoint("/v1/catalog/node/", []string{"GET"}, (*HTTPServer).CatalogNodeServices) - registerEndpoint("/v1/connect/ca/configuration", []string{"PUT"}, (*HTTPServer).ConnectCAConfiguration) + registerEndpoint("/v1/connect/ca/configuration", []string{"GET", "PUT"}, (*HTTPServer).ConnectCAConfiguration) registerEndpoint("/v1/connect/ca/roots", []string{"GET"}, (*HTTPServer).ConnectCARoots) registerEndpoint("/v1/connect/intentions", []string{"GET", "POST"}, (*HTTPServer).IntentionEndpoint) registerEndpoint("/v1/connect/intentions/match", []string{"GET"}, (*HTTPServer).IntentionMatch) From 80eddb0bfb25ad0396bc43ee330e253450200727 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Sun, 8 Apr 2018 21:59:59 -0700 Subject: [PATCH 131/539] Fix the testing endpoint's root set op --- agent/consul/testing_endpoint.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/agent/consul/testing_endpoint.go b/agent/consul/testing_endpoint.go index e47e0e737..6e3cec12f 100644 --- a/agent/consul/testing_endpoint.go +++ b/agent/consul/testing_endpoint.go @@ -27,7 +27,7 @@ func (s *Test) ConnectCASetRoots( // Commit resp, err 
:= s.srv.raftApply(structs.ConnectCARequestType, &structs.CARequest{ - Op: structs.CAOpSet, + Op: structs.CAOpSetRoots, Index: idx, Roots: args, }) From a585a0ba102dac88bd4a610239ce84f233fae16c Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 20 Apr 2018 01:30:34 -0700 Subject: [PATCH 132/539] Have the built in CA store its state in raft --- agent/connect/ca_provider.go | 262 +--------------------- agent/consul/connect_ca_endpoint.go | 1 + agent/consul/connect_ca_provider.go | 322 ++++++++++++++++++++++++++++ agent/consul/fsm/commands_oss.go | 7 + agent/consul/leader.go | 17 +- agent/consul/state/connect_ca.go | 123 +++++++++-- agent/structs/connect_ca.go | 27 ++- 7 files changed, 474 insertions(+), 285 deletions(-) create mode 100644 agent/consul/connect_ca_provider.go diff --git a/agent/connect/ca_provider.go b/agent/connect/ca_provider.go index 2aa1881f8..ca0ccf9b0 100644 --- a/agent/connect/ca_provider.go +++ b/agent/connect/ca_provider.go @@ -1,23 +1,9 @@ package connect import ( - "bytes" - "crypto/ecdsa" - "crypto/elliptic" - "crypto/rand" "crypto/x509" - "crypto/x509/pkix" - "encoding/pem" - "fmt" - "math/big" - "net/url" - "sync" - "sync/atomic" - "time" "github.com/hashicorp/consul/agent/structs" - uuid "github.com/hashicorp/go-uuid" - "github.com/mitchellh/mapstructure" ) // CAProvider is the interface for Consul to interact with @@ -27,252 +13,6 @@ type CAProvider interface { SetConfiguration(raw map[string]interface{}) error ActiveRoot() (*structs.CARoot, error) ActiveIntermediate() (*structs.CARoot, error) - RotateIntermediate() error + GenerateIntermediate() (*structs.CARoot, error) Sign(*SpiffeIDService, *x509.CertificateRequest) (*structs.IssuedCert, error) } - -type ConsulCAProviderConfig struct { - PrivateKey string - RootCert string - RotationPeriod time.Duration -} - -type ConsulCAProvider struct { - config *ConsulCAProviderConfig - - // todo(kyhavlov): store these directly in the state store - // and pass a reference to the state to this provider instead of - // having these values here - privateKey string - caRoot *structs.CARoot - caIndex uint64 - sync.RWMutex -} - -func NewConsulCAProvider(rawConfig map[string]interface{}) (*ConsulCAProvider, error) { - provider := &ConsulCAProvider{} - provider.SetConfiguration(rawConfig) - - return provider, nil -} - -func (c *ConsulCAProvider) SetConfiguration(raw map[string]interface{}) error { - conf, err := decodeConfig(raw) - if err != nil { - return err - } - - c.config = conf - return nil -} - -func decodeConfig(raw map[string]interface{}) (*ConsulCAProviderConfig, error) { - var config *ConsulCAProviderConfig - if err := mapstructure.WeakDecode(raw, &config); err != nil { - return nil, fmt.Errorf("error decoding config: %s", err) - } - - return config, nil -} - -func (c *ConsulCAProvider) ActiveRoot() (*structs.CARoot, error) { - if c.privateKey == "" { - pk, err := generatePrivateKey() - if err != nil { - return nil, err - } - c.privateKey = pk - } - - if c.caRoot == nil { - ca, err := c.generateCA() - if err != nil { - return nil, err - } - c.caRoot = ca - } - - return c.caRoot, nil -} - -func (c *ConsulCAProvider) ActiveIntermediate() (*structs.CARoot, error) { - return c.ActiveRoot() -} - -func (c *ConsulCAProvider) RotateIntermediate() error { - ca, err := c.generateCA() - if err != nil { - return err - } - c.caRoot = ca - - return nil -} - -// Sign returns a new certificate valid for the given SpiffeIDService -// using the current CA. 
-func (c *ConsulCAProvider) Sign(serviceId *SpiffeIDService, csr *x509.CertificateRequest) (*structs.IssuedCert, error) { - // The serial number for the cert. - // todo(kyhavlov): increment this based on raft index once the provider uses - // the state store directly - sn, err := rand.Int(rand.Reader, (&big.Int{}).Exp(big.NewInt(2), big.NewInt(159), nil)) - if err != nil { - return nil, fmt.Errorf("error generating serial number: %s", err) - } - - // Create the keyId for the cert from the signing public key. - signer, err := ParseSigner(c.privateKey) - if err != nil { - return nil, err - } - if signer == nil { - return nil, fmt.Errorf("error signing cert: Consul CA not initialized yet") - } - keyId, err := KeyId(signer.Public()) - if err != nil { - return nil, err - } - - // Parse the CA cert - caCert, err := ParseCert(c.caRoot.RootCert) - if err != nil { - return nil, fmt.Errorf("error parsing CA cert: %s", err) - } - - // Cert template for generation - template := x509.Certificate{ - SerialNumber: sn, - Subject: pkix.Name{CommonName: serviceId.Service}, - URIs: csr.URIs, - Signature: csr.Signature, - SignatureAlgorithm: csr.SignatureAlgorithm, - PublicKeyAlgorithm: csr.PublicKeyAlgorithm, - PublicKey: csr.PublicKey, - BasicConstraintsValid: true, - KeyUsage: x509.KeyUsageDataEncipherment | - x509.KeyUsageKeyAgreement | - x509.KeyUsageDigitalSignature | - x509.KeyUsageKeyEncipherment, - ExtKeyUsage: []x509.ExtKeyUsage{ - x509.ExtKeyUsageClientAuth, - x509.ExtKeyUsageServerAuth, - }, - NotAfter: time.Now().Add(3 * 24 * time.Hour), - NotBefore: time.Now(), - AuthorityKeyId: keyId, - SubjectKeyId: keyId, - } - - // Create the certificate, PEM encode it and return that value. - var buf bytes.Buffer - bs, err := x509.CreateCertificate( - rand.Reader, &template, caCert, signer.Public(), signer) - if err != nil { - return nil, fmt.Errorf("error generating certificate: %s", err) - } - err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) - if err != nil { - return nil, fmt.Errorf("error encoding private key: %s", err) - } - - // Set the response - return &structs.IssuedCert{ - SerialNumber: HexString(template.SerialNumber.Bytes()), - CertPEM: buf.String(), - Service: serviceId.Service, - ServiceURI: template.URIs[0].String(), - ValidAfter: template.NotBefore, - ValidBefore: template.NotAfter, - }, nil -} - -// generatePrivateKey returns a new private key -func generatePrivateKey() (string, error) { - var pk *ecdsa.PrivateKey - - // If we have no key, then create a new one. 
- pk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) - if err != nil { - return "", fmt.Errorf("error generating private key: %s", err) - } - - bs, err := x509.MarshalECPrivateKey(pk) - if err != nil { - return "", fmt.Errorf("error generating private key: %s", err) - } - - var buf bytes.Buffer - err = pem.Encode(&buf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: bs}) - if err != nil { - return "", fmt.Errorf("error encoding private key: %s", err) - } - - return buf.String(), nil -} - -// generateCA makes a new root CA using the given private key -func (c *ConsulCAProvider) generateCA() (*structs.CARoot, error) { - privKey, err := ParseSigner(c.privateKey) - if err != nil { - return nil, err - } - - name := fmt.Sprintf("Consul CA %d", atomic.AddUint64(&c.caIndex, 1)) - - // The serial number for the cert - sn, err := testSerialNumber() - if err != nil { - return nil, err - } - - // The URI (SPIFFE compatible) for the cert - id := &SpiffeIDSigning{ClusterID: testClusterID, Domain: "consul"} - keyId, err := KeyId(privKey.Public()) - if err != nil { - return nil, err - } - - // Create the CA cert - template := x509.Certificate{ - SerialNumber: sn, - Subject: pkix.Name{CommonName: name}, - URIs: []*url.URL{id.URI()}, - PermittedDNSDomainsCritical: true, - PermittedDNSDomains: []string{id.URI().Hostname()}, - BasicConstraintsValid: true, - KeyUsage: x509.KeyUsageCertSign | - x509.KeyUsageCRLSign | - x509.KeyUsageDigitalSignature, - IsCA: true, - NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), - NotBefore: time.Now(), - AuthorityKeyId: keyId, - SubjectKeyId: keyId, - } - - bs, err := x509.CreateCertificate( - rand.Reader, &template, &template, privKey.Public(), privKey) - if err != nil { - return nil, fmt.Errorf("error generating CA certificate: %s", err) - } - - var buf bytes.Buffer - err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) - if err != nil { - return nil, fmt.Errorf("error encoding private key: %s", err) - } - - // Generate an ID for the new intermediate - rootId, err := uuid.GenerateUUID() - if err != nil { - return nil, err - } - - return &structs.CARoot{ - ID: rootId, - Name: name, - RootCert: buf.String(), - SigningKey: c.privateKey, - Active: true, - }, nil -} diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 84cffc85d..d0c582165 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -61,6 +61,7 @@ func (s *ConnectCA) ConfigurationSet( } // Commit + // todo(kyhavlov): trigger a bootstrap here when the provider changes args.Op = structs.CAOpSetConfig resp, err := s.srv.raftApply(structs.ConnectCARequestType, args) if err != nil { diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go new file mode 100644 index 000000000..9beb6bfac --- /dev/null +++ b/agent/consul/connect_ca_provider.go @@ -0,0 +1,322 @@ +package consul + +import ( + "bytes" + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/x509" + "crypto/x509/pkix" + "encoding/pem" + "fmt" + "math/big" + "net/url" + "sync" + "time" + + "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/agent/structs" + uuid "github.com/hashicorp/go-uuid" + "github.com/mitchellh/mapstructure" +) + +type ConsulCAProviderConfig struct { + PrivateKey string + RootCert string + RotationPeriod time.Duration +} + +type ConsulCAProvider struct { + config *ConsulCAProviderConfig + + // todo(kyhavlov): store these directly in the state store + // and pass a reference to the state 
to this provider instead of + // having these values here + srv *Server + sync.RWMutex +} + +func NewConsulCAProvider(rawConfig map[string]interface{}, srv *Server) (*ConsulCAProvider, error) { + provider := &ConsulCAProvider{srv: srv} + provider.SetConfiguration(rawConfig) + + return provider, nil +} + +func (c *ConsulCAProvider) SetConfiguration(raw map[string]interface{}) error { + conf, err := decodeConfig(raw) + if err != nil { + return err + } + + c.config = conf + return nil +} + +func decodeConfig(raw map[string]interface{}) (*ConsulCAProviderConfig, error) { + var config *ConsulCAProviderConfig + if err := mapstructure.WeakDecode(raw, &config); err != nil { + return nil, fmt.Errorf("error decoding config: %s", err) + } + + return config, nil +} + +// Return the active root CA and generate a new one if needed +func (c *ConsulCAProvider) ActiveRoot() (*structs.CARoot, error) { + state := c.srv.fsm.State() + _, providerState, err := state.CAProviderState() + if err != nil { + return nil, err + } + + var update bool + var newState structs.CAConsulProviderState + if providerState != nil { + newState = *providerState + } + + // Generate a private key if needed + if providerState == nil || providerState.PrivateKey == "" { + pk, err := generatePrivateKey() + if err != nil { + return nil, err + } + newState.PrivateKey = pk + update = true + } + + // Generate a root CA if needed + if providerState == nil || providerState.CARoot == nil { + ca, err := c.generateCA(newState.PrivateKey, newState.RootIndex+1) + if err != nil { + return nil, err + } + newState.CARoot = ca + newState.RootIndex += 1 + update = true + } + + // Update the provider state if we generated a new private key/cert + if update { + args := &structs.CARequest{ + Op: structs.CAOpSetProviderState, + ProviderState: &newState, + } + resp, err := c.srv.raftApply(structs.ConnectCARequestType, args) + if err != nil { + return nil, err + } + if respErr, ok := resp.(error); ok { + return nil, respErr + } + } + return newState.CARoot, nil +} + +func (c *ConsulCAProvider) ActiveIntermediate() (*structs.CARoot, error) { + return c.ActiveRoot() +} + +func (c *ConsulCAProvider) GenerateIntermediate() (*structs.CARoot, error) { + state := c.srv.fsm.State() + _, providerState, err := state.CAProviderState() + if err != nil { + return nil, err + } + if providerState == nil { + return nil, fmt.Errorf("CA provider not yet initialized") + } + + ca, err := c.generateCA(providerState.PrivateKey, providerState.RootIndex+1) + if err != nil { + return nil, err + } + + return ca, nil +} + +// Sign returns a new certificate valid for the given SpiffeIDService +// using the current CA. +func (c *ConsulCAProvider) Sign(serviceId *connect.SpiffeIDService, csr *x509.CertificateRequest) (*structs.IssuedCert, error) { + // Get the provider state + state := c.srv.fsm.State() + _, providerState, err := state.CAProviderState() + if err != nil { + return nil, err + } + + // Create the keyId for the cert from the signing public key. 
+ signer, err := connect.ParseSigner(providerState.PrivateKey) + if err != nil { + return nil, err + } + if signer == nil { + return nil, fmt.Errorf("error signing cert: Consul CA not initialized yet") + } + keyId, err := connect.KeyId(signer.Public()) + if err != nil { + return nil, err + } + + // Parse the CA cert + caCert, err := connect.ParseCert(providerState.CARoot.RootCert) + if err != nil { + return nil, fmt.Errorf("error parsing CA cert: %s", err) + } + + // Cert template for generation + sn := &big.Int{} + sn.SetUint64(providerState.LeafIndex + 1) + template := x509.Certificate{ + SerialNumber: sn, + Subject: pkix.Name{CommonName: serviceId.Service}, + URIs: csr.URIs, + Signature: csr.Signature, + SignatureAlgorithm: csr.SignatureAlgorithm, + PublicKeyAlgorithm: csr.PublicKeyAlgorithm, + PublicKey: csr.PublicKey, + BasicConstraintsValid: true, + KeyUsage: x509.KeyUsageDataEncipherment | + x509.KeyUsageKeyAgreement | + x509.KeyUsageDigitalSignature | + x509.KeyUsageKeyEncipherment, + ExtKeyUsage: []x509.ExtKeyUsage{ + x509.ExtKeyUsageClientAuth, + x509.ExtKeyUsageServerAuth, + }, + NotAfter: time.Now().Add(3 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: keyId, + SubjectKeyId: keyId, + } + + // Create the certificate, PEM encode it and return that value. + var buf bytes.Buffer + bs, err := x509.CreateCertificate( + rand.Reader, &template, caCert, signer.Public(), signer) + if err != nil { + return nil, fmt.Errorf("error generating certificate: %s", err) + } + err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) + if err != nil { + return nil, fmt.Errorf("error encoding private key: %s", err) + } + + // Increment the leaf cert index + newState := *providerState + newState.LeafIndex += 1 + args := &structs.CARequest{ + Op: structs.CAOpSetProviderState, + ProviderState: &newState, + } + resp, err := c.srv.raftApply(structs.ConnectCARequestType, args) + if err != nil { + return nil, err + } + if respErr, ok := resp.(error); ok { + return nil, respErr + } + + // Set the response + return &structs.IssuedCert{ + SerialNumber: connect.HexString(template.SerialNumber.Bytes()), + CertPEM: buf.String(), + Service: serviceId.Service, + ServiceURI: template.URIs[0].String(), + ValidAfter: template.NotBefore, + ValidBefore: template.NotAfter, + }, nil +} + +// generatePrivateKey returns a new private key +func generatePrivateKey() (string, error) { + var pk *ecdsa.PrivateKey + + // If we have no key, then create a new one. 
+ pk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + if err != nil { + return "", fmt.Errorf("error generating private key: %s", err) + } + + bs, err := x509.MarshalECPrivateKey(pk) + if err != nil { + return "", fmt.Errorf("error generating private key: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: bs}) + if err != nil { + return "", fmt.Errorf("error encoding private key: %s", err) + } + + return buf.String(), nil +} + +// generateCA makes a new root CA using the current private key +func (c *ConsulCAProvider) generateCA(privateKey string, sn uint64) (*structs.CARoot, error) { + state := c.srv.fsm.State() + _, config, err := state.CAConfig() + if err != nil { + return nil, err + } + + privKey, err := connect.ParseSigner(privateKey) + if err != nil { + return nil, err + } + + name := fmt.Sprintf("Consul CA %d", sn) + + // The URI (SPIFFE compatible) for the cert + id := &connect.SpiffeIDSigning{ClusterID: config.ClusterSerial, Domain: "consul"} + keyId, err := connect.KeyId(privKey.Public()) + if err != nil { + return nil, err + } + + // Create the CA cert + serialNum := &big.Int{} + serialNum.SetUint64(sn) + template := x509.Certificate{ + SerialNumber: serialNum, + Subject: pkix.Name{CommonName: name}, + URIs: []*url.URL{id.URI()}, + PermittedDNSDomainsCritical: true, + PermittedDNSDomains: []string{id.URI().Hostname()}, + BasicConstraintsValid: true, + KeyUsage: x509.KeyUsageCertSign | + x509.KeyUsageCRLSign | + x509.KeyUsageDigitalSignature, + IsCA: true, + NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: keyId, + SubjectKeyId: keyId, + } + + bs, err := x509.CreateCertificate( + rand.Reader, &template, &template, privKey.Public(), privKey) + if err != nil { + return nil, fmt.Errorf("error generating CA certificate: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) + if err != nil { + return nil, fmt.Errorf("error encoding private key: %s", err) + } + + // Generate an ID for the new CA cert + rootId, err := uuid.GenerateUUID() + if err != nil { + return nil, err + } + + return &structs.CARoot{ + ID: rootId, + Name: name, + RootCert: buf.String(), + Active: true, + }, nil +} diff --git a/agent/consul/fsm/commands_oss.go b/agent/consul/fsm/commands_oss.go index a5ef33efc..99755194b 100644 --- a/agent/consul/fsm/commands_oss.go +++ b/agent/consul/fsm/commands_oss.go @@ -300,6 +300,13 @@ func (c *FSM) applyConnectCAOperation(buf []byte, index uint64) interface{} { return err } + return act + case structs.CAOpSetProviderState: + act, err := c.state.CASetProviderState(index, req.ProviderState) + if err != nil { + return err + } + return act default: c.logger.Printf("[WARN] consul.fsm: Invalid CA operation '%s'", req.Op) diff --git a/agent/consul/leader.go b/agent/consul/leader.go index 516201262..fca3fa07f 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -9,6 +9,7 @@ import ( "time" "github.com/hashicorp/consul/agent/connect" + uuid "github.com/hashicorp/go-uuid" "github.com/armon/go-metrics" "github.com/hashicorp/consul/acl" @@ -377,7 +378,13 @@ func (s *Server) getOrCreateCAConfig() (*structs.CAConfiguration, error) { return config, nil } + sn, err := uuid.GenerateUUID() + if err != nil { + return nil, err + } + config = s.config.CAConfig + config.ClusterSerial = sn req := structs.CARequest{ Op: structs.CAOpSetConfig, Config: config, @@ -400,7 +407,7 @@ func (s *Server) bootstrapCA() error { var 
provider connect.CAProvider switch conf.Provider { case structs.ConsulCAProvider: - provider, err = connect.NewConsulCAProvider(conf.Config) + provider, err = NewConsulCAProvider(conf.Config, s) if err != nil { return err } @@ -412,10 +419,10 @@ func (s *Server) bootstrapCA() error { s.caProvider = provider s.caProviderLock.Unlock() - // Get the intermediate cert from the CA - trustedCA, err := provider.ActiveIntermediate() + // Get the active root cert from the CA + trustedCA, err := provider.ActiveRoot() if err != nil { - return fmt.Errorf("error getting intermediate cert: %v", err) + return fmt.Errorf("error getting root cert: %v", err) } // Check if this CA is already initialized @@ -435,7 +442,7 @@ func (s *Server) bootstrapCA() error { return err } - // Store the intermediate in raft + // Store the root cert in raft resp, err := s.raftApply(structs.ConnectCARequestType, &structs.CARequest{ Op: structs.CAOpSetRoots, Index: idx, diff --git a/agent/consul/state/connect_ca.go b/agent/consul/state/connect_ca.go index f5962b084..2cce8028b 100644 --- a/agent/consul/state/connect_ca.go +++ b/agent/consul/state/connect_ca.go @@ -8,8 +8,9 @@ import ( ) const ( - caConfigTableName = "connect-ca-config" - caRootTableName = "connect-ca-roots" + caConfigTableName = "connect-ca-config" + caRootTableName = "connect-ca-roots" + caProviderTableName = "connect-ca-builtin" ) // caConfigTableSchema returns a new table schema used for storing @@ -48,14 +49,34 @@ func caRootTableSchema() *memdb.TableSchema { } } +// caProviderTableSchema returns a new table schema used for storing +// the built-in CA provider's state for connect. This is only used by +// the internal Consul CA provider. +func caProviderTableSchema() *memdb.TableSchema { + return &memdb.TableSchema{ + Name: caProviderTableName, + Indexes: map[string]*memdb.IndexSchema{ + "id": &memdb.IndexSchema{ + Name: "id", + AllowMissing: false, + Unique: true, + Indexer: &memdb.ConditionalIndex{ + Conditional: func(obj interface{}) (bool, error) { return true, nil }, + }, + }, + }, + } +} + func init() { registerSchema(caConfigTableSchema) registerSchema(caRootTableSchema) + registerSchema(caProviderTableSchema) } // CAConfig is used to pull the CA config from the snapshot. func (s *Snapshot) CAConfig() (*structs.CAConfiguration, error) { - c, err := s.tx.First("connect-ca-config", "id") + c, err := s.tx.First(caConfigTableName, "id") if err != nil { return nil, err } @@ -70,7 +91,7 @@ func (s *Snapshot) CAConfig() (*structs.CAConfiguration, error) { // CAConfig is used when restoring from a snapshot. 
func (s *Restore) CAConfig(config *structs.CAConfiguration) error { - if err := s.tx.Insert("connect-ca-config", config); err != nil { + if err := s.tx.Insert(caConfigTableName, config); err != nil { return fmt.Errorf("failed restoring CA config: %s", err) } @@ -83,7 +104,7 @@ func (s *Store) CAConfig() (uint64, *structs.CAConfiguration, error) { defer tx.Abort() // Get the autopilot config - c, err := tx.First("connect-ca-config", "id") + c, err := tx.First(caConfigTableName, "id") if err != nil { return 0, nil, fmt.Errorf("failed CA config lookup: %s", err) } @@ -101,7 +122,9 @@ func (s *Store) CASetConfig(idx uint64, config *structs.CAConfiguration) error { tx := s.db.Txn(true) defer tx.Abort() - s.caSetConfigTxn(idx, tx, config) + if err := s.caSetConfigTxn(idx, tx, config); err != nil { + return err + } tx.Commit() return nil @@ -115,7 +138,7 @@ func (s *Store) CACheckAndSetConfig(idx, cidx uint64, config *structs.CAConfigur defer tx.Abort() // Check for an existing config - existing, err := tx.First("connect-ca-config", "id") + existing, err := tx.First(caConfigTableName, "id") if err != nil { return false, fmt.Errorf("failed CA config lookup: %s", err) } @@ -128,7 +151,9 @@ func (s *Store) CACheckAndSetConfig(idx, cidx uint64, config *structs.CAConfigur return false, nil } - s.caSetConfigTxn(idx, tx, config) + if err := s.caSetConfigTxn(idx, tx, config); err != nil { + return false, err + } tx.Commit() return true, nil @@ -136,20 +161,22 @@ func (s *Store) CACheckAndSetConfig(idx, cidx uint64, config *structs.CAConfigur func (s *Store) caSetConfigTxn(idx uint64, tx *memdb.Txn, config *structs.CAConfiguration) error { // Check for an existing config - existing, err := tx.First("connect-ca-config", "id") + prev, err := tx.First(caConfigTableName, "id") if err != nil { return fmt.Errorf("failed CA config lookup: %s", err) } - // Set the indexes. - if existing != nil { - config.CreateIndex = existing.(*structs.CAConfiguration).CreateIndex + // Set the indexes, prevent the cluster ID from changing. + if prev != nil { + existing := prev.(*structs.CAConfiguration) + config.CreateIndex = existing.CreateIndex + config.ClusterSerial = existing.ClusterSerial } else { config.CreateIndex = idx } config.ModifyIndex = idx - if err := tx.Insert("connect-ca-config", config); err != nil { + if err := tx.Insert(caConfigTableName, config); err != nil { return fmt.Errorf("failed updating CA config: %s", err) } return nil @@ -289,3 +316,73 @@ func (s *Store) CARootSetCAS(idx, cidx uint64, rs []*structs.CARoot) (bool, erro tx.Commit() return true, nil } + +// CAProviderState is used to pull the built-in provider state from the snapshot. +func (s *Snapshot) CAProviderState() (*structs.CAConsulProviderState, error) { + c, err := s.tx.First(caProviderTableName, "id") + if err != nil { + return nil, err + } + + state, ok := c.(*structs.CAConsulProviderState) + if !ok { + return nil, nil + } + + return state, nil +} + +// CAProviderState is used when restoring from a snapshot. +func (s *Restore) CAProviderState(state *structs.CAConsulProviderState) error { + if err := s.tx.Insert(caProviderTableName, state); err != nil { + return fmt.Errorf("failed restoring built-in CA state: %s", err) + } + + return nil +} + +// CAProviderState is used to get the current Consul CA provider state. 
+func (s *Store) CAProviderState() (uint64, *structs.CAConsulProviderState, error) { + tx := s.db.Txn(false) + defer tx.Abort() + + // Get the autopilot config + c, err := tx.First(caProviderTableName, "id") + if err != nil { + return 0, nil, fmt.Errorf("failed built-in CA state lookup: %s", err) + } + + state, ok := c.(*structs.CAConsulProviderState) + if !ok { + return 0, nil, nil + } + + return state.ModifyIndex, state, nil +} + +// CASetProviderState is used to set the current built-in CA provider state. +func (s *Store) CASetProviderState(idx uint64, state *structs.CAConsulProviderState) (bool, error) { + tx := s.db.Txn(true) + defer tx.Abort() + + // Check for an existing config + existing, err := tx.First(caProviderTableName, "id") + if err != nil { + return false, fmt.Errorf("failed built-in CA state lookup: %s", err) + } + + // Set the indexes. + if existing != nil { + state.CreateIndex = existing.(*structs.CAConfiguration).CreateIndex + } else { + state.CreateIndex = idx + } + state.ModifyIndex = idx + + if err := tx.Insert(caProviderTableName, state); err != nil { + return false, fmt.Errorf("failed updating built-in CA state: %s", err) + } + tx.Commit() + + return true, nil +} diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index af8f82653..a923c0361 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -96,8 +96,9 @@ type IssuedCert struct { type CAOp string const ( - CAOpSetRoots CAOp = "set-roots" - CAOpSetConfig CAOp = "set-config" + CAOpSetRoots CAOp = "set-roots" + CAOpSetConfig CAOp = "set-config" + CAOpSetProviderState CAOp = "set-provider-state" ) // CARequest is used to modify connect CA data. This is used by the @@ -110,7 +111,7 @@ type CARequest struct { // Datacenter is the target for this request. Datacenter string - // Index is used by CAOpSet for a CAS operation. + // Index is used by CAOpSetRoots and CAOpSetConfig for a CAS operation. Index uint64 // Roots is a list of roots. This is used for CAOpSet. One root must @@ -120,6 +121,9 @@ type CARequest struct { // Config is the configuration for the current CA plugin. Config *CAConfiguration + // ProviderState is the state for the builtin CA provider. + ProviderState *CAConsulProviderState + // WriteRequest is a common struct containing ACL tokens and other // write-related common elements for requests. WriteRequest @@ -136,6 +140,9 @@ const ( // CAConfiguration is the configuration for the current CA plugin. type CAConfiguration struct { + // Unique identifier for the cluster + ClusterSerial string `json:"-"` + // Provider is the CA provider implementation to use. Provider string @@ -144,7 +151,15 @@ type CAConfiguration struct { // and maps). Config map[string]interface{} - // CreateIndex/ModifyIndex store the create/modify indexes of this configuration. - CreateIndex uint64 - ModifyIndex uint64 + RaftIndex +} + +// CAConsulProviderState is used to track the built-in Consul CA provider's state. 
+type CAConsulProviderState struct { + PrivateKey string + CARoot *CARoot + RootIndex uint64 + LeafIndex uint64 + + RaftIndex } From bbfcb278e189edd48fa83c90fab9972178f50f4c Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 20 Apr 2018 18:46:02 -0700 Subject: [PATCH 133/539] Add the root rotation mechanism to the CA config endpoint --- agent/connect/ca.go | 12 ++ agent/connect/ca_provider.go | 3 +- agent/consul/connect_ca_endpoint.go | 104 +++++++++++- agent/consul/connect_ca_provider.go | 254 +++++++++++++++++----------- agent/consul/fsm/commands_oss.go | 17 ++ agent/consul/leader.go | 45 +++-- agent/consul/state/connect_ca.go | 86 ++++++++-- agent/structs/connect_ca.go | 10 +- 8 files changed, 396 insertions(+), 135 deletions(-) diff --git a/agent/connect/ca.go b/agent/connect/ca.go index bca9392d3..818af9f9f 100644 --- a/agent/connect/ca.go +++ b/agent/connect/ca.go @@ -38,6 +38,18 @@ func ParseSigner(pemValue string) (crypto.Signer, error) { case "EC PRIVATE KEY": return x509.ParseECPrivateKey(block.Bytes) + case "PRIVATE KEY": + signer, err := x509.ParsePKCS8PrivateKey(block.Bytes) + if err != nil { + return nil, err + } + pk, ok := signer.(crypto.Signer) + if !ok { + return nil, fmt.Errorf("private key is not a valid format") + } + + return pk, nil + default: return nil, fmt.Errorf("unknown PEM block type for signing key: %s", block.Type) } diff --git a/agent/connect/ca_provider.go b/agent/connect/ca_provider.go index ca0ccf9b0..dc70c6a58 100644 --- a/agent/connect/ca_provider.go +++ b/agent/connect/ca_provider.go @@ -10,9 +10,10 @@ import ( // an external CA that provides leaf certificate signing for // given SpiffeIDServices. type CAProvider interface { - SetConfiguration(raw map[string]interface{}) error ActiveRoot() (*structs.CARoot, error) ActiveIntermediate() (*structs.CARoot, error) GenerateIntermediate() (*structs.CARoot, error) Sign(*SpiffeIDService, *x509.CertificateRequest) (*structs.IssuedCert, error) + //SignCA(*x509.CertificateRequest) (*structs.IssuedCert, error) + Teardown() error } diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index d0c582165..128c1493d 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -2,6 +2,7 @@ package consul import ( "fmt" + "reflect" "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/connect" @@ -60,9 +61,95 @@ func (s *ConnectCA) ConfigurationSet( return acl.ErrPermissionDenied } - // Commit - // todo(kyhavlov): trigger a bootstrap here when the provider changes - args.Op = structs.CAOpSetConfig + // Exit early if it's a no-op change + state := s.srv.fsm.State() + _, config, err := state.CAConfig() + if err != nil { + return err + } + if args.Config.Provider == config.Provider && reflect.DeepEqual(args.Config.Config, config.Config) { + return nil + } + + // Create a new instance of the provider described by the config + // and get the current active root CA. This acts as a good validation + // of the config and makes sure the provider is functioning correctly + // before we commit any changes to Raft. + newProvider, err := s.srv.createCAProvider(args.Config) + if err != nil { + return fmt.Errorf("could not initialize provider: %v", err) + } + + newActiveRoot, err := newProvider.ActiveRoot() + if err != nil { + return err + } + + // Compare the new provider's root CA ID to the current one. If they + // match, just update the existing provider with the new config. + // If they don't match, begin the root rotation process. 
+ _, root, err := state.CARootActive(nil) + if err != nil { + return err + } + + if root != nil && root.ID == newActiveRoot.ID { + args.Op = structs.CAOpSetConfig + resp, err := s.srv.raftApply(structs.ConnectCARequestType, args) + if err != nil { + return err + } + if respErr, ok := resp.(error); ok { + return respErr + } + + // If the config has been committed, update the local provider instance + s.srv.setCAProvider(newProvider) + + s.srv.logger.Printf("[INFO] connect: provider config updated") + + return nil + } + + // At this point, we know the config change has trigged a root rotation, + // either by swapping the provider type or changing the provider's config + // to use a different root certificate. + + // If it's a config change that would trigger a rotation (different provider/root): + // -1. Create an instance of the provider described by the new config + // 2. Get the intermediate from the new provider + // 3. Generate a CSR for the new intermediate, call SignCA on the old/current provider + // to get the cross-signed intermediate + // ~4. Get the active root for the new provider, append the intermediate from step 3 + // to its list of intermediates + // -5. Update the roots and CA config in the state store at the same time, finally switching + // to the new provider + // -6. Call teardown on the old provider, so it can clean up whatever it needs to + + /*_, err := newProvider.ActiveIntermediate() + if err != nil { + return err + }*/ + + // Update the roots and CA config in the state store at the same time + idx, roots, err := state.CARoots(nil) + if err != nil { + return err + } + + var newRoots structs.CARoots + for _, r := range roots { + newRoot := *r + if newRoot.Active { + newRoot.Active = false + } + newRoots = append(newRoots, &newRoot) + } + newRoots = append(newRoots, newActiveRoot) + + args.Op = structs.CAOpSetRootsAndConfig + args.Index = idx + args.Roots = newRoots resp, err := s.srv.raftApply(structs.ConnectCARequestType, args) if err != nil { return err @@ -71,6 +158,17 @@ func (s *ConnectCA) ConfigurationSet( return respErr } + // If the config has been committed, update the local provider instance + // and call teardown on the old provider + oldProvider := s.srv.getCAProvider() + s.srv.setCAProvider(newProvider) + + if err := oldProvider.Teardown(); err != nil { + return err + } + + s.srv.logger.Printf("[INFO] connect: CA rotated to the new root under %q provider", args.Config.Provider) + return nil } diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go index 9beb6bfac..b72a9ee36 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/consul/connect_ca_provider.go @@ -29,28 +29,94 @@ type ConsulCAProviderConfig struct { type ConsulCAProvider struct { config *ConsulCAProviderConfig - // todo(kyhavlov): store these directly in the state store - // and pass a reference to the state to this provider instead of - // having these values here + id string srv *Server sync.RWMutex } +// NewConsulCAProvider returns a new instance of the Consul CA provider, +// bootstrapping its state in the state store necessary func NewConsulCAProvider(rawConfig map[string]interface{}, srv *Server) (*ConsulCAProvider, error) { - provider := &ConsulCAProvider{srv: srv} - provider.SetConfiguration(rawConfig) - - return provider, nil -} - -func (c *ConsulCAProvider) SetConfiguration(raw map[string]interface{}) error { - conf, err := decodeConfig(raw) + conf, err := decodeConfig(rawConfig) if err != nil { - return err + return nil, err + } + provider 
:= &ConsulCAProvider{ + config: conf, + srv: srv, + id: fmt.Sprintf("%s,%s", conf.PrivateKey, conf.RootCert), } - c.config = conf - return nil + // Check if this configuration of the provider has already been + // initialized in the state store. + state := srv.fsm.State() + _, providerState, err := state.CAProviderState(provider.id) + if err != nil { + return nil, err + } + + // Exit early if the state store has already been populated for this config. + if providerState != nil { + return provider, nil + } + + newState := structs.CAConsulProviderState{ + ID: provider.id, + } + + // Write the initial provider state to get the index to use for the + // CA serial number. + { + args := &structs.CARequest{ + Op: structs.CAOpSetProviderState, + ProviderState: &newState, + } + resp, err := srv.raftApply(structs.ConnectCARequestType, args) + if err != nil { + return nil, err + } + if respErr, ok := resp.(error); ok { + return nil, respErr + } + } + + idx, _, err := state.CAProviderState(provider.id) + if err != nil { + return nil, err + } + + // Generate a private key if needed + if conf.PrivateKey == "" { + pk, err := generatePrivateKey() + if err != nil { + return nil, err + } + newState.PrivateKey = pk + } else { + newState.PrivateKey = conf.PrivateKey + } + + // Generate the root CA + ca, err := provider.generateCA(newState.PrivateKey, conf.RootCert, idx+1) + if err != nil { + return nil, fmt.Errorf("error generating CA: %v", err) + } + newState.CARoot = ca + + // Write the provider state + args := &structs.CARequest{ + Op: structs.CAOpSetProviderState, + ProviderState: &newState, + } + resp, err := srv.raftApply(structs.ConnectCARequestType, args) + if err != nil { + return nil, err + } + if respErr, ok := resp.(error); ok { + return nil, respErr + } + + return provider, nil } func decodeConfig(raw map[string]interface{}) (*ConsulCAProviderConfig, error) { @@ -59,59 +125,22 @@ func decodeConfig(raw map[string]interface{}) (*ConsulCAProviderConfig, error) { return nil, fmt.Errorf("error decoding config: %s", err) } + if config.PrivateKey == "" && config.RootCert != "" { + return nil, fmt.Errorf("must provide a private key when providing a root cert") + } + return config, nil } // Return the active root CA and generate a new one if needed func (c *ConsulCAProvider) ActiveRoot() (*structs.CARoot, error) { state := c.srv.fsm.State() - _, providerState, err := state.CAProviderState() + _, providerState, err := state.CAProviderState(c.id) if err != nil { return nil, err } - var update bool - var newState structs.CAConsulProviderState - if providerState != nil { - newState = *providerState - } - - // Generate a private key if needed - if providerState == nil || providerState.PrivateKey == "" { - pk, err := generatePrivateKey() - if err != nil { - return nil, err - } - newState.PrivateKey = pk - update = true - } - - // Generate a root CA if needed - if providerState == nil || providerState.CARoot == nil { - ca, err := c.generateCA(newState.PrivateKey, newState.RootIndex+1) - if err != nil { - return nil, err - } - newState.CARoot = ca - newState.RootIndex += 1 - update = true - } - - // Update the provider state if we generated a new private key/cert - if update { - args := &structs.CARequest{ - Op: structs.CAOpSetProviderState, - ProviderState: &newState, - } - resp, err := c.srv.raftApply(structs.ConnectCARequestType, args) - if err != nil { - return nil, err - } - if respErr, ok := resp.(error); ok { - return nil, respErr - } - } - return newState.CARoot, nil + return providerState.CARoot, nil } 
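
For reference, here is a minimal, illustrative sketch (not part of the patch itself) of the configuration shape the built-in provider expects: `decodeConfig` above maps the generic `Config` map onto `PrivateKey`/`RootCert`, and `NewConsulCAProvider` generates both when they are absent. The struct below only mirrors the `Provider`/`Config` fields of `structs.CAConfiguration` so the snippet stays self-contained; the `"consul"` provider name and the PEM bodies are placeholders assumed for illustration.

```go
package main

import "fmt"

// CAConfiguration mirrors the Provider/Config fields of
// structs.CAConfiguration purely for illustration.
type CAConfiguration struct {
	Provider string
	Config   map[string]interface{}
}

func main() {
	// Default shape: no key or cert supplied, so the built-in provider
	// generates a private key and a self-signed root CA on bootstrap.
	generated := CAConfiguration{
		Provider: "consul", // assumed value of structs.ConsulCAProvider
		Config:   map[string]interface{}{},
	}

	// Bring-your-own-root shape: RootCert requires PrivateKey, mirroring
	// the validation in decodeConfig. The PEM bodies are placeholders,
	// not real key material.
	byo := CAConfiguration{
		Provider: "consul",
		Config: map[string]interface{}{
			"PrivateKey": "-----BEGIN EC PRIVATE KEY-----\n...\n-----END EC PRIVATE KEY-----\n",
			"RootCert":   "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
		},
	}

	fmt.Printf("%+v\n%+v\n", generated, byo)
}
```

Because the provider id is built from the key/cert pair (`fmt.Sprintf("%s,%s", conf.PrivateKey, conf.RootCert)` above), changing these values yields a different provider state entry and a different root, which is what ultimately drives the rotation path in `ConfigurationSet`.
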
func (c *ConsulCAProvider) ActiveIntermediate() (*structs.CARoot, error) { @@ -120,15 +149,12 @@ func (c *ConsulCAProvider) ActiveIntermediate() (*structs.CARoot, error) { func (c *ConsulCAProvider) GenerateIntermediate() (*structs.CARoot, error) { state := c.srv.fsm.State() - _, providerState, err := state.CAProviderState() + idx, providerState, err := state.CAProviderState(c.id) if err != nil { return nil, err } - if providerState == nil { - return nil, fmt.Errorf("CA provider not yet initialized") - } - ca, err := c.generateCA(providerState.PrivateKey, providerState.RootIndex+1) + ca, err := c.generateCA(providerState.PrivateKey, "", idx+1) if err != nil { return nil, err } @@ -136,12 +162,34 @@ func (c *ConsulCAProvider) GenerateIntermediate() (*structs.CARoot, error) { return ca, nil } +// Remove the state store entry for this provider instance. +func (c *ConsulCAProvider) Teardown() error { + args := &structs.CARequest{ + Op: structs.CAOpDeleteProviderState, + ProviderState: &structs.CAConsulProviderState{ID: c.id}, + } + resp, err := c.srv.raftApply(structs.ConnectCARequestType, args) + if err != nil { + return err + } + if respErr, ok := resp.(error); ok { + return respErr + } + + return nil +} + // Sign returns a new certificate valid for the given SpiffeIDService // using the current CA. func (c *ConsulCAProvider) Sign(serviceId *connect.SpiffeIDService, csr *x509.CertificateRequest) (*structs.IssuedCert, error) { + // Lock during the signing so we don't use the same index twice + // for different cert serial numbers. + c.Lock() + defer c.Unlock() + // Get the provider state state := c.srv.fsm.State() - _, providerState, err := state.CAProviderState() + _, providerState, err := state.CAProviderState(c.id) if err != nil { return nil, err } @@ -254,7 +302,7 @@ func generatePrivateKey() (string, error) { } // generateCA makes a new root CA using the current private key -func (c *ConsulCAProvider) generateCA(privateKey string, sn uint64) (*structs.CARoot, error) { +func (c *ConsulCAProvider) generateCA(privateKey, contents string, sn uint64) (*structs.CARoot, error) { state := c.srv.fsm.State() _, config, err := state.CAConfig() if err != nil { @@ -263,48 +311,54 @@ func (c *ConsulCAProvider) generateCA(privateKey string, sn uint64) (*structs.CA privKey, err := connect.ParseSigner(privateKey) if err != nil { - return nil, err + return nil, fmt.Errorf("error parsing private key %q: %v", privateKey, err) } name := fmt.Sprintf("Consul CA %d", sn) - // The URI (SPIFFE compatible) for the cert - id := &connect.SpiffeIDSigning{ClusterID: config.ClusterSerial, Domain: "consul"} - keyId, err := connect.KeyId(privKey.Public()) - if err != nil { - return nil, err - } + pemContents := contents - // Create the CA cert - serialNum := &big.Int{} - serialNum.SetUint64(sn) - template := x509.Certificate{ - SerialNumber: serialNum, - Subject: pkix.Name{CommonName: name}, - URIs: []*url.URL{id.URI()}, - PermittedDNSDomainsCritical: true, - PermittedDNSDomains: []string{id.URI().Hostname()}, - BasicConstraintsValid: true, - KeyUsage: x509.KeyUsageCertSign | - x509.KeyUsageCRLSign | - x509.KeyUsageDigitalSignature, - IsCA: true, - NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), - NotBefore: time.Now(), - AuthorityKeyId: keyId, - SubjectKeyId: keyId, - } + if pemContents == "" { + // The URI (SPIFFE compatible) for the cert + id := &connect.SpiffeIDSigning{ClusterID: config.ClusterSerial, Domain: "consul"} + keyId, err := connect.KeyId(privKey.Public()) + if err != nil { + return nil, err + } - bs, 
err := x509.CreateCertificate( - rand.Reader, &template, &template, privKey.Public(), privKey) - if err != nil { - return nil, fmt.Errorf("error generating CA certificate: %s", err) - } + // Create the CA cert + serialNum := &big.Int{} + serialNum.SetUint64(sn) + template := x509.Certificate{ + SerialNumber: serialNum, + Subject: pkix.Name{CommonName: name}, + URIs: []*url.URL{id.URI()}, + PermittedDNSDomainsCritical: true, + PermittedDNSDomains: []string{id.URI().Hostname()}, + BasicConstraintsValid: true, + KeyUsage: x509.KeyUsageCertSign | + x509.KeyUsageCRLSign | + x509.KeyUsageDigitalSignature, + IsCA: true, + NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: keyId, + SubjectKeyId: keyId, + } - var buf bytes.Buffer - err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) - if err != nil { - return nil, fmt.Errorf("error encoding private key: %s", err) + bs, err := x509.CreateCertificate( + rand.Reader, &template, &template, privKey.Public(), privKey) + if err != nil { + return nil, fmt.Errorf("error generating CA certificate: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) + if err != nil { + return nil, fmt.Errorf("error encoding private key: %s", err) + } + + pemContents = buf.String() } // Generate an ID for the new CA cert @@ -316,7 +370,7 @@ func (c *ConsulCAProvider) generateCA(privateKey string, sn uint64) (*structs.CA return &structs.CARoot{ ID: rootId, Name: name, - RootCert: buf.String(), + RootCert: pemContents, Active: true, }, nil } diff --git a/agent/consul/fsm/commands_oss.go b/agent/consul/fsm/commands_oss.go index 99755194b..5292bd0f5 100644 --- a/agent/consul/fsm/commands_oss.go +++ b/agent/consul/fsm/commands_oss.go @@ -307,6 +307,23 @@ func (c *FSM) applyConnectCAOperation(buf []byte, index uint64) interface{} { return err } + return act + case structs.CAOpDeleteProviderState: + if err := c.state.CADeleteProviderState(req.ProviderState.ID); err != nil { + return err + } + + return true + case structs.CAOpSetRootsAndConfig: + act, err := c.state.CARootSetCAS(index, req.Index, req.Roots) + if err != nil { + return err + } + + if err := c.state.CASetConfig(index+1, req.Config); err != nil { + return err + } + return act default: c.logger.Printf("[WARN] consul.fsm: Invalid CA operation '%s'", req.Op) diff --git a/agent/consul/leader.go b/agent/consul/leader.go index fca3fa07f..8d62ca1aa 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -396,7 +396,7 @@ func (s *Server) getOrCreateCAConfig() (*structs.CAConfiguration, error) { return config, nil } -// bootstrapCA handles the initialization of a new CA provider +// bootstrapCA creates a CA provider from the current configuration. 
func (s *Server) bootstrapCA() error { conf, err := s.getOrCreateCAConfig() if err != nil { @@ -404,20 +404,12 @@ func (s *Server) bootstrapCA() error { } // Initialize the right provider based on the config - var provider connect.CAProvider - switch conf.Provider { - case structs.ConsulCAProvider: - provider, err = NewConsulCAProvider(conf.Config, s) - if err != nil { - return err - } - default: - return fmt.Errorf("unknown CA provider %q", conf.Provider) + provider, err := s.createCAProvider(conf) + if err != nil { + return err } - s.caProviderLock.Lock() - s.caProvider = provider - s.caProviderLock.Unlock() + s.setCAProvider(provider) // Get the active root cert from the CA trustedCA, err := provider.ActiveRoot() @@ -425,13 +417,14 @@ func (s *Server) bootstrapCA() error { return fmt.Errorf("error getting root cert: %v", err) } - // Check if this CA is already initialized + // Check if the CA root is already initialized and exit if it is. + // Every change to the CA after this initial bootstrapping should + // be done through the rotation process. state := s.fsm.State() _, root, err := state.CARootActive(nil) if err != nil { return err } - // Exit early if the root is already in the state store. if root != nil && root.ID == trustedCA.ID { return nil } @@ -461,6 +454,28 @@ func (s *Server) bootstrapCA() error { return nil } +// createProvider returns a connect CA provider from the given config. +func (s *Server) createCAProvider(conf *structs.CAConfiguration) (connect.CAProvider, error) { + switch conf.Provider { + case structs.ConsulCAProvider: + return NewConsulCAProvider(conf.Config, s) + default: + return nil, fmt.Errorf("unknown CA provider %q", conf.Provider) + } +} + +func (s *Server) getCAProvider() connect.CAProvider { + s.caProviderLock.RLock() + defer s.caProviderLock.RUnlock() + return s.caProvider +} + +func (s *Server) setCAProvider(newProvider connect.CAProvider) { + s.caProviderLock.Lock() + defer s.caProviderLock.Unlock() + s.caProvider = newProvider +} + // signConnectCert signs a cert for a service using the currently configured CA provider func (s *Server) signConnectCert(service *connect.SpiffeIDService, csr *x509.CertificateRequest) (*structs.IssuedCert, error) { s.caProviderLock.RLock() diff --git a/agent/consul/state/connect_ca.go b/agent/consul/state/connect_ca.go index 2cce8028b..17e274992 100644 --- a/agent/consul/state/connect_ca.go +++ b/agent/consul/state/connect_ca.go @@ -60,8 +60,8 @@ func caProviderTableSchema() *memdb.TableSchema { Name: "id", AllowMissing: false, Unique: true, - Indexer: &memdb.ConditionalIndex{ - Conditional: func(obj interface{}) (bool, error) { return true, nil }, + Indexer: &memdb.StringFieldIndex{ + Field: "ID", }, }, }, @@ -98,12 +98,12 @@ func (s *Restore) CAConfig(config *structs.CAConfiguration) error { return nil } -// CAConfig is used to get the current Autopilot configuration. +// CAConfig is used to get the current CA configuration. func (s *Store) CAConfig() (uint64, *structs.CAConfiguration, error) { tx := s.db.Txn(false) defer tx.Abort() - // Get the autopilot config + // Get the CA config c, err := tx.First(caConfigTableName, "id") if err != nil { return 0, nil, fmt.Errorf("failed CA config lookup: %s", err) @@ -117,7 +117,7 @@ func (s *Store) CAConfig() (uint64, *structs.CAConfiguration, error) { return config.ModifyIndex, config, nil } -// CASetConfig is used to set the current Autopilot configuration. +// CASetConfig is used to set the current CA configuration. 
func (s *Store) CASetConfig(idx uint64, config *structs.CAConfiguration) error { tx := s.db.Txn(true) defer tx.Abort() @@ -341,13 +341,16 @@ func (s *Restore) CAProviderState(state *structs.CAConsulProviderState) error { return nil } -// CAProviderState is used to get the current Consul CA provider state. -func (s *Store) CAProviderState() (uint64, *structs.CAConsulProviderState, error) { +// CAProviderState is used to get the Consul CA provider state for the given ID. +func (s *Store) CAProviderState(id string) (uint64, *structs.CAConsulProviderState, error) { tx := s.db.Txn(false) defer tx.Abort() - // Get the autopilot config - c, err := tx.First(caProviderTableName, "id") + // Get the index + idx := maxIndexTxn(tx, caProviderTableName) + + // Get the provider config + c, err := tx.First(caProviderTableName, "id", id) if err != nil { return 0, nil, fmt.Errorf("failed built-in CA state lookup: %s", err) } @@ -357,7 +360,28 @@ func (s *Store) CAProviderState() (uint64, *structs.CAConsulProviderState, error return 0, nil, nil } - return state.ModifyIndex, state, nil + return idx, state, nil +} + +// CAProviderStates is used to get the Consul CA provider state for the given ID. +func (s *Store) CAProviderStates() (uint64, []*structs.CAConsulProviderState, error) { + tx := s.db.Txn(false) + defer tx.Abort() + + // Get the index + idx := maxIndexTxn(tx, caProviderTableName) + + // Get all + iter, err := tx.Get(caProviderTableName, "id") + if err != nil { + return 0, nil, fmt.Errorf("failed CA provider state lookup: %s", err) + } + + var results []*structs.CAConsulProviderState + for v := iter.Next(); v != nil; v = iter.Next() { + results = append(results, v.(*structs.CAConsulProviderState)) + } + return idx, results, nil } // CASetProviderState is used to set the current built-in CA provider state. @@ -366,14 +390,14 @@ func (s *Store) CASetProviderState(idx uint64, state *structs.CAConsulProviderSt defer tx.Abort() // Check for an existing config - existing, err := tx.First(caProviderTableName, "id") + existing, err := tx.First(caProviderTableName, "id", state.ID) if err != nil { return false, fmt.Errorf("failed built-in CA state lookup: %s", err) } // Set the indexes. if existing != nil { - state.CreateIndex = existing.(*structs.CAConfiguration).CreateIndex + state.CreateIndex = existing.(*structs.CAConsulProviderState).CreateIndex } else { state.CreateIndex = idx } @@ -382,7 +406,45 @@ func (s *Store) CASetProviderState(idx uint64, state *structs.CAConsulProviderSt if err := tx.Insert(caProviderTableName, state); err != nil { return false, fmt.Errorf("failed updating built-in CA state: %s", err) } + + // Update the index + if err := tx.Insert("index", &IndexEntry{caProviderTableName, idx}); err != nil { + return false, fmt.Errorf("failed updating index: %s", err) + } + tx.Commit() return true, nil } + +// CADeleteProviderState is used to remove the Consul CA provider state for the given ID. 
+func (s *Store) CADeleteProviderState(id string) error { + tx := s.db.Txn(true) + defer tx.Abort() + + // Get the index + idx := maxIndexTxn(tx, caProviderTableName) + + // Check for an existing config + existing, err := tx.First(caProviderTableName, "id", id) + if err != nil { + return fmt.Errorf("failed built-in CA state lookup: %s", err) + } + if existing == nil { + return nil + } + + providerState := existing.(*structs.CAConsulProviderState) + + // Do the delete and update the index + if err := tx.Delete(caProviderTableName, providerState); err != nil { + return err + } + if err := tx.Insert("index", &IndexEntry{caProviderTableName, idx}); err != nil { + return fmt.Errorf("failed updating index: %s", err) + } + + tx.Commit() + + return nil +} diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index a923c0361..1e2959dd1 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -96,9 +96,11 @@ type IssuedCert struct { type CAOp string const ( - CAOpSetRoots CAOp = "set-roots" - CAOpSetConfig CAOp = "set-config" - CAOpSetProviderState CAOp = "set-provider-state" + CAOpSetRoots CAOp = "set-roots" + CAOpSetConfig CAOp = "set-config" + CAOpSetProviderState CAOp = "set-provider-state" + CAOpDeleteProviderState CAOp = "delete-provider-state" + CAOpSetRootsAndConfig CAOp = "set-roots-config" ) // CARequest is used to modify connect CA data. This is used by the @@ -156,9 +158,9 @@ type CAConfiguration struct { // CAConsulProviderState is used to track the built-in Consul CA provider's state. type CAConsulProviderState struct { + ID string PrivateKey string CARoot *CARoot - RootIndex uint64 LeafIndex uint64 RaftIndex From 43f13d5a0b1c09822ad7e60940d2c03539aab709 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 20 Apr 2018 20:39:51 -0700 Subject: [PATCH 134/539] Add cross-signing mechanism to root rotation --- agent/connect/ca_provider.go | 21 +++- agent/consul/connect_ca_endpoint.go | 40 ++++---- agent/consul/connect_ca_provider.go | 119 +++++++++++++++++++++-- agent/consul/connect_ca_provider_test.go | 34 +++++++ agent/consul/leader.go | 13 --- agent/structs/connect_ca.go | 4 + 6 files changed, 191 insertions(+), 40 deletions(-) create mode 100644 agent/consul/connect_ca_provider_test.go diff --git a/agent/connect/ca_provider.go b/agent/connect/ca_provider.go index dc70c6a58..9a53d02a0 100644 --- a/agent/connect/ca_provider.go +++ b/agent/connect/ca_provider.go @@ -10,10 +10,27 @@ import ( // an external CA that provides leaf certificate signing for // given SpiffeIDServices. type CAProvider interface { + // Active root returns the currently active root CA for this + // provider. This should be a parent of the certificate returned by + // ActiveIntermediate() ActiveRoot() (*structs.CARoot, error) + + // ActiveIntermediate returns the current signing cert used by this + // provider for generating SPIFFE leaf certs. ActiveIntermediate() (*structs.CARoot, error) - GenerateIntermediate() (*structs.CARoot, error) + + // GenerateIntermediate returns a new intermediate signing cert, a + // cross-signing CSR for it and sets it to the active intermediate. + GenerateIntermediate() (*structs.CARoot, *x509.CertificateRequest, error) + + // Sign signs a leaf certificate used by Connect proxies from a CSR. Sign(*SpiffeIDService, *x509.CertificateRequest) (*structs.IssuedCert, error) - //SignCA(*x509.CertificateRequest) (*structs.IssuedCert, error) + + // SignCA signs a CA CSR and returns the resulting cross-signed cert. 
+ SignCA(*x509.CertificateRequest) (string, error) + + // Teardown performs any necessary cleanup that should happen when the provider + // is shut down permanently, such as removing a temporary PKI backend in Vault + // created for an intermediate CA. Teardown() error } diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 128c1493d..9a3adeb99 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -116,20 +116,24 @@ func (s *ConnectCA) ConfigurationSet( // to use a different root certificate. // If it's a config change that would trigger a rotation (different provider/root): - // -1. Create an instance of the provider described by the new config - // 2. Get the intermediate from the new provider - // 3. Generate a CSR for the new intermediate, call SignCA on the old/current provider + // 1. Get the intermediate from the new provider + // 2. Generate a CSR for the new intermediate, call SignCA on the old/current provider // to get the cross-signed intermediate - // ~4. Get the active root for the new provider, append the intermediate from step 3 + // 3. Get the active root for the new provider, append the intermediate from step 3 // to its list of intermediates - // -5. Update the roots and CA config in the state store at the same time, finally switching - // to the new provider - // -6. Call teardown on the old provider, so it can clean up whatever it needs to - - /*_, err := newProvider.ActiveIntermediate() + _, csr, err := newProvider.GenerateIntermediate() if err != nil { return err - }*/ + } + + oldProvider := s.srv.getCAProvider() + xcCert, err := oldProvider.SignCA(csr) + if err != nil { + return err + } + + // Add the cross signed cert to the new root's intermediates + newActiveRoot.Intermediates = []string{xcCert} // Update the roots and CA config in the state store at the same time idx, roots, err := state.CARoots(nil) @@ -160,7 +164,6 @@ func (s *ConnectCA) ConfigurationSet( // If the config has been committed, update the local provider instance // and call teardown on the old provider - oldProvider := s.srv.getCAProvider() s.srv.setCAProvider(newProvider) if err := oldProvider.Teardown(); err != nil { @@ -202,11 +205,12 @@ func (s *ConnectCA) Roots( // directly to the structure in the memdb store. reply.Roots[i] = &structs.CARoot{ - ID: r.ID, - Name: r.Name, - RootCert: r.RootCert, - RaftIndex: r.RaftIndex, - Active: r.Active, + ID: r.ID, + Name: r.Name, + RootCert: r.RootCert, + Intermediates: r.Intermediates, + RaftIndex: r.RaftIndex, + Active: r.Active, } if r.Active { @@ -245,7 +249,9 @@ func (s *ConnectCA) Sign( // todo(kyhavlov): more validation on the CSR before signing - cert, err := s.srv.signConnectCert(serviceId, csr) + provider := s.srv.getCAProvider() + + cert, err := provider.Sign(serviceId, csr) if err != nil { return err } diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go index b72a9ee36..6f0508ce1 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/consul/connect_ca_provider.go @@ -143,23 +143,58 @@ func (c *ConsulCAProvider) ActiveRoot() (*structs.CARoot, error) { return providerState.CARoot, nil } +// We aren't maintaining separate root/intermediate CAs for the builtin +// provider, so just return the root. 
func (c *ConsulCAProvider) ActiveIntermediate() (*structs.CARoot, error) { return c.ActiveRoot() } -func (c *ConsulCAProvider) GenerateIntermediate() (*structs.CARoot, error) { +// We aren't maintaining separate root/intermediate CAs for the builtin +// provider, so just generate a CSR for the active root. +func (c *ConsulCAProvider) GenerateIntermediate() (*structs.CARoot, *x509.CertificateRequest, error) { + ca, err := c.ActiveIntermediate() + if err != nil { + return nil, nil, err + } + state := c.srv.fsm.State() - idx, providerState, err := state.CAProviderState(c.id) + _, providerState, err := state.CAProviderState(c.id) if err != nil { - return nil, err + return nil, nil, err + } + _, config, err := state.CAConfig() + if err != nil { + return nil, nil, err } - ca, err := c.generateCA(providerState.PrivateKey, "", idx+1) - if err != nil { - return nil, err + id := &connect.SpiffeIDSigning{ClusterID: config.ClusterSerial, Domain: "consul"} + template := &x509.CertificateRequest{ + URIs: []*url.URL{id.URI()}, } - return ca, nil + signer, err := connect.ParseSigner(providerState.PrivateKey) + if err != nil { + return nil, nil, err + } + + // Create the CSR itself + var csrBuf bytes.Buffer + bs, err := x509.CreateCertificateRequest(rand.Reader, template, signer) + if err != nil { + return nil, nil, fmt.Errorf("error creating CSR: %s", err) + } + + err = pem.Encode(&csrBuf, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: bs}) + if err != nil { + return nil, nil, fmt.Errorf("error encoding CSR: %s", err) + } + + csr, err := connect.ParseCSR(csrBuf.String()) + if err != nil { + return nil, nil, err + } + + return ca, csr, err } // Remove the state store entry for this provider instance. @@ -194,7 +229,7 @@ func (c *ConsulCAProvider) Sign(serviceId *connect.SpiffeIDService, csr *x509.Ce return nil, err } - // Create the keyId for the cert from the signing public key. + // Create the keyId for the cert from the signing private key. signer, err := connect.ParseSigner(providerState.PrivateKey) if err != nil { return nil, err @@ -277,6 +312,74 @@ func (c *ConsulCAProvider) Sign(serviceId *connect.SpiffeIDService, csr *x509.Ce }, nil } +// SignCA returns an intermediate CA cert signed by the current active root. 
+func (c *ConsulCAProvider) SignCA(csr *x509.CertificateRequest) (string, error) { + c.Lock() + defer c.Unlock() + + // Get the provider state + state := c.srv.fsm.State() + _, providerState, err := state.CAProviderState(c.id) + if err != nil { + return "", err + } + + privKey, err := connect.ParseSigner(providerState.PrivateKey) + if err != nil { + return "", fmt.Errorf("error parsing private key %q: %v", providerState.PrivateKey, err) + } + + name := fmt.Sprintf("Consul cross-signed CA %d", providerState.LeafIndex+1) + + // The URI (SPIFFE compatible) for the cert + _, config, err := state.CAConfig() + if err != nil { + return "", err + } + id := &connect.SpiffeIDSigning{ClusterID: config.ClusterSerial, Domain: "consul"} + keyId, err := connect.KeyId(privKey.Public()) + if err != nil { + return "", err + } + + // Create the CA cert + serialNum := &big.Int{} + serialNum.SetUint64(providerState.LeafIndex + 1) + template := x509.Certificate{ + SerialNumber: serialNum, + Subject: pkix.Name{CommonName: name}, + URIs: csr.URIs, + Signature: csr.Signature, + PublicKeyAlgorithm: csr.PublicKeyAlgorithm, + PublicKey: csr.PublicKey, + PermittedDNSDomainsCritical: true, + PermittedDNSDomains: []string{id.URI().Hostname()}, + BasicConstraintsValid: true, + KeyUsage: x509.KeyUsageCertSign | + x509.KeyUsageCRLSign | + x509.KeyUsageDigitalSignature, + IsCA: true, + NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: keyId, + SubjectKeyId: keyId, + } + + bs, err := x509.CreateCertificate( + rand.Reader, &template, &template, privKey.Public(), privKey) + if err != nil { + return "", fmt.Errorf("error generating CA certificate: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) + if err != nil { + return "", fmt.Errorf("error encoding private key: %s", err) + } + + return buf.String(), nil +} + // generatePrivateKey returns a new private key func generatePrivateKey() (string, error) { var pk *ecdsa.PrivateKey diff --git a/agent/consul/connect_ca_provider_test.go b/agent/consul/connect_ca_provider_test.go new file mode 100644 index 000000000..adad3acba --- /dev/null +++ b/agent/consul/connect_ca_provider_test.go @@ -0,0 +1,34 @@ +package consul + +import ( + "os" + "testing" + + "github.com/hashicorp/consul/testrpc" + "github.com/stretchr/testify/assert" +) + +func TestCAProvider_Bootstrap(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + provider := s1.getCAProvider() + + root, err := provider.ActiveRoot() + assert.NoError(err) + + state := s1.fsm.State() + _, activeRoot, err := state.CARootActive(nil) + assert.NoError(err) + assert.Equal(root.ID, activeRoot.ID) + assert.Equal(root.Name, activeRoot.Name) + assert.Equal(root.RootCert, activeRoot.RootCert) +} diff --git a/agent/consul/leader.go b/agent/consul/leader.go index 8d62ca1aa..91bacee2f 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -1,7 +1,6 @@ package consul import ( - "crypto/x509" "fmt" "net" "strconv" @@ -476,18 +475,6 @@ func (s *Server) setCAProvider(newProvider connect.CAProvider) { s.caProvider = newProvider } -// signConnectCert signs a cert for a service using the currently configured CA provider -func (s *Server) signConnectCert(service *connect.SpiffeIDService, csr *x509.CertificateRequest) (*structs.IssuedCert, error) { - 
s.caProviderLock.RLock() - defer s.caProviderLock.RUnlock() - - cert, err := s.caProvider.Sign(service, csr) - if err != nil { - return nil, err - } - return cert, nil -} - // reconcileReaped is used to reconcile nodes that have failed and been reaped // from Serf but remain in the catalog. This is done by looking for unknown nodes with serfHealth checks registered. // We generate a "reap" event to cause the node to be cleaned up. diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 1e2959dd1..33c355fca 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -31,6 +31,10 @@ type CARoot struct { // RootCert is the PEM-encoded public certificate. RootCert string + // Intermediates is a list of PEM-encoded intermediate certs to + // attach to any leaf certs signed by this CA. + Intermediates []string + // SigningCert is the PEM-encoded signing certificate and SigningKey // is the PEM-encoded private key for the signing certificate. These // may actually be empty if the CA plugin in use manages these for us. From 8584e9262e050975566671378b4c10f6b82e2526 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 3 Apr 2018 20:46:07 -0700 Subject: [PATCH 135/539] agent/cache: initial kind-of working cache --- agent/cache/cache.go | 239 ++++++++++++++++++++++++++++++++++++ agent/cache/cache_test.go | 200 ++++++++++++++++++++++++++++++ agent/cache/mock_RPC.go | 23 ++++ agent/cache/mock_Request.go | 37 ++++++ agent/cache/mock_Type.go | 30 +++++ agent/cache/request.go | 17 +++ agent/cache/testing.go | 84 +++++++++++++ agent/cache/type.go | 68 ++++++++++ 8 files changed, 698 insertions(+) create mode 100644 agent/cache/cache.go create mode 100644 agent/cache/cache_test.go create mode 100644 agent/cache/mock_RPC.go create mode 100644 agent/cache/mock_Request.go create mode 100644 agent/cache/mock_Type.go create mode 100644 agent/cache/request.go create mode 100644 agent/cache/testing.go create mode 100644 agent/cache/type.go diff --git a/agent/cache/cache.go b/agent/cache/cache.go new file mode 100644 index 000000000..d0172cdc0 --- /dev/null +++ b/agent/cache/cache.go @@ -0,0 +1,239 @@ +// Package cache provides caching features for data from a Consul server. +// +// While this is similar in some ways to the "agent/ae" package, a key +// difference is that with anti-entropy, the agent is the authoritative +// source so it resolves differences the server may have. With caching (this +// package), the server is the authoritative source and we do our best to +// balance performance and correctness, depending on the type of data being +// requested. +// +// Currently, the cache package supports only continuous, blocking query +// caching. This means that the cache update is edge-triggered by Consul +// server blocking queries. +package cache + +import ( + "fmt" + "sync" + "time" +) + +//go:generate mockery -all -inpkg + +// Pre-written options for type registration. These should not be modified. +var ( + // RegisterOptsPeriodic performs a periodic refresh of data fetched + // by the registered type. + RegisterOptsPeriodic = &RegisterOptions{ + Refresh: true, + RefreshTimer: 30 * time.Second, + RefreshTimeout: 5 * time.Minute, + } +) + +// TODO: DC-aware + +// RPC is an interface that an RPC client must implement. +type RPC interface { + RPC(method string, args interface{}, reply interface{}) error +} + +// Cache is a agent-local cache of Consul data. +type Cache struct { + // rpcClient is the RPC-client. 
+ rpcClient RPC + + entriesLock sync.RWMutex + entries map[string]cacheEntry + + typesLock sync.RWMutex + types map[string]typeEntry +} + +type cacheEntry struct { + // Fields pertaining to the actual value + Value interface{} + Error error + Index uint64 + + // Metadata that is used for internal accounting + Valid bool + Fetching bool + Waiter chan struct{} +} + +// typeEntry is a single type that is registered with a Cache. +type typeEntry struct { + Type Type + Opts *RegisterOptions +} + +// New creates a new cache with the given RPC client and reasonable defaults. +// Further settings can be tweaked on the returned value. +func New(rpc RPC) *Cache { + return &Cache{ + rpcClient: rpc, + entries: make(map[string]cacheEntry), + types: make(map[string]typeEntry), + } +} + +// RegisterOptions are options that can be associated with a type being +// registered for the cache. This changes the behavior of the cache for +// this type. +type RegisterOptions struct { + // Refresh configures whether the data is actively refreshed or if + // the data is only refreshed on an explicit Get. The default (false) + // is to only request data on explicit Get. + Refresh bool + + // RefreshTimer is the time between attempting to refresh data. + // If this is zero, then data is refreshed immediately when a fetch + // is returned. + // + // RefreshTimeout determines the maximum query time for a refresh + // operation. This is specified as part of the query options and is + // expected to be implemented by the Type itself. + // + // Using these values, various "refresh" mechanisms can be implemented: + // + // * With a high timer duration and a low timeout, a timer-based + // refresh can be set that minimizes load on the Consul servers. + // + // * With a low timer and high timeout duration, a blocking-query-based + // refresh can be set so that changes in server data are recognized + // within the cache very quickly. + // + RefreshTimer time.Duration + RefreshTimeout time.Duration +} + +// RegisterType registers a cacheable type. +func (c *Cache) RegisterType(n string, typ Type, opts *RegisterOptions) { + c.typesLock.Lock() + defer c.typesLock.Unlock() + c.types[n] = typeEntry{Type: typ, Opts: opts} +} + +// Get loads the data for the given type and request. If data satisfying the +// minimum index is present in the cache, it is returned immediately. Otherwise, +// this will block until the data is available or the request timeout is +// reached. +// +// Multiple Get calls for the same Request (matching CacheKey value) will +// block on a single network request. +func (c *Cache) Get(t string, r Request) (interface{}, error) { + key := r.CacheKey() + idx := r.CacheMinIndex() + +RETRY_GET: + // Get the current value + c.entriesLock.RLock() + entry, ok := c.entries[key] + c.entriesLock.RUnlock() + + // If we have a current value and the index is greater than the + // currently stored index then we return that right away. If the + // index is zero and we have something in the cache we accept whatever + // we have. + if ok && entry.Valid && (idx == 0 || idx < entry.Index) { + return entry.Value, nil + } + + // At this point, we know we either don't have a value at all or the + // value we have is too old. We need to wait for new data. 
+ waiter, err := c.fetch(t, r) + if err != nil { + return nil, err + } + + // Wait on our waiter and then retry the cache load + <-waiter + goto RETRY_GET +} + +func (c *Cache) fetch(t string, r Request) (<-chan struct{}, error) { + // Get the type that we're fetching + c.typesLock.RLock() + tEntry, ok := c.types[t] + c.typesLock.RUnlock() + if !ok { + return nil, fmt.Errorf("unknown type in cache: %s", t) + } + + // The cache key is used multiple times and might be dynamically + // constructed so let's just store it once here. + key := r.CacheKey() + + c.entriesLock.Lock() + defer c.entriesLock.Unlock() + entry, ok := c.entries[key] + + // If we already have an entry and it is actively fetching, then return + // the currently active waiter. + if ok && entry.Fetching { + return entry.Waiter, nil + } + + // If we don't have an entry, then create it. The entry must be marked + // as invalid so that it isn't returned as a valid value for a zero index. + if !ok { + entry = cacheEntry{Valid: false, Waiter: make(chan struct{})} + } + + // Set that we're fetching to true, which makes it so that future + // identical calls to fetch will return the same waiter rather than + // perform multiple fetches. + entry.Fetching = true + c.entries[key] = entry + + // The actual Fetch must be performed in a goroutine. + go func() { + // Start building the new entry by blocking on the fetch. + var newEntry cacheEntry + result, err := tEntry.Type.Fetch(FetchOptions{ + RPC: c.rpcClient, + MinIndex: entry.Index, + }, r) + newEntry.Value = result.Value + newEntry.Index = result.Index + newEntry.Error = err + + // This is a valid entry with a result + newEntry.Valid = true + + // Create a new waiter that will be used for the next fetch. + newEntry.Waiter = make(chan struct{}) + + // Insert + c.entriesLock.Lock() + c.entries[key] = newEntry + c.entriesLock.Unlock() + + // Trigger the waiter + close(entry.Waiter) + + // If refresh is enabled, run the refresh in due time. The refresh + // below might block, but saves us from spawning another goroutine. + if tEntry.Opts != nil && tEntry.Opts.Refresh { + c.refresh(tEntry.Opts, t, r) + } + }() + + return entry.Waiter, nil +} + +func (c *Cache) refresh(opts *RegisterOptions, t string, r Request) { + // Sanity-check, we should not schedule anything that has refresh disabled + if !opts.Refresh { + return + } + + // If we have a timer, wait for it + if opts.RefreshTimer > 0 { + time.Sleep(opts.RefreshTimer) + } + + // Trigger + c.fetch(t, r) +} diff --git a/agent/cache/cache_test.go b/agent/cache/cache_test.go new file mode 100644 index 000000000..d82ded195 --- /dev/null +++ b/agent/cache/cache_test.go @@ -0,0 +1,200 @@ +package cache + +import ( + "sort" + "sync" + "testing" + "time" + + "github.com/stretchr/testify/mock" + "github.com/stretchr/testify/require" +) + +// Test a basic Get with no indexes (and therefore no blocking queries). 
+func TestCacheGet_noIndex(t *testing.T) { + t.Parallel() + + require := require.New(t) + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + c.RegisterType("t", typ, nil) + + // Configure the type + typ.Static(FetchResult{Value: 42}, nil).Times(1) + + // Get, should fetch + req := TestRequest(t, "hello", 0) + result, err := c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Get, should not fetch since we already have a satisfying value + result, err = c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Sleep a tiny bit just to let maybe some background calls happen + // then verify that we still only got the one call + time.Sleep(20 * time.Millisecond) + typ.AssertExpectations(t) +} + +// Test that Get blocks on the initial value +func TestCacheGet_blockingInitSameKey(t *testing.T) { + t.Parallel() + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + c.RegisterType("t", typ, nil) + + // Configure the type + triggerCh := make(chan time.Time) + typ.Static(FetchResult{Value: 42}, nil).WaitUntil(triggerCh).Times(1) + + // Perform multiple gets + getCh1 := TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) + getCh2 := TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) + + // They should block + select { + case <-getCh1: + t.Fatal("should block (ch1)") + case <-getCh2: + t.Fatal("should block (ch2)") + case <-time.After(50 * time.Millisecond): + } + + // Trigger it + close(triggerCh) + + // Should return + TestCacheGetChResult(t, getCh1, 42) + TestCacheGetChResult(t, getCh2, 42) +} + +// Test that Get with different cache keys both block on initial value +// but that the fetches were both properly called. +func TestCacheGet_blockingInitDiffKeys(t *testing.T) { + t.Parallel() + + require := require.New(t) + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + c.RegisterType("t", typ, nil) + + // Keep track of the keys + var keysLock sync.Mutex + var keys []string + + // Configure the type + triggerCh := make(chan time.Time) + typ.Static(FetchResult{Value: 42}, nil). + WaitUntil(triggerCh). + Times(2). + Run(func(args mock.Arguments) { + keysLock.Lock() + defer keysLock.Unlock() + keys = append(keys, args.Get(1).(Request).CacheKey()) + }) + + // Perform multiple gets + getCh1 := TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) + getCh2 := TestCacheGetCh(t, c, "t", TestRequest(t, "goodbye", 0)) + + // They should block + select { + case <-getCh1: + t.Fatal("should block (ch1)") + case <-getCh2: + t.Fatal("should block (ch2)") + case <-time.After(50 * time.Millisecond): + } + + // Trigger it + close(triggerCh) + + // Should return both! + TestCacheGetChResult(t, getCh1, 42) + TestCacheGetChResult(t, getCh2, 42) + + // Verify proper keys + sort.Strings(keys) + require.Equal([]string{"goodbye", "hello"}, keys) +} + +// Test a get with an index set will wait until an index that is higher +// is set in the cache. 
+func TestCacheGet_blockingIndex(t *testing.T) { + t.Parallel() + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + c.RegisterType("t", typ, nil) + + // Configure the type + triggerCh := make(chan time.Time) + typ.Static(FetchResult{Value: 1, Index: 4}, nil).Once() + typ.Static(FetchResult{Value: 12, Index: 5}, nil).Once() + typ.Static(FetchResult{Value: 42, Index: 6}, nil).WaitUntil(triggerCh) + + // Fetch should block + resultCh := TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 5)) + + // Should block + select { + case <-resultCh: + t.Fatal("should block") + case <-time.After(50 * time.Millisecond): + } + + // Wait a bit + close(triggerCh) + + // Should return + TestCacheGetChResult(t, resultCh, 42) +} + +// Test that a type registered with a periodic refresh will perform +// that refresh after the timer is up. +func TestCacheGet_periodicRefresh(t *testing.T) { + t.Parallel() + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + c.RegisterType("t", typ, &RegisterOptions{ + Refresh: true, + RefreshTimer: 100 * time.Millisecond, + RefreshTimeout: 5 * time.Minute, + }) + + // This is a bit weird, but we do this to ensure that the final + // call to the Fetch (if it happens, depends on timing) just blocks. + triggerCh := make(chan time.Time) + defer close(triggerCh) + + // Configure the type + typ.Static(FetchResult{Value: 1, Index: 4}, nil).Once() + typ.Static(FetchResult{Value: 12, Index: 5}, nil).Once() + typ.Static(FetchResult{Value: 12, Index: 5}, nil).WaitUntil(triggerCh) + + // Fetch should block + resultCh := TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) + TestCacheGetChResult(t, resultCh, 1) + + // Fetch again almost immediately should return old result + time.Sleep(5 * time.Millisecond) + resultCh = TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) + TestCacheGetChResult(t, resultCh, 1) + + // Wait for the timer + time.Sleep(200 * time.Millisecond) + resultCh = TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) + TestCacheGetChResult(t, resultCh, 12) +} diff --git a/agent/cache/mock_RPC.go b/agent/cache/mock_RPC.go new file mode 100644 index 000000000..a1100d2a7 --- /dev/null +++ b/agent/cache/mock_RPC.go @@ -0,0 +1,23 @@ +// Code generated by mockery v1.0.0 +package cache + +import mock "github.com/stretchr/testify/mock" + +// MockRPC is an autogenerated mock type for the RPC type +type MockRPC struct { + mock.Mock +} + +// RPC provides a mock function with given fields: method, args, reply +func (_m *MockRPC) RPC(method string, args interface{}, reply interface{}) error { + ret := _m.Called(method, args, reply) + + var r0 error + if rf, ok := ret.Get(0).(func(string, interface{}, interface{}) error); ok { + r0 = rf(method, args, reply) + } else { + r0 = ret.Error(0) + } + + return r0 +} diff --git a/agent/cache/mock_Request.go b/agent/cache/mock_Request.go new file mode 100644 index 000000000..157912182 --- /dev/null +++ b/agent/cache/mock_Request.go @@ -0,0 +1,37 @@ +// Code generated by mockery v1.0.0 +package cache + +import mock "github.com/stretchr/testify/mock" + +// MockRequest is an autogenerated mock type for the Request type +type MockRequest struct { + mock.Mock +} + +// CacheKey provides a mock function with given fields: +func (_m *MockRequest) CacheKey() string { + ret := _m.Called() + + var r0 string + if rf, ok := ret.Get(0).(func() string); ok { + r0 = rf() + } else { + r0 = ret.Get(0).(string) + } + + return r0 +} + +// CacheMinIndex provides a mock function with given fields: +func (_m 
*MockRequest) CacheMinIndex() uint64 { + ret := _m.Called() + + var r0 uint64 + if rf, ok := ret.Get(0).(func() uint64); ok { + r0 = rf() + } else { + r0 = ret.Get(0).(uint64) + } + + return r0 +} diff --git a/agent/cache/mock_Type.go b/agent/cache/mock_Type.go new file mode 100644 index 000000000..110fc5787 --- /dev/null +++ b/agent/cache/mock_Type.go @@ -0,0 +1,30 @@ +// Code generated by mockery v1.0.0 +package cache + +import mock "github.com/stretchr/testify/mock" + +// MockType is an autogenerated mock type for the Type type +type MockType struct { + mock.Mock +} + +// Fetch provides a mock function with given fields: _a0, _a1 +func (_m *MockType) Fetch(_a0 FetchOptions, _a1 Request) (FetchResult, error) { + ret := _m.Called(_a0, _a1) + + var r0 FetchResult + if rf, ok := ret.Get(0).(func(FetchOptions, Request) FetchResult); ok { + r0 = rf(_a0, _a1) + } else { + r0 = ret.Get(0).(FetchResult) + } + + var r1 error + if rf, ok := ret.Get(1).(func(FetchOptions, Request) error); ok { + r1 = rf(_a0, _a1) + } else { + r1 = ret.Error(1) + } + + return r0, r1 +} diff --git a/agent/cache/request.go b/agent/cache/request.go new file mode 100644 index 000000000..c75c8ad84 --- /dev/null +++ b/agent/cache/request.go @@ -0,0 +1,17 @@ +package cache + +// Request is a cache-able request. +// +// This interface is typically implemented by request structures in +// the agent/structs package. +type Request interface { + // CacheKey is a unique cache key for this request. This key should + // absolutely uniquely identify this request, since any conflicting + // cache keys could result in invalid data being returned from the cache. + CacheKey() string + + // CacheMinIndex is the minimum index being queried. This is used to + // determine if we already have data satisfying the query or if we need + // to block until new data is available. + CacheMinIndex() uint64 +} diff --git a/agent/cache/testing.go b/agent/cache/testing.go new file mode 100644 index 000000000..7bf2bf891 --- /dev/null +++ b/agent/cache/testing.go @@ -0,0 +1,84 @@ +package cache + +import ( + "reflect" + "time" + + "github.com/mitchellh/go-testing-interface" + "github.com/stretchr/testify/mock" +) + +// TestCache returns a Cache instance configuring for testing. +func TestCache(t testing.T) *Cache { + // Simple but lets us do some fine-tuning later if we want to. + return New(TestRPC(t)) +} + +// TestCacheGetCh returns a channel that returns the result of the Get call. +// This is useful for testing timing and concurrency with Get calls. Any +// error will be logged, so the result value should always be asserted. +func TestCacheGetCh(t testing.T, c *Cache, typ string, r Request) <-chan interface{} { + resultCh := make(chan interface{}) + go func() { + result, err := c.Get(typ, r) + if err != nil { + t.Logf("Error: %s", err) + close(resultCh) + return + } + + resultCh <- result + }() + + return resultCh +} + +// TestCacheGetChResult tests that the result from TestCacheGetCh matches +// within a reasonable period of time (it expects it to be "immediate" but +// waits some milliseconds). +func TestCacheGetChResult(t testing.T, ch <-chan interface{}, expected interface{}) { + t.Helper() + + select { + case result := <-ch: + if !reflect.DeepEqual(result, expected) { + t.Fatalf("Result doesn't match!\n\n%#v\n\n%#v", result, expected) + } + case <-time.After(50 * time.Millisecond): + } +} + +// TestRequest returns a Request that returns the given cache key and index. +// The Reset method can be called to reset it for custom usage. 
+func TestRequest(t testing.T, key string, index uint64) *MockRequest { + req := &MockRequest{} + req.On("CacheKey").Return(key) + req.On("CacheMinIndex").Return(index) + return req +} + +// TestRPC returns a mock implementation of the RPC interface. +func TestRPC(t testing.T) *MockRPC { + // This function is relatively useless but this allows us to perhaps + // perform some initialization later. + return &MockRPC{} +} + +// TestType returns a MockType that can be used to setup expectations +// on data fetching. +func TestType(t testing.T) *MockType { + typ := &MockType{} + return typ +} + +// A bit weird, but we add methods to the auto-generated structs here so that +// they don't get clobbered. The helper methods are conveniences. + +// Static sets a static value to return for a call to Fetch. +func (m *MockType) Static(r FetchResult, err error) *mock.Call { + return m.Mock.On("Fetch", mock.Anything, mock.Anything).Return(r, err) +} + +func (m *MockRequest) Reset() { + m.Mock = mock.Mock{} +} diff --git a/agent/cache/type.go b/agent/cache/type.go new file mode 100644 index 000000000..fbb65761f --- /dev/null +++ b/agent/cache/type.go @@ -0,0 +1,68 @@ +package cache + +import ( + "time" +) + +// Type implement the logic to fetch certain types of data. +type Type interface { + // Fetch fetches a single unique item. + // + // The FetchOptions contain the index and timeouts for blocking queries. + // The CacheMinIndex value on the Request itself should NOT be used + // as the blocking index since a request may be reused multiple times + // as part of Refresh behavior. + // + // The return value is a FetchResult which contains information about + // the fetch. + Fetch(FetchOptions, Request) (FetchResult, error) +} + +// FetchOptions are various settable options when a Fetch is called. +type FetchOptions struct { + // RPC is the RPC client to communicate to a Consul server. + RPC RPC + + // MinIndex is the minimum index to be used for blocking queries. + // If blocking queries aren't supported for data being returned, + // this value can be ignored. + MinIndex uint64 + + // Timeout is the maximum time for the query. This must be implemented + // in the Fetch itself. + Timeout time.Duration +} + +// FetchResult is the result of a Type Fetch operation and contains the +// data along with metadata gathered from that operation. +type FetchResult struct { + // Value is the result of the fetch. + Value interface{} + + // Index is the corresponding index value for this data. + Index uint64 +} + +/* +type TypeCARoot struct{} + +func (c *TypeCARoot) Fetch(delegate RPC, idx uint64, req Request) (interface{}, uint64, error) { + // The request should be a DCSpecificRequest. 
+ reqReal, ok := req.(*structs.DCSpecificRequest) + if !ok { + return nil, 0, fmt.Errorf( + "Internal cache failure: request wrong type: %T", req) + } + + // Set the minimum query index to our current index so we block + reqReal.QueryOptions.MinQueryIndex = idx + + // Fetch + var reply structs.IndexedCARoots + if err := delegate.RPC("ConnectCA.Roots", reqReal, &reply); err != nil { + return nil, 0, err + } + + return &reply, reply.QueryMeta.Index, nil +} +*/ From c69df79e0c698917ffe162c13a7d50a8d30c81fc Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 8 Apr 2018 14:30:14 +0100 Subject: [PATCH 136/539] agent/cache: blank cache key means to always fetch --- agent/cache/cache.go | 70 ++++++++++++++++++++-------------- agent/cache/cache_test.go | 32 ++++++++++++++++ agent/cache/rpc.go | 8 ++++ agent/cache/testing.go | 2 +- agent/cache/type.go | 27 ------------- agent/cache/type_connect_ca.go | 39 +++++++++++++++++++ 6 files changed, 122 insertions(+), 56 deletions(-) create mode 100644 agent/cache/rpc.go create mode 100644 agent/cache/type_connect_ca.go diff --git a/agent/cache/cache.go b/agent/cache/cache.go index d0172cdc0..1c2f316dd 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -20,29 +20,10 @@ import ( //go:generate mockery -all -inpkg -// Pre-written options for type registration. These should not be modified. -var ( - // RegisterOptsPeriodic performs a periodic refresh of data fetched - // by the registered type. - RegisterOptsPeriodic = &RegisterOptions{ - Refresh: true, - RefreshTimer: 30 * time.Second, - RefreshTimeout: 5 * time.Minute, - } -) - -// TODO: DC-aware - -// RPC is an interface that an RPC client must implement. -type RPC interface { - RPC(method string, args interface{}, reply interface{}) error -} +// TODO: DC-aware, ACL-aware // Cache is a agent-local cache of Consul data. type Cache struct { - // rpcClient is the RPC-client. - rpcClient RPC - entriesLock sync.RWMutex entries map[string]cacheEntry @@ -50,6 +31,7 @@ type Cache struct { types map[string]typeEntry } +// cacheEntry stores a single cache entry. type cacheEntry struct { // Fields pertaining to the actual value Value interface{} @@ -68,13 +50,17 @@ type typeEntry struct { Opts *RegisterOptions } +// Options are options for the Cache. +type Options struct { + // Nothing currently, reserved. +} + // New creates a new cache with the given RPC client and reasonable defaults. // Further settings can be tweaked on the returned value. -func New(rpc RPC) *Cache { +func New(*Options) *Cache { return &Cache{ - rpcClient: rpc, - entries: make(map[string]cacheEntry), - types: make(map[string]typeEntry), + entries: make(map[string]cacheEntry), + types: make(map[string]typeEntry), } } @@ -124,7 +110,11 @@ func (c *Cache) RegisterType(n string, typ Type, opts *RegisterOptions) { // block on a single network request. func (c *Cache) Get(t string, r Request) (interface{}, error) { key := r.CacheKey() - idx := r.CacheMinIndex() + if key == "" { + // If no key is specified, then we do not cache this request. + // Pass directly through to the backend. + return c.fetchDirect(t, r) + } RETRY_GET: // Get the current value @@ -136,8 +126,11 @@ RETRY_GET: // currently stored index then we return that right away. If the // index is zero and we have something in the cache we accept whatever // we have. 
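	// A worked sketch of the check below: with a cached entry at Index 5,
	// a request whose minimum index is 0 or 4 returns the cached value
	// immediately, while a minimum index of 5 or higher falls through and
	// blocks until a fetch produces a higher index. In tests that looks
	// roughly like:
	//
	//	c.Get("t", TestRequest(t, "hello", 4)) // cached value, returns now
	//	c.Get("t", TestRequest(t, "hello", 5)) // blocks until Index > 5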
- if ok && entry.Valid && (idx == 0 || idx < entry.Index) { - return entry.Value, nil + if ok && entry.Valid { + idx := r.CacheMinIndex() + if idx == 0 || idx < entry.Index { + return entry.Value, nil + } } // At this point, we know we either don't have a value at all or the @@ -192,7 +185,6 @@ func (c *Cache) fetch(t string, r Request) (<-chan struct{}, error) { // Start building the new entry by blocking on the fetch. var newEntry cacheEntry result, err := tEntry.Type.Fetch(FetchOptions{ - RPC: c.rpcClient, MinIndex: entry.Index, }, r) newEntry.Value = result.Value @@ -223,6 +215,28 @@ func (c *Cache) fetch(t string, r Request) (<-chan struct{}, error) { return entry.Waiter, nil } +// fetchDirect fetches the given request with no caching. +func (c *Cache) fetchDirect(t string, r Request) (interface{}, error) { + // Get the type that we're fetching + c.typesLock.RLock() + tEntry, ok := c.types[t] + c.typesLock.RUnlock() + if !ok { + return nil, fmt.Errorf("unknown type in cache: %s", t) + } + + // Fetch it with the min index specified directly by the request. + result, err := tEntry.Type.Fetch(FetchOptions{ + MinIndex: r.CacheMinIndex(), + }, r) + if err != nil { + return nil, err + } + + // Return the result and ignore the rest + return result.Value, nil +} + func (c *Cache) refresh(opts *RegisterOptions, t string, r Request) { // Sanity-check, we should not schedule anything that has refresh disabled if !opts.Refresh { diff --git a/agent/cache/cache_test.go b/agent/cache/cache_test.go index d82ded195..69f99a628 100644 --- a/agent/cache/cache_test.go +++ b/agent/cache/cache_test.go @@ -41,6 +41,38 @@ func TestCacheGet_noIndex(t *testing.T) { typ.AssertExpectations(t) } +// Test a Get with a request that returns a blank cache key. This should +// force a backend request and skip the cache entirely. +func TestCacheGet_blankCacheKey(t *testing.T) { + t.Parallel() + + require := require.New(t) + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + c.RegisterType("t", typ, nil) + + // Configure the type + typ.Static(FetchResult{Value: 42}, nil).Times(2) + + // Get, should fetch + req := TestRequest(t, "", 0) + result, err := c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Get, should not fetch since we already have a satisfying value + result, err = c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Sleep a tiny bit just to let maybe some background calls happen + // then verify that we still only got the one call + time.Sleep(20 * time.Millisecond) + typ.AssertExpectations(t) +} + // Test that Get blocks on the initial value func TestCacheGet_blockingInitSameKey(t *testing.T) { t.Parallel() diff --git a/agent/cache/rpc.go b/agent/cache/rpc.go new file mode 100644 index 000000000..98976284a --- /dev/null +++ b/agent/cache/rpc.go @@ -0,0 +1,8 @@ +package cache + +// RPC is an interface that an RPC client must implement. This is a helper +// interface that is implemented by the agent delegate so that Type +// implementations can request RPC access. +type RPC interface { + RPC(method string, args interface{}, reply interface{}) error +} diff --git a/agent/cache/testing.go b/agent/cache/testing.go index 7bf2bf891..6a094c117 100644 --- a/agent/cache/testing.go +++ b/agent/cache/testing.go @@ -11,7 +11,7 @@ import ( // TestCache returns a Cache instance configuring for testing. func TestCache(t testing.T) *Cache { // Simple but lets us do some fine-tuning later if we want to. 
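	// As of this change the cache takes no RPC client at construction;
	// Type implementations that need one carry it themselves. Non-test
	// construction looks the same (sketch):
	//
	//	c := New(nil) // nil is fine while Options is reserved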
- return New(TestRPC(t)) + return New(nil) } // TestCacheGetCh returns a channel that returns the result of the Get call. diff --git a/agent/cache/type.go b/agent/cache/type.go index fbb65761f..6e8edeb5f 100644 --- a/agent/cache/type.go +++ b/agent/cache/type.go @@ -20,9 +20,6 @@ type Type interface { // FetchOptions are various settable options when a Fetch is called. type FetchOptions struct { - // RPC is the RPC client to communicate to a Consul server. - RPC RPC - // MinIndex is the minimum index to be used for blocking queries. // If blocking queries aren't supported for data being returned, // this value can be ignored. @@ -42,27 +39,3 @@ type FetchResult struct { // Index is the corresponding index value for this data. Index uint64 } - -/* -type TypeCARoot struct{} - -func (c *TypeCARoot) Fetch(delegate RPC, idx uint64, req Request) (interface{}, uint64, error) { - // The request should be a DCSpecificRequest. - reqReal, ok := req.(*structs.DCSpecificRequest) - if !ok { - return nil, 0, fmt.Errorf( - "Internal cache failure: request wrong type: %T", req) - } - - // Set the minimum query index to our current index so we block - reqReal.QueryOptions.MinQueryIndex = idx - - // Fetch - var reply structs.IndexedCARoots - if err := delegate.RPC("ConnectCA.Roots", reqReal, &reply); err != nil { - return nil, 0, err - } - - return &reply, reply.QueryMeta.Index, nil -} -*/ diff --git a/agent/cache/type_connect_ca.go b/agent/cache/type_connect_ca.go new file mode 100644 index 000000000..40bda72df --- /dev/null +++ b/agent/cache/type_connect_ca.go @@ -0,0 +1,39 @@ +package cache + +/* +import ( + "fmt" + + "github.com/hashicorp/consul/agent/structs" +) + +// TypeCARoot supports fetching the Connect CA roots. +type TypeCARoot struct { + RPC RPC +} + +func (c *TypeCARoot) Fetch(opts FetchOptions, req Request) (FetchResult, error) { + var result FetchResult + + // The request should be a DCSpecificRequest. 
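	// Callers are expected to Get through the cache with the matching
	// request struct, e.g. (sketch, rpcClient is a hypothetical RPC client):
	//
	//	c.RegisterType("connect-ca", &TypeCARoot{RPC: rpcClient}, nil)
	//	raw, err := c.Get("connect-ca", &structs.DCSpecificRequest{Datacenter: "dc1"})
	//
	// Anything else is a programming error and is rejected just below.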
+ reqReal, ok := req.(*structs.DCSpecificRequest) + if !ok { + return result, fmt.Errorf( + "Internal cache failure: request wrong type: %T", req) + } + + // Set the minimum query index to our current index so we block + reqReal.QueryOptions.MinQueryIndex = opts.MinIndex + reqReal.QueryOptions.MaxQueryTime = opts.Timeout + + // Fetch + var reply structs.IndexedCARoots + if err := c.RPC.RPC("ConnectCA.Roots", reqReal, &reply); err != nil { + return result, err + } + + result.Value = &reply + result.Index = reply.QueryMeta.Index + return result, nil +} +*/ From ecc789ddb59ab7a0ceff8bdfc70b775923c73a8b Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 8 Apr 2018 14:45:55 +0100 Subject: [PATCH 137/539] agent/cache: ConnectCA roots caching type --- agent/cache/type_connect_ca.go | 2 -- agent/cache/type_connect_ca_test.go | 55 +++++++++++++++++++++++++++++ agent/structs/structs.go | 20 +++++++++++ 3 files changed, 75 insertions(+), 2 deletions(-) create mode 100644 agent/cache/type_connect_ca_test.go diff --git a/agent/cache/type_connect_ca.go b/agent/cache/type_connect_ca.go index 40bda72df..6a0a6699c 100644 --- a/agent/cache/type_connect_ca.go +++ b/agent/cache/type_connect_ca.go @@ -1,6 +1,5 @@ package cache -/* import ( "fmt" @@ -36,4 +35,3 @@ func (c *TypeCARoot) Fetch(opts FetchOptions, req Request) (FetchResult, error) result.Index = reply.QueryMeta.Index return result, nil } -*/ diff --git a/agent/cache/type_connect_ca_test.go b/agent/cache/type_connect_ca_test.go new file mode 100644 index 000000000..359449d21 --- /dev/null +++ b/agent/cache/type_connect_ca_test.go @@ -0,0 +1,55 @@ +package cache + +import ( + "testing" + "time" + + "github.com/hashicorp/consul/agent/structs" + "github.com/stretchr/testify/mock" + "github.com/stretchr/testify/require" +) + +func TestTypeCARoot(t *testing.T) { + require := require.New(t) + rpc := TestRPC(t) + defer rpc.AssertExpectations(t) + typ := &TypeCARoot{RPC: rpc} + + // Expect the proper RPC call. This also sets the expected value + // since that is return-by-pointer in the arguments. + var resp *structs.IndexedCARoots + rpc.On("RPC", "ConnectCA.Roots", mock.Anything, mock.Anything).Return(nil). 
+ Run(func(args mock.Arguments) { + req := args.Get(1).(*structs.DCSpecificRequest) + require.Equal(uint64(24), req.QueryOptions.MinQueryIndex) + require.Equal(1*time.Second, req.QueryOptions.MaxQueryTime) + + reply := args.Get(2).(*structs.IndexedCARoots) + reply.QueryMeta.Index = 48 + resp = reply + }) + + // Fetch + result, err := typ.Fetch(FetchOptions{ + MinIndex: 24, + Timeout: 1 * time.Second, + }, &structs.DCSpecificRequest{Datacenter: "dc1"}) + require.Nil(err) + require.Equal(FetchResult{ + Value: resp, + Index: 48, + }, result) +} + +func TestTypeCARoot_badReqType(t *testing.T) { + require := require.New(t) + rpc := TestRPC(t) + defer rpc.AssertExpectations(t) + typ := &TypeCARoot{RPC: rpc} + + // Fetch + _, err := typ.Fetch(FetchOptions{}, TestRequest(t, "foo", 64)) + require.NotNil(err) + require.Contains(err.Error(), "wrong type") + +} diff --git a/agent/structs/structs.go b/agent/structs/structs.go index 4f25e50f0..d40c90baa 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -6,6 +6,7 @@ import ( "math/rand" "reflect" "regexp" + "strconv" "strings" "time" @@ -14,6 +15,7 @@ import ( "github.com/hashicorp/go-msgpack/codec" "github.com/hashicorp/go-multierror" "github.com/hashicorp/serf/coordinate" + "github.com/mitchellh/hashstructure" ) type MessageType uint8 @@ -276,6 +278,24 @@ func (r *DCSpecificRequest) RequestDatacenter() string { return r.Datacenter } +func (r *DCSpecificRequest) CacheKey() string { + // To calculate the cache key we only hash the node filters. The + // datacenter is handled by the cache framework. The other fields are + // not, but should not be used in any cache types. + v, err := hashstructure.Hash(r.NodeMetaFilters, nil) + if err != nil { + // Empty string means do not cache. If we have an error we should + // just forward along to the server. 
+ return "" + } + + return strconv.FormatUint(v, 10) +} + +func (r *DCSpecificRequest) CacheMinIndex() uint64 { + return r.QueryOptions.MinQueryIndex +} + // ServiceSpecificRequest is used to query about a specific service type ServiceSpecificRequest struct { Datacenter string From 72c82a9b29aac384dc67cb2511da09cd61567830 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 8 Apr 2018 15:08:34 +0100 Subject: [PATCH 138/539] agent/cache: Reorganize some files, RequestInfo struct, prepare for partitioning --- .../connect_ca.go} | 7 ++-- .../connect_ca_test.go} | 10 +++--- agent/{cache => cache-types}/mock_RPC.go | 2 +- agent/{cache => cache-types}/rpc.go | 4 ++- agent/cache-types/testing.go | 12 +++++++ agent/cache/cache.go | 22 ++++++------- agent/cache/cache_test.go | 23 ++++++------- agent/cache/mock_Request.go | 24 +++----------- agent/cache/request.go | 32 ++++++++++++++++--- agent/cache/testing.go | 12 ++----- agent/structs/structs.go | 18 +++++++---- 11 files changed, 94 insertions(+), 72 deletions(-) rename agent/{cache/type_connect_ca.go => cache-types/connect_ca.go} (79%) rename agent/{cache/type_connect_ca_test.go => cache-types/connect_ca_test.go} (83%) rename agent/{cache => cache-types}/mock_RPC.go (96%) rename agent/{cache => cache-types}/rpc.go (83%) create mode 100644 agent/cache-types/testing.go diff --git a/agent/cache/type_connect_ca.go b/agent/cache-types/connect_ca.go similarity index 79% rename from agent/cache/type_connect_ca.go rename to agent/cache-types/connect_ca.go index 6a0a6699c..85962b1fb 100644 --- a/agent/cache/type_connect_ca.go +++ b/agent/cache-types/connect_ca.go @@ -1,8 +1,9 @@ -package cache +package cachetype import ( "fmt" + "github.com/hashicorp/consul/agent/cache" "github.com/hashicorp/consul/agent/structs" ) @@ -11,8 +12,8 @@ type TypeCARoot struct { RPC RPC } -func (c *TypeCARoot) Fetch(opts FetchOptions, req Request) (FetchResult, error) { - var result FetchResult +func (c *TypeCARoot) Fetch(opts cache.FetchOptions, req cache.Request) (cache.FetchResult, error) { + var result cache.FetchResult // The request should be a DCSpecificRequest. 
reqReal, ok := req.(*structs.DCSpecificRequest) diff --git a/agent/cache/type_connect_ca_test.go b/agent/cache-types/connect_ca_test.go similarity index 83% rename from agent/cache/type_connect_ca_test.go rename to agent/cache-types/connect_ca_test.go index 359449d21..faf8317bd 100644 --- a/agent/cache/type_connect_ca_test.go +++ b/agent/cache-types/connect_ca_test.go @@ -1,9 +1,10 @@ -package cache +package cachetype import ( "testing" "time" + "github.com/hashicorp/consul/agent/cache" "github.com/hashicorp/consul/agent/structs" "github.com/stretchr/testify/mock" "github.com/stretchr/testify/require" @@ -30,12 +31,12 @@ func TestTypeCARoot(t *testing.T) { }) // Fetch - result, err := typ.Fetch(FetchOptions{ + result, err := typ.Fetch(cache.FetchOptions{ MinIndex: 24, Timeout: 1 * time.Second, }, &structs.DCSpecificRequest{Datacenter: "dc1"}) require.Nil(err) - require.Equal(FetchResult{ + require.Equal(cache.FetchResult{ Value: resp, Index: 48, }, result) @@ -48,7 +49,8 @@ func TestTypeCARoot_badReqType(t *testing.T) { typ := &TypeCARoot{RPC: rpc} // Fetch - _, err := typ.Fetch(FetchOptions{}, TestRequest(t, "foo", 64)) + _, err := typ.Fetch(cache.FetchOptions{}, cache.TestRequest( + t, cache.RequestInfo{Key: "foo", MinIndex: 64})) require.NotNil(err) require.Contains(err.Error(), "wrong type") diff --git a/agent/cache/mock_RPC.go b/agent/cache-types/mock_RPC.go similarity index 96% rename from agent/cache/mock_RPC.go rename to agent/cache-types/mock_RPC.go index a1100d2a7..6f642c66b 100644 --- a/agent/cache/mock_RPC.go +++ b/agent/cache-types/mock_RPC.go @@ -1,5 +1,5 @@ // Code generated by mockery v1.0.0 -package cache +package cachetype import mock "github.com/stretchr/testify/mock" diff --git a/agent/cache/rpc.go b/agent/cache-types/rpc.go similarity index 83% rename from agent/cache/rpc.go rename to agent/cache-types/rpc.go index 98976284a..0aaf040f3 100644 --- a/agent/cache/rpc.go +++ b/agent/cache-types/rpc.go @@ -1,4 +1,6 @@ -package cache +package cachetype + +//go:generate mockery -all -inpkg // RPC is an interface that an RPC client must implement. This is a helper // interface that is implemented by the agent delegate so that Type diff --git a/agent/cache-types/testing.go b/agent/cache-types/testing.go new file mode 100644 index 000000000..bf68ec478 --- /dev/null +++ b/agent/cache-types/testing.go @@ -0,0 +1,12 @@ +package cachetype + +import ( + "github.com/mitchellh/go-testing-interface" +) + +// TestRPC returns a mock implementation of the RPC interface. +func TestRPC(t testing.T) *MockRPC { + // This function is relatively useless but this allows us to perhaps + // perform some initialization later. + return &MockRPC{} +} diff --git a/agent/cache/cache.go b/agent/cache/cache.go index 1c2f316dd..04323a4c5 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -109,8 +109,8 @@ func (c *Cache) RegisterType(n string, typ Type, opts *RegisterOptions) { // Multiple Get calls for the same Request (matching CacheKey value) will // block on a single network request. func (c *Cache) Get(t string, r Request) (interface{}, error) { - key := r.CacheKey() - if key == "" { + info := r.CacheInfo() + if info.Key == "" { // If no key is specified, then we do not cache this request. // Pass directly through to the backend. 
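		// A request type can opt out of caching entirely by returning an
		// empty Key from CacheInfo, e.g. (hypothetical implementation):
		//
		//	func (r *uncacheableRequest) CacheInfo() RequestInfo {
		//		return RequestInfo{} // blank Key: always use fetchDirect
		//	}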
return c.fetchDirect(t, r) @@ -119,7 +119,7 @@ func (c *Cache) Get(t string, r Request) (interface{}, error) { RETRY_GET: // Get the current value c.entriesLock.RLock() - entry, ok := c.entries[key] + entry, ok := c.entries[info.Key] c.entriesLock.RUnlock() // If we have a current value and the index is greater than the @@ -127,8 +127,7 @@ RETRY_GET: // index is zero and we have something in the cache we accept whatever // we have. if ok && entry.Valid { - idx := r.CacheMinIndex() - if idx == 0 || idx < entry.Index { + if info.MinIndex == 0 || info.MinIndex < entry.Index { return entry.Value, nil } } @@ -154,13 +153,12 @@ func (c *Cache) fetch(t string, r Request) (<-chan struct{}, error) { return nil, fmt.Errorf("unknown type in cache: %s", t) } - // The cache key is used multiple times and might be dynamically - // constructed so let's just store it once here. - key := r.CacheKey() + // Grab the cache information while we're outside the lock. + info := r.CacheInfo() c.entriesLock.Lock() defer c.entriesLock.Unlock() - entry, ok := c.entries[key] + entry, ok := c.entries[info.Key] // If we already have an entry and it is actively fetching, then return // the currently active waiter. @@ -178,7 +176,7 @@ func (c *Cache) fetch(t string, r Request) (<-chan struct{}, error) { // identical calls to fetch will return the same waiter rather than // perform multiple fetches. entry.Fetching = true - c.entries[key] = entry + c.entries[info.Key] = entry // The actual Fetch must be performed in a goroutine. go func() { @@ -199,7 +197,7 @@ func (c *Cache) fetch(t string, r Request) (<-chan struct{}, error) { // Insert c.entriesLock.Lock() - c.entries[key] = newEntry + c.entries[info.Key] = newEntry c.entriesLock.Unlock() // Trigger the waiter @@ -227,7 +225,7 @@ func (c *Cache) fetchDirect(t string, r Request) (interface{}, error) { // Fetch it with the min index specified directly by the request. 
result, err := tEntry.Type.Fetch(FetchOptions{ - MinIndex: r.CacheMinIndex(), + MinIndex: r.CacheInfo().MinIndex, }, r) if err != nil { return nil, err diff --git a/agent/cache/cache_test.go b/agent/cache/cache_test.go index 69f99a628..1bfed590c 100644 --- a/agent/cache/cache_test.go +++ b/agent/cache/cache_test.go @@ -25,7 +25,7 @@ func TestCacheGet_noIndex(t *testing.T) { typ.Static(FetchResult{Value: 42}, nil).Times(1) // Get, should fetch - req := TestRequest(t, "hello", 0) + req := TestRequest(t, RequestInfo{Key: "hello"}) result, err := c.Get("t", req) require.Nil(err) require.Equal(42, result) @@ -57,7 +57,7 @@ func TestCacheGet_blankCacheKey(t *testing.T) { typ.Static(FetchResult{Value: 42}, nil).Times(2) // Get, should fetch - req := TestRequest(t, "", 0) + req := TestRequest(t, RequestInfo{Key: ""}) result, err := c.Get("t", req) require.Nil(err) require.Equal(42, result) @@ -87,8 +87,8 @@ func TestCacheGet_blockingInitSameKey(t *testing.T) { typ.Static(FetchResult{Value: 42}, nil).WaitUntil(triggerCh).Times(1) // Perform multiple gets - getCh1 := TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) - getCh2 := TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) + getCh1 := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{Key: "hello"})) + getCh2 := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{Key: "hello"})) // They should block select { @@ -131,12 +131,12 @@ func TestCacheGet_blockingInitDiffKeys(t *testing.T) { Run(func(args mock.Arguments) { keysLock.Lock() defer keysLock.Unlock() - keys = append(keys, args.Get(1).(Request).CacheKey()) + keys = append(keys, args.Get(1).(Request).CacheInfo().Key) }) // Perform multiple gets - getCh1 := TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) - getCh2 := TestCacheGetCh(t, c, "t", TestRequest(t, "goodbye", 0)) + getCh1 := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{Key: "hello"})) + getCh2 := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{Key: "goodbye"})) // They should block select { @@ -176,7 +176,8 @@ func TestCacheGet_blockingIndex(t *testing.T) { typ.Static(FetchResult{Value: 42, Index: 6}, nil).WaitUntil(triggerCh) // Fetch should block - resultCh := TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 5)) + resultCh := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{ + Key: "hello", MinIndex: 5})) // Should block select { @@ -217,16 +218,16 @@ func TestCacheGet_periodicRefresh(t *testing.T) { typ.Static(FetchResult{Value: 12, Index: 5}, nil).WaitUntil(triggerCh) // Fetch should block - resultCh := TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) + resultCh := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{Key: "hello"})) TestCacheGetChResult(t, resultCh, 1) // Fetch again almost immediately should return old result time.Sleep(5 * time.Millisecond) - resultCh = TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) + resultCh = TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{Key: "hello"})) TestCacheGetChResult(t, resultCh, 1) // Wait for the timer time.Sleep(200 * time.Millisecond) - resultCh = TestCacheGetCh(t, c, "t", TestRequest(t, "hello", 0)) + resultCh = TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{Key: "hello"})) TestCacheGetChResult(t, resultCh, 12) } diff --git a/agent/cache/mock_Request.go b/agent/cache/mock_Request.go index 157912182..e3abd1515 100644 --- a/agent/cache/mock_Request.go +++ b/agent/cache/mock_Request.go @@ -8,29 +8,15 @@ type MockRequest struct { mock.Mock } -// CacheKey provides a mock function with given fields: -func (_m *MockRequest) 
CacheKey() string { +// CacheInfo provides a mock function with given fields: +func (_m *MockRequest) CacheInfo() RequestInfo { ret := _m.Called() - var r0 string - if rf, ok := ret.Get(0).(func() string); ok { + var r0 RequestInfo + if rf, ok := ret.Get(0).(func() RequestInfo); ok { r0 = rf() } else { - r0 = ret.Get(0).(string) - } - - return r0 -} - -// CacheMinIndex provides a mock function with given fields: -func (_m *MockRequest) CacheMinIndex() uint64 { - ret := _m.Called() - - var r0 uint64 - if rf, ok := ret.Get(0).(func() uint64); ok { - r0 = rf() - } else { - r0 = ret.Get(0).(uint64) + r0 = ret.Get(0).(RequestInfo) } return r0 diff --git a/agent/cache/request.go b/agent/cache/request.go index c75c8ad84..b4a1b75d0 100644 --- a/agent/cache/request.go +++ b/agent/cache/request.go @@ -5,13 +5,35 @@ package cache // This interface is typically implemented by request structures in // the agent/structs package. type Request interface { - // CacheKey is a unique cache key for this request. This key should + // CacheInfo returns information used for caching this request. + CacheInfo() RequestInfo +} + +// RequestInfo represents cache information for a request. The caching +// framework uses this to control the behavior of caching and to determine +// cacheability. +type RequestInfo struct { + // Key is a unique cache key for this request. This key should // absolutely uniquely identify this request, since any conflicting // cache keys could result in invalid data being returned from the cache. - CacheKey() string + Key string - // CacheMinIndex is the minimum index being queried. This is used to + // Token is the ACL token associated with this request. + // + // Datacenter is the datacenter that the request is targeting. + // + // Both of these values are used to partition the cache. The cache framework + // today partitions data on these values to simplify behavior: by + // partitioning ACL tokens, the cache doesn't need to be smart about + // filtering results. By filtering datacenter results, the cache can + // service the multi-DC nature of Consul. This comes at the expense of + // working set size, but in general the effect is minimal. + Token string + Datacenter string + + // MinIndex is the minimum index being queried. This is used to // determine if we already have data satisfying the query or if we need - // to block until new data is available. - CacheMinIndex() uint64 + // to block until new data is available. If no index is available, the + // default value (zero) is acceptable. + MinIndex uint64 } diff --git a/agent/cache/testing.go b/agent/cache/testing.go index 6a094c117..365dc3b4e 100644 --- a/agent/cache/testing.go +++ b/agent/cache/testing.go @@ -50,20 +50,12 @@ func TestCacheGetChResult(t testing.T, ch <-chan interface{}, expected interface // TestRequest returns a Request that returns the given cache key and index. // The Reset method can be called to reset it for custom usage. -func TestRequest(t testing.T, key string, index uint64) *MockRequest { +func TestRequest(t testing.T, info RequestInfo) *MockRequest { req := &MockRequest{} - req.On("CacheKey").Return(key) - req.On("CacheMinIndex").Return(index) + req.On("CacheInfo").Return(info) return req } -// TestRPC returns a mock implementation of the RPC interface. -func TestRPC(t testing.T) *MockRPC { - // This function is relatively useless but this allows us to perhaps - // perform some initialization later. 
- return &MockRPC{} -} - // TestType returns a MockType that can be used to setup expectations // on data fetching. func TestType(t testing.T) *MockType { diff --git a/agent/structs/structs.go b/agent/structs/structs.go index d40c90baa..19a9c7313 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -10,6 +10,7 @@ import ( "strings" "time" + "github.com/hashicorp/consul/agent/cache" "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/types" "github.com/hashicorp/go-msgpack/codec" @@ -278,18 +279,23 @@ func (r *DCSpecificRequest) RequestDatacenter() string { return r.Datacenter } -func (r *DCSpecificRequest) CacheKey() string { +func (r *DCSpecificRequest) CacheInfo() cache.RequestInfo { + info := cache.RequestInfo{ + MinIndex: r.QueryOptions.MinQueryIndex, + } + // To calculate the cache key we only hash the node filters. The // datacenter is handled by the cache framework. The other fields are // not, but should not be used in any cache types. v, err := hashstructure.Hash(r.NodeMetaFilters, nil) - if err != nil { - // Empty string means do not cache. If we have an error we should - // just forward along to the server. - return "" + if err == nil { + // If there is an error, we don't set the key. A blank key forces + // no cache for this request so the request is forwarded directly + // to the server. + info.Key = strconv.FormatUint(v, 10) } - return strconv.FormatUint(v, 10) + return info } func (r *DCSpecificRequest) CacheMinIndex() uint64 { From 286217cbd847925651821ae9009e20761cab9029 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 10 Apr 2018 16:05:34 +0100 Subject: [PATCH 139/539] agent/cache: partition by DC/ACL token --- agent/cache/cache.go | 48 +++++++++++++++++++++++------------- agent/cache/cache_test.go | 52 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 83 insertions(+), 17 deletions(-) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index 04323a4c5..c512476d5 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -20,15 +20,23 @@ import ( //go:generate mockery -all -inpkg -// TODO: DC-aware, ACL-aware - // Cache is a agent-local cache of Consul data. type Cache struct { - entriesLock sync.RWMutex - entries map[string]cacheEntry - + // types stores the list of data types that the cache knows how to service. + // These can be dynamically registered with RegisterType. typesLock sync.RWMutex types map[string]typeEntry + + // entries contains the actual cache data. + // + // NOTE(mitchellh): The entry map key is currently a string in the format + // of "//" in order to properly partition + // requests to different datacenters and ACL tokens. This format has some + // big drawbacks: we can't evict by datacenter, ACL token, etc. For an + // initial implementaiton this works and the tests are agnostic to the + // internal storage format so changing this should be possible safely. + entriesLock sync.RWMutex + entries map[string]cacheEntry } // cacheEntry stores a single cache entry. @@ -116,10 +124,13 @@ func (c *Cache) Get(t string, r Request) (interface{}, error) { return c.fetchDirect(t, r) } + // Get the actual key for our entry + key := c.entryKey(&info) + RETRY_GET: // Get the current value c.entriesLock.RLock() - entry, ok := c.entries[info.Key] + entry, ok := c.entries[key] c.entriesLock.RUnlock() // If we have a current value and the index is greater than the @@ -134,7 +145,7 @@ RETRY_GET: // At this point, we know we either don't have a value at all or the // value we have is too old. 
We need to wait for new data. - waiter, err := c.fetch(t, r) + waiter, err := c.fetch(t, key, r) if err != nil { return nil, err } @@ -144,7 +155,13 @@ RETRY_GET: goto RETRY_GET } -func (c *Cache) fetch(t string, r Request) (<-chan struct{}, error) { +// entryKey returns the key for the entry in the cache. See the note +// about the entry key format in the structure docs for Cache. +func (c *Cache) entryKey(r *RequestInfo) string { + return fmt.Sprintf("%s/%s/%s", r.Datacenter, r.Token, r.Key) +} + +func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { // Get the type that we're fetching c.typesLock.RLock() tEntry, ok := c.types[t] @@ -153,12 +170,9 @@ func (c *Cache) fetch(t string, r Request) (<-chan struct{}, error) { return nil, fmt.Errorf("unknown type in cache: %s", t) } - // Grab the cache information while we're outside the lock. - info := r.CacheInfo() - c.entriesLock.Lock() defer c.entriesLock.Unlock() - entry, ok := c.entries[info.Key] + entry, ok := c.entries[key] // If we already have an entry and it is actively fetching, then return // the currently active waiter. @@ -176,7 +190,7 @@ func (c *Cache) fetch(t string, r Request) (<-chan struct{}, error) { // identical calls to fetch will return the same waiter rather than // perform multiple fetches. entry.Fetching = true - c.entries[info.Key] = entry + c.entries[key] = entry // The actual Fetch must be performed in a goroutine. go func() { @@ -197,7 +211,7 @@ func (c *Cache) fetch(t string, r Request) (<-chan struct{}, error) { // Insert c.entriesLock.Lock() - c.entries[info.Key] = newEntry + c.entries[key] = newEntry c.entriesLock.Unlock() // Trigger the waiter @@ -206,7 +220,7 @@ func (c *Cache) fetch(t string, r Request) (<-chan struct{}, error) { // If refresh is enabled, run the refresh in due time. The refresh // below might block, but saves us from spawning another goroutine. if tEntry.Opts != nil && tEntry.Opts.Refresh { - c.refresh(tEntry.Opts, t, r) + c.refresh(tEntry.Opts, t, key, r) } }() @@ -235,7 +249,7 @@ func (c *Cache) fetchDirect(t string, r Request) (interface{}, error) { return result.Value, nil } -func (c *Cache) refresh(opts *RegisterOptions, t string, r Request) { +func (c *Cache) refresh(opts *RegisterOptions, t string, key string, r Request) { // Sanity-check, we should not schedule anything that has refresh disabled if !opts.Refresh { return @@ -247,5 +261,5 @@ func (c *Cache) refresh(opts *RegisterOptions, t string, r Request) { } // Trigger - c.fetch(t, r) + c.fetch(t, key, r) } diff --git a/agent/cache/cache_test.go b/agent/cache/cache_test.go index 1bfed590c..1e75490a0 100644 --- a/agent/cache/cache_test.go +++ b/agent/cache/cache_test.go @@ -1,6 +1,7 @@ package cache import ( + "fmt" "sort" "sync" "testing" @@ -231,3 +232,54 @@ func TestCacheGet_periodicRefresh(t *testing.T) { resultCh = TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{Key: "hello"})) TestCacheGetChResult(t, resultCh, 12) } + +// Test that Get partitions the caches based on DC so two equivalent requests +// to different datacenters are automatically cached even if their keys are +// the same. +func TestCacheGet_partitionDC(t *testing.T) { + t.Parallel() + + c := TestCache(t) + c.RegisterType("t", &testPartitionType{}, nil) + + // Perform multiple gets + getCh1 := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{ + Datacenter: "dc1", Key: "hello"})) + getCh2 := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{ + Datacenter: "dc9", Key: "hello"})) + + // Should return both! 
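	// They hit different entries because the internal entry key composes
	// datacenter, ACL token, and request key, so the two gets above resolve
	// roughly as in this sketch, the empty middle segment being the blank
	// token:
	//
	//	c.entryKey(&RequestInfo{Datacenter: "dc1", Key: "hello"}) // "dc1//hello"
	//	c.entryKey(&RequestInfo{Datacenter: "dc9", Key: "hello"}) // "dc9//hello"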
+ TestCacheGetChResult(t, getCh1, "dc1") + TestCacheGetChResult(t, getCh2, "dc9") +} + +// Test that Get partitions the caches based on token so two equivalent requests +// with different ACL tokens do not return the same result. +func TestCacheGet_partitionToken(t *testing.T) { + t.Parallel() + + c := TestCache(t) + c.RegisterType("t", &testPartitionType{}, nil) + + // Perform multiple gets + getCh1 := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{ + Token: "", Key: "hello"})) + getCh2 := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{ + Token: "foo", Key: "hello"})) + + // Should return both! + TestCacheGetChResult(t, getCh1, "") + TestCacheGetChResult(t, getCh2, "foo") +} + +// testPartitionType implements Type for testing that simply returns a value +// comprised of the request DC and ACL token, used for testing cache +// partitioning. +type testPartitionType struct{} + +func (t *testPartitionType) Fetch(opts FetchOptions, r Request) (FetchResult, error) { + info := r.CacheInfo() + return FetchResult{ + Value: fmt.Sprintf("%s%s", info.Datacenter, info.Token), + }, nil +} From 8bb4fd95a670e6eea15cc746366a44432875fb59 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 11 Apr 2018 09:52:51 +0100 Subject: [PATCH 140/539] agent: initialize the cache and cache the CA roots --- agent/agent.go | 26 ++++++++++++++++++++++++++ agent/agent_endpoint.go | 23 +++++++++++++++++++---- agent/cache-types/connect_ca.go | 9 ++++++--- agent/cache-types/connect_ca_test.go | 8 ++++---- 4 files changed, 55 insertions(+), 11 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 277bdd046..b6e923ee3 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -21,6 +21,8 @@ import ( "github.com/armon/go-metrics" "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/ae" + "github.com/hashicorp/consul/agent/cache" + "github.com/hashicorp/consul/agent/cache-types" "github.com/hashicorp/consul/agent/checks" "github.com/hashicorp/consul/agent/config" "github.com/hashicorp/consul/agent/consul" @@ -118,6 +120,9 @@ type Agent struct { // and the remote state. sync *ae.StateSyncer + // cache is the in-memory cache for data the Agent requests. + cache *cache.Cache + // checkReapAfter maps the check ID to a timeout after which we should // reap its associated service checkReapAfter map[types.CheckID]time.Duration @@ -290,6 +295,9 @@ func (a *Agent) Start() error { // regular and on-demand state synchronizations (anti-entropy). a.sync = ae.NewStateSyncer(a.State, c.AEInterval, a.shutdownCh, a.logger) + // create the cache + a.cache = cache.New(nil) + // create the config for the rpc server/client consulCfg, err := a.consulConfig() if err != nil { @@ -326,6 +334,9 @@ func (a *Agent) Start() error { a.State.Delegate = a.delegate a.State.TriggerSyncChanges = a.sync.SyncChanges.Trigger + // Register the cache + a.registerCache() + // Load checks/services/metadata. if err := a.loadServices(c); err != nil { return err @@ -2624,3 +2635,18 @@ func (a *Agent) ReloadConfig(newCfg *config.RuntimeConfig) error { return nil } + +// registerCache configures the cache and registers all the supported +// types onto the cache. This is NOT safe to call multiple times so +// care should be taken to call this exactly once after the cache +// field has been initialized. 
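//
// The options used below are meant to hold a blocking query open against
// the servers: a RefreshTimer of zero re-fetches as soon as the previous
// fetch returns. A type that should instead poll on a fixed cadence would
// use something like the old RegisterOptsPeriodic value from earlier in
// this series (sketch):
//
//	&cache.RegisterOptions{
//		Refresh:        true,
//		RefreshTimer:   30 * time.Second,
//		RefreshTimeout: 5 * time.Minute,
//	}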
+func (a *Agent) registerCache() { + a.cache.RegisterType(cachetype.ConnectCARootName, &cachetype.ConnectCARoot{ + RPC: a.delegate, + }, &cache.RegisterOptions{ + // Maintain a blocking query, retry dropped connections quickly + Refresh: true, + RefreshTimer: 0, + RefreshTimeout: 10 * time.Minute, + }) +} diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index c1bf6fbe1..c64eb7a92 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -13,6 +13,7 @@ import ( "github.com/mitchellh/hashstructure" "github.com/hashicorp/consul/acl" + "github.com/hashicorp/consul/agent/cache-types" "github.com/hashicorp/consul/agent/checks" "github.com/hashicorp/consul/agent/config" "github.com/hashicorp/consul/agent/connect" @@ -885,10 +886,24 @@ func (s *HTTPServer) AgentToken(resp http.ResponseWriter, req *http.Request) (in // AgentConnectCARoots returns the trusted CA roots. func (s *HTTPServer) AgentConnectCARoots(resp http.ResponseWriter, req *http.Request) (interface{}, error) { - // NOTE(mitchellh): for now this is identical to /v1/connect/ca/roots. - // In the future, we're going to do some agent-local caching and the - // behavior will differ. - return s.ConnectCARoots(resp, req) + var args structs.DCSpecificRequest + if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { + return nil, nil + } + + raw, err := s.agent.cache.Get(cachetype.ConnectCARootName, &args) + if err != nil { + return nil, err + } + + reply, ok := raw.(*structs.IndexedCARoots) + if !ok { + // This should never happen, but we want to protect against panics + return nil, fmt.Errorf("internal error: response type not correct") + } + defer setMeta(resp, &reply.QueryMeta) + + return *reply, nil } // AgentConnectCALeafCert returns the certificate bundle for a service diff --git a/agent/cache-types/connect_ca.go b/agent/cache-types/connect_ca.go index 85962b1fb..5b72a47a7 100644 --- a/agent/cache-types/connect_ca.go +++ b/agent/cache-types/connect_ca.go @@ -7,12 +7,15 @@ import ( "github.com/hashicorp/consul/agent/structs" ) -// TypeCARoot supports fetching the Connect CA roots. -type TypeCARoot struct { +// Recommended name for registration for ConnectCARoot +const ConnectCARootName = "connect-ca" + +// ConnectCARoot supports fetching the Connect CA roots. +type ConnectCARoot struct { RPC RPC } -func (c *TypeCARoot) Fetch(opts cache.FetchOptions, req cache.Request) (cache.FetchResult, error) { +func (c *ConnectCARoot) Fetch(opts cache.FetchOptions, req cache.Request) (cache.FetchResult, error) { var result cache.FetchResult // The request should be a DCSpecificRequest. diff --git a/agent/cache-types/connect_ca_test.go b/agent/cache-types/connect_ca_test.go index faf8317bd..24c37f313 100644 --- a/agent/cache-types/connect_ca_test.go +++ b/agent/cache-types/connect_ca_test.go @@ -10,11 +10,11 @@ import ( "github.com/stretchr/testify/require" ) -func TestTypeCARoot(t *testing.T) { +func TestConnectCARoot(t *testing.T) { require := require.New(t) rpc := TestRPC(t) defer rpc.AssertExpectations(t) - typ := &TypeCARoot{RPC: rpc} + typ := &ConnectCARoot{RPC: rpc} // Expect the proper RPC call. This also sets the expected value // since that is return-by-pointer in the arguments. 
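// The return-by-pointer pattern above generalizes to any RPC mock in these
// tests: mutate the reply argument inside Run to simulate the server
// filling it in. A sketch with hypothetical names:
//
//	rpc.On("RPC", "Some.Method", mock.Anything, mock.Anything).Return(nil).
//		Run(func(args mock.Arguments) {
//			args.Get(2).(*SomeReply).Index = 48
//		})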
@@ -42,11 +42,11 @@ func TestTypeCARoot(t *testing.T) { }, result) } -func TestTypeCARoot_badReqType(t *testing.T) { +func TestConnectCARoot_badReqType(t *testing.T) { require := require.New(t) rpc := TestRPC(t) defer rpc.AssertExpectations(t) - typ := &TypeCARoot{RPC: rpc} + typ := &ConnectCARoot{RPC: rpc} // Fetch _, err := typ.Fetch(cache.FetchOptions{}, cache.TestRequest( From 9e44a319d38b448588ad7c3a6d47fbd8c79f89f3 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 11 Apr 2018 10:18:24 +0100 Subject: [PATCH 141/539] agent: check cache hit count to verify CA root caching, background update --- agent/agent_endpoint_test.go | 59 +++++++++++++++++++++++++++++++----- agent/cache/cache.go | 26 ++++++++++++++++ 2 files changed, 78 insertions(+), 7 deletions(-) diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 32cb6ab98..2e583ec4f 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2121,32 +2121,77 @@ func TestAgentConnectCARoots_empty(t *testing.T) { func TestAgentConnectCARoots_list(t *testing.T) { t.Parallel() - assert := assert.New(t) + require := require.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() + // Grab the initial cache hit count + cacheHits := a.cache.Hits() + // Set some CAs var reply interface{} ca1 := connect.TestCA(t, nil) ca1.Active = false ca2 := connect.TestCA(t, nil) - assert.Nil(a.RPC("Test.ConnectCASetRoots", + require.Nil(a.RPC("Test.ConnectCASetRoots", []*structs.CARoot{ca1, ca2}, &reply)) // List req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/roots", nil) resp := httptest.NewRecorder() obj, err := a.srv.AgentConnectCARoots(resp, req) - assert.Nil(err) + require.Nil(err) value := obj.(structs.IndexedCARoots) - assert.Equal(value.ActiveRootID, ca2.ID) - assert.Len(value.Roots, 2) + require.Equal(value.ActiveRootID, ca2.ID) + require.Len(value.Roots, 2) // We should never have the secret information for _, r := range value.Roots { - assert.Equal("", r.SigningCert) - assert.Equal("", r.SigningKey) + require.Equal("", r.SigningCert) + require.Equal("", r.SigningKey) + } + + // That should've been a cache miss, so not hit change + require.Equal(cacheHits, a.cache.Hits()) + + // Test caching + { + // List it again + obj2, err := a.srv.AgentConnectCARoots(httptest.NewRecorder(), req) + require.Nil(err) + require.Equal(obj, obj2) + + // Should cache hit this time and not make request + require.Equal(cacheHits+1, a.cache.Hits()) + cacheHits++ + } + + // Test that caching is updated in the background + { + // Set some new CAs + var reply interface{} + ca := connect.TestCA(t, nil) + require.Nil(a.RPC("Test.ConnectCASetRoots", + []*structs.CARoot{ca}, &reply)) + + // Sleep a bit to wait for the cache to update + time.Sleep(100 * time.Millisecond) + + // List it again + obj, err := a.srv.AgentConnectCARoots(httptest.NewRecorder(), req) + require.Nil(err) + require.Equal(obj, obj) + + value := obj.(structs.IndexedCARoots) + require.Equal(value.ActiveRootID, ca.ID) + require.Len(value.Roots, 1) + + // Should be a cache hit! The data should've updated in the cache + // in the background so this should've been fetched directly from + // the cache. 
+ require.Equal(cacheHits+1, a.cache.Hits()) + cacheHits++ } } diff --git a/agent/cache/cache.go b/agent/cache/cache.go index c512476d5..a57ab8343 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -15,6 +15,7 @@ package cache import ( "fmt" "sync" + "sync/atomic" "time" ) @@ -22,6 +23,11 @@ import ( // Cache is a agent-local cache of Consul data. type Cache struct { + // Keeps track of the cache hits and misses in total. This is used by + // tests currently to verify cache behavior and is not meant for general + // analytics; for that, go-metrics emitted values are better. + hits, misses uint64 + // types stores the list of data types that the cache knows how to service. // These can be dynamically registered with RegisterType. typesLock sync.RWMutex @@ -127,6 +133,9 @@ func (c *Cache) Get(t string, r Request) (interface{}, error) { // Get the actual key for our entry key := c.entryKey(&info) + // First time through + first := true + RETRY_GET: // Get the current value c.entriesLock.RLock() @@ -139,10 +148,22 @@ RETRY_GET: // we have. if ok && entry.Valid { if info.MinIndex == 0 || info.MinIndex < entry.Index { + if first { + atomic.AddUint64(&c.hits, 1) + } + return entry.Value, nil } } + if first { + // Record the miss if its our first time through + atomic.AddUint64(&c.misses, 1) + } + + // No longer our first time through + first = false + // At this point, we know we either don't have a value at all or the // value we have is too old. We need to wait for new data. waiter, err := c.fetch(t, key, r) @@ -263,3 +284,8 @@ func (c *Cache) refresh(opts *RegisterOptions, t string, key string, r Request) // Trigger c.fetch(t, key, r) } + +// Returns the number of cache hits. Safe to call concurrently. +func (c *Cache) Hits() uint64 { + return atomic.LoadUint64(&c.hits) +} From e3b1c400e5474a55978d2243ec2b8aa1cecaefee Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 15 Apr 2018 22:11:04 +0200 Subject: [PATCH 142/539] agent/cache-types: got basic CA leaf caching work, major problems still --- agent/cache-types/connect_ca.go | 182 ++++++++++++++++++++++++- agent/cache-types/connect_ca_test.go | 195 +++++++++++++++++++++++++++ agent/cache-types/testing.go | 48 +++++++ 3 files changed, 422 insertions(+), 3 deletions(-) diff --git a/agent/cache-types/connect_ca.go b/agent/cache-types/connect_ca.go index 5b72a47a7..22549ed49 100644 --- a/agent/cache-types/connect_ca.go +++ b/agent/cache-types/connect_ca.go @@ -2,15 +2,27 @@ package cachetype import ( "fmt" + "sync" + "sync/atomic" + "time" "github.com/hashicorp/consul/agent/cache" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/structs" + + // NOTE(mitcehllh): This is temporary while certs are stubbed out. + "github.com/mitchellh/go-testing-interface" ) -// Recommended name for registration for ConnectCARoot -const ConnectCARootName = "connect-ca" +// Recommended name for registration. +const ( + ConnectCARootName = "connect-ca-root" + ConnectCALeafName = "connect-ca-leaf" +) -// ConnectCARoot supports fetching the Connect CA roots. +// ConnectCARoot supports fetching the Connect CA roots. This is a +// straightforward cache type since it only has to block on the given +// index and return the data. type ConnectCARoot struct { RPC RPC } @@ -39,3 +51,167 @@ func (c *ConnectCARoot) Fetch(opts cache.FetchOptions, req cache.Request) (cache result.Index = reply.QueryMeta.Index return result, nil } + +// ConnectCALeaf supports fetching and generating Connect leaf +// certificates. 
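//
// The type needs two collaborators: an RPC client for the signing call and
// the Cache itself, so it can watch the registered root type for rotations.
// A wiring sketch (rpcClient hypothetical, mirroring how the agent registers
// the root type):
//
//	c.RegisterType(ConnectCALeafName, &ConnectCALeaf{RPC: rpcClient, Cache: c}, nil)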
+type ConnectCALeaf struct { + caIndex uint64 // Current index for CA roots + + issuedCertsLock sync.RWMutex + issuedCerts map[string]*structs.IssuedCert + + RPC RPC // RPC client for remote requests + Cache *cache.Cache // Cache that has CA root certs via ConnectCARoot +} + +func (c *ConnectCALeaf) Fetch(opts cache.FetchOptions, req cache.Request) (cache.FetchResult, error) { + var result cache.FetchResult + + // Get the correct type + reqReal, ok := req.(*ConnectCALeafRequest) + if !ok { + return result, fmt.Errorf( + "Internal cache failure: request wrong type: %T", req) + } + + // This channel watches our overall timeout. The other goroutines + // launched in this function should end all around the same time so + // they clean themselves up. + timeoutCh := time.After(opts.Timeout) + + // Kick off the goroutine that waits for new CA roots. The channel buffer + // is so that the goroutine doesn't block forever if we return for other + // reasons. + newRootCACh := make(chan error, 1) + go c.waitNewRootCA(newRootCACh, opts.Timeout) + + // Get our prior cert (if we had one) and use that to determine our + // expiration time. If no cert exists, we expire immediately since we + // need to generate. + c.issuedCertsLock.RLock() + lastCert := c.issuedCerts[reqReal.Service] + c.issuedCertsLock.RUnlock() + + var leafExpiryCh <-chan time.Time + if lastCert != nil { + // Determine how long we wait until triggering. If we've already + // expired, we trigger immediately. + if expiryDur := lastCert.ValidBefore.Sub(time.Now()); expiryDur > 0 { + leafExpiryCh = time.After(expiryDur - 1*time.Hour) + // TODO(mitchellh): 1 hour buffer is hardcoded above + } + } + + if leafExpiryCh == nil { + // If the channel is still nil then it means we need to generate + // a cert no matter what: we either don't have an existing one or + // it is expired. + leafExpiryCh = time.After(0) + } + + // Block on the events that wake us up. + select { + case <-timeoutCh: + // TODO: what is the right error for a timeout? + return result, fmt.Errorf("timeout") + + case err := <-newRootCACh: + // A new root CA triggers us to refresh the leaf certificate. + // If there was an error while getting the root CA then we return. + // Otherwise, we leave the select statement and move to generation. + if err != nil { + return result, err + } + + case <-leafExpiryCh: + // The existing leaf certificate is expiring soon, so we generate a + // new cert with a healthy overlapping validity period (determined + // by the above channel). + } + + // Create a CSR. + // TODO(mitchellh): This is obviously not production ready! + csr, pk := connect.TestCSR(&testing.RuntimeT{}, &connect.SpiffeIDService{ + Host: "1234.consul", + Namespace: "default", + Datacenter: reqReal.Datacenter, + Service: reqReal.Service, + }) + + // Request signing + var reply structs.IssuedCert + args := structs.CASignRequest{CSR: csr} + if err := c.RPC.RPC("ConnectCA.Sign", &args, &reply); err != nil { + return result, err + } + reply.PrivateKeyPEM = pk + + // Lock the issued certs map so we can insert it. We only insert if + // we didn't happen to get a newer one. This should never happen since + // the Cache should ensure only one Fetch per service, but we sanity + // check just in case. 
+ c.issuedCertsLock.Lock() + defer c.issuedCertsLock.Unlock() + lastCert = c.issuedCerts[reqReal.Service] + if lastCert == nil || lastCert.ModifyIndex < reply.ModifyIndex { + if c.issuedCerts == nil { + c.issuedCerts = make(map[string]*structs.IssuedCert) + } + c.issuedCerts[reqReal.Service] = &reply + lastCert = &reply + } + + result.Value = lastCert + result.Index = lastCert.ModifyIndex + return result, nil +} + +// waitNewRootCA blocks until a new root CA is available or the timeout is +// reached (on timeout ErrTimeout is returned on the channel). +func (c *ConnectCALeaf) waitNewRootCA(ch chan<- error, timeout time.Duration) { + // Fetch some new roots. This will block until our MinQueryIndex is + // matched or the timeout is reached. + rawRoots, err := c.Cache.Get(ConnectCARootName, &structs.DCSpecificRequest{ + Datacenter: "", + QueryOptions: structs.QueryOptions{ + MinQueryIndex: atomic.LoadUint64(&c.caIndex), + MaxQueryTime: timeout, + }, + }) + if err != nil { + ch <- err + return + } + + roots, ok := rawRoots.(*structs.IndexedCARoots) + if !ok { + // This should never happen but we don't want to even risk a panic + ch <- fmt.Errorf( + "internal error: CA root cache returned bad type: %T", rawRoots) + return + } + + // Set the new index + atomic.StoreUint64(&c.caIndex, roots.QueryMeta.Index) + + // Trigger the channel since we updated. + ch <- nil +} + +// ConnectCALeafRequest is the cache.Request implementation for the +// COnnectCALeaf cache type. This is implemented here and not in structs +// since this is only used for cache-related requests and not forwarded +// directly to any Consul servers. +type ConnectCALeafRequest struct { + Datacenter string + Service string // Service name, not ID + MinQueryIndex uint64 +} + +func (r *ConnectCALeafRequest) CacheInfo() cache.RequestInfo { + return cache.RequestInfo{ + Key: r.Service, + Datacenter: r.Datacenter, + MinIndex: r.MinQueryIndex, + } +} diff --git a/agent/cache-types/connect_ca_test.go b/agent/cache-types/connect_ca_test.go index 24c37f313..43953e7f8 100644 --- a/agent/cache-types/connect_ca_test.go +++ b/agent/cache-types/connect_ca_test.go @@ -1,6 +1,8 @@ package cachetype import ( + "fmt" + "sync/atomic" "testing" "time" @@ -55,3 +57,196 @@ func TestConnectCARoot_badReqType(t *testing.T) { require.Contains(err.Error(), "wrong type") } + +// Test that after an initial signing, new CA roots (new ID) will +// trigger a blocking query to execute. +func TestConnectCALeaf_changingRoots(t *testing.T) { + t.Parallel() + + require := require.New(t) + rpc := TestRPC(t) + defer rpc.AssertExpectations(t) + + typ, rootsCh := testCALeafType(t, rpc) + defer close(rootsCh) + rootsCh <- structs.IndexedCARoots{ + ActiveRootID: "1", + QueryMeta: structs.QueryMeta{Index: 1}, + } + + // Instrument ConnectCA.Sign to + var resp *structs.IssuedCert + var idx uint64 + rpc.On("RPC", "ConnectCA.Sign", mock.Anything, mock.Anything).Return(nil). 
+ Run(func(args mock.Arguments) { + reply := args.Get(2).(*structs.IssuedCert) + reply.ValidBefore = time.Now().Add(12 * time.Hour) + reply.CreateIndex = atomic.AddUint64(&idx, 1) + reply.ModifyIndex = reply.CreateIndex + resp = reply + }) + + // We'll reuse the fetch options and request + opts := cache.FetchOptions{MinIndex: 0, Timeout: 10 * time.Second} + req := &ConnectCALeafRequest{Datacenter: "dc1", Service: "web"} + + // First fetch should return immediately + fetchCh := TestFetchCh(t, typ, opts, req) + select { + case <-time.After(100 * time.Millisecond): + t.Fatal("shouldn't block waiting for fetch") + case result := <-fetchCh: + require.Equal(cache.FetchResult{ + Value: resp, + Index: 1, + }, result) + } + + // Second fetch should block with set index + fetchCh = TestFetchCh(t, typ, opts, req) + select { + case result := <-fetchCh: + t.Fatalf("should not return: %#v", result) + case <-time.After(100 * time.Millisecond): + } + + // Let's send in new roots, which should trigger the sign req + rootsCh <- structs.IndexedCARoots{ + ActiveRootID: "2", + QueryMeta: structs.QueryMeta{Index: 2}, + } + select { + case <-time.After(100 * time.Millisecond): + t.Fatal("shouldn't block waiting for fetch") + case result := <-fetchCh: + require.Equal(cache.FetchResult{ + Value: resp, + Index: 2, + }, result) + } + + // Third fetch should block + fetchCh = TestFetchCh(t, typ, opts, req) + select { + case result := <-fetchCh: + t.Fatalf("should not return: %#v", result) + case <-time.After(100 * time.Millisecond): + } +} + +// Test that after an initial signing, an expiringLeaf will trigger a +// blocking query to resign. +func TestConnectCALeaf_expiringLeaf(t *testing.T) { + t.Parallel() + + require := require.New(t) + rpc := TestRPC(t) + defer rpc.AssertExpectations(t) + + typ, rootsCh := testCALeafType(t, rpc) + defer close(rootsCh) + rootsCh <- structs.IndexedCARoots{ + ActiveRootID: "1", + QueryMeta: structs.QueryMeta{Index: 1}, + } + + // Instrument ConnectCA.Sign to + var resp *structs.IssuedCert + var idx uint64 + rpc.On("RPC", "ConnectCA.Sign", mock.Anything, mock.Anything).Return(nil). + Run(func(args mock.Arguments) { + reply := args.Get(2).(*structs.IssuedCert) + reply.CreateIndex = atomic.AddUint64(&idx, 1) + reply.ModifyIndex = reply.CreateIndex + + // This sets the validity to 0 on the first call, and + // 12 hours+ on subsequent calls. This means that our first + // cert expires immediately. + reply.ValidBefore = time.Now().Add((12 * time.Hour) * + time.Duration(reply.CreateIndex-1)) + + resp = reply + }) + + // We'll reuse the fetch options and request + opts := cache.FetchOptions{MinIndex: 0, Timeout: 10 * time.Second} + req := &ConnectCALeafRequest{Datacenter: "dc1", Service: "web"} + + // First fetch should return immediately + fetchCh := TestFetchCh(t, typ, opts, req) + select { + case <-time.After(100 * time.Millisecond): + t.Fatal("shouldn't block waiting for fetch") + case result := <-fetchCh: + require.Equal(cache.FetchResult{ + Value: resp, + Index: 1, + }, result) + } + + // Second fetch should return immediately despite there being + // no updated CA roots, because we issued an expired cert. + fetchCh = TestFetchCh(t, typ, opts, req) + select { + case <-time.After(100 * time.Millisecond): + t.Fatal("shouldn't block waiting for fetch") + case result := <-fetchCh: + require.Equal(cache.FetchResult{ + Value: resp, + Index: 2, + }, result) + } + + // Third fetch should block since the cert is not expiring and + // we also didn't update CA certs. 
+ fetchCh = TestFetchCh(t, typ, opts, req) + select { + case result := <-fetchCh: + t.Fatalf("should not return: %#v", result) + case <-time.After(100 * time.Millisecond): + } +} + +// testCALeafType returns a *ConnectCALeaf that is pre-configured to +// use the given RPC implementation for "ConnectCA.Sign" operations. +func testCALeafType(t *testing.T, rpc RPC) (*ConnectCALeaf, chan structs.IndexedCARoots) { + // This creates an RPC implementation that will block until the + // value is sent on the channel. This lets us control when the + // next values show up. + rootsCh := make(chan structs.IndexedCARoots, 10) + rootsRPC := &testGatedRootsRPC{ValueCh: rootsCh} + + // Create a cache + c := cache.TestCache(t) + c.RegisterType(ConnectCARootName, &ConnectCARoot{RPC: rootsRPC}, &cache.RegisterOptions{ + // Disable refresh so that the gated channel controls the + // request directly. Otherwise, we get background refreshes and + // it screws up the ordering of the channel reads of the + // testGatedRootsRPC implementation. + Refresh: false, + }) + + // Create the leaf type + return &ConnectCALeaf{RPC: rpc, Cache: c}, rootsCh +} + +// testGatedRootsRPC will send each subsequent value on the channel as the +// RPC response, blocking if it is waiting for a value on the channel. This +// can be used to control when background fetches are returned and what they +// return. +// +// This should be used with Refresh = false for the registration options so +// automatic refreshes don't mess up the channel read ordering. +type testGatedRootsRPC struct { + ValueCh chan structs.IndexedCARoots +} + +func (r *testGatedRootsRPC) RPC(method string, args interface{}, reply interface{}) error { + if method != "ConnectCA.Roots" { + return fmt.Errorf("invalid RPC method: %s", method) + } + + replyReal := reply.(*structs.IndexedCARoots) + *replyReal = <-r.ValueCh + return nil +} diff --git a/agent/cache-types/testing.go b/agent/cache-types/testing.go index bf68ec478..fcffe45a9 100644 --- a/agent/cache-types/testing.go +++ b/agent/cache-types/testing.go @@ -1,6 +1,10 @@ package cachetype import ( + "reflect" + "time" + + "github.com/hashicorp/consul/agent/cache" "github.com/mitchellh/go-testing-interface" ) @@ -10,3 +14,47 @@ func TestRPC(t testing.T) *MockRPC { // perform some initialization later. return &MockRPC{} } + +// TestFetchCh returns a channel that returns the result of the Fetch call. +// This is useful for testing timing and concurrency with Fetch calls. +// Errors will show up as an error type on the resulting channel so a +// type switch should be used. +func TestFetchCh( + t testing.T, + typ cache.Type, + opts cache.FetchOptions, + req cache.Request) <-chan interface{} { + resultCh := make(chan interface{}) + go func() { + result, err := typ.Fetch(opts, req) + if err != nil { + resultCh <- err + return + } + + resultCh <- result + }() + + return resultCh +} + +// TestFetchChResult tests that the result from TestFetchCh matches +// within a reasonable period of time (it expects it to be "immediate" but +// waits some milliseconds). 
+func TestFetchChResult(t testing.T, ch <-chan interface{}, expected interface{}) { + t.Helper() + + select { + case result := <-ch: + if err, ok := result.(error); ok { + t.Fatalf("Result was error: %s", err) + return + } + + if !reflect.DeepEqual(result, expected) { + t.Fatalf("Result doesn't match!\n\n%#v\n\n%#v", result, expected) + } + + case <-time.After(50 * time.Millisecond): + } +} From b0f70f17db2f54038d1883e62fae28a69abf3834 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 16 Apr 2018 11:31:03 +0200 Subject: [PATCH 143/539] agent/cache-types: rename to separate root and leaf cache types --- .../{connect_ca.go => connect_ca_leaf.go} | 37 +----------- ...ect_ca_test.go => connect_ca_leaf_test.go} | 46 --------------- agent/cache-types/connect_ca_root.go | 43 ++++++++++++++ agent/cache-types/connect_ca_root_test.go | 57 +++++++++++++++++++ 4 files changed, 101 insertions(+), 82 deletions(-) rename agent/cache-types/{connect_ca.go => connect_ca_leaf.go} (84%) rename agent/cache-types/{connect_ca_test.go => connect_ca_leaf_test.go} (82%) create mode 100644 agent/cache-types/connect_ca_root.go create mode 100644 agent/cache-types/connect_ca_root_test.go diff --git a/agent/cache-types/connect_ca.go b/agent/cache-types/connect_ca_leaf.go similarity index 84% rename from agent/cache-types/connect_ca.go rename to agent/cache-types/connect_ca_leaf.go index 22549ed49..d90bc19bb 100644 --- a/agent/cache-types/connect_ca.go +++ b/agent/cache-types/connect_ca_leaf.go @@ -15,42 +15,7 @@ import ( ) // Recommended name for registration. -const ( - ConnectCARootName = "connect-ca-root" - ConnectCALeafName = "connect-ca-leaf" -) - -// ConnectCARoot supports fetching the Connect CA roots. This is a -// straightforward cache type since it only has to block on the given -// index and return the data. -type ConnectCARoot struct { - RPC RPC -} - -func (c *ConnectCARoot) Fetch(opts cache.FetchOptions, req cache.Request) (cache.FetchResult, error) { - var result cache.FetchResult - - // The request should be a DCSpecificRequest. - reqReal, ok := req.(*structs.DCSpecificRequest) - if !ok { - return result, fmt.Errorf( - "Internal cache failure: request wrong type: %T", req) - } - - // Set the minimum query index to our current index so we block - reqReal.QueryOptions.MinQueryIndex = opts.MinIndex - reqReal.QueryOptions.MaxQueryTime = opts.Timeout - - // Fetch - var reply structs.IndexedCARoots - if err := c.RPC.RPC("ConnectCA.Roots", reqReal, &reply); err != nil { - return result, err - } - - result.Value = &reply - result.Index = reply.QueryMeta.Index - return result, nil -} +const ConnectCALeafName = "connect-ca-leaf" // ConnectCALeaf supports fetching and generating Connect leaf // certificates. diff --git a/agent/cache-types/connect_ca_test.go b/agent/cache-types/connect_ca_leaf_test.go similarity index 82% rename from agent/cache-types/connect_ca_test.go rename to agent/cache-types/connect_ca_leaf_test.go index 43953e7f8..0612aed21 100644 --- a/agent/cache-types/connect_ca_test.go +++ b/agent/cache-types/connect_ca_leaf_test.go @@ -12,52 +12,6 @@ import ( "github.com/stretchr/testify/require" ) -func TestConnectCARoot(t *testing.T) { - require := require.New(t) - rpc := TestRPC(t) - defer rpc.AssertExpectations(t) - typ := &ConnectCARoot{RPC: rpc} - - // Expect the proper RPC call. This also sets the expected value - // since that is return-by-pointer in the arguments. - var resp *structs.IndexedCARoots - rpc.On("RPC", "ConnectCA.Roots", mock.Anything, mock.Anything).Return(nil). 
- Run(func(args mock.Arguments) { - req := args.Get(1).(*structs.DCSpecificRequest) - require.Equal(uint64(24), req.QueryOptions.MinQueryIndex) - require.Equal(1*time.Second, req.QueryOptions.MaxQueryTime) - - reply := args.Get(2).(*structs.IndexedCARoots) - reply.QueryMeta.Index = 48 - resp = reply - }) - - // Fetch - result, err := typ.Fetch(cache.FetchOptions{ - MinIndex: 24, - Timeout: 1 * time.Second, - }, &structs.DCSpecificRequest{Datacenter: "dc1"}) - require.Nil(err) - require.Equal(cache.FetchResult{ - Value: resp, - Index: 48, - }, result) -} - -func TestConnectCARoot_badReqType(t *testing.T) { - require := require.New(t) - rpc := TestRPC(t) - defer rpc.AssertExpectations(t) - typ := &ConnectCARoot{RPC: rpc} - - // Fetch - _, err := typ.Fetch(cache.FetchOptions{}, cache.TestRequest( - t, cache.RequestInfo{Key: "foo", MinIndex: 64})) - require.NotNil(err) - require.Contains(err.Error(), "wrong type") - -} - // Test that after an initial signing, new CA roots (new ID) will // trigger a blocking query to execute. func TestConnectCALeaf_changingRoots(t *testing.T) { diff --git a/agent/cache-types/connect_ca_root.go b/agent/cache-types/connect_ca_root.go new file mode 100644 index 000000000..036cf53d2 --- /dev/null +++ b/agent/cache-types/connect_ca_root.go @@ -0,0 +1,43 @@ +package cachetype + +import ( + "fmt" + + "github.com/hashicorp/consul/agent/cache" + "github.com/hashicorp/consul/agent/structs" +) + +// Recommended name for registration. +const ConnectCARootName = "connect-ca-root" + +// ConnectCARoot supports fetching the Connect CA roots. This is a +// straightforward cache type since it only has to block on the given +// index and return the data. +type ConnectCARoot struct { + RPC RPC +} + +func (c *ConnectCARoot) Fetch(opts cache.FetchOptions, req cache.Request) (cache.FetchResult, error) { + var result cache.FetchResult + + // The request should be a DCSpecificRequest. + reqReal, ok := req.(*structs.DCSpecificRequest) + if !ok { + return result, fmt.Errorf( + "Internal cache failure: request wrong type: %T", req) + } + + // Set the minimum query index to our current index so we block + reqReal.QueryOptions.MinQueryIndex = opts.MinIndex + reqReal.QueryOptions.MaxQueryTime = opts.Timeout + + // Fetch + var reply structs.IndexedCARoots + if err := c.RPC.RPC("ConnectCA.Roots", reqReal, &reply); err != nil { + return result, err + } + + result.Value = &reply + result.Index = reply.QueryMeta.Index + return result, nil +} diff --git a/agent/cache-types/connect_ca_root_test.go b/agent/cache-types/connect_ca_root_test.go new file mode 100644 index 000000000..24c37f313 --- /dev/null +++ b/agent/cache-types/connect_ca_root_test.go @@ -0,0 +1,57 @@ +package cachetype + +import ( + "testing" + "time" + + "github.com/hashicorp/consul/agent/cache" + "github.com/hashicorp/consul/agent/structs" + "github.com/stretchr/testify/mock" + "github.com/stretchr/testify/require" +) + +func TestConnectCARoot(t *testing.T) { + require := require.New(t) + rpc := TestRPC(t) + defer rpc.AssertExpectations(t) + typ := &ConnectCARoot{RPC: rpc} + + // Expect the proper RPC call. This also sets the expected value + // since that is return-by-pointer in the arguments. + var resp *structs.IndexedCARoots + rpc.On("RPC", "ConnectCA.Roots", mock.Anything, mock.Anything).Return(nil). 
+ Run(func(args mock.Arguments) { + req := args.Get(1).(*structs.DCSpecificRequest) + require.Equal(uint64(24), req.QueryOptions.MinQueryIndex) + require.Equal(1*time.Second, req.QueryOptions.MaxQueryTime) + + reply := args.Get(2).(*structs.IndexedCARoots) + reply.QueryMeta.Index = 48 + resp = reply + }) + + // Fetch + result, err := typ.Fetch(cache.FetchOptions{ + MinIndex: 24, + Timeout: 1 * time.Second, + }, &structs.DCSpecificRequest{Datacenter: "dc1"}) + require.Nil(err) + require.Equal(cache.FetchResult{ + Value: resp, + Index: 48, + }, result) +} + +func TestConnectCARoot_badReqType(t *testing.T) { + require := require.New(t) + rpc := TestRPC(t) + defer rpc.AssertExpectations(t) + typ := &ConnectCARoot{RPC: rpc} + + // Fetch + _, err := typ.Fetch(cache.FetchOptions{}, cache.TestRequest( + t, cache.RequestInfo{Key: "foo", MinIndex: 64})) + require.NotNil(err) + require.Contains(err.Error(), "wrong type") + +} From 45095894278edaf0455eeea949d4fb54c29f5180 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 16 Apr 2018 12:06:08 +0200 Subject: [PATCH 144/539] agent/cache: support timeouts for cache reads and empty fetch results --- agent/cache/cache.go | 44 ++++++++++++++++++------ agent/cache/cache_test.go | 71 +++++++++++++++++++++++++++++++++++++++ agent/cache/request.go | 9 +++++ agent/cache/type.go | 6 ++++ 4 files changed, 120 insertions(+), 10 deletions(-) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index a57ab8343..a1b4570b0 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -136,6 +136,9 @@ func (c *Cache) Get(t string, r Request) (interface{}, error) { // First time through first := true + // timeoutCh for watching our tmeout + var timeoutCh <-chan time.Time + RETRY_GET: // Get the current value c.entriesLock.RLock() @@ -164,16 +167,27 @@ RETRY_GET: // No longer our first time through first = false + // Set our timeout channel if we must + if info.Timeout > 0 && timeoutCh == nil { + timeoutCh = time.After(info.Timeout) + } + // At this point, we know we either don't have a value at all or the // value we have is too old. We need to wait for new data. - waiter, err := c.fetch(t, key, r) + waiterCh, err := c.fetch(t, key, r) if err != nil { return nil, err } - // Wait on our waiter and then retry the cache load - <-waiter - goto RETRY_GET + select { + case <-waiterCh: + // Our fetch returned, retry the get from the cache + goto RETRY_GET + + case <-timeoutCh: + // Timeout on the cache read, just return whatever we have. + return entry.Value, nil + } } // entryKey returns the key for the entry in the cache. See the note @@ -216,16 +230,26 @@ func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { // The actual Fetch must be performed in a goroutine. go func() { // Start building the new entry by blocking on the fetch. - var newEntry cacheEntry result, err := tEntry.Type.Fetch(FetchOptions{ MinIndex: entry.Index, }, r) - newEntry.Value = result.Value - newEntry.Index = result.Index - newEntry.Error = err - // This is a valid entry with a result - newEntry.Valid = true + var newEntry cacheEntry + if result.Value == nil { + // If no value was set, then we do not change the prior entry. + // Instead, we just update the waiter to be new so that another + // Get will wait on the correct value. + newEntry = entry + newEntry.Fetching = false + } else { + // A new value was given, so we create a brand new entry. 
+ newEntry.Value = result.Value + newEntry.Index = result.Index + newEntry.Error = err + + // This is a valid entry with a result + newEntry.Valid = true + } // Create a new waiter that will be used for the next fetch. newEntry.Waiter = make(chan struct{}) diff --git a/agent/cache/cache_test.go b/agent/cache/cache_test.go index 1e75490a0..b8ca66dc4 100644 --- a/agent/cache/cache_test.go +++ b/agent/cache/cache_test.go @@ -194,6 +194,77 @@ func TestCacheGet_blockingIndex(t *testing.T) { TestCacheGetChResult(t, resultCh, 42) } +// Test a get with an index set will timeout if the fetch doesn't return +// anything. +func TestCacheGet_blockingIndexTimeout(t *testing.T) { + t.Parallel() + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + c.RegisterType("t", typ, nil) + + // Configure the type + triggerCh := make(chan time.Time) + typ.Static(FetchResult{Value: 1, Index: 4}, nil).Once() + typ.Static(FetchResult{Value: 12, Index: 5}, nil).Once() + typ.Static(FetchResult{Value: 42, Index: 6}, nil).WaitUntil(triggerCh) + + // Fetch should block + resultCh := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{ + Key: "hello", MinIndex: 5, Timeout: 200 * time.Millisecond})) + + // Should block + select { + case <-resultCh: + t.Fatal("should block") + case <-time.After(50 * time.Millisecond): + } + + // Should return after more of the timeout + select { + case result := <-resultCh: + require.Equal(t, 12, result) + case <-time.After(300 * time.Millisecond): + t.Fatal("should've returned") + } +} + +// Test that if a Type returns an empty value on Fetch that the previous +// value is preserved. +func TestCacheGet_emptyFetchResult(t *testing.T) { + t.Parallel() + + require := require.New(t) + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + c.RegisterType("t", typ, nil) + + // Configure the type + typ.Static(FetchResult{Value: 42, Index: 1}, nil).Times(1) + typ.Static(FetchResult{Value: nil}, nil) + + // Get, should fetch + req := TestRequest(t, RequestInfo{Key: "hello"}) + result, err := c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Get, should not fetch since we already have a satisfying value + req = TestRequest(t, RequestInfo{ + Key: "hello", MinIndex: 1, Timeout: 100 * time.Millisecond}) + result, err = c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Sleep a tiny bit just to let maybe some background calls happen + // then verify that we still only got the one call + time.Sleep(20 * time.Millisecond) + typ.AssertExpectations(t) +} + // Test that a type registered with a periodic refresh will perform // that refresh after the timer is up. func TestCacheGet_periodicRefresh(t *testing.T) { diff --git a/agent/cache/request.go b/agent/cache/request.go index b4a1b75d0..7beec58e8 100644 --- a/agent/cache/request.go +++ b/agent/cache/request.go @@ -1,5 +1,9 @@ package cache +import ( + "time" +) + // Request is a cache-able request. // // This interface is typically implemented by request structures in @@ -36,4 +40,9 @@ type RequestInfo struct { // to block until new data is available. If no index is available, the // default value (zero) is acceptable. MinIndex uint64 + + // Timeout is the timeout for waiting on a blocking query. When the + // timeout is reached, the last known value is returned (or maybe nil + // if there was no prior value). 
+ Timeout time.Duration } diff --git a/agent/cache/type.go b/agent/cache/type.go index 6e8edeb5f..cccb10b94 100644 --- a/agent/cache/type.go +++ b/agent/cache/type.go @@ -15,6 +15,12 @@ type Type interface { // // The return value is a FetchResult which contains information about // the fetch. + // + // On timeout, FetchResult can behave one of two ways. First, it can + // return the last known value. This is the default behavior of blocking + // RPC calls in Consul so this allows cache types to be implemented with + // no extra logic. Second, FetchResult can return an unset value and index. + // In this case, the cache will reuse the last value automatically. Fetch(FetchOptions, Request) (FetchResult, error) } From ccd7eeef1ad5f4eaf2fedbfcfd6142c1b0c7803b Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 16 Apr 2018 12:25:35 +0200 Subject: [PATCH 145/539] agent/cache-types/ca-leaf: proper result for timeout, race on setting CA --- agent/cache-types/connect_ca_leaf.go | 28 ++++++++++++++++++++++++---- 1 file changed, 24 insertions(+), 4 deletions(-) diff --git a/agent/cache-types/connect_ca_leaf.go b/agent/cache-types/connect_ca_leaf.go index d90bc19bb..70d5e3c24 100644 --- a/agent/cache-types/connect_ca_leaf.go +++ b/agent/cache-types/connect_ca_leaf.go @@ -77,8 +77,11 @@ func (c *ConnectCALeaf) Fetch(opts cache.FetchOptions, req cache.Request) (cache // Block on the events that wake us up. select { case <-timeoutCh: - // TODO: what is the right error for a timeout? - return result, fmt.Errorf("timeout") + // On a timeout, we just return the empty result and no error. + // It isn't an error to timeout, its just the limit of time the + // caching system wants us to block for. By returning an empty result + // the caching system will ignore. + return result, nil case err := <-newRootCACh: // A new root CA triggers us to refresh the leaf certificate. @@ -122,6 +125,7 @@ func (c *ConnectCALeaf) Fetch(opts cache.FetchOptions, req cache.Request) (cache if c.issuedCerts == nil { c.issuedCerts = make(map[string]*structs.IssuedCert) } + c.issuedCerts[reqReal.Service] = &reply lastCert = &reply } @@ -156,8 +160,24 @@ func (c *ConnectCALeaf) waitNewRootCA(ch chan<- error, timeout time.Duration) { return } - // Set the new index - atomic.StoreUint64(&c.caIndex, roots.QueryMeta.Index) + // We do a loop here because there can be multiple waitNewRootCA calls + // happening simultaneously. Each Fetch kicks off one call. These are + // multiplexed through Cache.Get which should ensure we only ever + // actually make a single RPC call. However, there is a race to set + // the caIndex field so do a basic CAS loop here. + for { + // We only set our index if its newer than what is previously set. + old := atomic.LoadUint64(&c.caIndex) + if old == roots.Index || old > roots.Index { + break + } + + // Set the new index atomically. If the caIndex value changed + // in the meantime, retry. + if atomic.CompareAndSwapUint64(&c.caIndex, old, roots.Index) { + break + } + } // Trigger the channel since we updated. 
ch <- nil From 3b6c46b7d783fd23dbb3b2b99f3f0ba8dcef9182 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 16 Apr 2018 12:28:18 +0200 Subject: [PATCH 146/539] agent/structs: DCSpecificRequest sets all the proper fields for CacheInfo --- agent/structs/structs.go | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/agent/structs/structs.go b/agent/structs/structs.go index 19a9c7313..d65b50639 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -281,7 +281,10 @@ func (r *DCSpecificRequest) RequestDatacenter() string { func (r *DCSpecificRequest) CacheInfo() cache.RequestInfo { info := cache.RequestInfo{ - MinIndex: r.QueryOptions.MinQueryIndex, + Token: r.Token, + Datacenter: r.Datacenter, + MinIndex: r.MinQueryIndex, + Timeout: r.MaxQueryTime, } // To calculate the cache key we only hash the node filters. The From 6ecc2da7ffbf5ff6f2857be0f82cec47f2ae091b Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 17 Apr 2018 18:03:13 -0500 Subject: [PATCH 147/539] agent/cache: integrate go-metrics so the cache is debuggable --- agent/cache/cache.go | 44 +++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 43 insertions(+), 1 deletion(-) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index a1b4570b0..db9f11a0e 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -17,6 +17,8 @@ import ( "sync" "sync/atomic" "time" + + "github.com/armon/go-metrics" ) //go:generate mockery -all -inpkg @@ -109,6 +111,9 @@ type RegisterOptions struct { } // RegisterType registers a cacheable type. +// +// This makes the type available for Get but does not automatically perform +// any prefetching. In order to populate the cache, Get must be called. func (c *Cache) RegisterType(n string, typ Type, opts *RegisterOptions) { c.typesLock.Lock() defer c.typesLock.Unlock() @@ -122,9 +127,18 @@ func (c *Cache) RegisterType(n string, typ Type, opts *RegisterOptions) { // // Multiple Get calls for the same Request (matching CacheKey value) will // block on a single network request. +// +// The timeout specified by the Request will be the timeout on the cache +// Get, and does not correspond to the timeout of any background data +// fetching. If the timeout is reached before data satisfying the minimum +// index is retrieved, the last known value (maybe nil) is returned. No +// error is returned on timeout. This matches the behavior of Consul blocking +// queries. func (c *Cache) Get(t string, r Request) (interface{}, error) { info := r.CacheInfo() if info.Key == "" { + metrics.IncrCounter([]string{"consul", "cache", "bypass"}, 1) + // If no key is specified, then we do not cache this request. // Pass directly through to the backend. return c.fetchDirect(t, r) @@ -152,6 +166,7 @@ RETRY_GET: if ok && entry.Valid { if info.MinIndex == 0 || info.MinIndex < entry.Index { if first { + metrics.IncrCounter([]string{"consul", "cache", t, "hit"}, 1) atomic.AddUint64(&c.hits, 1) } @@ -162,6 +177,15 @@ RETRY_GET: if first { // Record the miss if its our first time through atomic.AddUint64(&c.misses, 1) + + // We increment two different counters for cache misses depending on + // whether we're missing because we didn't have the data at all, + // or if we're missing because we're blocking on a set index. 
+ if info.MinIndex == 0 { + metrics.IncrCounter([]string{"consul", "cache", t, "miss_new"}, 1) + } else { + metrics.IncrCounter([]string{"consul", "cache", t, "miss_block"}, 1) + } } // No longer our first time through @@ -196,6 +220,10 @@ func (c *Cache) entryKey(r *RequestInfo) string { return fmt.Sprintf("%s/%s/%s", r.Datacenter, r.Token, r.Key) } +// fetch triggers a new background fetch for the given Request. If a +// background fetch is already running for a matching Request, the waiter +// channel for that request is returned. The effect of this is that there +// is only ever one blocking query for any matching requests. func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { // Get the type that we're fetching c.typesLock.RLock() @@ -205,6 +233,7 @@ func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { return nil, fmt.Errorf("unknown type in cache: %s", t) } + // We acquire a write lock because we may have to set Fetching to true. c.entriesLock.Lock() defer c.entriesLock.Unlock() entry, ok := c.entries[key] @@ -226,6 +255,7 @@ func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { // perform multiple fetches. entry.Fetching = true c.entries[key] = entry + metrics.SetGauge([]string{"consul", "cache", "entries_count"}, float32(len(c.entries))) // The actual Fetch must be performed in a goroutine. go func() { @@ -234,6 +264,14 @@ func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { MinIndex: entry.Index, }, r) + if err == nil { + metrics.IncrCounter([]string{"consul", "cache", "fetch_success"}, 1) + metrics.IncrCounter([]string{"consul", "cache", t, "fetch_success"}, 1) + } else { + metrics.IncrCounter([]string{"consul", "cache", "fetch_error"}, 1) + metrics.IncrCounter([]string{"consul", "cache", t, "fetch_error"}, 1) + } + var newEntry cacheEntry if result.Value == nil { // If no value was set, then we do not change the prior entry. @@ -272,7 +310,9 @@ func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { return entry.Waiter, nil } -// fetchDirect fetches the given request with no caching. +// fetchDirect fetches the given request with no caching. Because this +// bypasses the caching entirely, multiple matching requests will result +// in multiple actual RPC calls (unlike fetch). func (c *Cache) fetchDirect(t string, r Request) (interface{}, error) { // Get the type that we're fetching c.typesLock.RLock() @@ -294,6 +334,8 @@ func (c *Cache) fetchDirect(t string, r Request) (interface{}, error) { return result.Value, nil } +// refresh triggers a fetch for a specific Request according to the +// registration options. 
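
As an aside to the refresh mechanics in this patch, here is a small illustrative sketch (not part of any diff above) of how a type registered with Refresh enabled is consumed. The helper names are invented; the cache, cachetype, and structs identifiers are the ones appearing elsewhere in this series, and the option values mirror those used when the agent registers its cache types.

```go
package example

import (
	"fmt"
	"time"

	"github.com/hashicorp/consul/agent/cache"
	cachetype "github.com/hashicorp/consul/agent/cache-types"
	"github.com/hashicorp/consul/agent/structs"
)

// newRootCache is a hypothetical helper: with Refresh enabled the cache
// keeps a background blocking query running, so later Get calls are served
// from local data instead of waiting on an RPC round trip.
func newRootCache(rpc cachetype.RPC) *cache.Cache {
	c := cache.New(nil)
	c.RegisterType(cachetype.ConnectCARootName, &cachetype.ConnectCARoot{
		RPC: rpc,
	}, &cache.RegisterOptions{
		Refresh:        true,             // maintain a background blocking query
		RefreshTimer:   0,                // restart it immediately after each return
		RefreshTimeout: 10 * time.Minute, // cap each background blocking query
	})
	return c
}

// caRoots reads through the cache. The first call for a given key blocks on
// the RPC; subsequent calls are cache hits kept current by the refresh loop.
func caRoots(c *cache.Cache, dc string) (*structs.IndexedCARoots, error) {
	raw, err := c.Get(cachetype.ConnectCARootName, &structs.DCSpecificRequest{
		Datacenter: dc,
	})
	if err != nil {
		return nil, err
	}
	roots, ok := raw.(*structs.IndexedCARoots)
	if !ok {
		return nil, fmt.Errorf("unexpected cache value type: %T", raw)
	}
	return roots, nil
}
```
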
func (c *Cache) refresh(opts *RegisterOptions, t string, key string, r Request) { // Sanity-check, we should not schedule anything that has refresh disabled if !opts.Refresh { From 109bb946e931885b55532961379dce3af9a96b79 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 17 Apr 2018 18:07:47 -0500 Subject: [PATCH 148/539] agent/cache: return the error as part of Get --- agent/cache/cache.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index db9f11a0e..a9727ab8e 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -170,7 +170,7 @@ RETRY_GET: atomic.AddUint64(&c.hits, 1) } - return entry.Value, nil + return entry.Value, entry.Error } } From 56774f24d0df89bc22bc1d1b0fe244f58fb71653 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 17 Apr 2018 18:18:16 -0500 Subject: [PATCH 149/539] agent/cache-types: support intention match queries --- agent/cache-types/intention_match.go | 41 ++++++++++++++++ agent/cache-types/intention_match_test.go | 57 +++++++++++++++++++++++ agent/structs/intention.go | 33 +++++++++++++ 3 files changed, 131 insertions(+) create mode 100644 agent/cache-types/intention_match.go create mode 100644 agent/cache-types/intention_match_test.go diff --git a/agent/cache-types/intention_match.go b/agent/cache-types/intention_match.go new file mode 100644 index 000000000..4c42725a1 --- /dev/null +++ b/agent/cache-types/intention_match.go @@ -0,0 +1,41 @@ +package cachetype + +import ( + "fmt" + + "github.com/hashicorp/consul/agent/cache" + "github.com/hashicorp/consul/agent/structs" +) + +// Recommended name for registration. +const IntentionMatchName = "intention-match" + +// IntentionMatch supports fetching the intentions via match queries. +type IntentionMatch struct { + RPC RPC +} + +func (c *IntentionMatch) Fetch(opts cache.FetchOptions, req cache.Request) (cache.FetchResult, error) { + var result cache.FetchResult + + // The request should be an IntentionQueryRequest. + reqReal, ok := req.(*structs.IntentionQueryRequest) + if !ok { + return result, fmt.Errorf( + "Internal cache failure: request wrong type: %T", req) + } + + // Set the minimum query index to our current index so we block + reqReal.MinQueryIndex = opts.MinIndex + reqReal.MaxQueryTime = opts.Timeout + + // Fetch + var reply structs.IndexedIntentionMatches + if err := c.RPC.RPC("Intention.Match", reqReal, &reply); err != nil { + return result, err + } + + result.Value = &reply + result.Index = reply.Index + return result, nil +} diff --git a/agent/cache-types/intention_match_test.go b/agent/cache-types/intention_match_test.go new file mode 100644 index 000000000..97b2951b3 --- /dev/null +++ b/agent/cache-types/intention_match_test.go @@ -0,0 +1,57 @@ +package cachetype + +import ( + "testing" + "time" + + "github.com/hashicorp/consul/agent/cache" + "github.com/hashicorp/consul/agent/structs" + "github.com/stretchr/testify/mock" + "github.com/stretchr/testify/require" +) + +func TestIntentionMatch(t *testing.T) { + require := require.New(t) + rpc := TestRPC(t) + defer rpc.AssertExpectations(t) + typ := &IntentionMatch{RPC: rpc} + + // Expect the proper RPC call. This also sets the expected value + // since that is return-by-pointer in the arguments. + var resp *structs.IndexedIntentionMatches + rpc.On("RPC", "Intention.Match", mock.Anything, mock.Anything).Return(nil). 
+ Run(func(args mock.Arguments) { + req := args.Get(1).(*structs.IntentionQueryRequest) + require.Equal(uint64(24), req.MinQueryIndex) + require.Equal(1*time.Second, req.MaxQueryTime) + + reply := args.Get(2).(*structs.IndexedIntentionMatches) + reply.Index = 48 + resp = reply + }) + + // Fetch + result, err := typ.Fetch(cache.FetchOptions{ + MinIndex: 24, + Timeout: 1 * time.Second, + }, &structs.IntentionQueryRequest{Datacenter: "dc1"}) + require.Nil(err) + require.Equal(cache.FetchResult{ + Value: resp, + Index: 48, + }, result) +} + +func TestIntentionMatch_badReqType(t *testing.T) { + require := require.New(t) + rpc := TestRPC(t) + defer rpc.AssertExpectations(t) + typ := &IntentionMatch{RPC: rpc} + + // Fetch + _, err := typ.Fetch(cache.FetchOptions{}, cache.TestRequest( + t, cache.RequestInfo{Key: "foo", MinIndex: 64})) + require.NotNil(err) + require.Contains(err.Error(), "wrong type") + +} diff --git a/agent/structs/intention.go b/agent/structs/intention.go index 316c9632b..6ad1a9835 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -2,10 +2,13 @@ package structs import ( "fmt" + "strconv" "strings" "time" + "github.com/hashicorp/consul/agent/cache" "github.com/hashicorp/go-multierror" + "github.com/mitchellh/hashstructure" ) const ( @@ -267,6 +270,36 @@ func (q *IntentionQueryRequest) RequestDatacenter() string { return q.Datacenter } +// cache.Request impl. +func (q *IntentionQueryRequest) CacheInfo() cache.RequestInfo { + // We only support caching Match queries, so if Match isn't set, + // then return an empty info object which will cause a pass-through + // (and likely fail). + if q.Match == nil { + return cache.RequestInfo{} + } + + info := cache.RequestInfo{ + Token: q.Token, + Datacenter: q.Datacenter, + MinIndex: q.MinQueryIndex, + Timeout: q.MaxQueryTime, + } + + // Calculate the cache key via just hashing the Match struct. This + // has been configured so things like ordering of entries has no + // effect (via struct tags). + v, err := hashstructure.Hash(q.Match, nil) + if err == nil { + // If there is an error, we don't set the key. A blank key forces + // no cache for this request so the request is forwarded directly + // to the server. + info.Key = strconv.FormatUint(v, 10) + } + + return info +} + // IntentionQueryMatch are the parameters for performing a match request // against the state store. 
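
The cache key computed above is a hash of the Match struct rather than a serialized string, so struct tags can make entry ordering irrelevant. A minimal sketch of that property follows, using an invented query type (the real IntentionQueryMatch tags live in structs and are not shown here):

```go
package main

import (
	"fmt"

	"github.com/mitchellh/hashstructure"
)

// matchQuery is an invented stand-in for a query whose entry order should
// not affect the cache key; hash:"set" hashes the slice as an unordered set.
type matchQuery struct {
	Entries []string `hash:"set"`
}

func main() {
	a, _ := hashstructure.Hash(matchQuery{Entries: []string{"web", "db"}}, nil)
	b, _ := hashstructure.Hash(matchQuery{Entries: []string{"db", "web"}}, nil)
	fmt.Println(a == b) // true: both orderings produce the same cache key
}
```
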
type IntentionQueryMatch struct { From a1f8cb95706144bd70b12baa63901a2bf65e465e Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 17 Apr 2018 18:26:58 -0500 Subject: [PATCH 150/539] agent: augment /v1/connect/authorize to cache intentions --- agent/agent.go | 9 +++++ agent/agent_endpoint.go | 10 ++++- agent/agent_endpoint_test.go | 75 ++++++++++++++++++++++++++++++++---- 3 files changed, 85 insertions(+), 9 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index b6e923ee3..610aeb64f 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2649,4 +2649,13 @@ func (a *Agent) registerCache() { RefreshTimer: 0, RefreshTimeout: 10 * time.Minute, }) + + a.cache.RegisterType(cachetype.IntentionMatchName, &cachetype.IntentionMatch{ + RPC: a.delegate, + }, &cache.RegisterOptions{ + // Maintain a blocking query, retry dropped connections quickly + Refresh: true, + RefreshTimer: 0, + RefreshTimeout: 10 * time.Minute, + }) } diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index c64eb7a92..798c370b2 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -1124,10 +1124,16 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R }, } args.Token = token - var reply structs.IndexedIntentionMatches - if err := s.agent.RPC("Intention.Match", args, &reply); err != nil { + + raw, err := s.agent.cache.Get(cachetype.IntentionMatchName, args) + if err != nil { return nil, err } + + reply, ok := raw.(*structs.IndexedIntentionMatches) + if !ok { + return nil, fmt.Errorf("internal error: response type not correct") + } if len(reply.Matches) != 1 { return nil, fmt.Errorf("Internal error loading matches") } diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 2e583ec4f..93cffa617 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2495,13 +2495,14 @@ func TestAgentConnectAuthorize_idNotService(t *testing.T) { func TestAgentConnectAuthorize_allow(t *testing.T) { t.Parallel() - assert := assert.New(t) + require := require.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() target := "db" // Create some intentions + var ixnId string { req := structs.IntentionRequest{ Datacenter: "dc1", @@ -2514,10 +2515,12 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { req.Intention.DestinationName = target req.Intention.Action = structs.IntentionActionAllow - var reply string - assert.Nil(a.RPC("Intention.Apply", &req, &reply)) + require.Nil(a.RPC("Intention.Apply", &req, &ixnId)) } + // Grab the initial cache hit count + cacheHits := a.cache.Hits() + args := &structs.ConnectAuthorizeRequest{ Target: target, ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), @@ -2525,12 +2528,70 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() respRaw, err := a.srv.AgentConnectAuthorize(resp, req) - assert.Nil(err) - assert.Equal(200, resp.Code) + require.Nil(err) + require.Equal(200, resp.Code) obj := respRaw.(*connectAuthorizeResp) - assert.True(obj.Authorized) - assert.Contains(obj.Reason, "Matched") + require.True(obj.Authorized) + require.Contains(obj.Reason, "Matched") + + // That should've been a cache miss, so not hit change + require.Equal(cacheHits, a.cache.Hits()) + + // Make the request again + { + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) + resp := httptest.NewRecorder() + respRaw, err := a.srv.AgentConnectAuthorize(resp, 
req) + require.Nil(err) + require.Equal(200, resp.Code) + + obj := respRaw.(*connectAuthorizeResp) + require.True(obj.Authorized) + require.Contains(obj.Reason, "Matched") + } + + // That should've been a cache hit + require.Equal(cacheHits+1, a.cache.Hits()) + cacheHits++ + + // Change the intention + { + req := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpUpdate, + Intention: structs.TestIntention(t), + } + req.Intention.ID = ixnId + req.Intention.SourceNS = structs.IntentionDefaultNamespace + req.Intention.SourceName = "web" + req.Intention.DestinationNS = structs.IntentionDefaultNamespace + req.Intention.DestinationName = target + req.Intention.Action = structs.IntentionActionDeny + + require.Nil(a.RPC("Intention.Apply", &req, &ixnId)) + } + + // Short sleep lets the cache background refresh happen + time.Sleep(100 * time.Millisecond) + + // Make the request again + { + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) + resp := httptest.NewRecorder() + respRaw, err := a.srv.AgentConnectAuthorize(resp, req) + require.Nil(err) + require.Equal(200, resp.Code) + + obj := respRaw.(*connectAuthorizeResp) + require.False(obj.Authorized) + require.Contains(obj.Reason, "Matched") + } + + // That should've been a cache hit, too, since it updated in the + // background. + require.Equal(cacheHits+1, a.cache.Hits()) + cacheHits++ } // Test when there is an intention denying the connection From e9d58ca219e55aff5d691c2a1084729154fd1228 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 17 Apr 2018 18:42:49 -0500 Subject: [PATCH 151/539] agent/cache: lots of comment/doc updates --- agent/cache/cache.go | 27 +++++++++++++++++++++++---- agent/cache/request.go | 7 +++++-- 2 files changed, 28 insertions(+), 6 deletions(-) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index a9727ab8e..5bf7b787d 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -7,9 +7,11 @@ // balance performance and correctness, depending on the type of data being // requested. // -// Currently, the cache package supports only continuous, blocking query -// caching. This means that the cache update is edge-triggered by Consul -// server blocking queries. +// The types of data that can be cached is configurable via the Type interface. +// This allows specialized behavior for certain types of data. Each type of +// Consul data (CA roots, leaf certs, intentions, KV, catalog, etc.) will +// have to be manually implemented. This usually is not much work, see +// the "agent/cache-types" package. package cache import ( @@ -23,7 +25,24 @@ import ( //go:generate mockery -all -inpkg -// Cache is a agent-local cache of Consul data. +// Cache is a agent-local cache of Consul data. Create a Cache using the +// New function. A zero-value Cache is not ready for usage and will result +// in a panic. +// +// The types of data to be cached must be registered via RegisterType. Then, +// calls to Get specify the type and a Request implementation. The +// implementation of Request is usually done directly on the standard RPC +// struct in agent/structs. This API makes cache usage a mostly drop-in +// replacement for non-cached RPC calls. +// +// The cache is partitioned by ACL and datacenter. This allows the cache +// to be safe for multi-DC queries and for queries where the data is modified +// due to ACLs all without the cache having to have any clever logic, at +// the slight expense of a less perfect cache. 
+// +// The Cache exposes various metrics via go-metrics. Please view the source +// searching for "metrics." to see the various metrics exposed. These can be +// used to explore the performance of the cache. type Cache struct { // Keeps track of the cache hits and misses in total. This is used by // tests currently to verify cache behavior and is not meant for general diff --git a/agent/cache/request.go b/agent/cache/request.go index 7beec58e8..7cd53df25 100644 --- a/agent/cache/request.go +++ b/agent/cache/request.go @@ -4,7 +4,7 @@ import ( "time" ) -// Request is a cache-able request. +// Request is a cacheable request. // // This interface is typically implemented by request structures in // the agent/structs package. @@ -20,6 +20,8 @@ type RequestInfo struct { // Key is a unique cache key for this request. This key should // absolutely uniquely identify this request, since any conflicting // cache keys could result in invalid data being returned from the cache. + // The Key does not need to include ACL or DC information, since the + // cache already partitions by these values prior to using this key. Key string // Token is the ACL token associated with this request. @@ -43,6 +45,7 @@ type RequestInfo struct { // Timeout is the timeout for waiting on a blocking query. When the // timeout is reached, the last known value is returned (or maybe nil - // if there was no prior value). + // if there was no prior value). This "last known value" behavior matches + // normal Consul blocking queries. Timeout time.Duration } From 67604503e2e0e85d007ef17a7452b27cef8d1e81 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 19 Apr 2018 08:13:57 -0700 Subject: [PATCH 152/539] Add Makefile hack for tests to run --- GNUmakefile | 1 + 1 file changed, 1 insertion(+) diff --git a/GNUmakefile b/GNUmakefile index 2c412d9e5..660a82725 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -42,6 +42,7 @@ dev: changelogfmt vendorfmt dev-build dev-build: @echo "--> TEMPORARY HACK: installing hashstructure to make CI pass until we vendor it upstream" go get github.com/mitchellh/hashstructure + go get github.com/stretchr/testify/mock @echo "--> Building consul" mkdir -p pkg/$(GOOS)_$(GOARCH)/ bin/ go install -ldflags '$(GOLDFLAGS)' -tags '$(GOTAGS)' From 257fc34e51fb33560bfaf74b335e8f24068a07bd Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 19 Apr 2018 09:19:55 -0700 Subject: [PATCH 153/539] agent/cache: on error, return from Get immediately, don't block forever --- agent/cache/cache.go | 15 +++++++++++++++ agent/cache/cache_test.go | 32 ++++++++++++++++++++++++++++++++ 2 files changed, 47 insertions(+) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index 5bf7b787d..d58d79729 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -193,6 +193,15 @@ RETRY_GET: } } + // If this isn't our first time through and our last value has an error, + // then we return the error. This has the behavior that we don't sit in + // a retry loop getting the same error for the entire duration of the + // timeout. Instead, we make one effort to fetch a new value, and if + // there was an error, we return. + if !first && entry.Error != nil { + return entry.Value, entry.Error + } + if first { // Record the miss if its our first time through atomic.AddUint64(&c.misses, 1) @@ -308,6 +317,12 @@ func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { newEntry.Valid = true } + // If we have an error and the prior entry wasn't valid, then we + // set the error at least. 
+ if err != nil && !newEntry.Valid { + newEntry.Error = err + } + // Create a new waiter that will be used for the next fetch. newEntry.Waiter = make(chan struct{}) diff --git a/agent/cache/cache_test.go b/agent/cache/cache_test.go index b8ca66dc4..e5db006e6 100644 --- a/agent/cache/cache_test.go +++ b/agent/cache/cache_test.go @@ -42,6 +42,38 @@ func TestCacheGet_noIndex(t *testing.T) { typ.AssertExpectations(t) } +// Test a basic Get with no index and a failed fetch. +func TestCacheGet_initError(t *testing.T) { + t.Parallel() + + require := require.New(t) + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + c.RegisterType("t", typ, nil) + + // Configure the type + fetcherr := fmt.Errorf("error") + typ.Static(FetchResult{}, fetcherr).Times(2) + + // Get, should fetch + req := TestRequest(t, RequestInfo{Key: "hello"}) + result, err := c.Get("t", req) + require.Error(err) + require.Nil(result) + + // Get, should fetch again since our last fetch was an error + result, err = c.Get("t", req) + require.Error(err) + require.Nil(result) + + // Sleep a tiny bit just to let maybe some background calls happen + // then verify that we still only got the one call + time.Sleep(20 * time.Millisecond) + typ.AssertExpectations(t) +} + // Test a Get with a request that returns a blank cache key. This should // force a backend request and skip the cache entirely. func TestCacheGet_blankCacheKey(t *testing.T) { From 3c6acbda5d9d02353ea385a9a77a4abb305a9f86 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 19 Apr 2018 11:36:14 -0700 Subject: [PATCH 154/539] agent/cache: send the RefreshTimeout into the backend fetch --- agent/cache/cache.go | 7 ++++++- agent/cache/cache_test.go | 33 +++++++++++++++++++++++++++++++++ 2 files changed, 39 insertions(+), 1 deletion(-) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index d58d79729..9296a5fb1 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -134,6 +134,10 @@ type RegisterOptions struct { // This makes the type available for Get but does not automatically perform // any prefetching. In order to populate the cache, Get must be called. func (c *Cache) RegisterType(n string, typ Type, opts *RegisterOptions) { + if opts == nil { + opts = &RegisterOptions{} + } + c.typesLock.Lock() defer c.typesLock.Unlock() c.types[n] = typeEntry{Type: typ, Opts: opts} @@ -290,6 +294,7 @@ func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { // Start building the new entry by blocking on the fetch. result, err := tEntry.Type.Fetch(FetchOptions{ MinIndex: entry.Index, + Timeout: tEntry.Opts.RefreshTimeout, }, r) if err == nil { @@ -336,7 +341,7 @@ func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { // If refresh is enabled, run the refresh in due time. The refresh // below might block, but saves us from spawning another goroutine. - if tEntry.Opts != nil && tEntry.Opts.Refresh { + if tEntry.Opts.Refresh { c.refresh(tEntry.Opts, t, key, r) } }() diff --git a/agent/cache/cache_test.go b/agent/cache/cache_test.go index e5db006e6..49edc6e28 100644 --- a/agent/cache/cache_test.go +++ b/agent/cache/cache_test.go @@ -336,6 +336,39 @@ func TestCacheGet_periodicRefresh(t *testing.T) { TestCacheGetChResult(t, resultCh, 12) } +// Test that the backend fetch sets the proper timeout. 
+func TestCacheGet_fetchTimeout(t *testing.T) { + t.Parallel() + + require := require.New(t) + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + + // Register the type with a timeout + timeout := 10 * time.Minute + c.RegisterType("t", typ, &RegisterOptions{ + RefreshTimeout: timeout, + }) + + // Configure the type + var actual time.Duration + typ.Static(FetchResult{Value: 42}, nil).Times(1).Run(func(args mock.Arguments) { + opts := args.Get(0).(FetchOptions) + actual = opts.Timeout + }) + + // Get, should fetch + req := TestRequest(t, RequestInfo{Key: "hello"}) + result, err := c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Test the timeout + require.Equal(timeout, actual) +} + // Test that Get partitions the caches based on DC so two equivalent requests // to different datacenters are automatically cached even if their keys are // the same. From 449bbd817df74d62a6a7cdaaabbef4023af981f3 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 19 Apr 2018 17:31:50 -0700 Subject: [PATCH 155/539] agent/cache: initial TTL work --- agent/cache/cache.go | 155 ++++++++++++++++++++++++++++++-------- agent/cache/cache_test.go | 45 +++++++++++ agent/cache/entry.go | 103 +++++++++++++++++++++++++ agent/cache/entry_test.go | 10 +++ 4 files changed, 280 insertions(+), 33 deletions(-) create mode 100644 agent/cache/entry.go create mode 100644 agent/cache/entry_test.go diff --git a/agent/cache/cache.go b/agent/cache/cache.go index 9296a5fb1..0d332a21e 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -15,6 +15,7 @@ package cache import ( + "container/heap" "fmt" "sync" "sync/atomic" @@ -54,7 +55,11 @@ type Cache struct { typesLock sync.RWMutex types map[string]typeEntry - // entries contains the actual cache data. + // entries contains the actual cache data. Access to entries and + // entriesExpiryHeap must be protected by entriesLock. + // + // entriesExpiryHeap is a heap of *cacheEntry values ordered by + // expiry, with the soonest to expire being first in the list (index 0). // // NOTE(mitchellh): The entry map key is currently a string in the format // of "//" in order to properly partition @@ -62,21 +67,9 @@ type Cache struct { // big drawbacks: we can't evict by datacenter, ACL token, etc. For an // initial implementaiton this works and the tests are agnostic to the // internal storage format so changing this should be possible safely. - entriesLock sync.RWMutex - entries map[string]cacheEntry -} - -// cacheEntry stores a single cache entry. -type cacheEntry struct { - // Fields pertaining to the actual value - Value interface{} - Error error - Index uint64 - - // Metadata that is used for internal accounting - Valid bool - Fetching bool - Waiter chan struct{} + entriesLock sync.RWMutex + entries map[string]cacheEntry + entriesExpiryHeap *expiryHeap } // typeEntry is a single type that is registered with a Cache. @@ -93,16 +86,34 @@ type Options struct { // New creates a new cache with the given RPC client and reasonable defaults. // Further settings can be tweaked on the returned value. func New(*Options) *Cache { - return &Cache{ - entries: make(map[string]cacheEntry), - types: make(map[string]typeEntry), + // Initialize the heap. The buffer of 1 is really important because + // its possible for the expiry loop to trigger the heap to update + // itself and it'd block forever otherwise. 
+ h := &expiryHeap{NotifyCh: make(chan struct{}, 1)} + heap.Init(h) + + c := &Cache{ + types: make(map[string]typeEntry), + entries: make(map[string]cacheEntry), + entriesExpiryHeap: h, } + + // Start the expiry watcher + go c.runExpiryLoop() + + return c } // RegisterOptions are options that can be associated with a type being // registered for the cache. This changes the behavior of the cache for // this type. type RegisterOptions struct { + // LastGetTTL is the time that the values returned by this type remain + // in the cache after the last get operation. If a value isn't accessed + // within this duration, the value is purged from the cache and + // background refreshing will cease. + LastGetTTL time.Duration + // Refresh configures whether the data is actively refreshed or if // the data is only refreshed on an explicit Get. The default (false) // is to only request data on explicit Get. @@ -137,6 +148,9 @@ func (c *Cache) RegisterType(n string, typ Type, opts *RegisterOptions) { if opts == nil { opts = &RegisterOptions{} } + if opts.LastGetTTL == 0 { + opts.LastGetTTL = 72 * time.Hour // reasonable default is days + } c.typesLock.Lock() defer c.typesLock.Unlock() @@ -193,6 +207,12 @@ RETRY_GET: atomic.AddUint64(&c.hits, 1) } + // Touch the expiration and fix the heap + entry.ResetExpires() + c.entriesLock.Lock() + heap.Fix(c.entriesExpiryHeap, *entry.ExpiryHeapIndex) + c.entriesLock.Unlock() + return entry.Value, entry.Error } } @@ -230,7 +250,7 @@ RETRY_GET: // At this point, we know we either don't have a value at all or the // value we have is too old. We need to wait for new data. - waiterCh, err := c.fetch(t, key, r) + waiterCh, err := c.fetch(t, key, r, true) if err != nil { return nil, err } @@ -256,7 +276,11 @@ func (c *Cache) entryKey(r *RequestInfo) string { // background fetch is already running for a matching Request, the waiter // channel for that request is returned. The effect of this is that there // is only ever one blocking query for any matching requests. -func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { +// +// If allowNew is true then the fetch should create the cache entry +// if it doesn't exist. If this is false, then fetch will do nothing +// if the entry doesn't exist. This latter case is to support refreshing. +func (c *Cache) fetch(t, key string, r Request, allowNew bool) (<-chan struct{}, error) { // Get the type that we're fetching c.typesLock.RLock() tEntry, ok := c.types[t] @@ -270,6 +294,15 @@ func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { defer c.entriesLock.Unlock() entry, ok := c.entries[key] + // If we aren't allowing new values and we don't have an existing value, + // return immediately. We return an immediately-closed channel so nothing + // blocks. + if !ok && !allowNew { + ch := make(chan struct{}) + close(ch) + return ch, nil + } + // If we already have an entry and it is actively fetching, then return // the currently active waiter. if ok && entry.Fetching { @@ -305,14 +338,10 @@ func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { metrics.IncrCounter([]string{"consul", "cache", t, "fetch_error"}, 1) } - var newEntry cacheEntry - if result.Value == nil { - // If no value was set, then we do not change the prior entry. - // Instead, we just update the waiter to be new so that another - // Get will wait on the correct value. - newEntry = entry - newEntry.Fetching = false - } else { + // Copy the existing entry to start. 
+ newEntry := entry + newEntry.Fetching = false + if result.Value != nil { // A new value was given, so we create a brand new entry. newEntry.Value = result.Value newEntry.Index = result.Index @@ -331,12 +360,33 @@ func (c *Cache) fetch(t, key string, r Request) (<-chan struct{}, error) { // Create a new waiter that will be used for the next fetch. newEntry.Waiter = make(chan struct{}) - // Insert + // The key needs to always be set since this is used by the + // expiration loop to know what entry to delete. + newEntry.Key = key + + // If this is a new entry (not in the heap yet), then set the + // initial expiration TTL. + if newEntry.ExpiryHeapIndex == nil { + newEntry.ExpiresTTL = tEntry.Opts.LastGetTTL + newEntry.ResetExpires() + } + + // Set our entry c.entriesLock.Lock() + if newEntry.ExpiryHeapIndex != nil { + // If we're already in the heap, just change the value in-place. + // We don't need to call heap.Fix because the expiry doesn't + // change. + c.entriesExpiryHeap.Entries[*newEntry.ExpiryHeapIndex] = &newEntry + } else { + // Add the new value + newEntry.ExpiryHeapIndex = new(int) + heap.Push(c.entriesExpiryHeap, &newEntry) + } c.entries[key] = newEntry c.entriesLock.Unlock() - // Trigger the waiter + // Trigger the old waiter close(entry.Waiter) // If refresh is enabled, run the refresh in due time. The refresh @@ -386,8 +436,47 @@ func (c *Cache) refresh(opts *RegisterOptions, t string, key string, r Request) time.Sleep(opts.RefreshTimer) } - // Trigger - c.fetch(t, key, r) + // Trigger. The "allowNew" field is false because in the time we were + // waiting to refresh we may have expired and got evicted. If that + // happened, we don't want to create a new entry. + c.fetch(t, key, r, false) +} + +// runExpiryLoop is a blocking function that watches the expiration +// heap and invalidates entries that have expired. +func (c *Cache) runExpiryLoop() { + var expiryTimer *time.Timer + for { + // If we have a previous timer, stop it. + if expiryTimer != nil { + expiryTimer.Stop() + } + + // Get the entry expiring soonest + var entry *cacheEntry + var expiryCh <-chan time.Time + c.entriesLock.RLock() + if len(c.entriesExpiryHeap.Entries) > 0 { + entry = c.entriesExpiryHeap.Entries[0] + expiryTimer = time.NewTimer(entry.Expires().Sub(time.Now())) + expiryCh = expiryTimer.C + } + c.entriesLock.RUnlock() + + select { + case <-c.entriesExpiryHeap.NotifyCh: + // Entries changed, so the heap may have changed. Restart loop. + + case <-expiryCh: + // Entry expired! Remove it. + c.entriesLock.Lock() + delete(c.entries, entry.Key) + heap.Remove(c.entriesExpiryHeap, *entry.ExpiryHeapIndex) + c.entriesLock.Unlock() + + metrics.IncrCounter([]string{"consul", "cache", "evict_expired"}, 1) + } + } } // Returns the number of cache hits. Safe to call concurrently. 
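
Before the tests that exercise this behavior, a short sketch (with an invented helper) of what the new TTL means for callers: every Get touches the entry and resets its expiry via the heap fix above, so only entries that stop being read age out after LastGetTTL.

```go
package example

import (
	"time"

	"github.com/hashicorp/consul/agent/cache"
)

// cachedRead is a hypothetical helper. In real use the type would be
// registered once at agent startup; it is registered inline here only to
// keep the sketch self-contained.
func cachedRead(typ cache.Type, req cache.Request) (interface{}, error) {
	c := cache.New(nil)
	c.RegisterType("example", typ, &cache.RegisterOptions{
		// Entries not read for this long are evicted and stop being
		// background-refreshed.
		LastGetTTL: 10 * time.Minute,
	})

	// Each Get resets the entry's expiry, so values read at least every
	// ten minutes stay cached indefinitely; idle ones are evicted.
	return c.Get("example", req)
}
```
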
diff --git a/agent/cache/cache_test.go b/agent/cache/cache_test.go index 49edc6e28..7ac8213f3 100644 --- a/agent/cache/cache_test.go +++ b/agent/cache/cache_test.go @@ -369,6 +369,51 @@ func TestCacheGet_fetchTimeout(t *testing.T) { require.Equal(timeout, actual) } +// Test that entries expire +func TestCacheGet_expire(t *testing.T) { + t.Parallel() + + require := require.New(t) + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + + // Register the type with a timeout + c.RegisterType("t", typ, &RegisterOptions{ + LastGetTTL: 400 * time.Millisecond, + }) + + // Configure the type + typ.Static(FetchResult{Value: 42}, nil).Times(2) + + // Get, should fetch + req := TestRequest(t, RequestInfo{Key: "hello"}) + result, err := c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Get, should not fetch + req = TestRequest(t, RequestInfo{Key: "hello"}) + result, err = c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Sleep for the expiry + time.Sleep(500 * time.Millisecond) + + // Get, should fetch + req = TestRequest(t, RequestInfo{Key: "hello"}) + result, err = c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Sleep a tiny bit just to let maybe some background calls happen + // then verify that we still only got the one call + time.Sleep(20 * time.Millisecond) + typ.AssertExpectations(t) +} + // Test that Get partitions the caches based on DC so two equivalent requests // to different datacenters are automatically cached even if their keys are // the same. diff --git a/agent/cache/entry.go b/agent/cache/entry.go new file mode 100644 index 000000000..99636be6f --- /dev/null +++ b/agent/cache/entry.go @@ -0,0 +1,103 @@ +package cache + +import ( + "sync/atomic" + "time" +) + +// cacheEntry stores a single cache entry. +// +// Note that this isn't a very optimized structure currently. There are +// a lot of improvements that can be made here in the long term. +type cacheEntry struct { + // Fields pertaining to the actual value + Key string + Value interface{} + Error error + Index uint64 + + // Metadata that is used for internal accounting + Valid bool // True if the Value is set + Fetching bool // True if a fetch is already active + Waiter chan struct{} // Closed when this entry is invalidated + + // ExpiresRaw is the time.Time that this value expires. The time.Time + // is immune to wall clock changes since we only use APIs that + // operate on the monotonic value. The value is in an atomic.Value + // so we have an efficient way to "touch" the value while maybe being + // read without introducing complex locking. + ExpiresRaw atomic.Value + ExpiresTTL time.Duration + ExpiryHeapIndex *int +} + +// Expires is the time that this entry expires. The time.Time value returned +// has the monotonic clock preserved and should be used only with +// monotonic-safe operations to prevent wall clock changes affecting +// cache behavior. +func (e *cacheEntry) Expires() time.Time { + return e.ExpiresRaw.Load().(time.Time) +} + +// ResetExpires resets the expiration to be the ttl duration from now. +func (e *cacheEntry) ResetExpires() { + e.ExpiresRaw.Store(time.Now().Add(e.ExpiresTTL)) +} + +// expiryHeap is a heap implementation that stores information about +// when entires expire. Implements container/heap.Interface. +// +// All operations on the heap and read/write of the heap contents require +// the proper entriesLock to be held on Cache. 
+type expiryHeap struct { + Entries []*cacheEntry + + // NotifyCh is sent a value whenever the 0 index value of the heap + // changes. This can be used to detect when the earliest value + // changes. + NotifyCh chan struct{} +} + +func (h *expiryHeap) Len() int { return len(h.Entries) } + +func (h *expiryHeap) Swap(i, j int) { + h.Entries[i], h.Entries[j] = h.Entries[j], h.Entries[i] + *h.Entries[i].ExpiryHeapIndex = i + *h.Entries[j].ExpiryHeapIndex = j + + // If we're moving the 0 index, update the channel since we need + // to re-update the timer we're waiting on for the soonest expiring + // value. + if i == 0 || j == 0 { + h.NotifyCh <- struct{}{} + } +} + +func (h *expiryHeap) Less(i, j int) bool { + // The usage of Before here is important (despite being obvious): + // this function uses the monotonic time that should be available + // on the time.Time value so the heap is immune to wall clock changes. + return h.Entries[i].Expires().Before(h.Entries[j].Expires()) +} + +func (h *expiryHeap) Push(x interface{}) { + entry := x.(*cacheEntry) + + // For the first entry, we need to trigger a channel send because + // Swap won't be called; nothing to swap! We can call it right away + // because all heap operations are within a lock. + if len(h.Entries) == 0 { + *entry.ExpiryHeapIndex = 0 // Set correct initial index + h.NotifyCh <- struct{}{} + } + + h.Entries = append(h.Entries, entry) +} + +func (h *expiryHeap) Pop() interface{} { + old := h.Entries + n := len(old) + x := old[n-1] + h.Entries = old[0 : n-1] + return x +} diff --git a/agent/cache/entry_test.go b/agent/cache/entry_test.go new file mode 100644 index 000000000..0ebf0682d --- /dev/null +++ b/agent/cache/entry_test.go @@ -0,0 +1,10 @@ +package cache + +import ( + "container/heap" + "testing" +) + +func TestExpiryHeap_impl(t *testing.T) { + var _ heap.Interface = new(expiryHeap) +} From b319d06276716401fba6d7aa867f33ea9df9025d Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 19 Apr 2018 18:28:01 -0700 Subject: [PATCH 156/539] agent/cache: rework how expiry data is stored to be more efficient --- agent/cache/cache.go | 59 +++++++++++++++++++++------------------ agent/cache/cache_test.go | 51 +++++++++++++++++++++++++++++++++ agent/cache/entry.go | 50 ++++++++++++++++----------------- 3 files changed, 108 insertions(+), 52 deletions(-) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index 0d332a21e..1d9b732b8 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -207,10 +207,16 @@ RETRY_GET: atomic.AddUint64(&c.hits, 1) } - // Touch the expiration and fix the heap - entry.ResetExpires() + // Touch the expiration and fix the heap. c.entriesLock.Lock() - heap.Fix(c.entriesExpiryHeap, *entry.ExpiryHeapIndex) + entry.Expiry.Reset() + idx := entry.Expiry.HeapIndex + heap.Fix(c.entriesExpiryHeap, entry.Expiry.HeapIndex) + if idx == 0 && entry.Expiry.HeapIndex == 0 { + // We didn't move and we were at the head of the heap. + // We need to let the loop know that the value changed. + c.entriesExpiryHeap.Notify() + } c.entriesLock.Unlock() return entry.Value, entry.Error @@ -360,29 +366,21 @@ func (c *Cache) fetch(t, key string, r Request, allowNew bool) (<-chan struct{}, // Create a new waiter that will be used for the next fetch. newEntry.Waiter = make(chan struct{}) - // The key needs to always be set since this is used by the - // expiration loop to know what entry to delete. - newEntry.Key = key - - // If this is a new entry (not in the heap yet), then set the - // initial expiration TTL. 
- if newEntry.ExpiryHeapIndex == nil { - newEntry.ExpiresTTL = tEntry.Opts.LastGetTTL - newEntry.ResetExpires() - } - // Set our entry c.entriesLock.Lock() - if newEntry.ExpiryHeapIndex != nil { - // If we're already in the heap, just change the value in-place. - // We don't need to call heap.Fix because the expiry doesn't - // change. - c.entriesExpiryHeap.Entries[*newEntry.ExpiryHeapIndex] = &newEntry - } else { - // Add the new value - newEntry.ExpiryHeapIndex = new(int) - heap.Push(c.entriesExpiryHeap, &newEntry) + + // If this is a new entry (not in the heap yet), then setup the + // initial expiry information and insert. If we're already in + // the heap we do nothing since we're reusing the same entry. + if newEntry.Expiry == nil || newEntry.Expiry.HeapIndex == -1 { + newEntry.Expiry = &cacheEntryExpiry{ + Key: key, + TTL: tEntry.Opts.LastGetTTL, + } + newEntry.Expiry.Reset() + heap.Push(c.entriesExpiryHeap, newEntry.Expiry) } + c.entries[key] = newEntry c.entriesLock.Unlock() @@ -453,12 +451,12 @@ func (c *Cache) runExpiryLoop() { } // Get the entry expiring soonest - var entry *cacheEntry + var entry *cacheEntryExpiry var expiryCh <-chan time.Time c.entriesLock.RLock() if len(c.entriesExpiryHeap.Entries) > 0 { entry = c.entriesExpiryHeap.Entries[0] - expiryTimer = time.NewTimer(entry.Expires().Sub(time.Now())) + expiryTimer = time.NewTimer(entry.Expires.Sub(time.Now())) expiryCh = expiryTimer.C } c.entriesLock.RUnlock() @@ -468,10 +466,17 @@ func (c *Cache) runExpiryLoop() { // Entries changed, so the heap may have changed. Restart loop. case <-expiryCh: - // Entry expired! Remove it. c.entriesLock.Lock() + + // Entry expired! Remove it. delete(c.entries, entry.Key) - heap.Remove(c.entriesExpiryHeap, *entry.ExpiryHeapIndex) + heap.Remove(c.entriesExpiryHeap, entry.HeapIndex) + + // This is subtle but important: if we race and simultaneously + // evict and fetch a new value, then we set this to -1 to + // have it treated as a new value so that the TTL is extended. + entry.HeapIndex = -1 + c.entriesLock.Unlock() metrics.IncrCounter([]string{"consul", "cache", "evict_expired"}, 1) diff --git a/agent/cache/cache_test.go b/agent/cache/cache_test.go index 7ac8213f3..82bdf7814 100644 --- a/agent/cache/cache_test.go +++ b/agent/cache/cache_test.go @@ -414,6 +414,57 @@ func TestCacheGet_expire(t *testing.T) { typ.AssertExpectations(t) } +// Test that entries reset their TTL on Get +func TestCacheGet_expireResetGet(t *testing.T) { + t.Parallel() + + require := require.New(t) + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + + // Register the type with a timeout + c.RegisterType("t", typ, &RegisterOptions{ + LastGetTTL: 150 * time.Millisecond, + }) + + // Configure the type + typ.Static(FetchResult{Value: 42}, nil).Times(2) + + // Get, should fetch + req := TestRequest(t, RequestInfo{Key: "hello"}) + result, err := c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Fetch multiple times, where the total time is well beyond + // the TTL. We should not trigger any fetches during this time. 
+ for i := 0; i < 5; i++ { + // Sleep a bit + time.Sleep(50 * time.Millisecond) + + // Get, should not fetch + req = TestRequest(t, RequestInfo{Key: "hello"}) + result, err = c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + } + + time.Sleep(200 * time.Millisecond) + + // Get, should fetch + req = TestRequest(t, RequestInfo{Key: "hello"}) + result, err = c.Get("t", req) + require.Nil(err) + require.Equal(42, result) + + // Sleep a tiny bit just to let maybe some background calls happen + // then verify that we still only got the one call + time.Sleep(20 * time.Millisecond) + typ.AssertExpectations(t) +} + // Test that Get partitions the caches based on DC so two equivalent requests // to different datacenters are automatically cached even if their keys are // the same. diff --git a/agent/cache/entry.go b/agent/cache/entry.go index 99636be6f..b86f80ea8 100644 --- a/agent/cache/entry.go +++ b/agent/cache/entry.go @@ -1,7 +1,6 @@ package cache import ( - "sync/atomic" "time" ) @@ -11,7 +10,6 @@ import ( // a lot of improvements that can be made here in the long term. type cacheEntry struct { // Fields pertaining to the actual value - Key string Value interface{} Error error Index uint64 @@ -21,27 +19,25 @@ type cacheEntry struct { Fetching bool // True if a fetch is already active Waiter chan struct{} // Closed when this entry is invalidated - // ExpiresRaw is the time.Time that this value expires. The time.Time - // is immune to wall clock changes since we only use APIs that - // operate on the monotonic value. The value is in an atomic.Value - // so we have an efficient way to "touch" the value while maybe being - // read without introducing complex locking. - ExpiresRaw atomic.Value - ExpiresTTL time.Duration - ExpiryHeapIndex *int + // Expiry contains information about the expiration of this + // entry. This is a pointer as its shared as a value in the + // expiryHeap as well. + Expiry *cacheEntryExpiry } -// Expires is the time that this entry expires. The time.Time value returned -// has the monotonic clock preserved and should be used only with -// monotonic-safe operations to prevent wall clock changes affecting -// cache behavior. -func (e *cacheEntry) Expires() time.Time { - return e.ExpiresRaw.Load().(time.Time) +// cacheEntryExpiry contains the expiration information for a cache +// entry. Any modifications to this struct should be done only while +// the Cache entriesLock is held. +type cacheEntryExpiry struct { + Key string // Key in the cache map + Expires time.Time // Time when entry expires (monotonic clock) + TTL time.Duration // TTL for this entry to extend when resetting + HeapIndex int // Index in the heap } -// ResetExpires resets the expiration to be the ttl duration from now. -func (e *cacheEntry) ResetExpires() { - e.ExpiresRaw.Store(time.Now().Add(e.ExpiresTTL)) +// Reset resets the expiration to be the ttl duration from now. +func (e *cacheEntryExpiry) Reset() { + e.Expires = time.Now().Add(e.TTL) } // expiryHeap is a heap implementation that stores information about @@ -50,7 +46,7 @@ func (e *cacheEntry) ResetExpires() { // All operations on the heap and read/write of the heap contents require // the proper entriesLock to be held on Cache. type expiryHeap struct { - Entries []*cacheEntry + Entries []*cacheEntryExpiry // NotifyCh is sent a value whenever the 0 index value of the heap // changes. 
This can be used to detect when the earliest value @@ -62,8 +58,8 @@ func (h *expiryHeap) Len() int { return len(h.Entries) } func (h *expiryHeap) Swap(i, j int) { h.Entries[i], h.Entries[j] = h.Entries[j], h.Entries[i] - *h.Entries[i].ExpiryHeapIndex = i - *h.Entries[j].ExpiryHeapIndex = j + h.Entries[i].HeapIndex = i + h.Entries[j].HeapIndex = j // If we're moving the 0 index, update the channel since we need // to re-update the timer we're waiting on for the soonest expiring @@ -77,17 +73,17 @@ func (h *expiryHeap) Less(i, j int) bool { // The usage of Before here is important (despite being obvious): // this function uses the monotonic time that should be available // on the time.Time value so the heap is immune to wall clock changes. - return h.Entries[i].Expires().Before(h.Entries[j].Expires()) + return h.Entries[i].Expires.Before(h.Entries[j].Expires) } func (h *expiryHeap) Push(x interface{}) { - entry := x.(*cacheEntry) + entry := x.(*cacheEntryExpiry) // For the first entry, we need to trigger a channel send because // Swap won't be called; nothing to swap! We can call it right away // because all heap operations are within a lock. if len(h.Entries) == 0 { - *entry.ExpiryHeapIndex = 0 // Set correct initial index + entry.HeapIndex = 0 // Set correct initial index h.NotifyCh <- struct{}{} } @@ -101,3 +97,7 @@ func (h *expiryHeap) Pop() interface{} { h.Entries = old[0 : n-1] return x } + +func (h *expiryHeap) Notify() { + h.NotifyCh <- struct{}{} +} From ec559d77bd193eeca9885ca6628e05ac44b60d20 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 19 Apr 2018 18:35:10 -0700 Subject: [PATCH 157/539] agent/cache: make edge case with prev/next idx == 0 handled better --- agent/cache/cache.go | 8 +------- agent/cache/entry.go | 30 ++++++++++++++++++++++++++---- 2 files changed, 27 insertions(+), 11 deletions(-) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index 1d9b732b8..759d2bc1d 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -210,13 +210,7 @@ RETRY_GET: // Touch the expiration and fix the heap. c.entriesLock.Lock() entry.Expiry.Reset() - idx := entry.Expiry.HeapIndex - heap.Fix(c.entriesExpiryHeap, entry.Expiry.HeapIndex) - if idx == 0 && entry.Expiry.HeapIndex == 0 { - // We didn't move and we were at the head of the heap. - // We need to let the loop know that the value changed. - c.entriesExpiryHeap.Notify() - } + c.entriesExpiryHeap.Fix(entry.Expiry) c.entriesLock.Unlock() return entry.Value, entry.Error diff --git a/agent/cache/entry.go b/agent/cache/entry.go index b86f80ea8..8174d3f12 100644 --- a/agent/cache/entry.go +++ b/agent/cache/entry.go @@ -1,6 +1,7 @@ package cache import ( + "container/heap" "time" ) @@ -51,9 +52,34 @@ type expiryHeap struct { // NotifyCh is sent a value whenever the 0 index value of the heap // changes. This can be used to detect when the earliest value // changes. + // + // There is a single edge case where the heap will not automatically + // send a notification: if heap.Fix is called manually and the index + // changed is 0 and the change doesn't result in any moves (stays at index + // 0), then we won't detect the change. To work around this, please + // always call the expiryHeap.Fix method instead. NotifyCh chan struct{} } +// Identical to heap.Fix for this heap instance but will properly handle +// the edge case where idx == 0 and no heap modification is necessary, +// and still notify the NotifyCh. 
+// +// This is important for cache expiry since the expiry time may have been +// extended and if we don't send a message to the NotifyCh then we'll never +// reset the timer and the entry will be evicted early. +func (h *expiryHeap) Fix(entry *cacheEntryExpiry) { + idx := entry.HeapIndex + heap.Fix(h, idx) + + // This is the edge case we handle: if the prev and current index + // is zero, it means the head-of-line didn't change while the value + // changed. Notify to reset our expiry worker. + if idx == 0 && entry.HeapIndex == 0 { + h.NotifyCh <- struct{}{} + } +} + func (h *expiryHeap) Len() int { return len(h.Entries) } func (h *expiryHeap) Swap(i, j int) { @@ -97,7 +123,3 @@ func (h *expiryHeap) Pop() interface{} { h.Entries = old[0 : n-1] return x } - -func (h *expiryHeap) Notify() { - h.NotifyCh <- struct{}{} -} From 1c31e34e5b5cf86b815dbc3c907f5a883c558227 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 19 Apr 2018 18:40:12 -0700 Subject: [PATCH 158/539] agent/cache: send the total entries count on eviction to go-metrics --- agent/cache/cache.go | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index 759d2bc1d..a5aa575d8 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -471,9 +471,11 @@ func (c *Cache) runExpiryLoop() { // have it treated as a new value so that the TTL is extended. entry.HeapIndex = -1 - c.entriesLock.Unlock() - + // Set some metrics metrics.IncrCounter([]string{"consul", "cache", "evict_expired"}, 1) + metrics.SetGauge([]string{"consul", "cache", "entries_count"}, float32(len(c.entries))) + + c.entriesLock.Unlock() } } } From 3f80a9f330ca68cbc7094fc5454624b9c7acff6f Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 20 Apr 2018 10:20:39 -0700 Subject: [PATCH 159/539] agent/cache: unit tests for ExpiryHeap, found a bug! --- agent/cache/entry.go | 5 ++- agent/cache/entry_test.go | 81 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 85 insertions(+), 1 deletion(-) diff --git a/agent/cache/entry.go b/agent/cache/entry.go index 8174d3f12..f651e6421 100644 --- a/agent/cache/entry.go +++ b/agent/cache/entry.go @@ -105,11 +105,14 @@ func (h *expiryHeap) Less(i, j int) bool { func (h *expiryHeap) Push(x interface{}) { entry := x.(*cacheEntryExpiry) + // Set initial heap index, if we're going to the end then Swap + // won't be called so we need to initialize + entry.HeapIndex = len(h.Entries) + // For the first entry, we need to trigger a channel send because // Swap won't be called; nothing to swap! We can call it right away // because all heap operations are within a lock. 
if len(h.Entries) == 0 { - entry.HeapIndex = 0 // Set correct initial index h.NotifyCh <- struct{}{} } diff --git a/agent/cache/entry_test.go b/agent/cache/entry_test.go index 0ebf0682d..fe4073363 100644 --- a/agent/cache/entry_test.go +++ b/agent/cache/entry_test.go @@ -3,8 +3,89 @@ package cache import ( "container/heap" "testing" + "time" + + "github.com/stretchr/testify/require" ) func TestExpiryHeap_impl(t *testing.T) { var _ heap.Interface = new(expiryHeap) } + +func TestExpiryHeap(t *testing.T) { + require := require.New(t) + now := time.Now() + ch := make(chan struct{}, 10) // buffered to prevent blocking in tests + h := &expiryHeap{NotifyCh: ch} + + // Init, shouldn't trigger anything + heap.Init(h) + testNoMessage(t, ch) + + // Push an initial value, expect one message + entry := &cacheEntryExpiry{Key: "foo", HeapIndex: -1, Expires: now.Add(100)} + heap.Push(h, entry) + require.Equal(0, entry.HeapIndex) + testMessage(t, ch) + testNoMessage(t, ch) // exactly one asserted above + + // Push another that goes earlier than entry + entry2 := &cacheEntryExpiry{Key: "bar", HeapIndex: -1, Expires: now.Add(50)} + heap.Push(h, entry2) + require.Equal(0, entry2.HeapIndex) + require.Equal(1, entry.HeapIndex) + testMessage(t, ch) + testNoMessage(t, ch) // exactly one asserted above + + // Push another that goes at the end + entry3 := &cacheEntryExpiry{Key: "bar", HeapIndex: -1, Expires: now.Add(1000)} + heap.Push(h, entry3) + require.Equal(2, entry3.HeapIndex) + testNoMessage(t, ch) // no notify cause index 0 stayed the same + + // Remove the first entry (not Pop, since we don't use Pop, but that works too) + remove := h.Entries[0] + heap.Remove(h, remove.HeapIndex) + require.Equal(0, entry.HeapIndex) + require.Equal(1, entry3.HeapIndex) + testMessage(t, ch) + testMessage(t, ch) // we have two because two swaps happen + testNoMessage(t, ch) + + // Let's change entry 3 to be early, and fix it + entry3.Expires = now.Add(10) + h.Fix(entry3) + require.Equal(1, entry.HeapIndex) + require.Equal(0, entry3.HeapIndex) + testMessage(t, ch) + testNoMessage(t, ch) + + // Let's change entry 3 again, this is an edge case where if the 0th + // element changed, we didn't trigger the channel. Our Fix func should. + entry.Expires = now.Add(20) + h.Fix(entry3) + require.Equal(1, entry.HeapIndex) // no move + require.Equal(0, entry3.HeapIndex) + testMessage(t, ch) + testNoMessage(t, ch) // one message +} + +func testNoMessage(t *testing.T, ch <-chan struct{}) { + t.Helper() + + select { + case <-ch: + t.Fatal("should not have a message") + default: + } +} + +func testMessage(t *testing.T, ch <-chan struct{}) { + t.Helper() + + select { + case <-ch: + default: + t.Fatal("should have a message") + } +} From ad3928b6bdd54e7c0eea49a91343ad23fed561d0 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 20 Apr 2018 10:43:50 -0700 Subject: [PATCH 160/539] agent/cache: don't every block on NotifyCh --- agent/cache/entry.go | 19 ++++++++++++++++--- 1 file changed, 16 insertions(+), 3 deletions(-) diff --git a/agent/cache/entry.go b/agent/cache/entry.go index f651e6421..a1af4801c 100644 --- a/agent/cache/entry.go +++ b/agent/cache/entry.go @@ -76,7 +76,7 @@ func (h *expiryHeap) Fix(entry *cacheEntryExpiry) { // is zero, it means the head-of-line didn't change while the value // changed. Notify to reset our expiry worker. 
if idx == 0 && entry.HeapIndex == 0 { - h.NotifyCh <- struct{}{} + h.notify() } } @@ -91,7 +91,7 @@ func (h *expiryHeap) Swap(i, j int) { // to re-update the timer we're waiting on for the soonest expiring // value. if i == 0 || j == 0 { - h.NotifyCh <- struct{}{} + h.notify() } } @@ -113,7 +113,7 @@ func (h *expiryHeap) Push(x interface{}) { // Swap won't be called; nothing to swap! We can call it right away // because all heap operations are within a lock. if len(h.Entries) == 0 { - h.NotifyCh <- struct{}{} + h.notify() } h.Entries = append(h.Entries, entry) @@ -126,3 +126,16 @@ func (h *expiryHeap) Pop() interface{} { h.Entries = old[0 : n-1] return x } + +func (h *expiryHeap) notify() { + select { + case h.NotifyCh <- struct{}{}: + // Good + + default: + // If the send would've blocked, we just ignore it. The reason this + // is safe is because NotifyCh should always be a buffered channel. + // If this blocks, it means that there is a pending message anyways + // so the receiver will restart regardless. + } +} From 07d878a157cea226a79c37b9af5505094c002a91 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 20 Apr 2018 12:58:23 -0700 Subject: [PATCH 161/539] agent/cache: address feedback, clarify comments --- agent/cache/cache_test.go | 30 ++++++++++++++++-------------- agent/cache/entry.go | 4 +++- 2 files changed, 19 insertions(+), 15 deletions(-) diff --git a/agent/cache/cache_test.go b/agent/cache/cache_test.go index 82bdf7814..cf179b2ab 100644 --- a/agent/cache/cache_test.go +++ b/agent/cache/cache_test.go @@ -28,12 +28,12 @@ func TestCacheGet_noIndex(t *testing.T) { // Get, should fetch req := TestRequest(t, RequestInfo{Key: "hello"}) result, err := c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) // Get, should not fetch since we already have a satisfying value result, err = c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) // Sleep a tiny bit just to let maybe some background calls happen @@ -92,12 +92,12 @@ func TestCacheGet_blankCacheKey(t *testing.T) { // Get, should fetch req := TestRequest(t, RequestInfo{Key: ""}) result, err := c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) // Get, should not fetch since we already have a satisfying value result, err = c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) // Sleep a tiny bit just to let maybe some background calls happen @@ -281,14 +281,14 @@ func TestCacheGet_emptyFetchResult(t *testing.T) { // Get, should fetch req := TestRequest(t, RequestInfo{Key: "hello"}) result, err := c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) // Get, should not fetch since we already have a satisfying value req = TestRequest(t, RequestInfo{ Key: "hello", MinIndex: 1, Timeout: 100 * time.Millisecond}) result, err = c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) // Sleep a tiny bit just to let maybe some background calls happen @@ -362,7 +362,7 @@ func TestCacheGet_fetchTimeout(t *testing.T) { // Get, should fetch req := TestRequest(t, RequestInfo{Key: "hello"}) result, err := c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) // Test the timeout @@ -390,14 +390,16 @@ func TestCacheGet_expire(t *testing.T) { // Get, should fetch req := TestRequest(t, RequestInfo{Key: "hello"}) result, err := c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) - // Get, should not 
fetch + // Get, should not fetch, verified via the mock assertions above + hits := c.Hits() req = TestRequest(t, RequestInfo{Key: "hello"}) result, err = c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) + require.Equal(hits+1, c.Hits()) // Sleep for the expiry time.Sleep(500 * time.Millisecond) @@ -405,7 +407,7 @@ func TestCacheGet_expire(t *testing.T) { // Get, should fetch req = TestRequest(t, RequestInfo{Key: "hello"}) result, err = c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) // Sleep a tiny bit just to let maybe some background calls happen @@ -435,7 +437,7 @@ func TestCacheGet_expireResetGet(t *testing.T) { // Get, should fetch req := TestRequest(t, RequestInfo{Key: "hello"}) result, err := c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) // Fetch multiple times, where the total time is well beyond @@ -447,7 +449,7 @@ func TestCacheGet_expireResetGet(t *testing.T) { // Get, should not fetch req = TestRequest(t, RequestInfo{Key: "hello"}) result, err = c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) } @@ -456,7 +458,7 @@ func TestCacheGet_expireResetGet(t *testing.T) { // Get, should fetch req = TestRequest(t, RequestInfo{Key: "hello"}) result, err = c.Get("t", req) - require.Nil(err) + require.NoError(err) require.Equal(42, result) // Sleep a tiny bit just to let maybe some background calls happen diff --git a/agent/cache/entry.go b/agent/cache/entry.go index a1af4801c..50c575ff7 100644 --- a/agent/cache/entry.go +++ b/agent/cache/entry.go @@ -72,7 +72,7 @@ func (h *expiryHeap) Fix(entry *cacheEntryExpiry) { idx := entry.HeapIndex heap.Fix(h, idx) - // This is the edge case we handle: if the prev and current index + // This is the edge case we handle: if the prev (idx) and current (HeapIndex) // is zero, it means the head-of-line didn't change while the value // changed. Notify to reset our expiry worker. if idx == 0 && entry.HeapIndex == 0 { @@ -102,6 +102,7 @@ func (h *expiryHeap) Less(i, j int) bool { return h.Entries[i].Expires.Before(h.Entries[j].Expires) } +// heap.Interface, this isn't expected to be called directly. func (h *expiryHeap) Push(x interface{}) { entry := x.(*cacheEntryExpiry) @@ -119,6 +120,7 @@ func (h *expiryHeap) Push(x interface{}) { h.Entries = append(h.Entries, entry) } +// heap.Interface, this isn't expected to be called directly. func (h *expiryHeap) Pop() interface{} { old := h.Entries n := len(old) From dcb2671d10e34e7af39dd9036b1438ecc929bf27 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 22 Apr 2018 13:52:48 -0700 Subject: [PATCH 162/539] agent/cache: address PR feedback, lots of typos --- agent/agent.go | 6 ++---- agent/cache-types/connect_ca_leaf.go | 4 +++- agent/cache-types/intention_match_test.go | 4 ++-- agent/cache/cache.go | 2 +- agent/cache/request.go | 2 +- agent/cache/type.go | 4 ++-- agent/structs/intention.go | 4 ++-- 7 files changed, 13 insertions(+), 13 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 610aeb64f..4d0246b99 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -295,8 +295,9 @@ func (a *Agent) Start() error { // regular and on-demand state synchronizations (anti-entropy). 
a.sync = ae.NewStateSyncer(a.State, c.AEInterval, a.shutdownCh, a.logger) - // create the cache + // create the cache and register types a.cache = cache.New(nil) + a.registerCache() // create the config for the rpc server/client consulCfg, err := a.consulConfig() @@ -334,9 +335,6 @@ func (a *Agent) Start() error { a.State.Delegate = a.delegate a.State.TriggerSyncChanges = a.sync.SyncChanges.Trigger - // Register the cache - a.registerCache() - // Load checks/services/metadata. if err := a.loadServices(c); err != nil { return err diff --git a/agent/cache-types/connect_ca_leaf.go b/agent/cache-types/connect_ca_leaf.go index 70d5e3c24..c6a2eee73 100644 --- a/agent/cache-types/connect_ca_leaf.go +++ b/agent/cache-types/connect_ca_leaf.go @@ -98,7 +98,9 @@ func (c *ConnectCALeaf) Fetch(opts cache.FetchOptions, req cache.Request) (cache } // Create a CSR. - // TODO(mitchellh): This is obviously not production ready! + // TODO(mitchellh): This is obviously not production ready! The host + // needs a correct host ID, and we probably don't want to use TestCSR + // and want a non-test-specific way to create a CSR. csr, pk := connect.TestCSR(&testing.RuntimeT{}, &connect.SpiffeIDService{ Host: "1234.consul", Namespace: "default", diff --git a/agent/cache-types/intention_match_test.go b/agent/cache-types/intention_match_test.go index 97b2951b3..d94d7d935 100644 --- a/agent/cache-types/intention_match_test.go +++ b/agent/cache-types/intention_match_test.go @@ -35,7 +35,7 @@ func TestIntentionMatch(t *testing.T) { MinIndex: 24, Timeout: 1 * time.Second, }, &structs.IntentionQueryRequest{Datacenter: "dc1"}) - require.Nil(err) + require.NoError(err) require.Equal(cache.FetchResult{ Value: resp, Index: 48, @@ -51,7 +51,7 @@ func TestIntentionMatch_badReqType(t *testing.T) { // Fetch _, err := typ.Fetch(cache.FetchOptions{}, cache.TestRequest( t, cache.RequestInfo{Key: "foo", MinIndex: 64})) - require.NotNil(err) + require.Error(err) require.Contains(err.Error(), "wrong type") } diff --git a/agent/cache/cache.go b/agent/cache/cache.go index a5aa575d8..cdcaffc58 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -187,7 +187,7 @@ func (c *Cache) Get(t string, r Request) (interface{}, error) { // First time through first := true - // timeoutCh for watching our tmeout + // timeoutCh for watching our timeout var timeoutCh <-chan time.Time RETRY_GET: diff --git a/agent/cache/request.go b/agent/cache/request.go index 7cd53df25..6a20a9c1f 100644 --- a/agent/cache/request.go +++ b/agent/cache/request.go @@ -18,7 +18,7 @@ type Request interface { // cacheability. type RequestInfo struct { // Key is a unique cache key for this request. This key should - // absolutely uniquely identify this request, since any conflicting + // be globally unique to identify this request, since any conflicting // cache keys could result in invalid data being returned from the cache. // The Key does not need to include ACL or DC information, since the // cache already partitions by these values prior to using this key. diff --git a/agent/cache/type.go b/agent/cache/type.go index cccb10b94..b4f630d2b 100644 --- a/agent/cache/type.go +++ b/agent/cache/type.go @@ -4,12 +4,12 @@ import ( "time" ) -// Type implement the logic to fetch certain types of data. +// Type implements the logic to fetch certain types of data. type Type interface { // Fetch fetches a single unique item. // // The FetchOptions contain the index and timeouts for blocking queries. 
- // The CacheMinIndex value on the Request itself should NOT be used + // The MinIndex value on the Request itself should NOT be used // as the blocking index since a request may be reused multiple times // as part of Refresh behavior. // diff --git a/agent/structs/intention.go b/agent/structs/intention.go index 6ad1a9835..5c6b1e991 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -270,7 +270,7 @@ func (q *IntentionQueryRequest) RequestDatacenter() string { return q.Datacenter } -// cache.Request impl. +// CacheInfo implements cache.Request func (q *IntentionQueryRequest) CacheInfo() cache.RequestInfo { // We only support caching Match queries, so if Match isn't set, // then return an empty info object which will cause a pass-through @@ -294,7 +294,7 @@ func (q *IntentionQueryRequest) CacheInfo() cache.RequestInfo { // If there is an error, we don't set the key. A blank key forces // no cache for this request so the request is forwarded directly // to the server. - info.Key = strconv.FormatUint(v, 10) + info.Key = strconv.FormatUint(v, 16) } return info From 73838c9afafd47a5d6350bd89d2fa9ce9383c9dc Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 22 Apr 2018 14:00:32 -0700 Subject: [PATCH 163/539] agent: use helper/retry instead of timing related tests --- agent/agent.go | 7 +++++-- agent/agent_endpoint_test.go | 24 ++++++++++++++---------- 2 files changed, 19 insertions(+), 12 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 4d0246b99..3c866bc48 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -295,9 +295,8 @@ func (a *Agent) Start() error { // regular and on-demand state synchronizations (anti-entropy). a.sync = ae.NewStateSyncer(a.State, c.AEInterval, a.shutdownCh, a.logger) - // create the cache and register types + // create the cache a.cache = cache.New(nil) - a.registerCache() // create the config for the rpc server/client consulCfg, err := a.consulConfig() @@ -335,6 +334,10 @@ func (a *Agent) Start() error { a.State.Delegate = a.delegate a.State.TriggerSyncChanges = a.sync.SyncChanges.Trigger + // Register the cache. We do this much later so the delegate is + // populated from above. + a.registerCache() + // Load checks/services/metadata. if err := a.loadServices(c); err != nil { return err diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 93cffa617..44dd02923 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2175,17 +2175,21 @@ func TestAgentConnectCARoots_list(t *testing.T) { require.Nil(a.RPC("Test.ConnectCASetRoots", []*structs.CARoot{ca}, &reply)) - // Sleep a bit to wait for the cache to update - time.Sleep(100 * time.Millisecond) + retry.Run(t, func(r *retry.R) { + // List it again + obj, err := a.srv.AgentConnectCARoots(httptest.NewRecorder(), req) + if err != nil { + r.Fatal(err) + } - // List it again - obj, err := a.srv.AgentConnectCARoots(httptest.NewRecorder(), req) - require.Nil(err) - require.Equal(obj, obj) - - value := obj.(structs.IndexedCARoots) - require.Equal(value.ActiveRootID, ca.ID) - require.Len(value.Roots, 1) + value := obj.(structs.IndexedCARoots) + if ca.ID != value.ActiveRootID { + r.Fatalf("%s != %s", ca.ID, value.ActiveRootID) + } + if len(value.Roots) != 1 { + r.Fatalf("bad len: %d", len(value.Roots)) + } + }) // Should be a cache hit! 
The data should've updated in the cache // in the background so this should've been fetched directly from From 5abd43a56701370687308fc66345ded587a60f5f Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 22 Apr 2018 14:09:06 -0700 Subject: [PATCH 164/539] agent: resolve flaky test by checking cache hits increase, rather than exact --- agent/agent_endpoint_test.go | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 44dd02923..d6b1996dd 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2194,8 +2194,10 @@ func TestAgentConnectCARoots_list(t *testing.T) { // Should be a cache hit! The data should've updated in the cache // in the background so this should've been fetched directly from // the cache. - require.Equal(cacheHits+1, a.cache.Hits()) - cacheHits++ + if v := a.cache.Hits(); v < cacheHits+1 { + t.Fatalf("expected at least one more cache hit, still at %d", v) + } + cacheHits = a.cache.Hits() } } From 93ff59a132b4c751cfb521c4296f29550da5d51d Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Fri, 20 Apr 2018 22:26:00 +0100 Subject: [PATCH 165/539] Fix racy connect network tests that always fail in Docker due to listen races --- connect/proxy/listener.go | 35 +++++++++++++++++++++++++++++----- connect/proxy/listener_test.go | 15 +++++++++------ connect/service_test.go | 11 ++++++----- connect/testing.go | 30 +++++++++++++++++++++++------ 4 files changed, 69 insertions(+), 22 deletions(-) diff --git a/connect/proxy/listener.go b/connect/proxy/listener.go index 51ab761ca..c8e70ac31 100644 --- a/connect/proxy/listener.go +++ b/connect/proxy/listener.go @@ -25,6 +25,15 @@ type Listener struct { stopFlag int32 stopChan chan struct{} + // listeningChan is closed when listener is opened successfully. It's really + // only for use in tests where we need to coordinate wait for the Serve + // goroutine to be running before we proceed trying to connect. On my laptop + // this always works out anyway but on constrained VMs and especially docker + // containers (e.g. in CI) we often see the Dial routine win the race and get + // `connection refused`. Retry loops and sleeps are unpleasant workarounds and + // this is cheap and correct. + listeningChan chan struct{} + logger *log.Logger } @@ -41,8 +50,9 @@ func NewPublicListener(svc *connect.Service, cfg PublicListenerConfig, return net.DialTimeout("tcp", cfg.LocalServiceAddress, time.Duration(cfg.LocalConnectTimeoutMs)*time.Millisecond) }, - stopChan: make(chan struct{}), - logger: logger, + stopChan: make(chan struct{}), + listeningChan: make(chan struct{}), + logger: logger, } } @@ -64,17 +74,27 @@ func NewUpstreamListener(svc *connect.Service, cfg UpstreamConfig, defer cancel() return svc.Dial(ctx, cfg.resolver) }, - stopChan: make(chan struct{}), - logger: logger, + stopChan: make(chan struct{}), + listeningChan: make(chan struct{}), + logger: logger, } } -// Serve runs the listener until it is stopped. +// Serve runs the listener until it is stopped. It is an error to call Serve +// more than once for any given Listener instance. func (l *Listener) Serve() error { + // Ensure we mark state closed if we fail before Close is called externally. 
+ defer l.Close() + + if atomic.LoadInt32(&l.stopFlag) != 0 { + return errors.New("serve called on a closed listener") + } + listen, err := l.listenFunc() if err != nil { return err } + close(l.listeningChan) for { conn, err := listen.Accept() @@ -113,3 +133,8 @@ func (l *Listener) handleConn(src net.Conn) { func (l *Listener) Close() error { return nil } + +// Wait for the listener to be ready to accept connections. +func (l *Listener) Wait() { + <-l.listeningChan +} diff --git a/connect/proxy/listener_test.go b/connect/proxy/listener_test.go index ce41c81e5..8354fbe58 100644 --- a/connect/proxy/listener_test.go +++ b/connect/proxy/listener_test.go @@ -24,7 +24,7 @@ func TestPublicListener(t *testing.T) { } testApp, err := NewTestTCPServer(t, cfg.LocalServiceAddress) - require.Nil(t, err) + require.NoError(t, err) defer testApp.Close() svc := connect.TestService(t, "db", ca) @@ -34,9 +34,10 @@ func TestPublicListener(t *testing.T) { // Run proxy go func() { err := l.Serve() - require.Nil(t, err) + require.NoError(t, err) }() defer l.Close() + l.Wait() // Proxy and backend are running, play the part of a TLS client using same // cert for now. @@ -44,7 +45,7 @@ func TestPublicListener(t *testing.T) { Addr: addrs[0], CertURI: agConnect.TestSpiffeIDService(t, "db"), }) - require.Nilf(t, err, "unexpected err: %s", err) + require.NoError(t, err) TestEchoConn(t, conn, "") } @@ -56,9 +57,10 @@ func TestUpstreamListener(t *testing.T) { testSvr := connect.NewTestServer(t, "db", ca) go func() { err := testSvr.Serve() - require.Nil(t, err) + require.NoError(t, err) }() defer testSvr.Close() + <-testSvr.Listening cfg := UpstreamConfig{ DestinationType: "service", @@ -79,13 +81,14 @@ func TestUpstreamListener(t *testing.T) { // Run proxy go func() { err := l.Serve() - require.Nil(t, err) + require.NoError(t, err) }() defer l.Close() + l.Wait() // Proxy and fake remote service are running, play the part of the app // connecting to a remote connect service over TCP. conn, err := net.Dial("tcp", cfg.LocalBindAddress) - require.Nilf(t, err, "unexpected err: %s", err) + require.NoError(t, err) TestEchoConn(t, conn, "") } diff --git a/connect/service_test.go b/connect/service_test.go index 7bc4c97f2..20433d1f5 100644 --- a/connect/service_test.go +++ b/connect/service_test.go @@ -73,9 +73,10 @@ func TestService_Dial(t *testing.T) { if tt.accept { go func() { err := testSvr.Serve() - require.Nil(err) + require.NoError(err) }() defer testSvr.Close() + <-testSvr.Listening } // Always expect to be connecting to a "DB" @@ -95,10 +96,10 @@ func TestService_Dial(t *testing.T) { testTimer.Stop() if tt.wantErr == "" { - require.Nil(err) + require.NoError(err) require.IsType(&tls.Conn{}, conn) } else { - require.NotNil(err) + require.Error(err) require.Contains(err.Error(), tt.wantErr) } @@ -117,7 +118,6 @@ func TestService_ServerTLSConfig(t *testing.T) { } func TestService_HTTPClient(t *testing.T) { - require := require.New(t) ca := connect.TestCA(t, nil) s := TestService(t, "web", ca) @@ -129,8 +129,9 @@ func TestService_HTTPClient(t *testing.T) { err := testSvr.ServeHTTPS(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("Hello, I am Backend")) })) - require.Nil(t, err) + require.NoError(t, err) }() + <-testSvr.Listening // TODO(banks): this will talk http2 on both client and server. 
I hit some // compatibility issues when testing though need to make sure that the http diff --git a/connect/testing.go b/connect/testing.go index 9f6e4f781..f9a6a4850 100644 --- a/connect/testing.go +++ b/connect/testing.go @@ -105,6 +105,8 @@ type TestServer struct { // Addr is the listen address. It is set to a random free port on `localhost` // by default. Addr string + // Listening is closed when the listener is run. + Listening chan struct{} l net.Listener stopFlag int32 @@ -116,11 +118,12 @@ type TestServer struct { func NewTestServer(t testing.T, service string, ca *structs.CARoot) *TestServer { ports := freeport.GetT(t, 1) return &TestServer{ - Service: service, - CA: ca, - stopChan: make(chan struct{}), - TLSCfg: TestTLSConfig(t, service, ca), - Addr: fmt.Sprintf("localhost:%d", ports[0]), + Service: service, + CA: ca, + stopChan: make(chan struct{}), + TLSCfg: TestTLSConfig(t, service, ca), + Addr: fmt.Sprintf("127.0.0.1:%d", ports[0]), + Listening: make(chan struct{}), } } @@ -132,6 +135,7 @@ func (s *TestServer) Serve() error { if err != nil { return err } + close(s.Listening) s.l = l log.Printf("test connect service listening on %s", s.Addr) @@ -172,7 +176,21 @@ func (s *TestServer) ServeHTTPS(h http.Handler) error { Handler: h, } log.Printf("starting test connect HTTPS server on %s", s.Addr) - return srv.ListenAndServeTLS("", "") + + // Use our own listener so we can signal when it's ready. + l, err := net.Listen("tcp", s.Addr) + if err != nil { + return err + } + close(s.Listening) + s.l = l + log.Printf("test connect service listening on %s", s.Addr) + + err = srv.ServeTLS(l, "", "") + if atomic.LoadInt32(&s.stopFlag) == 1 { + return nil + } + return err } // Close stops a TestServer From aa10fb2f48f2e0440c9c51a7814c0386daa3b0e2 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Tue, 24 Apr 2018 11:50:31 -0700 Subject: [PATCH 166/539] Clarify some comments and names around CA bootstrapping --- agent/connect/ca_provider.go | 4 +- agent/consul/connect_ca_endpoint.go | 22 ++++---- agent/consul/connect_ca_provider.go | 8 +-- agent/consul/fsm/commands_oss_test.go | 2 +- agent/consul/leader.go | 28 +++++----- agent/consul/server.go | 6 +-- agent/consul/state/connect_ca.go | 76 ++++++++++++++------------- agent/structs/connect_ca.go | 8 +-- 8 files changed, 80 insertions(+), 74 deletions(-) diff --git a/agent/connect/ca_provider.go b/agent/connect/ca_provider.go index 9a53d02a0..cb7219669 100644 --- a/agent/connect/ca_provider.go +++ b/agent/connect/ca_provider.go @@ -29,8 +29,8 @@ type CAProvider interface { // SignCA signs a CA CSR and returns the resulting cross-signed cert. SignCA(*x509.CertificateRequest) (string, error) - // Teardown performs any necessary cleanup that should happen when the provider + // Cleanup performs any necessary cleanup that should happen when the provider // is shut down permanently, such as removing a temporary PKI backend in Vault // created for an intermediate CA. - Teardown() error + Cleanup() error } diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 9a3adeb99..5e3c2f6c7 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -52,7 +52,7 @@ func (s *ConnectCA) ConfigurationSet( return err } - // This action requires operator read access. + // This action requires operator write access. 
rule, err := s.srv.resolveToken(args.Token) if err != nil { return err @@ -133,7 +133,7 @@ func (s *ConnectCA) ConfigurationSet( } // Add the cross signed cert to the new root's intermediates - newActiveRoot.Intermediates = []string{xcCert} + newActiveRoot.IntermediateCerts = []string{xcCert} // Update the roots and CA config in the state store at the same time idx, roots, err := state.CARoots(nil) @@ -166,11 +166,11 @@ func (s *ConnectCA) ConfigurationSet( // and call teardown on the old provider s.srv.setCAProvider(newProvider) - if err := oldProvider.Teardown(); err != nil { - return err + if err := oldProvider.Cleanup(); err != nil { + s.srv.logger.Printf("[WARN] connect: failed to clean up old provider %q", config.Provider) } - s.srv.logger.Printf("[INFO] connect: CA rotated to the new root under %q provider", args.Config.Provider) + s.srv.logger.Printf("[INFO] connect: CA rotated to new root under provider %q", args.Config.Provider) return nil } @@ -205,12 +205,12 @@ func (s *ConnectCA) Roots( // directly to the structure in the memdb store. reply.Roots[i] = &structs.CARoot{ - ID: r.ID, - Name: r.Name, - RootCert: r.RootCert, - Intermediates: r.Intermediates, - RaftIndex: r.RaftIndex, - Active: r.Active, + ID: r.ID, + Name: r.Name, + RootCert: r.RootCert, + IntermediateCerts: r.IntermediateCerts, + RaftIndex: r.RaftIndex, + Active: r.Active, } if r.Active { diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go index 6f0508ce1..8a3c81b2b 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/consul/connect_ca_provider.go @@ -167,7 +167,7 @@ func (c *ConsulCAProvider) GenerateIntermediate() (*structs.CARoot, *x509.Certif return nil, nil, err } - id := &connect.SpiffeIDSigning{ClusterID: config.ClusterSerial, Domain: "consul"} + id := &connect.SpiffeIDSigning{ClusterID: config.ClusterID, Domain: "consul"} template := &x509.CertificateRequest{ URIs: []*url.URL{id.URI()}, } @@ -198,7 +198,7 @@ func (c *ConsulCAProvider) GenerateIntermediate() (*structs.CARoot, *x509.Certif } // Remove the state store entry for this provider instance. 
-func (c *ConsulCAProvider) Teardown() error { +func (c *ConsulCAProvider) Cleanup() error { args := &structs.CARequest{ Op: structs.CAOpDeleteProviderState, ProviderState: &structs.CAConsulProviderState{ID: c.id}, @@ -336,7 +336,7 @@ func (c *ConsulCAProvider) SignCA(csr *x509.CertificateRequest) (string, error) if err != nil { return "", err } - id := &connect.SpiffeIDSigning{ClusterID: config.ClusterSerial, Domain: "consul"} + id := &connect.SpiffeIDSigning{ClusterID: config.ClusterID, Domain: "consul"} keyId, err := connect.KeyId(privKey.Public()) if err != nil { return "", err @@ -423,7 +423,7 @@ func (c *ConsulCAProvider) generateCA(privateKey, contents string, sn uint64) (* if pemContents == "" { // The URI (SPIFFE compatible) for the cert - id := &connect.SpiffeIDSigning{ClusterID: config.ClusterSerial, Domain: "consul"} + id := &connect.SpiffeIDSigning{ClusterID: config.ClusterID, Domain: "consul"} keyId, err := connect.KeyId(privKey.Public()) if err != nil { return nil, err diff --git a/agent/consul/fsm/commands_oss_test.go b/agent/consul/fsm/commands_oss_test.go index a6552240c..a52e6d7b6 100644 --- a/agent/consul/fsm/commands_oss_test.go +++ b/agent/consul/fsm/commands_oss_test.go @@ -1251,7 +1251,7 @@ func TestFSM_CAConfig(t *testing.T) { if err != nil { t.Fatalf("err: %v", err) } - var conf *connect.ConsulCAProviderConfig + var conf *structs.ConsulCAProviderConfig if err := mapstructure.WeakDecode(config.Config, &conf); err != nil { t.Fatalf("error decoding config: %s, %v", err, config.Config) } diff --git a/agent/consul/leader.go b/agent/consul/leader.go index 91bacee2f..282393cd3 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -7,16 +7,15 @@ import ( "sync" "time" - "github.com/hashicorp/consul/agent/connect" - uuid "github.com/hashicorp/go-uuid" - "github.com/armon/go-metrics" "github.com/hashicorp/consul/acl" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/consul/autopilot" "github.com/hashicorp/consul/agent/metadata" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/types" + uuid "github.com/hashicorp/go-uuid" "github.com/hashicorp/go-version" "github.com/hashicorp/raft" "github.com/hashicorp/serf/serf" @@ -215,7 +214,7 @@ func (s *Server) establishLeadership() error { s.autopilot.Start() // todo(kyhavlov): start a goroutine here for handling periodic CA rotation - s.bootstrapCA() + s.initializeCA() s.setConsistentReadReady() return nil @@ -366,8 +365,9 @@ func (s *Server) getOrCreateAutopilotConfig() *autopilot.Config { return config } -// getOrCreateCAConfig is used to get the CA config, initializing it if necessary -func (s *Server) getOrCreateCAConfig() (*structs.CAConfiguration, error) { +// initializeCAConfig is used to initialize the CA config if necessary +// when setting up the CA during establishLeadership +func (s *Server) initializeCAConfig() (*structs.CAConfiguration, error) { state := s.fsm.State() _, config, err := state.CAConfig() if err != nil { @@ -377,13 +377,13 @@ func (s *Server) getOrCreateCAConfig() (*structs.CAConfiguration, error) { return config, nil } - sn, err := uuid.GenerateUUID() + id, err := uuid.GenerateUUID() if err != nil { return nil, err } config = s.config.CAConfig - config.ClusterSerial = sn + config.ClusterID = id req := structs.CARequest{ Op: structs.CAOpSetConfig, Config: config, @@ -395,9 +395,10 @@ func (s *Server) getOrCreateCAConfig() (*structs.CAConfiguration, error) { return config, nil } -// bootstrapCA creates a 
CA provider from the current configuration. -func (s *Server) bootstrapCA() error { - conf, err := s.getOrCreateCAConfig() +// initializeCA sets up the CA provider when gaining leadership, bootstrapping +// the root in the state store if necessary. +func (s *Server) initializeCA() error { + conf, err := s.initializeCAConfig() if err != nil { return err } @@ -424,7 +425,10 @@ func (s *Server) bootstrapCA() error { if err != nil { return err } - if root != nil && root.ID == trustedCA.ID { + if root != nil { + if root.ID != trustedCA.ID { + s.logger.Printf("[WARN] connect: CA root %q is not the active root (%q)", trustedCA.ID, root.ID) + } return nil } diff --git a/agent/consul/server.go b/agent/consul/server.go index fef016829..e15d5f71c 100644 --- a/agent/consul/server.go +++ b/agent/consul/server.go @@ -17,9 +17,8 @@ import ( "sync/atomic" "time" - "github.com/hashicorp/consul/agent/connect" - "github.com/hashicorp/consul/acl" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/consul/autopilot" "github.com/hashicorp/consul/agent/consul/fsm" "github.com/hashicorp/consul/agent/consul/state" @@ -98,7 +97,8 @@ type Server struct { // autopilotWaitGroup is used to block until Autopilot shuts down. autopilotWaitGroup sync.WaitGroup - // caProvider is the current CA provider in use for Connect. + // caProvider is the current CA provider in use for Connect. This is + // only non-nil when we are the leader. caProvider connect.CAProvider caProviderLock sync.RWMutex diff --git a/agent/consul/state/connect_ca.go b/agent/consul/state/connect_ca.go index 17e274992..99986c891 100644 --- a/agent/consul/state/connect_ca.go +++ b/agent/consul/state/connect_ca.go @@ -8,17 +8,38 @@ import ( ) const ( - caConfigTableName = "connect-ca-config" - caRootTableName = "connect-ca-roots" - caProviderTableName = "connect-ca-builtin" + caBuiltinProviderTableName = "connect-ca-builtin" + caConfigTableName = "connect-ca-config" + caRootTableName = "connect-ca-roots" ) +// caBuiltinProviderTableSchema returns a new table schema used for storing +// the built-in CA provider's state for connect. This is only used by +// the internal Consul CA provider. +func caBuiltinProviderTableSchema() *memdb.TableSchema { + return &memdb.TableSchema{ + Name: caBuiltinProviderTableName, + Indexes: map[string]*memdb.IndexSchema{ + "id": &memdb.IndexSchema{ + Name: "id", + AllowMissing: false, + Unique: true, + Indexer: &memdb.StringFieldIndex{ + Field: "ID", + }, + }, + }, + } +} + // caConfigTableSchema returns a new table schema used for storing // the CA config for Connect. func caConfigTableSchema() *memdb.TableSchema { return &memdb.TableSchema{ Name: caConfigTableName, Indexes: map[string]*memdb.IndexSchema{ + // This table only stores one row, so this just ignores the ID field + // and always overwrites the same config object. "id": &memdb.IndexSchema{ Name: "id", AllowMissing: true, @@ -49,29 +70,10 @@ func caRootTableSchema() *memdb.TableSchema { } } -// caProviderTableSchema returns a new table schema used for storing -// the built-in CA provider's state for connect. This is only used by -// the internal Consul CA provider. 
-func caProviderTableSchema() *memdb.TableSchema { - return &memdb.TableSchema{ - Name: caProviderTableName, - Indexes: map[string]*memdb.IndexSchema{ - "id": &memdb.IndexSchema{ - Name: "id", - AllowMissing: false, - Unique: true, - Indexer: &memdb.StringFieldIndex{ - Field: "ID", - }, - }, - }, - } -} - func init() { + registerSchema(caBuiltinProviderTableSchema) registerSchema(caConfigTableSchema) registerSchema(caRootTableSchema) - registerSchema(caProviderTableSchema) } // CAConfig is used to pull the CA config from the snapshot. @@ -170,7 +172,7 @@ func (s *Store) caSetConfigTxn(idx uint64, tx *memdb.Txn, config *structs.CAConf if prev != nil { existing := prev.(*structs.CAConfiguration) config.CreateIndex = existing.CreateIndex - config.ClusterSerial = existing.ClusterSerial + config.ClusterID = existing.ClusterID } else { config.CreateIndex = idx } @@ -319,7 +321,7 @@ func (s *Store) CARootSetCAS(idx, cidx uint64, rs []*structs.CARoot) (bool, erro // CAProviderState is used to pull the built-in provider state from the snapshot. func (s *Snapshot) CAProviderState() (*structs.CAConsulProviderState, error) { - c, err := s.tx.First(caProviderTableName, "id") + c, err := s.tx.First(caBuiltinProviderTableName, "id") if err != nil { return nil, err } @@ -334,7 +336,7 @@ func (s *Snapshot) CAProviderState() (*structs.CAConsulProviderState, error) { // CAProviderState is used when restoring from a snapshot. func (s *Restore) CAProviderState(state *structs.CAConsulProviderState) error { - if err := s.tx.Insert(caProviderTableName, state); err != nil { + if err := s.tx.Insert(caBuiltinProviderTableName, state); err != nil { return fmt.Errorf("failed restoring built-in CA state: %s", err) } @@ -347,10 +349,10 @@ func (s *Store) CAProviderState(id string) (uint64, *structs.CAConsulProviderSta defer tx.Abort() // Get the index - idx := maxIndexTxn(tx, caProviderTableName) + idx := maxIndexTxn(tx, caBuiltinProviderTableName) // Get the provider config - c, err := tx.First(caProviderTableName, "id", id) + c, err := tx.First(caBuiltinProviderTableName, "id", id) if err != nil { return 0, nil, fmt.Errorf("failed built-in CA state lookup: %s", err) } @@ -369,10 +371,10 @@ func (s *Store) CAProviderStates() (uint64, []*structs.CAConsulProviderState, er defer tx.Abort() // Get the index - idx := maxIndexTxn(tx, caProviderTableName) + idx := maxIndexTxn(tx, caBuiltinProviderTableName) // Get all - iter, err := tx.Get(caProviderTableName, "id") + iter, err := tx.Get(caBuiltinProviderTableName, "id") if err != nil { return 0, nil, fmt.Errorf("failed CA provider state lookup: %s", err) } @@ -390,7 +392,7 @@ func (s *Store) CASetProviderState(idx uint64, state *structs.CAConsulProviderSt defer tx.Abort() // Check for an existing config - existing, err := tx.First(caProviderTableName, "id", state.ID) + existing, err := tx.First(caBuiltinProviderTableName, "id", state.ID) if err != nil { return false, fmt.Errorf("failed built-in CA state lookup: %s", err) } @@ -403,12 +405,12 @@ func (s *Store) CASetProviderState(idx uint64, state *structs.CAConsulProviderSt } state.ModifyIndex = idx - if err := tx.Insert(caProviderTableName, state); err != nil { + if err := tx.Insert(caBuiltinProviderTableName, state); err != nil { return false, fmt.Errorf("failed updating built-in CA state: %s", err) } // Update the index - if err := tx.Insert("index", &IndexEntry{caProviderTableName, idx}); err != nil { + if err := tx.Insert("index", &IndexEntry{caBuiltinProviderTableName, idx}); err != nil { return false, 
fmt.Errorf("failed updating index: %s", err) } @@ -423,10 +425,10 @@ func (s *Store) CADeleteProviderState(id string) error { defer tx.Abort() // Get the index - idx := maxIndexTxn(tx, caProviderTableName) + idx := maxIndexTxn(tx, caBuiltinProviderTableName) // Check for an existing config - existing, err := tx.First(caProviderTableName, "id", id) + existing, err := tx.First(caBuiltinProviderTableName, "id", id) if err != nil { return fmt.Errorf("failed built-in CA state lookup: %s", err) } @@ -437,10 +439,10 @@ func (s *Store) CADeleteProviderState(id string) error { providerState := existing.(*structs.CAConsulProviderState) // Do the delete and update the index - if err := tx.Delete(caProviderTableName, providerState); err != nil { + if err := tx.Delete(caBuiltinProviderTableName, providerState); err != nil { return err } - if err := tx.Insert("index", &IndexEntry{caProviderTableName, idx}); err != nil { + if err := tx.Insert("index", &IndexEntry{caBuiltinProviderTableName, idx}); err != nil { return fmt.Errorf("failed updating index: %s", err) } diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 33c355fca..39c46f0c4 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -31,9 +31,9 @@ type CARoot struct { // RootCert is the PEM-encoded public certificate. RootCert string - // Intermediates is a list of PEM-encoded intermediate certs to + // IntermediateCerts is a list of PEM-encoded intermediate certs to // attach to any leaf certs signed by this CA. - Intermediates []string + IntermediateCerts []string // SigningCert is the PEM-encoded signing certificate and SigningKey // is the PEM-encoded private key for the signing certificate. These @@ -146,8 +146,8 @@ const ( // CAConfiguration is the configuration for the current CA plugin. type CAConfiguration struct { - // Unique identifier for the cluster - ClusterSerial string `json:"-"` + // ClusterID is a unique identifier for the cluster + ClusterID string `json:"-"` // Provider is the CA provider implementation to use. Provider string From 44b30476cb0f1ecb835ff01efba7d0af2b517e38 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Tue, 24 Apr 2018 16:16:37 -0700 Subject: [PATCH 167/539] Simplify the CA provider interface by moving some logic out --- agent/connect/ca.go | 31 +++- agent/connect/ca_provider.go | 23 ++- agent/consul/connect_ca_endpoint.go | 35 +++-- agent/consul/connect_ca_provider.go | 217 +++++++++++----------------- agent/consul/leader.go | 29 +++- agent/consul/state/connect_ca.go | 2 +- agent/structs/connect_ca.go | 2 +- 7 files changed, 169 insertions(+), 170 deletions(-) diff --git a/agent/connect/ca.go b/agent/connect/ca.go index 818af9f9f..87b01994e 100644 --- a/agent/connect/ca.go +++ b/agent/connect/ca.go @@ -1,8 +1,11 @@ package connect import ( + "bytes" "crypto" "crypto/ecdsa" + "crypto/rsa" + "crypto/sha1" "crypto/sha256" "crypto/x509" "encoding/pem" @@ -25,6 +28,23 @@ func ParseCert(pemValue string) (*x509.Certificate, error) { return x509.ParseCertificate(block.Bytes) } +// ParseCertFingerprint parses the x509 certificate from a PEM-encoded value +// and returns the SHA-1 fingerprint. +func ParseCertFingerprint(pemValue string) (string, error) { + // The _ result below is not an error but the remaining PEM bytes. 
+ block, _ := pem.Decode([]byte(pemValue)) + if block == nil { + return "", fmt.Errorf("no PEM-encoded data found") + } + + hash := sha1.Sum(block.Bytes) + hexified := make([][]byte, len(hash)) + for i, data := range hash { + hexified[i] = []byte(fmt.Sprintf("%02X", data)) + } + return string(bytes.Join(hexified, []byte(":"))), nil +} + // ParseSigner parses a crypto.Signer from a PEM-encoded key. The private key // is expected to be the first block in the PEM value. func ParseSigner(pemValue string) (crypto.Signer, error) { @@ -38,6 +58,9 @@ func ParseSigner(pemValue string) (crypto.Signer, error) { case "EC PRIVATE KEY": return x509.ParseECPrivateKey(block.Bytes) + case "RSA PRIVATE KEY": + return x509.ParsePKCS1PrivateKey(block.Bytes) + case "PRIVATE KEY": signer, err := x509.ParsePKCS8PrivateKey(block.Bytes) if err != nil { @@ -74,15 +97,17 @@ func ParseCSR(pemValue string) (*x509.CertificateRequest, error) { // KeyId returns a x509 KeyId from the given signing key. The key must be // an *ecdsa.PublicKey currently, but may support more types in the future. func KeyId(raw interface{}) ([]byte, error) { - pub, ok := raw.(*ecdsa.PublicKey) - if !ok { + switch raw.(type) { + case *ecdsa.PublicKey: + case *rsa.PublicKey: + default: return nil, fmt.Errorf("invalid key type: %T", raw) } // This is not standard; RFC allows any unique identifier as long as they // match in subject/authority chains but suggests specific hashing of DER // bytes of public key including DER tags. - bs, err := x509.MarshalPKIXPublicKey(pub) + bs, err := x509.MarshalPKIXPublicKey(raw) if err != nil { return nil, err } diff --git a/agent/connect/ca_provider.go b/agent/connect/ca_provider.go index cb7219669..1eb5bde11 100644 --- a/agent/connect/ca_provider.go +++ b/agent/connect/ca_provider.go @@ -13,21 +13,28 @@ type CAProvider interface { // Active root returns the currently active root CA for this // provider. This should be a parent of the certificate returned by // ActiveIntermediate() - ActiveRoot() (*structs.CARoot, error) + ActiveRoot() (string, error) // ActiveIntermediate returns the current signing cert used by this // provider for generating SPIFFE leaf certs. - ActiveIntermediate() (*structs.CARoot, error) + ActiveIntermediate() (string, error) - // GenerateIntermediate returns a new intermediate signing cert, a - // cross-signing CSR for it and sets it to the active intermediate. - GenerateIntermediate() (*structs.CARoot, *x509.CertificateRequest, error) + // GenerateIntermediate returns a new intermediate signing cert and + // sets it to the active intermediate. + GenerateIntermediate() (string, error) // Sign signs a leaf certificate used by Connect proxies from a CSR. - Sign(*SpiffeIDService, *x509.CertificateRequest) (*structs.IssuedCert, error) + Sign(*x509.CertificateRequest) (*structs.IssuedCert, error) - // SignCA signs a CA CSR and returns the resulting cross-signed cert. - SignCA(*x509.CertificateRequest) (string, error) + // CrossSignCA must accept a CA certificate signed by another CA's key + // and cross sign it exactly as it is such that it forms a chain back the the + // CAProvider's current root. Specifically, the Distinguished Name, Subject + // Alternative Name, SubjectKeyID and other relevant extensions must be kept. + // The resulting certificate must have a distinct Serial Number and the + // AuthorityKeyID set to the CAProvider's current signing key as well as the + // Issuer related fields changed as necessary. The resulting certificate is + // returned as a PEM formatted string. 
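The `ParseCertFingerprint` helper added above is what lets the endpoint and leader code later in this patch derive a CA root's ID from the certificate itself instead of a random UUID. Below is a minimal standalone sketch of the same colon-separated SHA-1 fingerprint format; the `newCert` helper and the `main` wrapper are assumptions made only to keep the example runnable, not code from this patch.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/sha1"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"strings"
	"time"
)

// fingerprint hashes the DER bytes of a PEM-encoded certificate and renders
// the SHA-1 digest as colon-separated upper-case hex, e.g. "0F:3C:...".
func fingerprint(pemValue string) (string, error) {
	block, _ := pem.Decode([]byte(pemValue))
	if block == nil {
		return "", fmt.Errorf("no PEM-encoded data found")
	}
	sum := sha1.Sum(block.Bytes)
	parts := make([]string, len(sum))
	for i, b := range sum {
		parts[i] = fmt.Sprintf("%02X", b)
	}
	return strings.Join(parts, ":"), nil
}

// newCert creates a throwaway self-signed certificate purely so the sketch
// has something to fingerprint.
func newCert() (string, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return "", err
	}
	tmpl := x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign,
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		return "", err
	}
	return string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})), nil
}

func main() {
	certPEM, err := newCert()
	if err != nil {
		panic(err)
	}
	fp, err := fingerprint(certPEM)
	if err != nil {
		panic(err)
	}
	fmt.Println(fp)
}
```

Running it prints a value shaped like the IDs this patch starts storing in `CARoot.ID`.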
+ CrossSignCA(*x509.Certificate) (string, error) // Cleanup performs any necessary cleanup that should happen when the provider // is shut down permanently, such as removing a temporary PKI backend in Vault diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 5e3c2f6c7..b0041423e 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -80,11 +80,22 @@ func (s *ConnectCA) ConfigurationSet( return fmt.Errorf("could not initialize provider: %v", err) } - newActiveRoot, err := newProvider.ActiveRoot() + newRootPEM, err := newProvider.ActiveRoot() if err != nil { return err } + id, err := connect.ParseCertFingerprint(newRootPEM) + if err != nil { + return fmt.Errorf("error parsing root fingerprint: %v", err) + } + newActiveRoot := &structs.CARoot{ + ID: id, + Name: fmt.Sprintf("%s CA Root Cert", config.Provider), + RootCert: newRootPEM, + Active: true, + } + // Compare the new provider's root CA ID to the current one. If they // match, just update the existing provider with the new config. // If they don't match, begin the root rotation process. @@ -121,13 +132,19 @@ func (s *ConnectCA) ConfigurationSet( // to get the cross-signed intermediate // 3. Get the active root for the new provider, append the intermediate from step 3 // to its list of intermediates - _, csr, err := newProvider.GenerateIntermediate() + intermediatePEM, err := newProvider.GenerateIntermediate() if err != nil { return err } + intermediateCA, err := connect.ParseCert(intermediatePEM) + if err != nil { + return err + } + + // Have the old provider cross-sign the new intermediate oldProvider := s.srv.getCAProvider() - xcCert, err := oldProvider.SignCA(csr) + xcCert, err := oldProvider.CrossSignCA(intermediateCA) if err != nil { return err } @@ -237,21 +254,11 @@ func (s *ConnectCA) Sign( return err } - // Parse the SPIFFE ID - spiffeId, err := connect.ParseCertURI(csr.URIs[0]) - if err != nil { - return err - } - serviceId, ok := spiffeId.(*connect.SpiffeIDService) - if !ok { - return fmt.Errorf("SPIFFE ID in CSR must be a service ID") - } - // todo(kyhavlov): more validation on the CSR before signing provider := s.srv.getCAProvider() - cert, err := provider.Sign(serviceId, csr) + cert, err := provider.Sign(csr) if err != nil { return err } diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go index 8a3c81b2b..2f32a3d67 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/consul/connect_ca_provider.go @@ -16,7 +16,6 @@ import ( "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/structs" - uuid "github.com/hashicorp/go-uuid" "github.com/mitchellh/mapstructure" ) @@ -96,12 +95,16 @@ func NewConsulCAProvider(rawConfig map[string]interface{}, srv *Server) (*Consul newState.PrivateKey = conf.PrivateKey } - // Generate the root CA - ca, err := provider.generateCA(newState.PrivateKey, conf.RootCert, idx+1) - if err != nil { - return nil, fmt.Errorf("error generating CA: %v", err) + // Generate the root CA if necessary + if conf.RootCert == "" { + ca, err := provider.generateCA(newState.PrivateKey, idx+1) + if err != nil { + return nil, fmt.Errorf("error generating CA: %v", err) + } + newState.RootCert = ca + } else { + newState.RootCert = conf.RootCert } - newState.CARoot = ca // Write the provider state args := &structs.CARequest{ @@ -133,68 +136,33 @@ func decodeConfig(raw map[string]interface{}) (*ConsulCAProviderConfig, error) { } // Return the active root CA and generate a new one 
if needed -func (c *ConsulCAProvider) ActiveRoot() (*structs.CARoot, error) { +func (c *ConsulCAProvider) ActiveRoot() (string, error) { state := c.srv.fsm.State() _, providerState, err := state.CAProviderState(c.id) if err != nil { - return nil, err + return "", err } - return providerState.CARoot, nil + return providerState.RootCert, nil } // We aren't maintaining separate root/intermediate CAs for the builtin // provider, so just return the root. -func (c *ConsulCAProvider) ActiveIntermediate() (*structs.CARoot, error) { +func (c *ConsulCAProvider) ActiveIntermediate() (string, error) { return c.ActiveRoot() } // We aren't maintaining separate root/intermediate CAs for the builtin // provider, so just generate a CSR for the active root. -func (c *ConsulCAProvider) GenerateIntermediate() (*structs.CARoot, *x509.CertificateRequest, error) { +func (c *ConsulCAProvider) GenerateIntermediate() (string, error) { ca, err := c.ActiveIntermediate() if err != nil { - return nil, nil, err + return "", err } - state := c.srv.fsm.State() - _, providerState, err := state.CAProviderState(c.id) - if err != nil { - return nil, nil, err - } - _, config, err := state.CAConfig() - if err != nil { - return nil, nil, err - } + // todo(kyhavlov): make a new intermediate here - id := &connect.SpiffeIDSigning{ClusterID: config.ClusterID, Domain: "consul"} - template := &x509.CertificateRequest{ - URIs: []*url.URL{id.URI()}, - } - - signer, err := connect.ParseSigner(providerState.PrivateKey) - if err != nil { - return nil, nil, err - } - - // Create the CSR itself - var csrBuf bytes.Buffer - bs, err := x509.CreateCertificateRequest(rand.Reader, template, signer) - if err != nil { - return nil, nil, fmt.Errorf("error creating CSR: %s", err) - } - - err = pem.Encode(&csrBuf, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: bs}) - if err != nil { - return nil, nil, fmt.Errorf("error encoding CSR: %s", err) - } - - csr, err := connect.ParseCSR(csrBuf.String()) - if err != nil { - return nil, nil, err - } - - return ca, csr, err + return ca, err } // Remove the state store entry for this provider instance. @@ -216,7 +184,7 @@ func (c *ConsulCAProvider) Cleanup() error { // Sign returns a new certificate valid for the given SpiffeIDService // using the current CA. -func (c *ConsulCAProvider) Sign(serviceId *connect.SpiffeIDService, csr *x509.CertificateRequest) (*structs.IssuedCert, error) { +func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (*structs.IssuedCert, error) { // Lock during the signing so we don't use the same index twice // for different cert serial numbers. c.Lock() @@ -242,8 +210,18 @@ func (c *ConsulCAProvider) Sign(serviceId *connect.SpiffeIDService, csr *x509.Ce return nil, err } + // Parse the SPIFFE ID + spiffeId, err := connect.ParseCertURI(csr.URIs[0]) + if err != nil { + return nil, err + } + serviceId, ok := spiffeId.(*connect.SpiffeIDService) + if !ok { + return nil, fmt.Errorf("SPIFFE ID in CSR must be a service ID") + } + // Parse the CA cert - caCert, err := connect.ParseCert(providerState.CARoot.RootCert) + caCert, err := connect.ParseCert(providerState.RootCert) if err != nil { return nil, fmt.Errorf("error parsing CA cert: %s", err) } @@ -312,8 +290,8 @@ func (c *ConsulCAProvider) Sign(serviceId *connect.SpiffeIDService, csr *x509.Ce }, nil } -// SignCA returns an intermediate CA cert signed by the current active root. 
-func (c *ConsulCAProvider) SignCA(csr *x509.CertificateRequest) (string, error) { +// CrossSignCA returns the given intermediate CA cert signed by the current active root. +func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { c.Lock() defer c.Unlock() @@ -329,44 +307,28 @@ func (c *ConsulCAProvider) SignCA(csr *x509.CertificateRequest) (string, error) return "", fmt.Errorf("error parsing private key %q: %v", providerState.PrivateKey, err) } - name := fmt.Sprintf("Consul cross-signed CA %d", providerState.LeafIndex+1) - - // The URI (SPIFFE compatible) for the cert - _, config, err := state.CAConfig() + rootCA, err := connect.ParseCert(providerState.RootCert) if err != nil { return "", err } - id := &connect.SpiffeIDSigning{ClusterID: config.ClusterID, Domain: "consul"} + keyId, err := connect.KeyId(privKey.Public()) if err != nil { return "", err } - // Create the CA cert + // Create the cross-signing template from the existing root CA serialNum := &big.Int{} serialNum.SetUint64(providerState.LeafIndex + 1) - template := x509.Certificate{ - SerialNumber: serialNum, - Subject: pkix.Name{CommonName: name}, - URIs: csr.URIs, - Signature: csr.Signature, - PublicKeyAlgorithm: csr.PublicKeyAlgorithm, - PublicKey: csr.PublicKey, - PermittedDNSDomainsCritical: true, - PermittedDNSDomains: []string{id.URI().Hostname()}, - BasicConstraintsValid: true, - KeyUsage: x509.KeyUsageCertSign | - x509.KeyUsageCRLSign | - x509.KeyUsageDigitalSignature, - IsCA: true, - NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), - NotBefore: time.Now(), - AuthorityKeyId: keyId, - SubjectKeyId: keyId, - } + template := *cert + template.SerialNumber = serialNum + template.Subject = rootCA.Subject + template.SignatureAlgorithm = rootCA.SignatureAlgorithm + template.SubjectKeyId = keyId + template.AuthorityKeyId = keyId bs, err := x509.CreateCertificate( - rand.Reader, &template, &template, privKey.Public(), privKey) + rand.Reader, &template, rootCA, cert.PublicKey, privKey) if err != nil { return "", fmt.Errorf("error generating CA certificate: %s", err) } @@ -405,75 +367,58 @@ func generatePrivateKey() (string, error) { } // generateCA makes a new root CA using the current private key -func (c *ConsulCAProvider) generateCA(privateKey, contents string, sn uint64) (*structs.CARoot, error) { +func (c *ConsulCAProvider) generateCA(privateKey string, sn uint64) (string, error) { state := c.srv.fsm.State() _, config, err := state.CAConfig() if err != nil { - return nil, err + return "", err } privKey, err := connect.ParseSigner(privateKey) if err != nil { - return nil, fmt.Errorf("error parsing private key %q: %v", privateKey, err) + return "", fmt.Errorf("error parsing private key %q: %v", privateKey, err) } name := fmt.Sprintf("Consul CA %d", sn) - pemContents := contents - - if pemContents == "" { - // The URI (SPIFFE compatible) for the cert - id := &connect.SpiffeIDSigning{ClusterID: config.ClusterID, Domain: "consul"} - keyId, err := connect.KeyId(privKey.Public()) - if err != nil { - return nil, err - } - - // Create the CA cert - serialNum := &big.Int{} - serialNum.SetUint64(sn) - template := x509.Certificate{ - SerialNumber: serialNum, - Subject: pkix.Name{CommonName: name}, - URIs: []*url.URL{id.URI()}, - PermittedDNSDomainsCritical: true, - PermittedDNSDomains: []string{id.URI().Hostname()}, - BasicConstraintsValid: true, - KeyUsage: x509.KeyUsageCertSign | - x509.KeyUsageCRLSign | - x509.KeyUsageDigitalSignature, - IsCA: true, - NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), - 
NotBefore: time.Now(), - AuthorityKeyId: keyId, - SubjectKeyId: keyId, - } - - bs, err := x509.CreateCertificate( - rand.Reader, &template, &template, privKey.Public(), privKey) - if err != nil { - return nil, fmt.Errorf("error generating CA certificate: %s", err) - } - - var buf bytes.Buffer - err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) - if err != nil { - return nil, fmt.Errorf("error encoding private key: %s", err) - } - - pemContents = buf.String() - } - - // Generate an ID for the new CA cert - rootId, err := uuid.GenerateUUID() + // The URI (SPIFFE compatible) for the cert + id := &connect.SpiffeIDSigning{ClusterID: config.ClusterID, Domain: "consul"} + keyId, err := connect.KeyId(privKey.Public()) if err != nil { - return nil, err + return "", err } - return &structs.CARoot{ - ID: rootId, - Name: name, - RootCert: pemContents, - Active: true, - }, nil + // Create the CA cert + serialNum := &big.Int{} + serialNum.SetUint64(sn) + template := x509.Certificate{ + SerialNumber: serialNum, + Subject: pkix.Name{CommonName: name}, + URIs: []*url.URL{id.URI()}, + PermittedDNSDomainsCritical: true, + PermittedDNSDomains: []string{id.URI().Hostname()}, + BasicConstraintsValid: true, + KeyUsage: x509.KeyUsageCertSign | + x509.KeyUsageCRLSign | + x509.KeyUsageDigitalSignature, + IsCA: true, + NotAfter: time.Now().Add(10 * 365 * 24 * time.Hour), + NotBefore: time.Now(), + AuthorityKeyId: keyId, + SubjectKeyId: keyId, + } + + bs, err := x509.CreateCertificate( + rand.Reader, &template, &template, privKey.Public(), privKey) + if err != nil { + return "", fmt.Errorf("error generating CA certificate: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) + if err != nil { + return "", fmt.Errorf("error encoding private key: %s", err) + } + + return buf.String(), nil } diff --git a/agent/consul/leader.go b/agent/consul/leader.go index 282393cd3..1670add29 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -214,7 +214,9 @@ func (s *Server) establishLeadership() error { s.autopilot.Start() // todo(kyhavlov): start a goroutine here for handling periodic CA rotation - s.initializeCA() + if err := s.initializeCA(); err != nil { + return err + } s.setConsistentReadReady() return nil @@ -232,6 +234,8 @@ func (s *Server) revokeLeadership() error { return err } + s.setCAProvider(nil) + s.resetConsistentReadReady() s.autopilot.Stop() return nil @@ -412,22 +416,33 @@ func (s *Server) initializeCA() error { s.setCAProvider(provider) // Get the active root cert from the CA - trustedCA, err := provider.ActiveRoot() + rootPEM, err := provider.ActiveRoot() if err != nil { return fmt.Errorf("error getting root cert: %v", err) } + id, err := connect.ParseCertFingerprint(rootPEM) + if err != nil { + return fmt.Errorf("error parsing root fingerprint: %v", err) + } + rootCA := &structs.CARoot{ + ID: id, + Name: fmt.Sprintf("%s CA Root Cert", conf.Provider), + RootCert: rootPEM, + Active: true, + } + // Check if the CA root is already initialized and exit if it is. // Every change to the CA after this initial bootstrapping should // be done through the rotation process. 
state := s.fsm.State() - _, root, err := state.CARootActive(nil) + _, activeRoot, err := state.CARootActive(nil) if err != nil { return err } - if root != nil { - if root.ID != trustedCA.ID { - s.logger.Printf("[WARN] connect: CA root %q is not the active root (%q)", trustedCA.ID, root.ID) + if activeRoot != nil { + if activeRoot.ID != rootCA.ID { + s.logger.Printf("[WARN] connect: CA root %q is not the active root (%q)", rootCA.ID, activeRoot.ID) } return nil } @@ -442,7 +457,7 @@ func (s *Server) initializeCA() error { resp, err := s.raftApply(structs.ConnectCARequestType, &structs.CARequest{ Op: structs.CAOpSetRoots, Index: idx, - Roots: []*structs.CARoot{trustedCA}, + Roots: []*structs.CARoot{rootCA}, }) if err != nil { s.logger.Printf("[ERR] connect: Apply failed %v", err) diff --git a/agent/consul/state/connect_ca.go b/agent/consul/state/connect_ca.go index 99986c891..7c4cea294 100644 --- a/agent/consul/state/connect_ca.go +++ b/agent/consul/state/connect_ca.go @@ -62,7 +62,7 @@ func caRootTableSchema() *memdb.TableSchema { Name: "id", AllowMissing: false, Unique: true, - Indexer: &memdb.UUIDFieldIndex{ + Indexer: &memdb.StringFieldIndex{ Field: "ID", }, }, diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 39c46f0c4..4562fc1ea 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -164,7 +164,7 @@ type CAConfiguration struct { type CAConsulProviderState struct { ID string PrivateKey string - CARoot *CARoot + RootCert string LeafIndex uint64 RaftIndex From 887cc98d7e532026a54dd1341af039afd8950fa4 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Tue, 24 Apr 2018 16:31:42 -0700 Subject: [PATCH 168/539] Simplify the CAProvider.Sign method --- agent/connect/ca_provider.go | 4 +--- agent/consul/connect_ca_endpoint.go | 28 ++++++++++++++++++++---- agent/consul/connect_ca_provider.go | 33 ++++++++++++----------------- 3 files changed, 38 insertions(+), 27 deletions(-) diff --git a/agent/connect/ca_provider.go b/agent/connect/ca_provider.go index 1eb5bde11..bec028851 100644 --- a/agent/connect/ca_provider.go +++ b/agent/connect/ca_provider.go @@ -2,8 +2,6 @@ package connect import ( "crypto/x509" - - "github.com/hashicorp/consul/agent/structs" ) // CAProvider is the interface for Consul to interact with @@ -24,7 +22,7 @@ type CAProvider interface { GenerateIntermediate() (string, error) // Sign signs a leaf certificate used by Connect proxies from a CSR. 
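Because CA root IDs are now fingerprints such as `AB:CD:...` rather than UUIDs, the hunk above switches the roots table's `id` index from `memdb.UUIDFieldIndex` to `memdb.StringFieldIndex`, which accepts arbitrary strings. A rough sketch of that go-memdb pattern follows, using an illustrative table name and a cut-down `CARoot` struct rather than the real state-store schema.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/go-memdb"
)

// CARoot is a stand-in for the state-store object; only the indexed ID matters.
type CARoot struct {
	ID       string
	RootCert string
}

func main() {
	schema := &memdb.DBSchema{
		Tables: map[string]*memdb.TableSchema{
			"ca-roots": {
				Name: "ca-roots",
				Indexes: map[string]*memdb.IndexSchema{
					"id": {
						Name:         "id",
						AllowMissing: false,
						Unique:       true,
						// StringFieldIndex accepts fingerprint-style IDs;
						// UUIDFieldIndex would reject them at insert time.
						Indexer: &memdb.StringFieldIndex{Field: "ID"},
					},
				},
			},
		},
	}

	db, err := memdb.NewMemDB(schema)
	if err != nil {
		panic(err)
	}

	// Insert in a write transaction.
	txn := db.Txn(true)
	if err := txn.Insert("ca-roots", &CARoot{ID: "AB:CD:EF", RootCert: "-----BEGIN CERTIFICATE-----..."}); err != nil {
		panic(err)
	}
	txn.Commit()

	// Look up by the "id" index in a read transaction.
	read := db.Txn(false)
	raw, err := read.First("ca-roots", "id", "AB:CD:EF")
	if err != nil {
		panic(err)
	}
	fmt.Println(raw.(*CARoot).RootCert)
}
```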
- Sign(*x509.CertificateRequest) (*structs.IssuedCert, error) + Sign(*x509.CertificateRequest) (string, error) // CrossSignCA must accept a CA certificate signed by another CA's key // and cross sign it exactly as it is such that it forms a chain back the the diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index b0041423e..b952c5f87 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -254,17 +254,37 @@ func (s *ConnectCA) Sign( return err } - // todo(kyhavlov): more validation on the CSR before signing - provider := s.srv.getCAProvider() - cert, err := provider.Sign(csr) + // todo(kyhavlov): more validation on the CSR before signing + pem, err := provider.Sign(csr) + if err != nil { + return err + } + + // Parse the SPIFFE ID + spiffeId, err := connect.ParseCertURI(csr.URIs[0]) + if err != nil { + return err + } + serviceId, ok := spiffeId.(*connect.SpiffeIDService) + if !ok { + return fmt.Errorf("SPIFFE ID in CSR must be a service ID") + } + cert, err := connect.ParseCert(pem) if err != nil { return err } // Set the response - *reply = *cert + reply = &structs.IssuedCert{ + SerialNumber: connect.HexString(cert.SerialNumber.Bytes()), + CertPEM: pem, + Service: serviceId.Service, + ServiceURI: cert.URIs[0].String(), + ValidAfter: cert.NotBefore, + ValidBefore: cert.NotAfter, + } return nil } diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go index 2f32a3d67..d7d76f88e 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/consul/connect_ca_provider.go @@ -184,7 +184,7 @@ func (c *ConsulCAProvider) Cleanup() error { // Sign returns a new certificate valid for the given SpiffeIDService // using the current CA. -func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (*structs.IssuedCert, error) { +func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { // Lock during the signing so we don't use the same index twice // for different cert serial numbers. c.Lock() @@ -194,36 +194,36 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (*structs.IssuedCe state := c.srv.fsm.State() _, providerState, err := state.CAProviderState(c.id) if err != nil { - return nil, err + return "", err } // Create the keyId for the cert from the signing private key. 
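With this change the provider's `Sign` receives only the CSR and hands back a PEM string, and the RPC endpoint re-parses that string to build the `IssuedCert` reply. The following is a condensed, self-contained sketch of that CSR-to-leaf flow using only the standard library; the SPIFFE-style URI, TTLs and key usages are illustrative placeholders rather than the provider's exact values, and most error handling is elided to keep it short.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net/url"
	"os"
	"time"
)

func main() {
	// A throwaway CA. Errors from setup steps are ignored for brevity.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "sketch CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, &caTmpl, &caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// The "service" side builds a CSR carrying a SPIFFE-style URI SAN
	// (the URI format here is illustrative).
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	uri, _ := url.Parse("spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc1/svc/web")
	csrDER, _ := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{URIs: []*url.URL{uri}}, leafKey)
	csr, _ := x509.ParseCertificateRequest(csrDER)

	// The CA copies the URI SAN and public key from the CSR into a new
	// short-lived leaf template and signs it with its own key.
	leafTmpl := x509.Certificate{
		SerialNumber: big.NewInt(2),
		URIs:         csr.URIs,
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(72 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, &leafTmpl, caCert, csr.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Println("leaf URI SAN:", csr.URIs[0])
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
```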
signer, err := connect.ParseSigner(providerState.PrivateKey) if err != nil { - return nil, err + return "", err } if signer == nil { - return nil, fmt.Errorf("error signing cert: Consul CA not initialized yet") + return "", fmt.Errorf("error signing cert: Consul CA not initialized yet") } keyId, err := connect.KeyId(signer.Public()) if err != nil { - return nil, err + return "", err } // Parse the SPIFFE ID spiffeId, err := connect.ParseCertURI(csr.URIs[0]) if err != nil { - return nil, err + return "", err } serviceId, ok := spiffeId.(*connect.SpiffeIDService) if !ok { - return nil, fmt.Errorf("SPIFFE ID in CSR must be a service ID") + return "", fmt.Errorf("SPIFFE ID in CSR must be a service ID") } // Parse the CA cert caCert, err := connect.ParseCert(providerState.RootCert) if err != nil { - return nil, fmt.Errorf("error parsing CA cert: %s", err) + return "", fmt.Errorf("error parsing CA cert: %s", err) } // Cert template for generation @@ -257,11 +257,11 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (*structs.IssuedCe bs, err := x509.CreateCertificate( rand.Reader, &template, caCert, signer.Public(), signer) if err != nil { - return nil, fmt.Errorf("error generating certificate: %s", err) + return "", fmt.Errorf("error generating certificate: %s", err) } err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) if err != nil { - return nil, fmt.Errorf("error encoding private key: %s", err) + return "", fmt.Errorf("error encoding private key: %s", err) } // Increment the leaf cert index @@ -273,21 +273,14 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (*structs.IssuedCe } resp, err := c.srv.raftApply(structs.ConnectCARequestType, args) if err != nil { - return nil, err + return "", err } if respErr, ok := resp.(error); ok { - return nil, respErr + return "", respErr } // Set the response - return &structs.IssuedCert{ - SerialNumber: connect.HexString(template.SerialNumber.Bytes()), - CertPEM: buf.String(), - Service: serviceId.Service, - ServiceURI: template.URIs[0].String(), - ValidAfter: template.NotBefore, - ValidBefore: template.NotAfter, - }, nil + return buf.String(), nil } // CrossSignCA returns the given intermediate CA cert signed by the current active root. 
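The `CrossSignCA` contract described earlier in this series (keep the incoming CA certificate's subject material, reissue it under the current root with a fresh serial and the root's authority key ID) boils down to reusing the parsed certificate as the template for `x509.CreateCertificate`. Here is a standalone sketch of just those x509 mechanics, leaving out the provider's serial tracking and its extra field overrides; the `newCA` helper exists only to make the sketch runnable.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"os"
	"time"
)

// newCA returns a throwaway self-signed CA certificate and its key.
func newCA(name string, serial int64) (*x509.Certificate, *ecdsa.PrivateKey) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber:          big.NewInt(serial),
		Subject:               pkix.Name{CommonName: name},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert, key
}

func main() {
	oldRoot, oldKey := newCA("old root", 1)
	newRoot, _ := newCA("new root", 2)

	// Cross-sign: reuse the new root's certificate as the template so its
	// subject, SANs and extensions carry over, but give it a fresh serial
	// and sign it with the old root's key. Clients that only trust the old
	// root can then build a chain to leaves issued under the new root.
	tmpl := *newRoot
	tmpl.SerialNumber = big.NewInt(3)
	tmpl.AuthorityKeyId = oldRoot.SubjectKeyId

	der, err := x509.CreateCertificate(rand.Reader, &tmpl, oldRoot, newRoot.PublicKey, oldKey)
	if err != nil {
		panic(err)
	}
	fmt.Println("cross-signed intermediate:")
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```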
From 02fef5f9a2302847dbf3c6ae2510686ba07306ba Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Tue, 24 Apr 2018 17:14:30 -0700 Subject: [PATCH 169/539] Move ConsulCAProviderConfig into structs package --- agent/consul/connect_ca_provider.go | 12 +++--------- agent/consul/connect_ca_provider_test.go | 4 +--- agent/structs/connect_ca.go | 6 ++++++ 3 files changed, 10 insertions(+), 12 deletions(-) diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go index d7d76f88e..1dbe884c7 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/consul/connect_ca_provider.go @@ -19,14 +19,8 @@ import ( "github.com/mitchellh/mapstructure" ) -type ConsulCAProviderConfig struct { - PrivateKey string - RootCert string - RotationPeriod time.Duration -} - type ConsulCAProvider struct { - config *ConsulCAProviderConfig + config *structs.ConsulCAProviderConfig id string srv *Server @@ -122,8 +116,8 @@ func NewConsulCAProvider(rawConfig map[string]interface{}, srv *Server) (*Consul return provider, nil } -func decodeConfig(raw map[string]interface{}) (*ConsulCAProviderConfig, error) { - var config *ConsulCAProviderConfig +func decodeConfig(raw map[string]interface{}) (*structs.ConsulCAProviderConfig, error) { + var config *structs.ConsulCAProviderConfig if err := mapstructure.WeakDecode(raw, &config); err != nil { return nil, fmt.Errorf("error decoding config: %s", err) } diff --git a/agent/consul/connect_ca_provider_test.go b/agent/consul/connect_ca_provider_test.go index adad3acba..583f91722 100644 --- a/agent/consul/connect_ca_provider_test.go +++ b/agent/consul/connect_ca_provider_test.go @@ -28,7 +28,5 @@ func TestCAProvider_Bootstrap(t *testing.T) { state := s1.fsm.State() _, activeRoot, err := state.CARootActive(nil) assert.NoError(err) - assert.Equal(root.ID, activeRoot.ID) - assert.Equal(root.Name, activeRoot.Name) - assert.Equal(root.RootCert, activeRoot.RootCert) + assert.Equal(root, activeRoot.RootCert) } diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 4562fc1ea..c46db703a 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -160,6 +160,12 @@ type CAConfiguration struct { RaftIndex } +type ConsulCAProviderConfig struct { + PrivateKey string + RootCert string + RotationPeriod time.Duration +} + // CAConsulProviderState is used to track the built-in Consul CA provider's state. 
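Moving `ConsulCAProviderConfig` into `structs` does not change how it is populated: `decodeConfig` still turns the raw `map[string]interface{}` from the agent configuration into the typed struct with `mapstructure.WeakDecode`. A small sketch of that decoding pattern follows, using an illustrative struct; the `LeafCertTTL` field is hypothetical and exists only to show the weak string-to-int conversion.

```go
package main

import (
	"fmt"

	"github.com/mitchellh/mapstructure"
)

// providerConfig is a stand-in for a typed CA provider configuration.
type providerConfig struct {
	PrivateKey  string
	RootCert    string
	LeafCertTTL int // hypothetical field, just to demonstrate weak conversion
}

func main() {
	// Raw values as they might appear after HCL/JSON parsing.
	raw := map[string]interface{}{
		"PrivateKey":  "-----BEGIN EC PRIVATE KEY-----\n...",
		"RootCert":    "-----BEGIN CERTIFICATE-----\n...",
		"LeafCertTTL": "72", // a string, weakly converted to int below
	}

	var cfg providerConfig
	// WeakDecode matches keys to fields case-insensitively and applies weak
	// type conversions (e.g. "72" -> 72) instead of failing on mismatches.
	if err := mapstructure.WeakDecode(raw, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```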
type CAConsulProviderState struct { ID string From 21677132260a3a575a7c6e0a80af80aadb37d711 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Wed, 25 Apr 2018 11:34:08 -0700 Subject: [PATCH 170/539] Add CA config to connect section of agent config --- agent/agent.go | 18 ++++++++++++++++++ agent/config/builder.go | 11 +++++++++++ agent/config/config.go | 6 ++++-- agent/config/runtime.go | 6 ++++++ agent/config/runtime_test.go | 23 +++++++++++++++++++---- agent/consul/config.go | 3 +++ agent/consul/connect_ca_endpoint.go | 18 ++++++++++++++++++ agent/consul/connect_ca_provider.go | 4 ++-- agent/consul/leader.go | 5 +++++ 9 files changed, 86 insertions(+), 8 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 3c866bc48..8f7dd9043 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -894,6 +894,24 @@ func (a *Agent) consulConfig() (*consul.Config, error) { base.TLSCipherSuites = a.config.TLSCipherSuites base.TLSPreferServerCipherSuites = a.config.TLSPreferServerCipherSuites + // Copy the Connect CA bootstrap config + if a.config.ConnectEnabled { + base.ConnectEnabled = true + + if a.config.ConnectCAProvider != "" { + base.CAConfig.Provider = a.config.ConnectCAProvider + + // Merge with the default config if it's the consul provider. + if a.config.ConnectCAProvider == "consul" { + for k, v := range a.config.ConnectCAConfig { + base.CAConfig.Config[k] = v + } + } else { + base.CAConfig.Config = a.config.ConnectCAConfig + } + } + } + // Setup the user event callback base.UserEventHandler = func(e serf.UserEvent) { select { diff --git a/agent/config/builder.go b/agent/config/builder.go index ec36e9ab0..6ad6c70b5 100644 --- a/agent/config/builder.go +++ b/agent/config/builder.go @@ -522,6 +522,14 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) { // Connect proxy defaults. proxyBindMinPort, proxyBindMaxPort := b.connectProxyPortRange(c.Connect) + var connectEnabled bool + var connectCAProvider string + var connectCAConfig map[string]interface{} + if c.Connect != nil { + connectEnabled = b.boolVal(c.Connect.Enabled) + connectCAProvider = b.stringVal(c.Connect.CAProvider) + connectCAConfig = c.Connect.CAConfig + } // ---------------------------------------------------------------- // build runtime config @@ -641,8 +649,11 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) { CheckUpdateInterval: b.durationVal("check_update_interval", c.CheckUpdateInterval), Checks: checks, ClientAddrs: clientAddrs, + ConnectEnabled: connectEnabled, ConnectProxyBindMinPort: proxyBindMinPort, ConnectProxyBindMaxPort: proxyBindMaxPort, + ConnectCAProvider: connectCAProvider, + ConnectCAConfig: connectCAConfig, DataDir: b.stringVal(c.DataDir), Datacenter: strings.ToLower(b.stringVal(c.Datacenter)), DevMode: b.boolVal(b.Flags.DevMode), diff --git a/agent/config/config.go b/agent/config/config.go index f652c9076..5eb231472 100644 --- a/agent/config/config.go +++ b/agent/config/config.go @@ -368,8 +368,10 @@ type ServiceConnectProxy struct { type Connect struct { // Enabled opts the agent into connect. It should be set on all clients and // servers in a cluster for correct connect operation. TODO(banks) review that. 
- Enabled bool `json:"enabled,omitempty" hcl:"enabled" mapstructure:"enabled"` - ProxyDefaults *ConnectProxyDefaults `json:"proxy_defaults,omitempty" hcl:"proxy_defaults" mapstructure:"proxy_defaults"` + Enabled *bool `json:"enabled,omitempty" hcl:"enabled" mapstructure:"enabled"` + ProxyDefaults *ConnectProxyDefaults `json:"proxy_defaults,omitempty" hcl:"proxy_defaults" mapstructure:"proxy_defaults"` + CAProvider *string `json:"ca_provider,omitempty" hcl:"ca_provider" mapstructure:"ca_provider"` + CAConfig map[string]interface{} `json:"ca_config,omitempty" hcl:"ca_config" mapstructure:"ca_config"` } // ConnectProxyDefaults is the agent-global connect proxy configuration. diff --git a/agent/config/runtime.go b/agent/config/runtime.go index b31630d27..15a7ac2ba 100644 --- a/agent/config/runtime.go +++ b/agent/config/runtime.go @@ -647,6 +647,12 @@ type RuntimeConfig struct { // registration time to allow global control of defaults. ConnectProxyDefaultConfig map[string]interface{} + // ConnectCAProvider is the type of CA provider to use with Connect. + ConnectCAProvider string + + // ConnectCAConfig is the config to use for the CA provider. + ConnectCAConfig map[string]interface{} + // DNSAddrs contains the list of TCP and UDP addresses the DNS server will // bind to. If the DNS endpoint is disabled (ports.dns <= 0) the list is // empty. diff --git a/agent/config/runtime_test.go b/agent/config/runtime_test.go index 773b7a036..1db5ab207 100644 --- a/agent/config/runtime_test.go +++ b/agent/config/runtime_test.go @@ -2354,6 +2354,11 @@ func TestFullConfig(t *testing.T) { "check_update_interval": "16507s", "client_addr": "93.83.18.19", "connect": { + "ca_provider": "b8j4ynx9", + "ca_config": { + "g4cvJyys": "IRLXE9Ds", + "hyMy9Oxn": "XeBp4Sis" + }, "enabled": true, "proxy_defaults": { "bind_min_port": 2000, @@ -2811,6 +2816,11 @@ func TestFullConfig(t *testing.T) { check_update_interval = "16507s" client_addr = "93.83.18.19" connect { + ca_provider = "b8j4ynx9" + ca_config { + "g4cvJyys" = "IRLXE9Ds" + "hyMy9Oxn" = "XeBp4Sis" + } enabled = true proxy_defaults { bind_min_port = 2000 @@ -3403,10 +3413,15 @@ func TestFullConfig(t *testing.T) { DeregisterCriticalServiceAfter: 13209 * time.Second, }, }, - CheckUpdateInterval: 16507 * time.Second, - ClientAddrs: []*net.IPAddr{ipAddr("93.83.18.19")}, - ConnectProxyBindMinPort: 2000, - ConnectProxyBindMaxPort: 3000, + CheckUpdateInterval: 16507 * time.Second, + ClientAddrs: []*net.IPAddr{ipAddr("93.83.18.19")}, + ConnectProxyBindMinPort: 2000, + ConnectProxyBindMaxPort: 3000, + ConnectCAProvider: "b8j4ynx9", + ConnectCAConfig: map[string]interface{}{ + "g4cvJyys": "IRLXE9Ds", + "hyMy9Oxn": "XeBp4Sis", + }, DNSAddrs: []net.Addr{tcpAddr("93.95.95.81:7001"), udpAddr("93.95.95.81:7001")}, DNSARecordLimit: 29907, DNSAllowStale: true, diff --git a/agent/consul/config.go b/agent/consul/config.go index df4e55e42..94c8bc06a 100644 --- a/agent/consul/config.go +++ b/agent/consul/config.go @@ -348,6 +348,9 @@ type Config struct { // dead servers. AutopilotInterval time.Duration + // ConnectEnabled is whether to enable Connect features such as the CA. + ConnectEnabled bool + // CAConfig is used to apply the initial Connect CA configuration when // bootstrapping. 
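With the builder changes above, an agent can opt into Connect and pass provider-specific settings straight through to the servers. The HCL fragment below is a hypothetical illustration of the shape the new fields expect; the keys inside `ca_config` are placeholders and would have to match whatever the chosen provider's config struct actually decodes.

```hcl
connect {
  enabled     = true
  ca_provider = "consul"
  ca_config {
    # Placeholder keys: the consul provider decodes this map with
    # mapstructure, so real keys must match its config struct fields.
    PrivateKey = "-----BEGIN EC PRIVATE KEY-----\n..."
    RootCert   = "-----BEGIN CERTIFICATE-----\n..."
  }
}
```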
CAConfig *structs.CAConfiguration diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index b952c5f87..f52c9218e 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -1,6 +1,7 @@ package consul import ( + "errors" "fmt" "reflect" @@ -11,6 +12,8 @@ import ( "github.com/hashicorp/go-memdb" ) +var ErrConnectNotEnabled = errors.New("Connect must be enabled in order to use this endpoint") + // ConnectCA manages the Connect CA. type ConnectCA struct { // srv is a pointer back to the server. @@ -21,6 +24,11 @@ type ConnectCA struct { func (s *ConnectCA) ConfigurationGet( args *structs.DCSpecificRequest, reply *structs.CAConfiguration) error { + // Exit early if Connect hasn't been enabled. + if !s.srv.config.ConnectEnabled { + return ErrConnectNotEnabled + } + if done, err := s.srv.forward("ConnectCA.ConfigurationGet", args, args, reply); done { return err } @@ -48,6 +56,11 @@ func (s *ConnectCA) ConfigurationGet( func (s *ConnectCA) ConfigurationSet( args *structs.CARequest, reply *interface{}) error { + // Exit early if Connect hasn't been enabled. + if !s.srv.config.ConnectEnabled { + return ErrConnectNotEnabled + } + if done, err := s.srv.forward("ConnectCA.ConfigurationSet", args, args, reply); done { return err } @@ -244,6 +257,11 @@ func (s *ConnectCA) Roots( func (s *ConnectCA) Sign( args *structs.CASignRequest, reply *structs.IssuedCert) error { + // Exit early if Connect hasn't been enabled. + if !s.srv.config.ConnectEnabled { + return ErrConnectNotEnabled + } + if done, err := s.srv.forward("ConnectCA.Sign", args, args, reply); done { return err } diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go index 1dbe884c7..f9321138b 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/consul/connect_ca_provider.go @@ -291,7 +291,7 @@ func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { privKey, err := connect.ParseSigner(providerState.PrivateKey) if err != nil { - return "", fmt.Errorf("error parsing private key %q: %v", providerState.PrivateKey, err) + return "", fmt.Errorf("error parsing private key %q: %s", providerState.PrivateKey, err) } rootCA, err := connect.ParseCert(providerState.RootCert) @@ -363,7 +363,7 @@ func (c *ConsulCAProvider) generateCA(privateKey string, sn uint64) (string, err privKey, err := connect.ParseSigner(privateKey) if err != nil { - return "", fmt.Errorf("error parsing private key %q: %v", privateKey, err) + return "", fmt.Errorf("error parsing private key %q: %s", privateKey, err) } name := fmt.Sprintf("Consul CA %d", sn) diff --git a/agent/consul/leader.go b/agent/consul/leader.go index 1670add29..d9c3a83ea 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -402,6 +402,11 @@ func (s *Server) initializeCAConfig() (*structs.CAConfiguration, error) { // initializeCA sets up the CA provider when gaining leadership, bootstrapping // the root in the state store if necessary. func (s *Server) initializeCA() error { + // Bail if connect isn't enabled. 
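The CA endpoints now short-circuit with the `ErrConnectNotEnabled` sentinel before doing any forwarding or state-store work. A toy sketch of the same guard-plus-sentinel pattern is shown below with invented names (`ErrFeatureDisabled`, `endpoint`) rather than the real RPC plumbing.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrFeatureDisabled mirrors the ErrConnectNotEnabled sentinel: one shared
// error value returned by every endpoint when the feature is switched off.
var ErrFeatureDisabled = errors.New("Connect must be enabled in order to use this endpoint")

type endpoint struct {
	connectEnabled bool
}

func (e *endpoint) ConfigurationGet() error {
	// Exit early, before any forwarding or state-store access.
	if !e.connectEnabled {
		return ErrFeatureDisabled
	}
	return nil
}

func main() {
	e := &endpoint{connectEnabled: false}
	if err := e.ConfigurationGet(); err == ErrFeatureDisabled {
		fmt.Println("connect is not enabled on this server")
	}
}
```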
+ if !s.config.ConnectEnabled { + return nil + } + conf, err := s.initializeCAConfig() if err != nil { return err From 216e74b4ad0700b94524917998339cb24acce6ce Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 5 Apr 2018 11:45:53 +0100 Subject: [PATCH 171/539] Connect verification and AuthZ --- connect/service.go | 8 +- connect/testing.go | 9 ++- connect/tls.go | 135 +++++++++++++++++++++++++++---- connect/tls_test.go | 188 ++++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 318 insertions(+), 22 deletions(-) diff --git a/connect/service.go b/connect/service.go index 6bbda0807..51bad44f6 100644 --- a/connect/service.go +++ b/connect/service.go @@ -74,8 +74,8 @@ func NewServiceWithLogger(serviceID string, client *api.Client, client: client, logger: logger, } - s.serverTLSCfg = newReloadableTLSConfig(defaultTLSConfig(serverVerifyCerts)) - s.clientTLSCfg = newReloadableTLSConfig(defaultTLSConfig(clientVerifyCerts)) + s.serverTLSCfg = newReloadableTLSConfig(defaultTLSConfig(newServerSideVerifier(client, serviceID))) + s.clientTLSCfg = newReloadableTLSConfig(defaultTLSConfig(clientSideVerifier)) // TODO(banks) run the background certificate sync return s, nil @@ -97,9 +97,9 @@ func NewDevServiceFromCertFiles(serviceID string, client *api.Client, // Note that newReloadableTLSConfig makes a copy so we can re-use the same // base for both client and server with swapped verifiers. - tlsCfg.VerifyPeerCertificate = serverVerifyCerts + setVerifier(tlsCfg, newServerSideVerifier(client, serviceID)) s.serverTLSCfg = newReloadableTLSConfig(tlsCfg) - tlsCfg.VerifyPeerCertificate = clientVerifyCerts + setVerifier(tlsCfg, clientSideVerifier) s.clientTLSCfg = newReloadableTLSConfig(tlsCfg) return s, nil } diff --git a/connect/testing.go b/connect/testing.go index f9a6a4850..c23b83701 100644 --- a/connect/testing.go +++ b/connect/testing.go @@ -26,10 +26,11 @@ func TestService(t testing.T, service string, ca *structs.CARoot) *Service { t.Fatal(err) } + // verify server without AuthZ call svc.serverTLSCfg = newReloadableTLSConfig( - TestTLSConfigWithVerifier(t, service, ca, serverVerifyCerts)) + TestTLSConfigWithVerifier(t, service, ca, newServerSideVerifier(nil, service))) svc.clientTLSCfg = newReloadableTLSConfig( - TestTLSConfigWithVerifier(t, service, ca, clientVerifyCerts)) + TestTLSConfigWithVerifier(t, service, ca, clientSideVerifier)) return svc } @@ -43,9 +44,9 @@ func TestTLSConfig(t testing.T, service string, ca *structs.CARoot) *tls.Config } // TestTLSConfigWithVerifier returns a *tls.Config suitable for use during -// tests, it will use the given verifyFunc to verify tls certificates. +// tests, it will use the given verifierFunc to verify tls certificates. func TestTLSConfigWithVerifier(t testing.T, service string, ca *structs.CARoot, - verifier verifyFunc) *tls.Config { + verifier verifierFunc) *tls.Config { t.Helper() cfg := defaultTLSConfig(verifier) diff --git a/connect/tls.go b/connect/tls.go index 89d5ccb54..d23b49396 100644 --- a/connect/tls.go +++ b/connect/tls.go @@ -5,17 +5,23 @@ import ( "crypto/x509" "errors" "io/ioutil" + "log" "sync" "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/api" ) -// verifyFunc is the type of tls.Config.VerifyPeerCertificate for convenience. -type verifyFunc func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error +// verifierFunc is a function that can accept rawCertificate bytes from a peer +// and verify them against a given tls.Config. It's called from the +// tls.Config.VerifyPeerCertificate hook. 
We don't pass verifiedChains since +// that is always nil in our usage. Implementations can use the roots provided +// in the cfg to verify the certs. +type verifierFunc func(cfg *tls.Config, rawCerts [][]byte) error // defaultTLSConfig returns the standard config. -func defaultTLSConfig(verify verifyFunc) *tls.Config { - return &tls.Config{ +func defaultTLSConfig(v verifierFunc) *tls.Config { + cfg := &tls.Config{ MinVersion: tls.VersionTLS12, ClientAuth: tls.RequireAndVerifyClientCert, // We don't have access to go internals that decide if AES hardware @@ -34,12 +40,23 @@ func defaultTLSConfig(verify verifyFunc) *tls.Config { // We have to set this since otherwise Go will attempt to verify DNS names // match DNS SAN/CN which we don't want. We hook up VerifyPeerCertificate to // do our own path validation as well as Connect AuthZ. - InsecureSkipVerify: true, - VerifyPeerCertificate: verify, + InsecureSkipVerify: true, // Include h2 to allow connect http servers to automatically support http2. // See: https://github.com/golang/go/blob/917c33fe8672116b04848cf11545296789cafd3b/src/net/http/server.go#L2724-L2731 NextProtos: []string{"h2"}, } + setVerifier(cfg, v) + return cfg +} + +// setVerifier takes a *tls.Config and set's it's VerifyPeerCertificates hook to +// use the passed verifierFunc. +func setVerifier(cfg *tls.Config, v verifierFunc) { + if v != nil { + cfg.VerifyPeerCertificate = func(rawCerts [][]byte, chains [][]*x509.Certificate) error { + return v(cfg, rawCerts) + } + } } // reloadableTLSConfig exposes a tls.Config that can have it's certificates @@ -147,14 +164,104 @@ func verifyServerCertMatchesURI(certs []*x509.Certificate, return errors.New("peer certificate mismatch") } -// serverVerifyCerts is the verifyFunc for use on Connect servers. -func serverVerifyCerts(rawCerts [][]byte, chains [][]*x509.Certificate) error { - // TODO(banks): implement me - return nil +// newServerSideVerifier returns a verifierFunc that wraps the provided +// api.Client to verify the TLS chain and perform AuthZ for the server end of +// the connection. The service name provided is used as the target serviceID +// for the Authorization. +func newServerSideVerifier(client *api.Client, serviceID string) verifierFunc { + return func(tlsCfg *tls.Config, rawCerts [][]byte) error { + leaf, err := verifyChain(tlsCfg, rawCerts, false) + if err != nil { + return err + } + + // Check leaf is a cert we understand + if len(leaf.URIs) < 1 { + return errors.New("connect: invalid leaf certificate") + } + + certURI, err := connect.ParseCertURI(leaf.URIs[0]) + if err != nil { + return errors.New("connect: invalid leaf certificate URI") + } + + // No AuthZ if there is no client. + if client == nil { + return nil + } + + // Perform AuthZ + req := &api.AgentAuthorizeParams{ + // TODO(banks): this is jank, we have a serviceID from the Service setup + // but this needs to be a service name as the target. For now we are + // relying on them usually being the same but this will break when they + // are not. We either need to make Authorize endpoint optionally accept + // IDs somehow or rethink this as it will require fetching the service + // name sometime ahead of accepting requests (maybe along with TLS certs?) + // which feels gross and will take extra plumbing to expose it to here. 
+ Target: serviceID, + ClientCertURI: certURI.URI().String(), + ClientCertSerial: connect.HexString(leaf.SerialNumber.Bytes()), + } + resp, err := client.Agent().ConnectAuthorize(req) + if err != nil { + return errors.New("connect: authz call failed: " + err.Error()) + } + if !resp.Authorized { + return errors.New("connect: authz denied: " + resp.Reason) + } + log.Println("[DEBUG] authz result", resp) + return nil + } } -// clientVerifyCerts is the verifyFunc for use on Connect clients. -func clientVerifyCerts(rawCerts [][]byte, chains [][]*x509.Certificate) error { - // TODO(banks): implement me - return nil +// clientSideVerifier is a verifierFunc that performs verification of certificates +// on the client end of the connection. For now it is just basic TLS +// verification since the identity check needs additional state and becomes +// clunky to customise the callback for every outgoing request. That is done +// within Service.Dial for now. +func clientSideVerifier(tlsCfg *tls.Config, rawCerts [][]byte) error { + _, err := verifyChain(tlsCfg, rawCerts, true) + return err +} + +// verifyChain performs standard TLS verification without enforcing remote +// hostname matching. +func verifyChain(tlsCfg *tls.Config, rawCerts [][]byte, client bool) (*x509.Certificate, error) { + + // Fetch leaf and intermediates. This is based on code form tls handshake. + if len(rawCerts) < 1 { + return nil, errors.New("tls: no certificates from peer") + } + certs := make([]*x509.Certificate, len(rawCerts)) + for i, asn1Data := range rawCerts { + cert, err := x509.ParseCertificate(asn1Data) + if err != nil { + return nil, errors.New("tls: failed to parse certificate from peer: " + err.Error()) + } + certs[i] = cert + } + + cas := tlsCfg.RootCAs + if client { + cas = tlsCfg.ClientCAs + } + + opts := x509.VerifyOptions{ + Roots: cas, + Intermediates: x509.NewCertPool(), + } + if !client { + // Server side only sets KeyUsages in tls. This defaults to ServerAuth in + // x509 lib. See + // https://github.com/golang/go/blob/ee7dd810f9ca4e63ecfc1d3044869591783b8b74/src/crypto/x509/verify.go#L866-L868 + opts.KeyUsages = []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth} + } + + // All but the first cert are intermediates + for _, cert := range certs[1:] { + opts.Intermediates.AddCert(cert) + } + _, err := certs[0].Verify(opts) + return certs[0], err } diff --git a/connect/tls_test.go b/connect/tls_test.go index d13b78661..82b89440f 100644 --- a/connect/tls_test.go +++ b/connect/tls_test.go @@ -1,10 +1,14 @@ package connect import ( + "crypto/tls" "crypto/x509" + "encoding/pem" "testing" + "github.com/hashicorp/consul/agent" "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/api" "github.com/stretchr/testify/require" ) @@ -100,3 +104,187 @@ func Test_verifyServerCertMatchesURI(t *testing.T) { }) } } + +func testCertPEMBlock(t *testing.T, pemValue string) []byte { + t.Helper() + // The _ result below is not an error but the remaining PEM bytes. 
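`clientSideVerifier` and `verifyChain` above replace Go's built-in name verification: `InsecureSkipVerify` is set, and path validation happens inside `VerifyPeerCertificate`, with every certificate after the leaf loaded as an intermediate, which is what makes the cross-signed CA case work. The following is a condensed client-side sketch of that wiring, omitting the AuthZ call and the server-side `ClientCAs`/key-usage handling; names here are illustrative.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
)

// newVerifyingConfig returns a tls.Config that skips Go's built-in hostname
// verification but still validates the peer's chain against the given roots
// from a custom VerifyPeerCertificate callback.
func newVerifyingConfig(roots *x509.CertPool) *tls.Config {
	cfg := &tls.Config{
		MinVersion: tls.VersionTLS12,
		// Hostname/SAN checking is deliberately disabled; identity is
		// checked by the callback below instead.
		InsecureSkipVerify: true,
		RootCAs:            roots,
	}
	cfg.VerifyPeerCertificate = func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
		if len(rawCerts) < 1 {
			return errors.New("tls: no certificates from peer")
		}
		certs := make([]*x509.Certificate, len(rawCerts))
		for i, asn1Data := range rawCerts {
			c, err := x509.ParseCertificate(asn1Data)
			if err != nil {
				return err
			}
			certs[i] = c
		}
		opts := x509.VerifyOptions{
			Roots:         roots,
			Intermediates: x509.NewCertPool(),
		}
		// Everything after the leaf is treated as an intermediate, so a
		// cross-signed CA presented by the peer lets the chain reach a
		// still-trusted older root.
		for _, c := range certs[1:] {
			opts.Intermediates.AddCert(c)
		}
		_, err := certs[0].Verify(opts)
		return err
	}
	return cfg
}

func main() {
	cfg := newVerifyingConfig(x509.NewCertPool())
	fmt.Println("custom verification wired:", cfg.VerifyPeerCertificate != nil)
}
```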
+ block, _ := pem.Decode([]byte(pemValue)) + require.NotNil(t, block) + require.Equal(t, "CERTIFICATE", block.Type) + return block.Bytes +} + +func TestClientSideVerifier(t *testing.T) { + ca1 := connect.TestCA(t, nil) + ca2 := connect.TestCA(t, ca1) + + webCA1PEM, _ := connect.TestLeaf(t, "web", ca1) + webCA2PEM, _ := connect.TestLeaf(t, "web", ca2) + + webCA1 := testCertPEMBlock(t, webCA1PEM) + xcCA2 := testCertPEMBlock(t, ca2.SigningCert) + webCA2 := testCertPEMBlock(t, webCA2PEM) + + tests := []struct { + name string + tlsCfg *tls.Config + rawCerts [][]byte + wantErr string + }{ + { + name: "ok service ca1", + tlsCfg: TestTLSConfig(t, "web", ca1), + rawCerts: [][]byte{webCA1}, + wantErr: "", + }, + { + name: "untrusted CA", + tlsCfg: TestTLSConfig(t, "web", ca2), // only trust ca2 + rawCerts: [][]byte{webCA1}, // present ca1 + wantErr: "unknown authority", + }, + { + name: "cross signed intermediate", + tlsCfg: TestTLSConfig(t, "web", ca1), // only trust ca1 + rawCerts: [][]byte{webCA2, xcCA2}, // present ca2 signed cert, and xc + wantErr: "", + }, + { + name: "cross signed without intermediate", + tlsCfg: TestTLSConfig(t, "web", ca1), // only trust ca1 + rawCerts: [][]byte{webCA2}, // present ca2 signed cert only + wantErr: "unknown authority", + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + require := require.New(t) + err := clientSideVerifier(tt.tlsCfg, tt.rawCerts) + if tt.wantErr == "" { + require.Nil(err) + } else { + require.NotNil(err) + require.Contains(err.Error(), tt.wantErr) + } + }) + } +} + +func TestServerSideVerifier(t *testing.T) { + ca1 := connect.TestCA(t, nil) + ca2 := connect.TestCA(t, ca1) + + webCA1PEM, _ := connect.TestLeaf(t, "web", ca1) + webCA2PEM, _ := connect.TestLeaf(t, "web", ca2) + + apiCA1PEM, _ := connect.TestLeaf(t, "api", ca1) + apiCA2PEM, _ := connect.TestLeaf(t, "api", ca2) + + webCA1 := testCertPEMBlock(t, webCA1PEM) + xcCA2 := testCertPEMBlock(t, ca2.SigningCert) + webCA2 := testCertPEMBlock(t, webCA2PEM) + + apiCA1 := testCertPEMBlock(t, apiCA1PEM) + apiCA2 := testCertPEMBlock(t, apiCA2PEM) + + // Setup a local test agent to query + agent := agent.NewTestAgent("test-consul", "") + defer agent.Shutdown() + + cfg := api.DefaultConfig() + cfg.Address = agent.HTTPAddr() + client, err := api.NewClient(cfg) + require.Nil(t, err) + + // Setup intentions to validate against. We actually default to allow so first + // setup a blanket deny rule for db, then only allow web. 
+ connect := client.Connect() + ixn := &api.Intention{ + SourceNS: "default", + SourceName: "*", + DestinationNS: "default", + DestinationName: "db", + Action: api.IntentionActionDeny, + SourceType: api.IntentionSourceConsul, + Meta: map[string]string{}, + } + id, _, err := connect.IntentionCreate(ixn, nil) + require.Nil(t, err) + require.NotEmpty(t, id) + + ixn = &api.Intention{ + SourceNS: "default", + SourceName: "web", + DestinationNS: "default", + DestinationName: "db", + Action: api.IntentionActionAllow, + SourceType: api.IntentionSourceConsul, + Meta: map[string]string{}, + } + id, _, err = connect.IntentionCreate(ixn, nil) + require.Nil(t, err) + require.NotEmpty(t, id) + + tests := []struct { + name string + service string + tlsCfg *tls.Config + rawCerts [][]byte + wantErr string + }{ + { + name: "ok service ca1, allow", + service: "db", + tlsCfg: TestTLSConfig(t, "db", ca1), + rawCerts: [][]byte{webCA1}, + wantErr: "", + }, + { + name: "untrusted CA", + service: "db", + tlsCfg: TestTLSConfig(t, "db", ca2), // only trust ca2 + rawCerts: [][]byte{webCA1}, // present ca1 + wantErr: "unknown authority", + }, + { + name: "cross signed intermediate, allow", + service: "db", + tlsCfg: TestTLSConfig(t, "db", ca1), // only trust ca1 + rawCerts: [][]byte{webCA2, xcCA2}, // present ca2 signed cert, and xc + wantErr: "", + }, + { + name: "cross signed without intermediate", + service: "db", + tlsCfg: TestTLSConfig(t, "db", ca1), // only trust ca1 + rawCerts: [][]byte{webCA2}, // present ca2 signed cert only + wantErr: "unknown authority", + }, + { + name: "ok service ca1, deny", + service: "db", + tlsCfg: TestTLSConfig(t, "db", ca1), + rawCerts: [][]byte{apiCA1}, + wantErr: "denied", + }, + { + name: "cross signed intermediate, deny", + service: "db", + tlsCfg: TestTLSConfig(t, "db", ca1), // only trust ca1 + rawCerts: [][]byte{apiCA2, xcCA2}, // present ca2 signed cert, and xc + wantErr: "denied", + }, + } + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + v := newServerSideVerifier(client, tt.service) + err := v(tt.tlsCfg, tt.rawCerts) + if tt.wantErr == "" { + require.Nil(t, err) + } else { + require.NotNil(t, err) + require.Contains(t, err.Error(), tt.wantErr) + } + }) + } +} From 53dc914d21ab2d321fd4ea40c26a73fb30622935 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 5 Apr 2018 20:30:19 +0100 Subject: [PATCH 172/539] Refactor reloadableTLSConfig and verifyier shenanigans into simpler dynamicTLSConfig --- connect/service.go | 22 ++---- connect/testing.go | 22 ++---- connect/tls.go | 180 +++++++++++++++++++++++++++----------------- connect/tls_test.go | 164 ++++++++++++++++++++++++++++------------ 4 files changed, 238 insertions(+), 150 deletions(-) diff --git a/connect/service.go b/connect/service.go index 51bad44f6..f9d6591c2 100644 --- a/connect/service.go +++ b/connect/service.go @@ -42,11 +42,8 @@ type Service struct { // connections will fail to verify. client *api.Client - // serverTLSCfg is the (reloadable) TLS config we use for serving. - serverTLSCfg *reloadableTLSConfig - - // clientTLSCfg is the (reloadable) TLS config we use for dialling. - clientTLSCfg *reloadableTLSConfig + // tlsCfg is the dynamic TLS config + tlsCfg *dynamicTLSConfig // httpResolverFromAddr is a function that returns a Resolver from a string // address for HTTP clients. 
It's privately pluggable to make testing easier @@ -73,9 +70,8 @@ func NewServiceWithLogger(serviceID string, client *api.Client, serviceID: serviceID, client: client, logger: logger, + tlsCfg: newDynamicTLSConfig(defaultTLSConfig()), } - s.serverTLSCfg = newReloadableTLSConfig(defaultTLSConfig(newServerSideVerifier(client, serviceID))) - s.clientTLSCfg = newReloadableTLSConfig(defaultTLSConfig(clientSideVerifier)) // TODO(banks) run the background certificate sync return s, nil @@ -94,13 +90,7 @@ func NewDevServiceFromCertFiles(serviceID string, client *api.Client, if err != nil { return nil, err } - - // Note that newReloadableTLSConfig makes a copy so we can re-use the same - // base for both client and server with swapped verifiers. - setVerifier(tlsCfg, newServerSideVerifier(client, serviceID)) - s.serverTLSCfg = newReloadableTLSConfig(tlsCfg) - setVerifier(tlsCfg, clientSideVerifier) - s.clientTLSCfg = newReloadableTLSConfig(tlsCfg) + s.tlsCfg = newDynamicTLSConfig(tlsCfg) return s, nil } @@ -115,7 +105,7 @@ func NewDevServiceFromCertFiles(serviceID string, client *api.Client, // error during renewal. The listener will be able to accept connections again // once connectivity is restored provided the client's Token is valid. func (s *Service) ServerTLSConfig() *tls.Config { - return s.serverTLSCfg.TLSConfig() + return s.tlsCfg.Get(newServerSideVerifier(s.client, s.serviceID)) } // Dial connects to a remote Connect-enabled server. The passed Resolver is used @@ -138,7 +128,7 @@ func (s *Service) Dial(ctx context.Context, resolver Resolver) (net.Conn, error) return nil, err } - tlsConn := tls.Client(tcpConn, s.clientTLSCfg.TLSConfig()) + tlsConn := tls.Client(tcpConn, s.tlsCfg.Get(clientSideVerifier)) // Set deadline for Handshake to complete. deadline, ok := ctx.Deadline() if ok { diff --git a/connect/testing.go b/connect/testing.go index c23b83701..491036aaf 100644 --- a/connect/testing.go +++ b/connect/testing.go @@ -20,17 +20,15 @@ import ( func TestService(t testing.T, service string, ca *structs.CARoot) *Service { t.Helper() - // Don't need to talk to client since we are setting TLSConfig locally + // Don't need to talk to client since we are setting TLSConfig locally. This + // will cause server verification to skip AuthZ too. svc, err := NewService(service, nil) if err != nil { t.Fatal(err) } - // verify server without AuthZ call - svc.serverTLSCfg = newReloadableTLSConfig( - TestTLSConfigWithVerifier(t, service, ca, newServerSideVerifier(nil, service))) - svc.clientTLSCfg = newReloadableTLSConfig( - TestTLSConfigWithVerifier(t, service, ca, clientSideVerifier)) + // Override the tlsConfig hackily. + svc.tlsCfg = newDynamicTLSConfig(TestTLSConfig(t, service, ca)) return svc } @@ -39,17 +37,7 @@ func TestService(t testing.T, service string, ca *structs.CARoot) *Service { func TestTLSConfig(t testing.T, service string, ca *structs.CARoot) *tls.Config { t.Helper() - // Insecure default (nil verifier) - return TestTLSConfigWithVerifier(t, service, ca, nil) -} - -// TestTLSConfigWithVerifier returns a *tls.Config suitable for use during -// tests, it will use the given verifierFunc to verify tls certificates. 
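For callers, the refactor keeps the public surface small: a `Service` hands out a `*tls.Config` whose hooks resolve certificates lazily. Below is a speculative usage sketch, assuming the top-level `connect` package remains importable as `github.com/hashicorp/consul/connect` with the `NewService`/`ServerTLSConfig` signatures shown in the diff; note it would not actually complete handshakes until the background certificate sync mentioned in the TODO lands.

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/connect"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// The service's TLS config carries GetCertificate/GetConfigForClient
	// hooks, so the http.Server picks up rotated certs without restarting.
	svc, err := connect.NewService("web", client)
	if err != nil {
		log.Fatal(err)
	}

	server := &http.Server{
		Addr:      ":8443",
		TLSConfig: svc.ServerTLSConfig(),
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "hello from a connect-protected service")
		}),
	}
	// Certificates come from the TLS config's hooks, so no files are passed.
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```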
-func TestTLSConfigWithVerifier(t testing.T, service string, ca *structs.CARoot, - verifier verifierFunc) *tls.Config { - t.Helper() - - cfg := defaultTLSConfig(verifier) + cfg := defaultTLSConfig() cfg.Certificates = []tls.Certificate{TestSvcKeyPair(t, service, ca)} cfg.RootCAs = TestCAPool(t, ca) cfg.ClientCAs = TestCAPool(t, ca) diff --git a/connect/tls.go b/connect/tls.go index d23b49396..f5cb95a75 100644 --- a/connect/tls.go +++ b/connect/tls.go @@ -5,7 +5,6 @@ import ( "crypto/x509" "errors" "io/ioutil" - "log" "sync" "github.com/hashicorp/consul/agent/connect" @@ -14,13 +13,18 @@ import ( // verifierFunc is a function that can accept rawCertificate bytes from a peer // and verify them against a given tls.Config. It's called from the -// tls.Config.VerifyPeerCertificate hook. We don't pass verifiedChains since -// that is always nil in our usage. Implementations can use the roots provided -// in the cfg to verify the certs. +// tls.Config.VerifyPeerCertificate hook. +// +// We don't pass verifiedChains since that is always nil in our usage. +// Implementations can use the roots provided in the cfg to verify the certs. +// +// The passed *tls.Config may have a nil VerifyPeerCertificates function but +// will have correct roots, leaf and other fields. type verifierFunc func(cfg *tls.Config, rawCerts [][]byte) error -// defaultTLSConfig returns the standard config. -func defaultTLSConfig(v verifierFunc) *tls.Config { +// defaultTLSConfig returns the standard config with no peer verifier. It is +// insecure to use it as-is. +func defaultTLSConfig() *tls.Config { cfg := &tls.Config{ MinVersion: tls.VersionTLS12, ClientAuth: tls.RequireAndVerifyClientCert, @@ -45,70 +49,11 @@ func defaultTLSConfig(v verifierFunc) *tls.Config { // See: https://github.com/golang/go/blob/917c33fe8672116b04848cf11545296789cafd3b/src/net/http/server.go#L2724-L2731 NextProtos: []string{"h2"}, } - setVerifier(cfg, v) return cfg } -// setVerifier takes a *tls.Config and set's it's VerifyPeerCertificates hook to -// use the passed verifierFunc. -func setVerifier(cfg *tls.Config, v verifierFunc) { - if v != nil { - cfg.VerifyPeerCertificate = func(rawCerts [][]byte, chains [][]*x509.Certificate) error { - return v(cfg, rawCerts) - } - } -} - -// reloadableTLSConfig exposes a tls.Config that can have it's certificates -// reloaded. On a server, this uses GetConfigForClient to pass the current -// tls.Config or client certificate for each acceptted connection. On a client, -// this uses GetClientCertificate to provide the current client certificate. -type reloadableTLSConfig struct { - mu sync.Mutex - - // cfg is the current config to use for new connections - cfg *tls.Config -} - -// newReloadableTLSConfig returns a reloadable config currently set to base. -func newReloadableTLSConfig(base *tls.Config) *reloadableTLSConfig { - c := &reloadableTLSConfig{} - c.SetTLSConfig(base) - return c -} - -// TLSConfig returns a *tls.Config that will dynamically load certs. It's -// suitable for use in either a client or server. -func (c *reloadableTLSConfig) TLSConfig() *tls.Config { - c.mu.Lock() - cfgCopy := c.cfg - c.mu.Unlock() - return cfgCopy -} - -// SetTLSConfig sets the config used for future connections. It is safe to call -// from any goroutine. 
-func (c *reloadableTLSConfig) SetTLSConfig(cfg *tls.Config) error {
-	copy := cfg.Clone()
-	copy.GetClientCertificate = func(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
-		current := c.TLSConfig()
-		if len(current.Certificates) < 1 {
-			return nil, errors.New("tls: no certificates configured")
-		}
-		return &current.Certificates[0], nil
-	}
-	copy.GetConfigForClient = func(*tls.ClientHelloInfo) (*tls.Config, error) {
-		return c.TLSConfig(), nil
-	}
-
-	c.mu.Lock()
-	defer c.mu.Unlock()
-	c.cfg = copy
-	return nil
-}
-
 // devTLSConfigFromFiles returns a default TLS Config but with certs and CAs
-// based on local files for dev.
+// based on local files for dev. No verification is setup.
 func devTLSConfigFromFiles(caFile, certFile,
 	keyFile string) (*tls.Config, error) {
@@ -126,9 +71,7 @@ func devTLSConfigFromFiles(caFile, certFile,
 		return nil, err
 	}
 
-	// Insecure no verification
-	cfg := defaultTLSConfig(nil)
-
+	cfg := defaultTLSConfig()
 	cfg.Certificates = []tls.Certificate{cert}
 	cfg.RootCAs = roots
 	cfg.ClientCAs = roots
@@ -210,7 +153,6 @@ func newServerSideVerifier(client *api.Client, serviceID string) verifierFunc {
 		if !resp.Authorized {
 			return errors.New("connect: authz denied: " + resp.Reason)
 		}
-		log.Println("[DEBUG] authz result", resp)
 		return nil
 	}
 }
@@ -265,3 +207,101 @@ func verifyChain(tlsCfg *tls.Config, rawCerts [][]byte, client bool) (*x509.Cert
 	_, err := certs[0].Verify(opts)
 	return certs[0], err
 }
+
+// dynamicTLSConfig represents the state for returning a tls.Config that can
+// have root and leaf certificates updated dynamically with all existing clients
+// and servers automatically picking up the changes. It requires initialising
+// with a valid base config from which all the non-certificate and verification
+// params are used. The base config passed should not be modified externally as
+// access to it is assumed to be serialised by the embedded mutex.
+type dynamicTLSConfig struct {
+	base *tls.Config
+
+	sync.Mutex
+	leaf  *tls.Certificate
+	roots *x509.CertPool
+}
+
+// newDynamicTLSConfig returns a dynamicTLSConfig constructed from base.
+// base.Certificates[0] is used as the initial leaf and base.RootCAs is used as
+// the initial roots.
+func newDynamicTLSConfig(base *tls.Config) *dynamicTLSConfig {
+	cfg := &dynamicTLSConfig{
+		base: base,
+	}
+	if len(base.Certificates) > 0 {
+		cfg.leaf = &base.Certificates[0]
+	}
+	if base.RootCAs != nil {
+		cfg.roots = base.RootCAs
+	}
+	return cfg
+}
+
+// Get fetches the latest tls.Config with all the hooks attached to keep it
+// loading the most recent roots and certs even after future changes to cfg.
+//
+// The verifierFunc passed will be attached to the config returned such that it
+// always runs with the _latest_ config object passed to it. That means that a
+// client can use this config for a long time and will still verify against the
+// latest roots even though the roots in the config it holds can't change.
+func (cfg *dynamicTLSConfig) Get(v verifierFunc) *tls.Config { + cfg.Lock() + defer cfg.Unlock() + copy := cfg.base.Clone() + copy.RootCAs = cfg.roots + copy.ClientCAs = cfg.roots + if v != nil { + copy.VerifyPeerCertificate = func(rawCerts [][]byte, chains [][]*x509.Certificate) error { + return v(cfg.Get(nil), rawCerts) + } + } + copy.GetCertificate = func(_ *tls.ClientHelloInfo) (*tls.Certificate, error) { + leaf := cfg.Leaf() + if leaf == nil { + return nil, errors.New("tls: no certificates configured") + } + return leaf, nil + } + copy.GetClientCertificate = func(_ *tls.CertificateRequestInfo) (*tls.Certificate, error) { + leaf := cfg.Leaf() + if leaf == nil { + return nil, errors.New("tls: no certificates configured") + } + return leaf, nil + } + copy.GetConfigForClient = func(*tls.ClientHelloInfo) (*tls.Config, error) { + return cfg.Get(v), nil + } + return copy +} + +// SetRoots sets new roots. +func (cfg *dynamicTLSConfig) SetRoots(roots *x509.CertPool) error { + cfg.Lock() + defer cfg.Unlock() + cfg.roots = roots + return nil +} + +// SetLeaf sets a new leaf. +func (cfg *dynamicTLSConfig) SetLeaf(leaf *tls.Certificate) error { + cfg.Lock() + defer cfg.Unlock() + cfg.leaf = leaf + return nil +} + +// Roots returns the current CA root CertPool. +func (cfg *dynamicTLSConfig) Roots() *x509.CertPool { + cfg.Lock() + defer cfg.Unlock() + return cfg.roots +} + +// Leaf returns the current Leaf certificate. +func (cfg *dynamicTLSConfig) Leaf() *tls.Certificate { + cfg.Lock() + defer cfg.Unlock() + return cfg.leaf +} diff --git a/connect/tls_test.go b/connect/tls_test.go index 82b89440f..aa1063f3e 100644 --- a/connect/tls_test.go +++ b/connect/tls_test.go @@ -12,53 +12,6 @@ import ( "github.com/stretchr/testify/require" ) -func TestReloadableTLSConfig(t *testing.T) { - require := require.New(t) - base := defaultTLSConfig(nil) - - c := newReloadableTLSConfig(base) - - // The dynamic config should be the one we loaded (with some different hooks) - got := c.TLSConfig() - expect := base.Clone() - // Equal and even cmp.Diff fail on tls.Config due to unexported fields in - // each. Compare a few things to prove it's returning the bits we - // specifically set. - require.Equal(expect.Certificates, got.Certificates) - require.Equal(expect.RootCAs, got.RootCAs) - require.Equal(expect.ClientCAs, got.ClientCAs) - require.Equal(expect.InsecureSkipVerify, got.InsecureSkipVerify) - require.Equal(expect.MinVersion, got.MinVersion) - require.Equal(expect.CipherSuites, got.CipherSuites) - require.NotNil(got.GetClientCertificate) - require.NotNil(got.GetConfigForClient) - require.Contains(got.NextProtos, "h2") - - ca := connect.TestCA(t, nil) - - // Now change the config as if we just loaded certs from Consul - new := TestTLSConfig(t, "web", ca) - err := c.SetTLSConfig(new) - require.Nil(err) - - // Change the passed config to ensure SetTLSConfig made a copy otherwise this - // is racey. 
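For orientation, a minimal package-internal sketch of how the dynamicTLSConfig API above is intended to be driven (illustrative only; the base config and the new cert material are assumed to come from the agent's roots/leaf endpoints):

```go
package connect

import (
	"crypto/tls"
	"crypto/x509"
)

// exampleRotation is a hypothetical helper showing the intended flow:
// construct once, hand out configs via Get, then rotate material in place.
func exampleRotation(base *tls.Config, newRoots *x509.CertPool, newLeaf *tls.Certificate) *tls.Config {
	dyn := newDynamicTLSConfig(base)

	// Hand out a server-side config. A nil verifier is insecure as noted
	// above; real callers pass newServerSideVerifier or clientSideVerifier.
	serverCfg := dyn.Get(nil)

	// Later, when new roots or a new leaf arrive, existing configs pick the
	// change up through their GetCertificate/GetConfigForClient hooks.
	dyn.SetRoots(newRoots)
	dyn.SetLeaf(newLeaf)

	return serverCfg
}
```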
- expect = new.Clone() - new.Certificates = nil - - // The dynamic config should be the one we loaded (with some different hooks) - got = c.TLSConfig() - require.Equal(expect.Certificates, got.Certificates) - require.Equal(expect.RootCAs, got.RootCAs) - require.Equal(expect.ClientCAs, got.ClientCAs) - require.Equal(expect.InsecureSkipVerify, got.InsecureSkipVerify) - require.Equal(expect.MinVersion, got.MinVersion) - require.Equal(expect.CipherSuites, got.CipherSuites) - require.NotNil(got.GetClientCertificate) - require.NotNil(got.GetConfigForClient) - require.Contains(got.NextProtos, "h2") -} - func Test_verifyServerCertMatchesURI(t *testing.T) { ca1 := connect.TestCA(t, nil) @@ -288,3 +241,120 @@ func TestServerSideVerifier(t *testing.T) { }) } } + +// requireEqualTLSConfig compares tlsConfig fields we care about. Equal and even +// cmp.Diff fail on tls.Config due to unexported fields in each. expectLeaf +// allows expecting a leaf cert different from the one in expect +func requireEqualTLSConfig(t *testing.T, expect, got *tls.Config) { + require := require.New(t) + require.Equal(expect.RootCAs, got.RootCAs) + require.Equal(expect.ClientCAs, got.ClientCAs) + require.Equal(expect.InsecureSkipVerify, got.InsecureSkipVerify) + require.Equal(expect.MinVersion, got.MinVersion) + require.Equal(expect.CipherSuites, got.CipherSuites) + require.NotNil(got.GetCertificate) + require.NotNil(got.GetClientCertificate) + require.NotNil(got.GetConfigForClient) + require.Contains(got.NextProtos, "h2") + + var expectLeaf *tls.Certificate + var err error + if expect.GetCertificate != nil { + expectLeaf, err = expect.GetCertificate(nil) + require.Nil(err) + } else if len(expect.Certificates) > 0 { + expectLeaf = &expect.Certificates[0] + } + + gotLeaf, err := got.GetCertificate(nil) + require.Nil(err) + require.Equal(expectLeaf, gotLeaf) + + gotLeaf, err = got.GetClientCertificate(nil) + require.Nil(err) + require.Equal(expectLeaf, gotLeaf) +} + +// requireCorrectVerifier invokes got.VerifyPeerCertificate and expects the +// tls.Config arg to be returned on the provided channel. This ensures the +// correct verifier func was attached to got. +// +// It then ensures that the tls.Config passed to the verifierFunc was actually +// the same as the expected current value. +func requireCorrectVerifier(t *testing.T, expect, got *tls.Config, + ch chan *tls.Config) { + + err := got.VerifyPeerCertificate(nil, nil) + require.Nil(t, err) + verifierCfg := <-ch + // The tls.Cfg passed to verifyFunc should be the expected (current) value. + requireEqualTLSConfig(t, expect, verifierCfg) +} + +func TestDynamicTLSConfig(t *testing.T) { + require := require.New(t) + + ca1 := connect.TestCA(t, nil) + ca2 := connect.TestCA(t, nil) + baseCfg := TestTLSConfig(t, "web", ca1) + newCfg := TestTLSConfig(t, "web", ca2) + + c := newDynamicTLSConfig(baseCfg) + + // Should set them from the base config + require.Equal(c.Leaf(), &baseCfg.Certificates[0]) + require.Equal(c.Roots(), baseCfg.RootCAs) + + // Create verifiers we can assert are set and run correctly. 
+ v1Ch := make(chan *tls.Config, 1) + v2Ch := make(chan *tls.Config, 1) + v3Ch := make(chan *tls.Config, 1) + verify1 := func(cfg *tls.Config, rawCerts [][]byte) error { + v1Ch <- cfg + return nil + } + verify2 := func(cfg *tls.Config, rawCerts [][]byte) error { + v2Ch <- cfg + return nil + } + verify3 := func(cfg *tls.Config, rawCerts [][]byte) error { + v3Ch <- cfg + return nil + } + + // The dynamic config should be the one we loaded (with some different hooks) + gotBefore := c.Get(verify1) + requireEqualTLSConfig(t, baseCfg, gotBefore) + requireCorrectVerifier(t, baseCfg, gotBefore, v1Ch) + + // Now change the roots as if we just loaded new roots from Consul + err := c.SetRoots(newCfg.RootCAs) + require.Nil(err) + + // The dynamic config should have the new roots, but old leaf + gotAfter := c.Get(verify2) + expect := newCfg.Clone() + expect.GetCertificate = func(_ *tls.ClientHelloInfo) (*tls.Certificate, error) { + return &baseCfg.Certificates[0], nil + } + requireEqualTLSConfig(t, expect, gotAfter) + requireCorrectVerifier(t, expect, gotAfter, v2Ch) + + // The old config fetched before should still call it's own verify func, but + // that verifier should be passed the new config (expect). + requireCorrectVerifier(t, expect, gotBefore, v1Ch) + + // Now change the leaf + err = c.SetLeaf(&newCfg.Certificates[0]) + require.Nil(err) + + // The dynamic config should have the new roots, AND new leaf + gotAfterLeaf := c.Get(verify3) + requireEqualTLSConfig(t, newCfg, gotAfterLeaf) + requireCorrectVerifier(t, newCfg, gotAfterLeaf, v3Ch) + + // Both older configs should still call their own verify funcs, but those + // verifiers should be passed the new config. + requireCorrectVerifier(t, newCfg, gotBefore, v1Ch) + requireCorrectVerifier(t, newCfg, gotAfter, v2Ch) +} From 6f566f750e953bcaea068b8a2a8a1301a2235767 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 5 Apr 2018 17:15:43 +0100 Subject: [PATCH 173/539] Basic `watch` support for connect proxy config and certificate endpoints. - Includes some bug fixes for previous `api` work and `agent` that weren't tested - Needed somewhat pervasive changes to support hash based blocking - some TODOs left in our watch toolchain that will explicitly fail on hash-based watches. - Integration into `connect` is partially done here but still WIP --- agent/agent_endpoint.go | 2 +- agent/agent_endpoint_test.go | 10 +- agent/http_oss.go | 1 + agent/structs/connect.go | 7 +- agent/watch_handler.go | 20 ++- agent/watch_handler_test.go | 4 +- api/agent_test.go | 29 +++-- api/api.go | 19 ++- command/watch/watch.go | 13 +- connect/service.go | 99 ++++++++++++++- watch/funcs.go | 147 +++++++++++++++++----- watch/funcs_test.go | 234 +++++++++++++++++++++++++++++++++-- watch/plan.go | 34 +++-- watch/plan_test.go | 16 ++- watch/watch.go | 75 +++++++++-- 15 files changed, 615 insertions(+), 95 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 798c370b2..d500b17ba 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -942,7 +942,7 @@ func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http. // // Returns the local proxy config for the identified proxy. Requires token= // param with the correct local ProxyToken (not ACL token). -func (s *HTTPServer) ConnectProxyConfig(resp http.ResponseWriter, req *http.Request) (interface{}, error) { +func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http.Request) (interface{}, error) { // Get the proxy ID. 
Note that this is the ID of a proxy's service instance. id := strings.TrimPrefix(req.URL.Path, "/v1/agent/connect/proxy/") diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index d6b1996dd..d5ea7305a 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2292,7 +2292,7 @@ func TestAgentConnectProxy(t *testing.T) { ProxyServiceID: "test-proxy", TargetServiceID: "test", TargetServiceName: "test", - ContentHash: "a15dccb216d38a6e", + ContentHash: "84346af2031659c9", ExecMode: "daemon", Command: "", Config: map[string]interface{}{ @@ -2310,7 +2310,7 @@ func TestAgentConnectProxy(t *testing.T) { ur, err := copystructure.Copy(expectedResponse) require.NoError(t, err) updatedResponse := ur.(*api.ConnectProxyConfig) - updatedResponse.ContentHash = "22bc9233a52c08fd" + updatedResponse.ContentHash = "7d53473b0e9db5a" upstreams := updatedResponse.Config["upstreams"].([]interface{}) upstreams = append(upstreams, map[string]interface{}{ @@ -2337,7 +2337,7 @@ func TestAgentConnectProxy(t *testing.T) { }, { name: "blocking fetch timeout, no change", - url: "/v1/agent/connect/proxy/test-proxy?hash=a15dccb216d38a6e&wait=100ms", + url: "/v1/agent/connect/proxy/test-proxy?hash=" + expectedResponse.ContentHash + "&wait=100ms", wantWait: 100 * time.Millisecond, wantCode: 200, wantErr: false, @@ -2352,7 +2352,7 @@ func TestAgentConnectProxy(t *testing.T) { }, { name: "blocking fetch returns change", - url: "/v1/agent/connect/proxy/test-proxy?hash=a15dccb216d38a6e", + url: "/v1/agent/connect/proxy/test-proxy?hash=" + expectedResponse.ContentHash, updateFunc: func() { time.Sleep(100 * time.Millisecond) // Re-register with new proxy config @@ -2393,7 +2393,7 @@ func TestAgentConnectProxy(t *testing.T) { go tt.updateFunc() } start := time.Now() - obj, err := a.srv.ConnectProxyConfig(resp, req) + obj, err := a.srv.AgentConnectProxyConfig(resp, req) elapsed := time.Now().Sub(start) if tt.wantErr { diff --git a/agent/http_oss.go b/agent/http_oss.go index 124a26875..d9b8068ef 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -32,6 +32,7 @@ func init() { registerEndpoint("/v1/agent/connect/authorize", []string{"POST"}, (*HTTPServer).AgentConnectAuthorize) registerEndpoint("/v1/agent/connect/ca/roots", []string{"GET"}, (*HTTPServer).AgentConnectCARoots) registerEndpoint("/v1/agent/connect/ca/leaf/", []string{"GET"}, (*HTTPServer).AgentConnectCALeafCert) + registerEndpoint("/v1/agent/connect/proxy/", []string{"GET"}, (*HTTPServer).AgentConnectProxyConfig) registerEndpoint("/v1/agent/service/register", []string{"PUT"}, (*HTTPServer).AgentRegisterService) registerEndpoint("/v1/agent/service/deregister/", []string{"PUT"}, (*HTTPServer).AgentDeregisterService) registerEndpoint("/v1/agent/service/maintenance/", []string{"PUT"}, (*HTTPServer).AgentServiceMaintenance) diff --git a/agent/structs/connect.go b/agent/structs/connect.go index 5f907c1ab..20970c1bf 100644 --- a/agent/structs/connect.go +++ b/agent/structs/connect.go @@ -66,8 +66,11 @@ type ConnectManagedProxy struct { // ProxyService is a pointer to the local proxy's service record for // convenience. The proxies ID and name etc. can be read from there. It may be - // nil if the agent is starting up and hasn't registered the service yet. - ProxyService *NodeService + // nil if the agent is starting up and hasn't registered the service yet. 
We + // ignore it when calculating the hash value since the only thing that effects + // the proxy's config is the ID of the target service which is already + // represented below. + ProxyService *NodeService `hash:"ignore"` // TargetServiceID is the ID of the target service on the localhost. It may // not exist yet since bootstrapping is allowed to happen in either order. diff --git a/agent/watch_handler.go b/agent/watch_handler.go index 4c6a9d3f3..27c7a430e 100644 --- a/agent/watch_handler.go +++ b/agent/watch_handler.go @@ -42,7 +42,13 @@ func makeWatchHandler(logOutput io.Writer, handler interface{}) watch.HandlerFun } logger := log.New(logOutput, "", log.LstdFlags) - fn := func(idx uint64, data interface{}) { + fn := func(blockVal watch.BlockingParam, data interface{}) { + idx, ok := blockVal.(watch.WaitIndexVal) + if !ok { + logger.Printf("[ERR] agent: watch handler doesn't support non-index watches") + return + } + // Create the command var cmd *osexec.Cmd var err error @@ -58,7 +64,7 @@ func makeWatchHandler(logOutput io.Writer, handler interface{}) watch.HandlerFun } cmd.Env = append(os.Environ(), - "CONSUL_INDEX="+strconv.FormatUint(idx, 10), + "CONSUL_INDEX="+strconv.FormatUint(uint64(idx), 10), ) // Collect the output @@ -96,7 +102,13 @@ func makeWatchHandler(logOutput io.Writer, handler interface{}) watch.HandlerFun func makeHTTPWatchHandler(logOutput io.Writer, config *watch.HttpHandlerConfig) watch.HandlerFunc { logger := log.New(logOutput, "", log.LstdFlags) - fn := func(idx uint64, data interface{}) { + fn := func(blockVal watch.BlockingParam, data interface{}) { + idx, ok := blockVal.(watch.WaitIndexVal) + if !ok { + logger.Printf("[ERR] agent: watch handler doesn't support non-index watches") + return + } + trans := cleanhttp.DefaultTransport() // Skip SSL certificate verification if TLSSkipVerify is true @@ -132,7 +144,7 @@ func makeHTTPWatchHandler(logOutput io.Writer, config *watch.HttpHandlerConfig) } req = req.WithContext(ctx) req.Header.Add("Content-Type", "application/json") - req.Header.Add("X-Consul-Index", strconv.FormatUint(idx, 10)) + req.Header.Add("X-Consul-Index", strconv.FormatUint(uint64(idx), 10)) for key, values := range config.Header { for _, val := range values { req.Header.Add(key, val) diff --git a/agent/watch_handler_test.go b/agent/watch_handler_test.go index f7ba83b0a..6851baf71 100644 --- a/agent/watch_handler_test.go +++ b/agent/watch_handler_test.go @@ -17,7 +17,7 @@ func TestMakeWatchHandler(t *testing.T) { defer os.Remove("handler_index_out") script := "bash -c 'echo $CONSUL_INDEX >> handler_index_out && cat >> handler_out'" handler := makeWatchHandler(os.Stderr, script) - handler(100, []string{"foo", "bar", "baz"}) + handler(watch.WaitIndexVal(100), []string{"foo", "bar", "baz"}) raw, err := ioutil.ReadFile("handler_out") if err != nil { t.Fatalf("err: %v", err) @@ -62,5 +62,5 @@ func TestMakeHTTPWatchHandler(t *testing.T) { Timeout: time.Minute, } handler := makeHTTPWatchHandler(os.Stderr, &config) - handler(100, []string{"foo", "bar", "baz"}) + handler(watch.WaitIndexVal(100), []string{"foo", "bar", "baz"}) } diff --git a/api/agent_test.go b/api/agent_test.go index 01d35ae15..8cc58e012 100644 --- a/api/agent_test.go +++ b/api/agent_test.go @@ -1087,20 +1087,31 @@ func TestAPI_AgentConnectProxyConfig(t *testing.T) { Name: "foo", Tags: []string{"bar", "baz"}, Port: 8000, - Check: &AgentServiceCheck{ - CheckID: "foo-ttl", - TTL: "15s", + Connect: &AgentServiceConnect{ + Proxy: &AgentServiceConnectProxy{ + Config: map[string]interface{}{ + 
"foo": "bar", + }, + }, }, } if err := agent.ServiceRegister(reg); err != nil { t.Fatalf("err: %v", err) } - checks, err := agent.Checks() - if err != nil { - t.Fatalf("err: %v", err) - } - if _, ok := checks["foo-ttl"]; !ok { - t.Fatalf("missing check: %v", checks) + config, qm, err := agent.ConnectProxyConfig("foo-proxy", nil) + require.NoError(t, err) + expectConfig := &ConnectProxyConfig{ + ProxyServiceID: "foo-proxy", + TargetServiceID: "foo", + TargetServiceName: "foo", + ContentHash: "e662ea8600d84cf0", + ExecMode: "daemon", + Command: "", + Config: map[string]interface{}{ + "foo": "bar", + }, } + require.Equal(t, expectConfig, config) + require.Equal(t, "e662ea8600d84cf0", qm.LastContentHash) } diff --git a/api/api.go b/api/api.go index 6f3034d90..6d6436637 100644 --- a/api/api.go +++ b/api/api.go @@ -175,6 +175,11 @@ type QueryMeta struct { // a blocking query LastIndex uint64 + // LastContentHash. This can be used as a WaitHash to perform a blocking query + // for endpoints that support hash-based blocking. Endpoints that do not + // support it will return an empty hash. + LastContentHash string + // Time of last contact from the leader for the // server servicing the request LastContact time.Duration @@ -733,12 +738,16 @@ func (c *Client) write(endpoint string, in, out interface{}, q *WriteOptions) (* func parseQueryMeta(resp *http.Response, q *QueryMeta) error { header := resp.Header - // Parse the X-Consul-Index - index, err := strconv.ParseUint(header.Get("X-Consul-Index"), 10, 64) - if err != nil { - return fmt.Errorf("Failed to parse X-Consul-Index: %v", err) + // Parse the X-Consul-Index (if it's set - hash based blocking queries don't + // set this) + if indexStr := header.Get("X-Consul-Index"); indexStr != "" { + index, err := strconv.ParseUint(indexStr, 10, 64) + if err != nil { + return fmt.Errorf("Failed to parse X-Consul-Index: %v", err) + } + q.LastIndex = index } - q.LastIndex = index + q.LastContentHash = header.Get("X-Consul-ContentHash") // Parse the X-Consul-LastContact last, err := strconv.ParseUint(header.Get("X-Consul-LastContact"), 10, 64) diff --git a/command/watch/watch.go b/command/watch/watch.go index 3b8c67836..2286de1cc 100644 --- a/command/watch/watch.go +++ b/command/watch/watch.go @@ -154,7 +154,7 @@ func (c *cmd) Run(args []string) int { // 1: true errExit := 0 if len(c.flags.Args()) == 0 { - wp.Handler = func(idx uint64, data interface{}) { + wp.Handler = func(blockParam consulwatch.BlockingParam, data interface{}) { defer wp.Stop() buf, err := json.MarshalIndent(data, "", " ") if err != nil { @@ -164,7 +164,14 @@ func (c *cmd) Run(args []string) int { c.UI.Output(string(buf)) } } else { - wp.Handler = func(idx uint64, data interface{}) { + wp.Handler = func(blockVal consulwatch.BlockingParam, data interface{}) { + idx, ok := blockVal.(consulwatch.WaitIndexVal) + if !ok { + // TODO(banks): make this work for hash based watches. 
+ c.UI.Error("Error: watch handler doesn't support non-index watches") + return + } + doneCh := make(chan struct{}) defer close(doneCh) logFn := func(err error) { @@ -185,7 +192,7 @@ func (c *cmd) Run(args []string) int { goto ERR } cmd.Env = append(os.Environ(), - "CONSUL_INDEX="+strconv.FormatUint(idx, 10), + "CONSUL_INDEX="+strconv.FormatUint(uint64(idx), 10), ) // Encode the input diff --git a/connect/service.go b/connect/service.go index f9d6591c2..4c8887745 100644 --- a/connect/service.go +++ b/connect/service.go @@ -3,6 +3,7 @@ package connect import ( "context" "crypto/tls" + "crypto/x509" "errors" "log" "net" @@ -11,6 +12,7 @@ import ( "time" "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/watch" "golang.org/x/net/http2" ) @@ -52,6 +54,9 @@ type Service struct { // TODO(banks): write the proper implementation httpResolverFromAddr func(addr string) (Resolver, error) + rootsWatch *watch.Plan + leafWatch *watch.Plan + logger *log.Logger } @@ -73,7 +78,28 @@ func NewServiceWithLogger(serviceID string, client *api.Client, tlsCfg: newDynamicTLSConfig(defaultTLSConfig()), } - // TODO(banks) run the background certificate sync + // Set up root and leaf watches + p, err := watch.Parse(map[string]interface{}{ + "type": "connect_roots", + }) + if err != nil { + return nil, err + } + s.rootsWatch = p + s.rootsWatch.Handler = s.rootsWatchHandler + + p, err = watch.Parse(map[string]interface{}{ + "type": "connect_leaf", + }) + if err != nil { + return nil, err + } + s.leafWatch = p + s.leafWatch.Handler = s.leafWatchHandler + + //go s.rootsWatch.RunWithClientAndLogger(s.client, s.logger) + //go s.leafWatch.RunWithClientAndLogger(s.client, s.logger) + return s, nil } @@ -201,6 +227,75 @@ func (s *Service) HTTPClient() *http.Client { // Close stops the service and frees resources. func (s *Service) Close() error { - // TODO(banks): stop background activity if started + if s.rootsWatch != nil { + s.rootsWatch.Stop() + } + if s.leafWatch != nil { + s.leafWatch.Stop() + } return nil } + +func (s *Service) rootsWatchHandler(blockParam watch.BlockingParam, raw interface{}) { + if raw == nil { + return + } + v, ok := raw.(*api.CARootList) + if !ok || v == nil { + s.logger.Println("[ERR] got invalid response from root watch") + return + } + + // Got new root certificates, update the tls.Configs. + roots := x509.NewCertPool() + for _, root := range v.Roots { + roots.AppendCertsFromPEM([]byte(root.RootCertPEM)) + } + + // Note that SetTLSConfig takes care of adding a dynamic GetConfigForClient + // hook that will fetch this updated config for new incoming connections on a + // server. That means all future connections are validated against the new + // roots. On a client, we only expose Dial and we fetch the most recent config + // each time so all future Dials (direct or via an http.Client with our dial + // hook) will grab this new config. + newCfg := s.serverTLSCfg.TLSConfig() + // Server-side verification uses ClientCAs. + newCfg.ClientCAs = roots + s.serverTLSCfg.SetTLSConfig(newCfg) + + newCfg = s.clientTLSCfg.TLSConfig() + // Client-side verification uses RootCAs. 
+ newCfg.RootCAs = roots + s.clientTLSCfg.SetTLSConfig(newCfg) +} + +func (s *Service) leafWatchHandler(blockParam watch.BlockingParam, raw interface{}) { + if raw == nil { + return // ignore + } + v, ok := raw.(*api.LeafCert) + if !ok || v == nil { + s.logger.Println("[ERR] got invalid response from root watch") + return + } + + // Got new leaf, update the tls.Configs + cert, err := tls.X509KeyPair([]byte(v.CertPEM), []byte(v.PrivateKeyPEM)) + if err != nil { + s.logger.Printf("[ERR] failed to parse new leaf cert: %s", err) + return + } + + // Note that SetTLSConfig takes care of adding a dynamic GetClientCertificate + // hook that will fetch the first cert from the Certificates slice of the + // current config for each outbound client request even if the client is using + // an old version of the config struct so all we need to do it set that and + // all existing clients will start using the new cert. + newCfg := s.serverTLSCfg.TLSConfig() + newCfg.Certificates = []tls.Certificate{cert} + s.serverTLSCfg.SetTLSConfig(newCfg) + + newCfg = s.clientTLSCfg.TLSConfig() + newCfg.Certificates = []tls.Certificate{cert} + s.clientTLSCfg.SetTLSConfig(newCfg) +} diff --git a/watch/funcs.go b/watch/funcs.go index 20265decc..8c5823633 100644 --- a/watch/funcs.go +++ b/watch/funcs.go @@ -3,6 +3,7 @@ package watch import ( "context" "fmt" + "log" consulapi "github.com/hashicorp/consul/api" ) @@ -16,13 +17,16 @@ var watchFuncFactory map[string]watchFactory func init() { watchFuncFactory = map[string]watchFactory{ - "key": keyWatch, - "keyprefix": keyPrefixWatch, - "services": servicesWatch, - "nodes": nodesWatch, - "service": serviceWatch, - "checks": checksWatch, - "event": eventWatch, + "key": keyWatch, + "keyprefix": keyPrefixWatch, + "services": servicesWatch, + "nodes": nodesWatch, + "service": serviceWatch, + "checks": checksWatch, + "event": eventWatch, + "connect_roots": connectRootsWatch, + "connect_leaf": connectLeafWatch, + "connect_proxy_config": connectProxyConfigWatch, } } @@ -40,18 +44,18 @@ func keyWatch(params map[string]interface{}) (WatcherFunc, error) { if key == "" { return nil, fmt.Errorf("Must specify a single key to watch") } - fn := func(p *Plan) (uint64, interface{}, error) { + fn := func(p *Plan) (BlockingParam, interface{}, error) { kv := p.client.KV() opts := makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() pair, meta, err := kv.Get(key, &opts) if err != nil { - return 0, nil, err + return nil, nil, err } if pair == nil { - return meta.LastIndex, nil, err + return WaitIndexVal(meta.LastIndex), nil, err } - return meta.LastIndex, pair, err + return WaitIndexVal(meta.LastIndex), pair, err } return fn, nil } @@ -70,15 +74,15 @@ func keyPrefixWatch(params map[string]interface{}) (WatcherFunc, error) { if prefix == "" { return nil, fmt.Errorf("Must specify a single prefix to watch") } - fn := func(p *Plan) (uint64, interface{}, error) { + fn := func(p *Plan) (BlockingParam, interface{}, error) { kv := p.client.KV() opts := makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() pairs, meta, err := kv.List(prefix, &opts) if err != nil { - return 0, nil, err + return nil, nil, err } - return meta.LastIndex, pairs, err + return WaitIndexVal(meta.LastIndex), pairs, err } return fn, nil } @@ -90,15 +94,15 @@ func servicesWatch(params map[string]interface{}) (WatcherFunc, error) { return nil, err } - fn := func(p *Plan) (uint64, interface{}, error) { + fn := func(p *Plan) (BlockingParam, interface{}, error) { catalog := p.client.Catalog() opts := 
makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() services, meta, err := catalog.Services(&opts) if err != nil { - return 0, nil, err + return nil, nil, err } - return meta.LastIndex, services, err + return WaitIndexVal(meta.LastIndex), services, err } return fn, nil } @@ -110,15 +114,15 @@ func nodesWatch(params map[string]interface{}) (WatcherFunc, error) { return nil, err } - fn := func(p *Plan) (uint64, interface{}, error) { + fn := func(p *Plan) (BlockingParam, interface{}, error) { catalog := p.client.Catalog() opts := makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() nodes, meta, err := catalog.Nodes(&opts) if err != nil { - return 0, nil, err + return nil, nil, err } - return meta.LastIndex, nodes, err + return WaitIndexVal(meta.LastIndex), nodes, err } return fn, nil } @@ -147,15 +151,15 @@ func serviceWatch(params map[string]interface{}) (WatcherFunc, error) { return nil, err } - fn := func(p *Plan) (uint64, interface{}, error) { + fn := func(p *Plan) (BlockingParam, interface{}, error) { health := p.client.Health() opts := makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() nodes, meta, err := health.Service(service, tag, passingOnly, &opts) if err != nil { - return 0, nil, err + return nil, nil, err } - return meta.LastIndex, nodes, err + return WaitIndexVal(meta.LastIndex), nodes, err } return fn, nil } @@ -181,7 +185,7 @@ func checksWatch(params map[string]interface{}) (WatcherFunc, error) { state = "any" } - fn := func(p *Plan) (uint64, interface{}, error) { + fn := func(p *Plan) (BlockingParam, interface{}, error) { health := p.client.Health() opts := makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() @@ -194,9 +198,9 @@ func checksWatch(params map[string]interface{}) (WatcherFunc, error) { checks, meta, err = health.Checks(service, &opts) } if err != nil { - return 0, nil, err + return nil, nil, err } - return meta.LastIndex, checks, err + return WaitIndexVal(meta.LastIndex), checks, err } return fn, nil } @@ -210,23 +214,98 @@ func eventWatch(params map[string]interface{}) (WatcherFunc, error) { return nil, err } - fn := func(p *Plan) (uint64, interface{}, error) { + fn := func(p *Plan) (BlockingParam, interface{}, error) { event := p.client.Event() opts := makeQueryOptionsWithContext(p, false) defer p.cancelFunc() events, meta, err := event.List(name, &opts) if err != nil { - return 0, nil, err + return nil, nil, err } // Prune to only the new events for i := 0; i < len(events); i++ { - if event.IDToIndex(events[i].ID) == p.lastIndex { + if WaitIndexVal(event.IDToIndex(events[i].ID)).Equal(p.lastParamVal) { events = events[i+1:] break } } - return meta.LastIndex, events, err + return WaitIndexVal(meta.LastIndex), events, err + } + return fn, nil +} + +// connectRootsWatch is used to watch for changes to Connect Root certificates. +func connectRootsWatch(params map[string]interface{}) (WatcherFunc, error) { + // We don't support stale since roots are likely to be cached locally in the + // agent anyway. + + fn := func(p *Plan) (BlockingParam, interface{}, error) { + agent := p.client.Agent() + opts := makeQueryOptionsWithContext(p, false) + defer p.cancelFunc() + + roots, meta, err := agent.ConnectCARoots(&opts) + if err != nil { + return nil, nil, err + } + + return WaitIndexVal(meta.LastIndex), roots, err + } + return fn, nil +} + +// connectLeafWatch is used to watch for changes to Connect Leaf certificates +// for given local service id. 
+func connectLeafWatch(params map[string]interface{}) (WatcherFunc, error) { + // We don't support stale since certs are likely to be cached locally in the + // agent anyway. + + var serviceID string + if err := assignValue(params, "service_id", &serviceID); err != nil { + return nil, err + } + + fn := func(p *Plan) (BlockingParam, interface{}, error) { + agent := p.client.Agent() + opts := makeQueryOptionsWithContext(p, false) + defer p.cancelFunc() + + leaf, meta, err := agent.ConnectCALeaf(serviceID, &opts) + if err != nil { + return nil, nil, err + } + + return WaitIndexVal(meta.LastIndex), leaf, err + } + return fn, nil +} + +// connectProxyConfigWatch is used to watch for changes to Connect managed proxy +// configuration. Note that this state is agent-local so the watch mechanism +// uses `hash` rather than `index` for deciding whether to block. +func connectProxyConfigWatch(params map[string]interface{}) (WatcherFunc, error) { + // We don't support consistency modes since it's agent local data + + var proxyServiceID string + if err := assignValue(params, "proxy_service_id", &proxyServiceID); err != nil { + return nil, err + } + + fn := func(p *Plan) (BlockingParam, interface{}, error) { + agent := p.client.Agent() + opts := makeQueryOptionsWithContext(p, false) + defer p.cancelFunc() + + log.Printf("DEBUG: id: %s, opts: %v", proxyServiceID, opts) + + config, _, err := agent.ConnectProxyConfig(proxyServiceID, &opts) + if err != nil { + return nil, nil, err + } + + // Return string ContentHash since we don't have Raft indexes to block on. + return WaitHashVal(config.ContentHash), config, err } return fn, nil } @@ -234,6 +313,12 @@ func eventWatch(params map[string]interface{}) (WatcherFunc, error) { func makeQueryOptionsWithContext(p *Plan, stale bool) consulapi.QueryOptions { ctx, cancel := context.WithCancel(context.Background()) p.cancelFunc = cancel - opts := consulapi.QueryOptions{AllowStale: stale, WaitIndex: p.lastIndex} + opts := consulapi.QueryOptions{AllowStale: stale} + switch param := p.lastParamVal.(type) { + case WaitIndexVal: + opts.WaitIndex = uint64(param) + case WaitHashVal: + opts.WaitHash = string(param) + } return *opts.WithContext(ctx) } diff --git a/watch/funcs_test.go b/watch/funcs_test.go index 190ae24fa..89c5a1e80 100644 --- a/watch/funcs_test.go +++ b/watch/funcs_test.go @@ -8,8 +8,10 @@ import ( "time" "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/agent/structs" consulapi "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/watch" + "github.com/stretchr/testify/require" ) var errBadContent = errors.New("bad content") @@ -30,7 +32,7 @@ func TestKeyWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"key", "key":"foo/bar/baz"}`) - plan.Handler = func(idx uint64, raw interface{}) { + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { if raw == nil { return // ignore } @@ -84,7 +86,7 @@ func TestKeyWatch_With_PrefixDelete(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"key", "key":"foo/bar/baz"}`) - plan.Handler = func(idx uint64, raw interface{}) { + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { if raw == nil { return // ignore } @@ -138,7 +140,7 @@ func TestKeyPrefixWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"keyprefix", "prefix":"foo/"}`) - plan.Handler = func(idx uint64, raw interface{}) { + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { if raw == nil { return // ignore } @@ 
-191,7 +193,7 @@ func TestServicesWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"services"}`) - plan.Handler = func(idx uint64, raw interface{}) { + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { if raw == nil { return // ignore } @@ -245,7 +247,7 @@ func TestNodesWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"nodes"}`) - plan.Handler = func(idx uint64, raw interface{}) { + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { if raw == nil { return // ignore } @@ -296,7 +298,7 @@ func TestServiceWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"service", "service":"foo", "tag":"bar", "passingonly":true}`) - plan.Handler = func(idx uint64, raw interface{}) { + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { if raw == nil { return // ignore } @@ -352,7 +354,7 @@ func TestChecksWatch_State(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"checks", "state":"warning"}`) - plan.Handler = func(idx uint64, raw interface{}) { + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { if raw == nil { return // ignore } @@ -413,7 +415,7 @@ func TestChecksWatch_Service(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"checks", "service":"foobar"}`) - plan.Handler = func(idx uint64, raw interface{}) { + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { if raw == nil { return // ignore } @@ -479,7 +481,7 @@ func TestEventWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"event", "name": "foo"}`) - plan.Handler = func(idx uint64, raw interface{}) { + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { if raw == nil { return } @@ -523,6 +525,220 @@ func TestEventWatch(t *testing.T) { wg.Wait() } +func TestConnectRootsWatch(t *testing.T) { + // TODO(banks) enable and make it work once this is supported. Note that this + // test actually passes currently just by busy-polling the roots endpoint + // until it changes. + t.Skip("CA and Leaf implementation don't actually support blocking yet") + t.Parallel() + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + + invoke := makeInvokeCh() + plan := mustParse(t, `{"type":"connect_roots"}`) + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + if raw == nil { + return // ignore + } + v, ok := raw.(*consulapi.CARootList) + if !ok || v == nil { + return // ignore + } + // TODO(banks): verify the right roots came back. + invoke <- nil + } + + var wg sync.WaitGroup + wg.Add(1) + go func() { + defer wg.Done() + time.Sleep(20 * time.Millisecond) + // TODO(banks): this is a hack since CA config is in flux. We _did_ expose a + // temporary agent endpoint for PUTing config, but didn't expose it in `api` + // package intentionally. If we are going to hack around with temporary API, + // we can might as well drop right down to the RPC level... 
+ args := structs.CAConfiguration{ + Provider: "static", + Config: map[string]interface{}{ + "Name": "test-1", + "Generate": true, + }, + } + var reply interface{} + if err := a.RPC("ConnectCA.ConfigurationSet", &args, &reply); err != nil { + t.Fatalf("err: %v", err) + } + + }() + + wg.Add(1) + go func() { + defer wg.Done() + if err := plan.Run(a.HTTPAddr()); err != nil { + t.Fatalf("err: %v", err) + } + }() + + if err := <-invoke; err != nil { + t.Fatalf("err: %v", err) + } + + plan.Stop() + wg.Wait() +} + +func TestConnectLeafWatch(t *testing.T) { + // TODO(banks) enable and make it work once this is supported. + t.Skip("CA and Leaf implementation don't actually support blocking yet") + t.Parallel() + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + + // Register a web service to get certs for + { + agent := a.Client().Agent() + reg := consulapi.AgentServiceRegistration{ + ID: "web", + Name: "web", + Port: 9090, + } + err := agent.ServiceRegister(®) + require.Nil(t, err) + } + + // Setup a new generated CA + // + // TODO(banks): this is a hack since CA config is in flux. We _did_ expose a + // temporary agent endpoint for PUTing config, but didn't expose it in `api` + // package intentionally. If we are going to hack around with temporary API, + // we can might as well drop right down to the RPC level... + args := structs.CAConfiguration{ + Provider: "static", + Config: map[string]interface{}{ + "Name": "test-1", + "Generate": true, + }, + } + var reply interface{} + if err := a.RPC("ConnectCA.ConfigurationSet", &args, &reply); err != nil { + t.Fatalf("err: %v", err) + } + + invoke := makeInvokeCh() + plan := mustParse(t, `{"type":"connect_leaf", "service_id":"web"}`) + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + if raw == nil { + return // ignore + } + v, ok := raw.(*consulapi.LeafCert) + if !ok || v == nil { + return // ignore + } + // TODO(banks): verify the right leaf came back. + invoke <- nil + } + + var wg sync.WaitGroup + wg.Add(1) + go func() { + defer wg.Done() + time.Sleep(20 * time.Millisecond) + + // Change the CA which should eventually trigger a leaf change but probably + // won't now so this test has no way to succeed yet. 
+ args := structs.CAConfiguration{ + Provider: "static", + Config: map[string]interface{}{ + "Name": "test-2", + "Generate": true, + }, + } + var reply interface{} + if err := a.RPC("ConnectCA.ConfigurationSet", &args, &reply); err != nil { + t.Fatalf("err: %v", err) + } + }() + + wg.Add(1) + go func() { + defer wg.Done() + if err := plan.Run(a.HTTPAddr()); err != nil { + t.Fatalf("err: %v", err) + } + }() + + if err := <-invoke; err != nil { + t.Fatalf("err: %v", err) + } + + plan.Stop() + wg.Wait() +} + +func TestConnectProxyConfigWatch(t *testing.T) { + t.Parallel() + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + + // Register a local agent service with a managed proxy + reg := &consulapi.AgentServiceRegistration{ + Name: "web", + Port: 8080, + Connect: &consulapi.AgentServiceConnect{ + Proxy: &consulapi.AgentServiceConnectProxy{ + Config: map[string]interface{}{ + "foo": "bar", + }, + }, + }, + } + client := a.Client() + agent := client.Agent() + err := agent.ServiceRegister(reg) + require.NoError(t, err) + + invoke := makeInvokeCh() + plan := mustParse(t, `{"type":"connect_proxy_config", "proxy_service_id":"web-proxy"}`) + plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + if raw == nil { + return // ignore + } + v, ok := raw.(*consulapi.ConnectProxyConfig) + if !ok || v == nil { + return // ignore + } + invoke <- nil + } + + var wg sync.WaitGroup + wg.Add(1) + go func() { + defer wg.Done() + time.Sleep(20 * time.Millisecond) + + // Change the proxy's config + reg.Connect.Proxy.Config["foo"] = "buzz" + reg.Connect.Proxy.Config["baz"] = "qux" + err := agent.ServiceRegister(reg) + require.NoError(t, err) + }() + + wg.Add(1) + go func() { + defer wg.Done() + if err := plan.Run(a.HTTPAddr()); err != nil { + t.Fatalf("err: %v", err) + } + }() + + if err := <-invoke; err != nil { + t.Fatalf("err: %v", err) + } + + plan.Stop() + wg.Wait() +} + func mustParse(t *testing.T, q string) *watch.Plan { var params map[string]interface{} if err := json.Unmarshal([]byte(q), ¶ms); err != nil { diff --git a/watch/plan.go b/watch/plan.go index fff9da7c7..6292c19a4 100644 --- a/watch/plan.go +++ b/watch/plan.go @@ -37,7 +37,6 @@ func (p *Plan) RunWithConfig(address string, conf *consulapi.Config) error { if err != nil { return fmt.Errorf("Failed to connect to agent: %v", err) } - p.client = client // Create the logger output := p.LogOutput @@ -46,12 +45,24 @@ func (p *Plan) RunWithConfig(address string, conf *consulapi.Config) error { } logger := log.New(output, "", log.LstdFlags) + return p.RunWithClientAndLogger(client, logger) +} + +// RunWithClientAndLogger runs a watch plan using an external client and +// log.Logger instance. Using this, the plan's Datacenter, Token and LogOutput +// fields are ignored and the passed client is expected to be configured as +// needed. 
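A sketch of driving the new entry point from library code, as the watch package stands at this point in the series (the proxy service ID is illustrative):

```go
// Sketch: build a plan with watch.Parse and run it against an existing client.
package main

import (
	"log"
	"os"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/watch"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	logger := log.New(os.Stderr, "", log.LstdFlags)

	plan, err := watch.Parse(map[string]interface{}{
		"type":             "connect_proxy_config",
		"proxy_service_id": "web-proxy", // illustrative ID
	})
	if err != nil {
		log.Fatal(err)
	}
	plan.Handler = func(val watch.BlockingParam, raw interface{}) {
		hash, ok := val.(watch.WaitHashVal)
		if !ok {
			logger.Printf("[WARN] expected hash-based watch, got %T", val)
			return
		}
		logger.Printf("proxy config changed, hash=%s: %v", hash, raw)
	}

	// Blocks until Stop is called or an unrecoverable error occurs.
	if err := plan.RunWithClientAndLogger(client, logger); err != nil {
		log.Fatal(err)
	}
}
```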
+func (p *Plan) RunWithClientAndLogger(client *consulapi.Client, + logger *log.Logger) error { + + p.client = client + // Loop until we are canceled failures := 0 OUTER: for !p.shouldStop() { // Invoke the handler - index, result, err := p.Watcher(p) + blockParamVal, result, err := p.Watcher(p) // Check if we should terminate since the function // could have blocked for a while @@ -63,7 +74,11 @@ OUTER: if err != nil { // Perform an exponential backoff failures++ - p.lastIndex = 0 + if blockParamVal == nil { + p.lastParamVal = nil + } else { + p.lastParamVal = blockParamVal.Next(p.lastParamVal) + } retry := retryInterval * time.Duration(failures*failures) if retry > maxBackoffTime { retry = maxBackoffTime @@ -82,24 +97,21 @@ OUTER: failures = 0 // If the index is unchanged do nothing - if index == p.lastIndex { + if p.lastParamVal != nil && p.lastParamVal.Equal(blockParamVal) { continue } // Update the index, look for change - oldIndex := p.lastIndex - p.lastIndex = index - if oldIndex != 0 && reflect.DeepEqual(p.lastResult, result) { + oldParamVal := p.lastParamVal + p.lastParamVal = blockParamVal.Next(oldParamVal) + if oldParamVal != nil && reflect.DeepEqual(p.lastResult, result) { continue } - if p.lastIndex < oldIndex { - p.lastIndex = 0 - } // Handle the updated result p.lastResult = result if p.Handler != nil { - p.Handler(index, result) + p.Handler(blockParamVal, result) } } return nil diff --git a/watch/plan_test.go b/watch/plan_test.go index 16e4cfbc2..6099dc294 100644 --- a/watch/plan_test.go +++ b/watch/plan_test.go @@ -10,9 +10,12 @@ func init() { } func noopWatch(params map[string]interface{}) (WatcherFunc, error) { - fn := func(p *Plan) (uint64, interface{}, error) { - idx := p.lastIndex + 1 - return idx, idx, nil + fn := func(p *Plan) (BlockingParam, interface{}, error) { + idx := WaitIndexVal(0) + if i, ok := p.lastParamVal.(WaitIndexVal); ok { + idx = i + } + return idx + 1, uint64(idx + 1), nil } return fn, nil } @@ -32,7 +35,12 @@ func TestRun_Stop(t *testing.T) { var expect uint64 = 1 doneCh := make(chan struct{}) - plan.Handler = func(idx uint64, val interface{}) { + plan.Handler = func(blockParamVal BlockingParam, val interface{}) { + idxVal, ok := blockParamVal.(WaitIndexVal) + if !ok { + t.Fatalf("Expected index-based watch") + } + idx := uint64(idxVal) if idx != expect { t.Fatalf("Bad: %d %d", expect, idx) } diff --git a/watch/watch.go b/watch/watch.go index cdf534296..b520d702e 100644 --- a/watch/watch.go +++ b/watch/watch.go @@ -28,10 +28,10 @@ type Plan struct { Handler HandlerFunc LogOutput io.Writer - address string - client *consulapi.Client - lastIndex uint64 - lastResult interface{} + address string + client *consulapi.Client + lastParamVal BlockingParam + lastResult interface{} stop bool stopCh chan struct{} @@ -48,11 +48,72 @@ type HttpHandlerConfig struct { TLSSkipVerify bool `mapstructure:"tls_skip_verify"` } -// WatcherFunc is used to watch for a diff -type WatcherFunc func(*Plan) (uint64, interface{}, error) +// BlockingParam is an interface representing the common operations needed for +// different styles of blocking. It's used to abstract the core watch plan from +// whether we are performing index-based or hash-based blocking. +type BlockingParam interface { + // Equal returns whether the other param value should be considered equal + // (i.e. representing no change in the watched resource). Equal must not panic + // if other is nil. + Equal(other BlockingParam) bool + + // Next is called when deciding which value to use on the next blocking call. 
+ // It assumes the BlockingParam value it is called on is the most recent one + // returned and passes the previous one which may be nil as context. This + // allows types to customise logic around ordering without assuming there is + // an order. For example WaitIndexVal can check that the index didn't go + // backwards and if it did then reset to 0. Most other cases should just + // return themselves (the most recent value) to be used in the next request. + Next(previous BlockingParam) BlockingParam +} + +// WaitIndexVal is a type representing a Consul index that implements +// BlockingParam. +type WaitIndexVal uint64 + +// Equal implements BlockingParam +func (idx WaitIndexVal) Equal(other BlockingParam) bool { + if otherIdx, ok := other.(WaitIndexVal); ok { + return idx == otherIdx + } + return false +} + +// Next implements BlockingParam +func (idx WaitIndexVal) Next(previous BlockingParam) BlockingParam { + if previous == nil { + return idx + } + prevIdx, ok := previous.(WaitIndexVal) + if ok && prevIdx > idx { + // This value is smaller than the previous index, reset. + return WaitIndexVal(0) + } + return idx +} + +// WaitHashVal is a type representing a Consul content hash that implements +// BlockingParam. +type WaitHashVal string + +// Equal implements BlockingParam +func (h WaitHashVal) Equal(other BlockingParam) bool { + if otherHash, ok := other.(WaitHashVal); ok { + return h == otherHash + } + return false +} + +// Next implements BlockingParam +func (h WaitHashVal) Next(previous BlockingParam) BlockingParam { + return h +} + +// WatcherFunc is used to watch for a diff. +type WatcherFunc func(*Plan) (BlockingParam, interface{}, error) // HandlerFunc is used to handle new data -type HandlerFunc func(uint64, interface{}) +type HandlerFunc func(BlockingParam, interface{}) // Parse takes a watch query and compiles it into a WatchPlan or an error func Parse(params map[string]interface{}) (*Plan, error) { From eca94dcc9245a79c3312e5db0f2732e05d44cc5d Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Wed, 25 Apr 2018 14:53:30 +0100 Subject: [PATCH 174/539] Working proxy config reload tests --- connect/proxy/config.go | 133 +++++++++++++++--- connect/proxy/config_test.go | 125 ++++++++++++++++ connect/proxy/testdata/config-kitchensink.hcl | 3 +- key.pem | 0 watch/funcs.go | 3 - 5 files changed, 242 insertions(+), 22 deletions(-) create mode 100644 key.pem diff --git a/connect/proxy/config.go b/connect/proxy/config.go index a8f83d22c..3bd4db38b 100644 --- a/connect/proxy/config.go +++ b/connect/proxy/config.go @@ -5,8 +5,11 @@ import ( "io/ioutil" "log" + "github.com/mitchellh/mapstructure" + "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/connect" + "github.com/hashicorp/consul/watch" "github.com/hashicorp/hcl" ) @@ -59,21 +62,23 @@ type Config struct { // PublicListenerConfig contains the parameters needed for the incoming mTLS // listener. type PublicListenerConfig struct { - // BindAddress is the host:port the public mTLS listener will bind to. - BindAddress string `json:"bind_address" hcl:"bind_address"` + // BindAddress is the host/IP the public mTLS listener will bind to. + BindAddress string `json:"bind_address" hcl:"bind_address" mapstructure:"bind_address"` + + BindPort string `json:"bind_port" hcl:"bind_port" mapstructure:"bind_port"` // LocalServiceAddress is the host:port for the proxied application. This // should be on loopback or otherwise protected as it's plain TCP. 
- LocalServiceAddress string `json:"local_service_address" hcl:"local_service_address"` + LocalServiceAddress string `json:"local_service_address" hcl:"local_service_address" mapstructure:"local_service_address"` // LocalConnectTimeout is the timeout for establishing connections with the // local backend. Defaults to 1000 (1s). - LocalConnectTimeoutMs int `json:"local_connect_timeout_ms" hcl:"local_connect_timeout_ms"` + LocalConnectTimeoutMs int `json:"local_connect_timeout_ms" hcl:"local_connect_timeout_ms" mapstructure:"local_connect_timeout_ms"` // HandshakeTimeout is the timeout for incoming mTLS clients to complete a // handshake. Setting this low avoids DOS by malicious clients holding // resources open. Defaults to 10000 (10s). - HandshakeTimeoutMs int `json:"handshake_timeout_ms" hcl:"handshake_timeout_ms"` + HandshakeTimeoutMs int `json:"handshake_timeout_ms" hcl:"handshake_timeout_ms" mapstructure:"handshake_timeout_ms"` } // applyDefaults sets zero-valued params to a sane default. @@ -88,26 +93,28 @@ func (plc *PublicListenerConfig) applyDefaults() { // UpstreamConfig configures an upstream (outgoing) listener. type UpstreamConfig struct { - // LocalAddress is the host:port to listen on for local app connections. - LocalBindAddress string `json:"local_bind_address" hcl:"local_bind_address,attr"` + // LocalAddress is the host/ip to listen on for local app connections. Defaults to 127.0.0.1. + LocalBindAddress string `json:"local_bind_address" hcl:"local_bind_address,attr" mapstructure:"local_bind_address"` + + LocalBindPort int `json:"local_bind_port" hcl:"local_bind_port,attr" mapstructure:"local_bind_port"` // DestinationName is the service name of the destination. - DestinationName string `json:"destination_name" hcl:"destination_name,attr"` + DestinationName string `json:"destination_name" hcl:"destination_name,attr" mapstructure:"destination_name"` // DestinationNamespace is the namespace of the destination. - DestinationNamespace string `json:"destination_namespace" hcl:"destination_namespace,attr"` + DestinationNamespace string `json:"destination_namespace" hcl:"destination_namespace,attr" mapstructure:"destination_namespace"` // DestinationType determines which service discovery method is used to find a // candidate instance to connect to. - DestinationType string `json:"destination_type" hcl:"destination_type,attr"` + DestinationType string `json:"destination_type" hcl:"destination_type,attr" mapstructure:"destination_type"` // DestinationDatacenter is the datacenter the destination is in. If empty, // defaults to discovery within the same datacenter. - DestinationDatacenter string `json:"destination_datacenter" hcl:"destination_datacenter,attr"` + DestinationDatacenter string `json:"destination_datacenter" hcl:"destination_datacenter,attr" mapstructure:"destination_datacenter"` // ConnectTimeout is the timeout for establishing connections with the remote // service instance. Defaults to 10,000 (10s). - ConnectTimeoutMs int `json:"connect_timeout_ms" hcl:"connect_timeout_ms,attr"` + ConnectTimeoutMs int `json:"connect_timeout_ms" hcl:"connect_timeout_ms,attr" mapstructure:"connect_timeout_ms"` // resolver is used to plug in the service discover mechanism. It can be used // in tests to bypass discovery. 
In real usage it is used to inject the @@ -121,13 +128,22 @@ func (uc *UpstreamConfig) applyDefaults() { if uc.ConnectTimeoutMs == 0 { uc.ConnectTimeoutMs = 10000 } + if uc.DestinationType == "" { + uc.DestinationType = "service" + } + if uc.DestinationNamespace == "" { + uc.DestinationNamespace = "default" + } + if uc.LocalBindAddress == "" { + uc.LocalBindAddress = "127.0.0.1" + } } // String returns a string that uniquely identifies the Upstream. Used for // identifying the upstream in log output and map keys. func (uc *UpstreamConfig) String() string { - return fmt.Sprintf("%s->%s:%s/%s", uc.LocalBindAddress, uc.DestinationType, - uc.DestinationNamespace, uc.DestinationName) + return fmt.Sprintf("%s:%d->%s:%s/%s", uc.LocalBindAddress, uc.LocalBindPort, + uc.DestinationType, uc.DestinationNamespace, uc.DestinationName) } // UpstreamResolverFromClient returns a ConsulResolver that can resolve the @@ -212,12 +228,93 @@ type AgentConfigWatcher struct { client *api.Client proxyID string logger *log.Logger + ch chan *Config + plan *watch.Plan +} + +// NewAgentConfigWatcher creates an AgentConfigWatcher. +func NewAgentConfigWatcher(client *api.Client, proxyID string, + logger *log.Logger) (*AgentConfigWatcher, error) { + w := &AgentConfigWatcher{ + client: client, + proxyID: proxyID, + logger: logger, + ch: make(chan *Config), + } + + // Setup watch plan for config + plan, err := watch.Parse(map[string]interface{}{ + "type": "connect_proxy_config", + "proxy_service_id": w.proxyID, + }) + if err != nil { + return nil, err + } + w.plan = plan + w.plan.Handler = w.handler + go w.plan.RunWithClientAndLogger(w.client, w.logger) + return w, nil +} + +func (w *AgentConfigWatcher) handler(blockVal watch.BlockingParam, + val interface{}) { + log.Printf("DEBUG: got hash %s", blockVal.(watch.WaitHashVal)) + + resp, ok := val.(*api.ConnectProxyConfig) + if !ok { + w.logger.Printf("[WARN] proxy config watch returned bad response: %v", val) + return + } + + // Setup Service instance now we know target ID etc + service, err := connect.NewService(resp.TargetServiceID, w.client) + if err != nil { + w.logger.Printf("[WARN] proxy config watch failed to initialize"+ + " service: %s", err) + return + } + + // Create proxy config from the response + cfg := &Config{ + ProxyID: w.proxyID, + // Token should be already setup in the client + ProxiedServiceID: resp.TargetServiceID, + ProxiedServiceNamespace: "default", + service: service, + } + + // Unmarshal configs + err = mapstructure.Decode(resp.Config, &cfg.PublicListener) + if err != nil { + w.logger.Printf("[ERR] proxy config watch public listener config "+ + "couldn't be parsed: %s", err) + return + } + cfg.PublicListener.applyDefaults() + + err = mapstructure.Decode(resp.Config["upstreams"], &cfg.Upstreams) + if err != nil { + w.logger.Printf("[ERR] proxy config watch upstream listener config "+ + "couldn't be parsed: %s", err) + return + } + for i := range cfg.Upstreams { + cfg.Upstreams[i].applyDefaults() + } + + // Parsed config OK, deliver it! + w.ch <- cfg } // Watch implements ConfigWatcher. func (w *AgentConfigWatcher) Watch() <-chan *Config { - watch := make(chan *Config) - // TODO implement me, note we need to discover the Service instance to use and - // set it on the Config we return. 
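A sketch of how a proxy process might consume the watcher added here, assuming only the exported pieces shown in this patch (NewAgentConfigWatcher, Watch, and the Close method just below); the proxy service ID is illustrative:

```go
// Sketch: receive full proxy configs as the agent-local state changes.
package main

import (
	"log"
	"os"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/connect/proxy"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	logger := log.New(os.Stderr, "", log.LstdFlags)

	w, err := proxy.NewAgentConfigWatcher(client, "web-proxy", logger)
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Each (re)load of the managed proxy's config arrives as a complete
	// *proxy.Config; a real proxy would diff listeners and restart as needed.
	for cfg := range w.Watch() {
		logger.Printf("got proxy config: public listener %s:%s, %d upstreams",
			cfg.PublicListener.BindAddress, cfg.PublicListener.BindPort,
			len(cfg.Upstreams))
	}
}
```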
- return watch + return w.ch +} + +// Close frees watcher resources and implements io.Closer +func (w *AgentConfigWatcher) Close() error { + if w.plan != nil { + w.plan.Stop() + } + return nil } diff --git a/connect/proxy/config_test.go b/connect/proxy/config_test.go index 96782b12e..855eaddf1 100644 --- a/connect/proxy/config_test.go +++ b/connect/proxy/config_test.go @@ -1,8 +1,15 @@ package proxy import ( + "log" + "os" "testing" + "time" + "github.com/stretchr/testify/assert" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/connect" "github.com/stretchr/testify/require" ) @@ -106,3 +113,121 @@ func TestUpstreamResolverFromClient(t *testing.T) { }) } } + +func TestAgentConfigWatcher(t *testing.T) { + a := agent.NewTestAgent("agent_smith", "") + + client := a.Client() + agent := client.Agent() + + // Register a service with a proxy + // Register a local agent service with a managed proxy + reg := &api.AgentServiceRegistration{ + Name: "web", + Port: 8080, + Connect: &api.AgentServiceConnect{ + Proxy: &api.AgentServiceConnectProxy{ + Config: map[string]interface{}{ + "bind_address": "10.10.10.10", + "bind_port": "1010", + "local_service_address": "127.0.0.1:5000", + "handshake_timeout_ms": 999, + "upstreams": []interface{}{ + map[string]interface{}{ + "destination_name": "db", + "local_bind_port": 9191, + }, + }, + }, + }, + }, + } + err := agent.ServiceRegister(reg) + require.NoError(t, err) + + w, err := NewAgentConfigWatcher(client, "web-proxy", + log.New(os.Stderr, "", log.LstdFlags)) + require.NoError(t, err) + + cfg := testGetConfigValTimeout(t, w, 500*time.Millisecond) + + expectCfg := &Config{ + ProxyID: w.proxyID, + ProxiedServiceID: "web", + ProxiedServiceNamespace: "default", + PublicListener: PublicListenerConfig{ + BindAddress: "10.10.10.10", + BindPort: "1010", + LocalServiceAddress: "127.0.0.1:5000", + HandshakeTimeoutMs: 999, + LocalConnectTimeoutMs: 1000, // from applyDefaults + }, + Upstreams: []UpstreamConfig{ + { + DestinationName: "db", + DestinationNamespace: "default", + DestinationType: "service", + LocalBindPort: 9191, + LocalBindAddress: "127.0.0.1", + ConnectTimeoutMs: 10000, // from applyDefaults + }, + }, + } + + // nil this out as comparisons are problematic, we'll explicitly sanity check + // it's reasonable later. + assert.NotNil(t, cfg.service) + cfg.service = nil + + assert.Equal(t, expectCfg, cfg) + + // TODO(banks): Sanity check the service is viable and gets TLS certs eventually from + // the agent. + + // Now keep watching and update the config. 
+ go func() { + // Wait for watcher to be watching + time.Sleep(20 * time.Millisecond) + upstreams := reg.Connect.Proxy.Config["upstreams"].([]interface{}) + upstreams = append(upstreams, map[string]interface{}{ + "destination_name": "cache", + "local_bind_port": 9292, + "local_bind_address": "127.10.10.10", + }) + reg.Connect.Proxy.Config["upstreams"] = upstreams + reg.Connect.Proxy.Config["local_connect_timeout_ms"] = 444 + err := agent.ServiceRegister(reg) + require.NoError(t, err) + }() + + cfg = testGetConfigValTimeout(t, w, 2*time.Second) + + expectCfg.Upstreams = append(expectCfg.Upstreams, UpstreamConfig{ + DestinationName: "cache", + DestinationNamespace: "default", + DestinationType: "service", + ConnectTimeoutMs: 10000, // from applyDefaults + LocalBindPort: 9292, + LocalBindAddress: "127.10.10.10", + }) + expectCfg.PublicListener.LocalConnectTimeoutMs = 444 + + // nil this out as comparisons are problematic, we'll explicitly sanity check + // it's reasonable later. + assert.NotNil(t, cfg.service) + cfg.service = nil + + assert.Equal(t, expectCfg, cfg) +} + +func testGetConfigValTimeout(t *testing.T, w ConfigWatcher, + timeout time.Duration) *Config { + t.Helper() + select { + case cfg := <-w.Watch(): + return cfg + case <-time.After(timeout): + t.Fatalf("timeout after %s waiting for config update", timeout) + return nil + } +} diff --git a/connect/proxy/testdata/config-kitchensink.hcl b/connect/proxy/testdata/config-kitchensink.hcl index 2bda99791..fccfdffd0 100644 --- a/connect/proxy/testdata/config-kitchensink.hcl +++ b/connect/proxy/testdata/config-kitchensink.hcl @@ -12,7 +12,8 @@ dev_service_cert_file = "connect/testdata/ca1-svc-web.cert.pem" dev_service_key_file = "connect/testdata/ca1-svc-web.key.pem" public_listener { - bind_address = ":9999" + bind_address = "127.0.0.1" + bind_port= "9999" local_service_address = "127.0.0.1:5000" } diff --git a/key.pem b/key.pem new file mode 100644 index 000000000..e69de29bb diff --git a/watch/funcs.go b/watch/funcs.go index 8c5823633..3ad7f4f68 100644 --- a/watch/funcs.go +++ b/watch/funcs.go @@ -3,7 +3,6 @@ package watch import ( "context" "fmt" - "log" consulapi "github.com/hashicorp/consul/api" ) @@ -297,8 +296,6 @@ func connectProxyConfigWatch(params map[string]interface{}) (WatcherFunc, error) opts := makeQueryOptionsWithContext(p, false) defer p.cancelFunc() - log.Printf("DEBUG: id: %s, opts: %v", proxyServiceID, opts) - config, _, err := agent.ConnectProxyConfig(proxyServiceID, &opts) if err != nil { return nil, nil, err From 072b2a79cacd0155d2cd6d9c84088952b5329e29 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Wed, 25 Apr 2018 20:41:26 +0100 Subject: [PATCH 175/539] Support legacy watch.HandlerFunc type for backward compat reduces impact of change --- agent/watch_handler.go | 20 +++----------- agent/watch_handler_test.go | 4 +-- command/watch/watch.go | 13 +++------ connect/proxy/config.go | 2 +- connect/service.go | 4 +-- watch/funcs.go | 20 +++++++------- watch/funcs_test.go | 24 ++++++++--------- watch/plan.go | 12 +++++++-- watch/plan_test.go | 53 ++++++++++++++++++++++++++++++++++--- watch/watch.go | 53 ++++++++++++++++++++++--------------- 10 files changed, 125 insertions(+), 80 deletions(-) diff --git a/agent/watch_handler.go b/agent/watch_handler.go index 27c7a430e..4c6a9d3f3 100644 --- a/agent/watch_handler.go +++ b/agent/watch_handler.go @@ -42,13 +42,7 @@ func makeWatchHandler(logOutput io.Writer, handler interface{}) watch.HandlerFun } logger := log.New(logOutput, "", log.LstdFlags) - fn := func(blockVal 
watch.BlockingParam, data interface{}) { - idx, ok := blockVal.(watch.WaitIndexVal) - if !ok { - logger.Printf("[ERR] agent: watch handler doesn't support non-index watches") - return - } - + fn := func(idx uint64, data interface{}) { // Create the command var cmd *osexec.Cmd var err error @@ -64,7 +58,7 @@ func makeWatchHandler(logOutput io.Writer, handler interface{}) watch.HandlerFun } cmd.Env = append(os.Environ(), - "CONSUL_INDEX="+strconv.FormatUint(uint64(idx), 10), + "CONSUL_INDEX="+strconv.FormatUint(idx, 10), ) // Collect the output @@ -102,13 +96,7 @@ func makeWatchHandler(logOutput io.Writer, handler interface{}) watch.HandlerFun func makeHTTPWatchHandler(logOutput io.Writer, config *watch.HttpHandlerConfig) watch.HandlerFunc { logger := log.New(logOutput, "", log.LstdFlags) - fn := func(blockVal watch.BlockingParam, data interface{}) { - idx, ok := blockVal.(watch.WaitIndexVal) - if !ok { - logger.Printf("[ERR] agent: watch handler doesn't support non-index watches") - return - } - + fn := func(idx uint64, data interface{}) { trans := cleanhttp.DefaultTransport() // Skip SSL certificate verification if TLSSkipVerify is true @@ -144,7 +132,7 @@ func makeHTTPWatchHandler(logOutput io.Writer, config *watch.HttpHandlerConfig) } req = req.WithContext(ctx) req.Header.Add("Content-Type", "application/json") - req.Header.Add("X-Consul-Index", strconv.FormatUint(uint64(idx), 10)) + req.Header.Add("X-Consul-Index", strconv.FormatUint(idx, 10)) for key, values := range config.Header { for _, val := range values { req.Header.Add(key, val) diff --git a/agent/watch_handler_test.go b/agent/watch_handler_test.go index 6851baf71..f7ba83b0a 100644 --- a/agent/watch_handler_test.go +++ b/agent/watch_handler_test.go @@ -17,7 +17,7 @@ func TestMakeWatchHandler(t *testing.T) { defer os.Remove("handler_index_out") script := "bash -c 'echo $CONSUL_INDEX >> handler_index_out && cat >> handler_out'" handler := makeWatchHandler(os.Stderr, script) - handler(watch.WaitIndexVal(100), []string{"foo", "bar", "baz"}) + handler(100, []string{"foo", "bar", "baz"}) raw, err := ioutil.ReadFile("handler_out") if err != nil { t.Fatalf("err: %v", err) @@ -62,5 +62,5 @@ func TestMakeHTTPWatchHandler(t *testing.T) { Timeout: time.Minute, } handler := makeHTTPWatchHandler(os.Stderr, &config) - handler(watch.WaitIndexVal(100), []string{"foo", "bar", "baz"}) + handler(100, []string{"foo", "bar", "baz"}) } diff --git a/command/watch/watch.go b/command/watch/watch.go index 2286de1cc..3b8c67836 100644 --- a/command/watch/watch.go +++ b/command/watch/watch.go @@ -154,7 +154,7 @@ func (c *cmd) Run(args []string) int { // 1: true errExit := 0 if len(c.flags.Args()) == 0 { - wp.Handler = func(blockParam consulwatch.BlockingParam, data interface{}) { + wp.Handler = func(idx uint64, data interface{}) { defer wp.Stop() buf, err := json.MarshalIndent(data, "", " ") if err != nil { @@ -164,14 +164,7 @@ func (c *cmd) Run(args []string) int { c.UI.Output(string(buf)) } } else { - wp.Handler = func(blockVal consulwatch.BlockingParam, data interface{}) { - idx, ok := blockVal.(consulwatch.WaitIndexVal) - if !ok { - // TODO(banks): make this work for hash based watches. 
- c.UI.Error("Error: watch handler doesn't support non-index watches") - return - } - + wp.Handler = func(idx uint64, data interface{}) { doneCh := make(chan struct{}) defer close(doneCh) logFn := func(err error) { @@ -192,7 +185,7 @@ func (c *cmd) Run(args []string) int { goto ERR } cmd.Env = append(os.Environ(), - "CONSUL_INDEX="+strconv.FormatUint(uint64(idx), 10), + "CONSUL_INDEX="+strconv.FormatUint(idx, 10), ) // Encode the input diff --git a/connect/proxy/config.go b/connect/proxy/config.go index 3bd4db38b..b5a8c6bb4 100644 --- a/connect/proxy/config.go +++ b/connect/proxy/config.go @@ -256,7 +256,7 @@ func NewAgentConfigWatcher(client *api.Client, proxyID string, return w, nil } -func (w *AgentConfigWatcher) handler(blockVal watch.BlockingParam, +func (w *AgentConfigWatcher) handler(blockVal watch.BlockingParamVal, val interface{}) { log.Printf("DEBUG: got hash %s", blockVal.(watch.WaitHashVal)) diff --git a/connect/service.go b/connect/service.go index 4c8887745..a614f227f 100644 --- a/connect/service.go +++ b/connect/service.go @@ -236,7 +236,7 @@ func (s *Service) Close() error { return nil } -func (s *Service) rootsWatchHandler(blockParam watch.BlockingParam, raw interface{}) { +func (s *Service) rootsWatchHandler(blockParam watch.BlockingParamVal, raw interface{}) { if raw == nil { return } @@ -269,7 +269,7 @@ func (s *Service) rootsWatchHandler(blockParam watch.BlockingParam, raw interfac s.clientTLSCfg.SetTLSConfig(newCfg) } -func (s *Service) leafWatchHandler(blockParam watch.BlockingParam, raw interface{}) { +func (s *Service) leafWatchHandler(blockParam watch.BlockingParamVal, raw interface{}) { if raw == nil { return // ignore } diff --git a/watch/funcs.go b/watch/funcs.go index 3ad7f4f68..5e72e40a6 100644 --- a/watch/funcs.go +++ b/watch/funcs.go @@ -43,7 +43,7 @@ func keyWatch(params map[string]interface{}) (WatcherFunc, error) { if key == "" { return nil, fmt.Errorf("Must specify a single key to watch") } - fn := func(p *Plan) (BlockingParam, interface{}, error) { + fn := func(p *Plan) (BlockingParamVal, interface{}, error) { kv := p.client.KV() opts := makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() @@ -73,7 +73,7 @@ func keyPrefixWatch(params map[string]interface{}) (WatcherFunc, error) { if prefix == "" { return nil, fmt.Errorf("Must specify a single prefix to watch") } - fn := func(p *Plan) (BlockingParam, interface{}, error) { + fn := func(p *Plan) (BlockingParamVal, interface{}, error) { kv := p.client.KV() opts := makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() @@ -93,7 +93,7 @@ func servicesWatch(params map[string]interface{}) (WatcherFunc, error) { return nil, err } - fn := func(p *Plan) (BlockingParam, interface{}, error) { + fn := func(p *Plan) (BlockingParamVal, interface{}, error) { catalog := p.client.Catalog() opts := makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() @@ -113,7 +113,7 @@ func nodesWatch(params map[string]interface{}) (WatcherFunc, error) { return nil, err } - fn := func(p *Plan) (BlockingParam, interface{}, error) { + fn := func(p *Plan) (BlockingParamVal, interface{}, error) { catalog := p.client.Catalog() opts := makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() @@ -150,7 +150,7 @@ func serviceWatch(params map[string]interface{}) (WatcherFunc, error) { return nil, err } - fn := func(p *Plan) (BlockingParam, interface{}, error) { + fn := func(p *Plan) (BlockingParamVal, interface{}, error) { health := p.client.Health() opts := makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() @@ -184,7 
+184,7 @@ func checksWatch(params map[string]interface{}) (WatcherFunc, error) { state = "any" } - fn := func(p *Plan) (BlockingParam, interface{}, error) { + fn := func(p *Plan) (BlockingParamVal, interface{}, error) { health := p.client.Health() opts := makeQueryOptionsWithContext(p, stale) defer p.cancelFunc() @@ -213,7 +213,7 @@ func eventWatch(params map[string]interface{}) (WatcherFunc, error) { return nil, err } - fn := func(p *Plan) (BlockingParam, interface{}, error) { + fn := func(p *Plan) (BlockingParamVal, interface{}, error) { event := p.client.Event() opts := makeQueryOptionsWithContext(p, false) defer p.cancelFunc() @@ -239,7 +239,7 @@ func connectRootsWatch(params map[string]interface{}) (WatcherFunc, error) { // We don't support stale since roots are likely to be cached locally in the // agent anyway. - fn := func(p *Plan) (BlockingParam, interface{}, error) { + fn := func(p *Plan) (BlockingParamVal, interface{}, error) { agent := p.client.Agent() opts := makeQueryOptionsWithContext(p, false) defer p.cancelFunc() @@ -265,7 +265,7 @@ func connectLeafWatch(params map[string]interface{}) (WatcherFunc, error) { return nil, err } - fn := func(p *Plan) (BlockingParam, interface{}, error) { + fn := func(p *Plan) (BlockingParamVal, interface{}, error) { agent := p.client.Agent() opts := makeQueryOptionsWithContext(p, false) defer p.cancelFunc() @@ -291,7 +291,7 @@ func connectProxyConfigWatch(params map[string]interface{}) (WatcherFunc, error) return nil, err } - fn := func(p *Plan) (BlockingParam, interface{}, error) { + fn := func(p *Plan) (BlockingParamVal, interface{}, error) { agent := p.client.Agent() opts := makeQueryOptionsWithContext(p, false) defer p.cancelFunc() diff --git a/watch/funcs_test.go b/watch/funcs_test.go index 89c5a1e80..d5253de44 100644 --- a/watch/funcs_test.go +++ b/watch/funcs_test.go @@ -32,7 +32,7 @@ func TestKeyWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"key", "key":"foo/bar/baz"}`) - plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return // ignore } @@ -86,7 +86,7 @@ func TestKeyWatch_With_PrefixDelete(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"key", "key":"foo/bar/baz"}`) - plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return // ignore } @@ -140,7 +140,7 @@ func TestKeyPrefixWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"keyprefix", "prefix":"foo/"}`) - plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return // ignore } @@ -193,7 +193,7 @@ func TestServicesWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"services"}`) - plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return // ignore } @@ -247,7 +247,7 @@ func TestNodesWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"nodes"}`) - plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return // ignore } @@ -298,7 +298,7 @@ func TestServiceWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"service", "service":"foo", "tag":"bar", "passingonly":true}`) - plan.Handler = func(blockParam 
watch.BlockingParam, raw interface{}) { + plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return // ignore } @@ -354,7 +354,7 @@ func TestChecksWatch_State(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"checks", "state":"warning"}`) - plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return // ignore } @@ -415,7 +415,7 @@ func TestChecksWatch_Service(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"checks", "service":"foobar"}`) - plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return // ignore } @@ -481,7 +481,7 @@ func TestEventWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"event", "name": "foo"}`) - plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return } @@ -536,7 +536,7 @@ func TestConnectRootsWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"connect_roots"}`) - plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return // ignore } @@ -626,7 +626,7 @@ func TestConnectLeafWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"connect_leaf", "service_id":"web"}`) - plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return // ignore } @@ -699,7 +699,7 @@ func TestConnectProxyConfigWatch(t *testing.T) { invoke := makeInvokeCh() plan := mustParse(t, `{"type":"connect_proxy_config", "proxy_service_id":"web-proxy"}`) - plan.Handler = func(blockParam watch.BlockingParam, raw interface{}) { + plan.HybridHandler = func(blockParamVal watch.BlockingParamVal, raw interface{}) { if raw == nil { return // ignore } diff --git a/watch/plan.go b/watch/plan.go index 6292c19a4..1e34e4eac 100644 --- a/watch/plan.go +++ b/watch/plan.go @@ -110,8 +110,16 @@ OUTER: // Handle the updated result p.lastResult = result - if p.Handler != nil { - p.Handler(blockParamVal, result) + // If a hybrid handler exists use that + if p.HybridHandler != nil { + p.HybridHandler(blockParamVal, result) + } else if p.Handler != nil { + idx, ok := blockParamVal.(WaitIndexVal) + if !ok { + logger.Printf("[ERR] consul.watch: Handler only supports index-based " + + " watches but non index-based watch run. 
Skipping Handler.") + } + p.Handler(uint64(idx), result) } } return nil diff --git a/watch/plan_test.go b/watch/plan_test.go index 6099dc294..0ac648508 100644 --- a/watch/plan_test.go +++ b/watch/plan_test.go @@ -10,7 +10,7 @@ func init() { } func noopWatch(params map[string]interface{}) (WatcherFunc, error) { - fn := func(p *Plan) (BlockingParam, interface{}, error) { + fn := func(p *Plan) (BlockingParamVal, interface{}, error) { idx := WaitIndexVal(0) if i, ok := p.lastParamVal.(WaitIndexVal); ok { idx = i @@ -35,10 +35,57 @@ func TestRun_Stop(t *testing.T) { var expect uint64 = 1 doneCh := make(chan struct{}) - plan.Handler = func(blockParamVal BlockingParam, val interface{}) { + plan.Handler = func(idx uint64, val interface{}) { + if idx != expect { + t.Fatalf("Bad: %d %d", expect, idx) + } + if val != expect { + t.Fatalf("Bad: %d %d", expect, val) + } + if expect == 1 { + close(doneCh) + } + expect++ + } + + errCh := make(chan error, 1) + go func() { + errCh <- plan.Run("127.0.0.1:8500") + }() + + select { + case <-doneCh: + plan.Stop() + + case <-time.After(1 * time.Second): + t.Fatalf("handler never ran") + } + + select { + case err := <-errCh: + if err != nil { + t.Fatalf("err: %v", err) + } + + case <-time.After(1 * time.Second): + t.Fatalf("watcher didn't exit") + } + + if expect == 1 { + t.Fatalf("Bad: %d", expect) + } +} + +func TestRun_Stop_Hybrid(t *testing.T) { + t.Parallel() + plan := mustParse(t, `{"type":"noop"}`) + + var expect uint64 = 1 + doneCh := make(chan struct{}) + plan.HybridHandler = func(blockParamVal BlockingParamVal, val interface{}) { idxVal, ok := blockParamVal.(WaitIndexVal) if !ok { - t.Fatalf("Expected index-based watch") + t.Fatalf("expected index-based watch") } idx := uint64(idxVal) if idx != expect { diff --git a/watch/watch.go b/watch/watch.go index b520d702e..1bc3d0ae6 100644 --- a/watch/watch.go +++ b/watch/watch.go @@ -24,13 +24,16 @@ type Plan struct { HandlerType string Exempt map[string]interface{} - Watcher WatcherFunc - Handler HandlerFunc - LogOutput io.Writer + Watcher WatcherFunc + // Handler is kept for backward compatibility but only supports watches based + // on index param. To support hash based watches, set HybridHandler instead. + Handler HandlerFunc + HybridHandler HybridHandlerFunc + LogOutput io.Writer address string client *consulapi.Client - lastParamVal BlockingParam + lastParamVal BlockingParamVal lastResult interface{} stop bool @@ -48,39 +51,39 @@ type HttpHandlerConfig struct { TLSSkipVerify bool `mapstructure:"tls_skip_verify"` } -// BlockingParam is an interface representing the common operations needed for +// BlockingParamVal is an interface representing the common operations needed for // different styles of blocking. It's used to abstract the core watch plan from // whether we are performing index-based or hash-based blocking. -type BlockingParam interface { +type BlockingParamVal interface { // Equal returns whether the other param value should be considered equal // (i.e. representing no change in the watched resource). Equal must not panic // if other is nil. - Equal(other BlockingParam) bool + Equal(other BlockingParamVal) bool // Next is called when deciding which value to use on the next blocking call. - // It assumes the BlockingParam value it is called on is the most recent one + // It assumes the BlockingParamVal value it is called on is the most recent one // returned and passes the previous one which may be nil as context. 
This // allows types to customise logic around ordering without assuming there is // an order. For example WaitIndexVal can check that the index didn't go // backwards and if it did then reset to 0. Most other cases should just // return themselves (the most recent value) to be used in the next request. - Next(previous BlockingParam) BlockingParam + Next(previous BlockingParamVal) BlockingParamVal } // WaitIndexVal is a type representing a Consul index that implements -// BlockingParam. +// BlockingParamVal. type WaitIndexVal uint64 -// Equal implements BlockingParam -func (idx WaitIndexVal) Equal(other BlockingParam) bool { +// Equal implements BlockingParamVal +func (idx WaitIndexVal) Equal(other BlockingParamVal) bool { if otherIdx, ok := other.(WaitIndexVal); ok { return idx == otherIdx } return false } -// Next implements BlockingParam -func (idx WaitIndexVal) Next(previous BlockingParam) BlockingParam { +// Next implements BlockingParamVal +func (idx WaitIndexVal) Next(previous BlockingParamVal) BlockingParamVal { if previous == nil { return idx } @@ -93,27 +96,33 @@ func (idx WaitIndexVal) Next(previous BlockingParam) BlockingParam { } // WaitHashVal is a type representing a Consul content hash that implements -// BlockingParam. +// BlockingParamVal. type WaitHashVal string -// Equal implements BlockingParam -func (h WaitHashVal) Equal(other BlockingParam) bool { +// Equal implements BlockingParamVal +func (h WaitHashVal) Equal(other BlockingParamVal) bool { if otherHash, ok := other.(WaitHashVal); ok { return h == otherHash } return false } -// Next implements BlockingParam -func (h WaitHashVal) Next(previous BlockingParam) BlockingParam { +// Next implements BlockingParamVal +func (h WaitHashVal) Next(previous BlockingParamVal) BlockingParamVal { return h } // WatcherFunc is used to watch for a diff. -type WatcherFunc func(*Plan) (BlockingParam, interface{}, error) +type WatcherFunc func(*Plan) (BlockingParamVal, interface{}, error) -// HandlerFunc is used to handle new data -type HandlerFunc func(BlockingParam, interface{}) +// HandlerFunc is used to handle new data. It only works for index-based watches +// (which is almost all end points currently) and is kept for backwards +// compatibility until more places can make use of hash-based watches too. +type HandlerFunc func(uint64, interface{}) + +// HybridHandlerFunc is used to handle new data. It can support either +// index-based or hash-based watches via the BlockingParamVal. +type HybridHandlerFunc func(BlockingParamVal, interface{}) // Parse takes a watch query and compiles it into a WatchPlan or an error func Parse(params map[string]interface{}) (*Plan, error) { From 2b1660fdf729f9b76e5837e1ead42abf8ff79b12 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Wed, 25 Apr 2018 21:22:31 +0100 Subject: [PATCH 176/539] Fix tests and listeners to work with Config changes (splitting host and port fields) --- connect/proxy/config.go | 4 ++-- connect/proxy/config_test.go | 7 ++++--- connect/proxy/listener.go | 8 ++++++-- connect/proxy/listener_test.go | 19 ++++++++++++------- connect/proxy/testing.go | 13 +++---------- connect/service.go | 4 ++-- 6 files changed, 29 insertions(+), 26 deletions(-) diff --git a/connect/proxy/config.go b/connect/proxy/config.go index b5a8c6bb4..6fad0bd55 100644 --- a/connect/proxy/config.go +++ b/connect/proxy/config.go @@ -65,7 +65,7 @@ type PublicListenerConfig struct { // BindAddress is the host/IP the public mTLS listener will bind to. 
BindAddress string `json:"bind_address" hcl:"bind_address" mapstructure:"bind_address"` - BindPort string `json:"bind_port" hcl:"bind_port" mapstructure:"bind_port"` + BindPort int `json:"bind_port" hcl:"bind_port" mapstructure:"bind_port"` // LocalServiceAddress is the host:port for the proxied application. This // should be on loopback or otherwise protected as it's plain TCP. @@ -251,7 +251,7 @@ func NewAgentConfigWatcher(client *api.Client, proxyID string, return nil, err } w.plan = plan - w.plan.Handler = w.handler + w.plan.HybridHandler = w.handler go w.plan.RunWithClientAndLogger(w.client, w.logger) return w, nil } diff --git a/connect/proxy/config_test.go b/connect/proxy/config_test.go index 855eaddf1..e576d5f82 100644 --- a/connect/proxy/config_test.go +++ b/connect/proxy/config_test.go @@ -24,7 +24,8 @@ func TestParseConfigFile(t *testing.T) { ProxiedServiceID: "web", ProxiedServiceNamespace: "default", PublicListener: PublicListenerConfig{ - BindAddress: ":9999", + BindAddress: "127.0.0.1", + BindPort: 9999, LocalServiceAddress: "127.0.0.1:5000", LocalConnectTimeoutMs: 1000, HandshakeTimeoutMs: 10000, // From defaults @@ -129,7 +130,7 @@ func TestAgentConfigWatcher(t *testing.T) { Proxy: &api.AgentServiceConnectProxy{ Config: map[string]interface{}{ "bind_address": "10.10.10.10", - "bind_port": "1010", + "bind_port": 1010, "local_service_address": "127.0.0.1:5000", "handshake_timeout_ms": 999, "upstreams": []interface{}{ @@ -157,7 +158,7 @@ func TestAgentConfigWatcher(t *testing.T) { ProxiedServiceNamespace: "default", PublicListener: PublicListenerConfig{ BindAddress: "10.10.10.10", - BindPort: "1010", + BindPort: 1010, LocalServiceAddress: "127.0.0.1:5000", HandshakeTimeoutMs: 999, LocalConnectTimeoutMs: 1000, // from applyDefaults diff --git a/connect/proxy/listener.go b/connect/proxy/listener.go index c8e70ac31..12134f840 100644 --- a/connect/proxy/listener.go +++ b/connect/proxy/listener.go @@ -4,6 +4,7 @@ import ( "context" "crypto/tls" "errors" + "fmt" "log" "net" "sync/atomic" @@ -44,7 +45,9 @@ func NewPublicListener(svc *connect.Service, cfg PublicListenerConfig, return &Listener{ Service: svc, listenFunc: func() (net.Listener, error) { - return tls.Listen("tcp", cfg.BindAddress, svc.ServerTLSConfig()) + return tls.Listen("tcp", + fmt.Sprintf("%s:%d", cfg.BindAddress, cfg.BindPort), + svc.ServerTLSConfig()) }, dialFunc: func() (net.Conn, error) { return net.DialTimeout("tcp", cfg.LocalServiceAddress, @@ -63,7 +66,8 @@ func NewUpstreamListener(svc *connect.Service, cfg UpstreamConfig, return &Listener{ Service: svc, listenFunc: func() (net.Listener, error) { - return net.Listen("tcp", cfg.LocalBindAddress) + return net.Listen("tcp", + fmt.Sprintf("%s:%d", cfg.LocalBindAddress, cfg.LocalBindPort)) }, dialFunc: func() (net.Conn, error) { if cfg.resolver == nil { diff --git a/connect/proxy/listener_test.go b/connect/proxy/listener_test.go index 8354fbe58..a0bc640d7 100644 --- a/connect/proxy/listener_test.go +++ b/connect/proxy/listener_test.go @@ -2,6 +2,7 @@ package proxy import ( "context" + "fmt" "log" "net" "os" @@ -9,16 +10,18 @@ import ( agConnect "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/connect" + "github.com/hashicorp/consul/lib/freeport" "github.com/stretchr/testify/require" ) func TestPublicListener(t *testing.T) { ca := agConnect.TestCA(t, nil) - addrs := TestLocalBindAddrs(t, 2) + ports := freeport.GetT(t, 2) cfg := PublicListenerConfig{ - BindAddress: addrs[0], - LocalServiceAddress: addrs[1], + BindAddress: "127.0.0.1", + 
BindPort: ports[0], + LocalServiceAddress: TestLocalAddr(ports[1]), HandshakeTimeoutMs: 100, LocalConnectTimeoutMs: 100, } @@ -42,7 +45,7 @@ func TestPublicListener(t *testing.T) { // Proxy and backend are running, play the part of a TLS client using same // cert for now. conn, err := svc.Dial(context.Background(), &connect.StaticResolver{ - Addr: addrs[0], + Addr: TestLocalAddr(ports[0]), CertURI: agConnect.TestSpiffeIDService(t, "db"), }) require.NoError(t, err) @@ -51,7 +54,7 @@ func TestPublicListener(t *testing.T) { func TestUpstreamListener(t *testing.T) { ca := agConnect.TestCA(t, nil) - addrs := TestLocalBindAddrs(t, 1) + ports := freeport.GetT(t, 1) // Run a test server that we can dial. testSvr := connect.NewTestServer(t, "db", ca) @@ -67,7 +70,8 @@ func TestUpstreamListener(t *testing.T) { DestinationNamespace: "default", DestinationName: "db", ConnectTimeoutMs: 100, - LocalBindAddress: addrs[0], + LocalBindAddress: "localhost", + LocalBindPort: ports[0], resolver: &connect.StaticResolver{ Addr: testSvr.Addr, CertURI: agConnect.TestSpiffeIDService(t, "db"), @@ -88,7 +92,8 @@ func TestUpstreamListener(t *testing.T) { // Proxy and fake remote service are running, play the part of the app // connecting to a remote connect service over TCP. - conn, err := net.Dial("tcp", cfg.LocalBindAddress) + conn, err := net.Dial("tcp", + fmt.Sprintf("%s:%d", cfg.LocalBindAddress, cfg.LocalBindPort)) require.NoError(t, err) TestEchoConn(t, conn, "") } diff --git a/connect/proxy/testing.go b/connect/proxy/testing.go index 9ed8c41c4..f986cfe50 100644 --- a/connect/proxy/testing.go +++ b/connect/proxy/testing.go @@ -7,20 +7,13 @@ import ( "net" "sync/atomic" - "github.com/hashicorp/consul/lib/freeport" "github.com/mitchellh/go-testing-interface" "github.com/stretchr/testify/require" ) -// TestLocalBindAddrs returns n localhost address:port strings with free ports -// for binding test listeners to. -func TestLocalBindAddrs(t testing.T, n int) []string { - ports := freeport.GetT(t, n) - addrs := make([]string, n) - for i, p := range ports { - addrs[i] = fmt.Sprintf("localhost:%d", p) - } - return addrs +// TestLocalAddr makes a localhost address on the given port +func TestLocalAddr(port int) string { + return fmt.Sprintf("localhost:%d", port) } // TestTCPServer is a simple TCP echo server for use during tests. 
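
For illustration only (not part of the patch above): this commit splits the single bind address string into separate host and port fields and joins them back with fmt.Sprintf("%s:%d", ...) in listener.go and, in tests, via the TestLocalAddr helper. A minimal standalone sketch of that join using only the Go standard library is below; net.JoinHostPort is shown as an equivalent alternative that also brackets IPv6 literals, which the Sprintf form does not.

package main

import (
	"fmt"
	"net"
	"strconv"
)

func main() {
	host, port := "127.0.0.1", 9999

	// Join the split config fields the way listener.go does after this patch.
	addr := fmt.Sprintf("%s:%d", host, port)

	// Equivalent stdlib alternative; JoinHostPort additionally brackets IPv6
	// hosts (e.g. "[::1]:9999"), which the Sprintf form does not.
	alt := net.JoinHostPort(host, strconv.Itoa(port))

	fmt.Println(addr, alt) // 127.0.0.1:9999 127.0.0.1:9999
}
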
diff --git a/connect/service.go b/connect/service.go index a614f227f..18e6dd89e 100644 --- a/connect/service.go +++ b/connect/service.go @@ -86,7 +86,7 @@ func NewServiceWithLogger(serviceID string, client *api.Client, return nil, err } s.rootsWatch = p - s.rootsWatch.Handler = s.rootsWatchHandler + s.rootsWatch.HybridHandler = s.rootsWatchHandler p, err = watch.Parse(map[string]interface{}{ "type": "connect_leaf", @@ -95,7 +95,7 @@ func NewServiceWithLogger(serviceID string, client *api.Client, return nil, err } s.leafWatch = p - s.leafWatch.Handler = s.leafWatchHandler + s.leafWatch.HybridHandler = s.leafWatchHandler //go s.rootsWatch.RunWithClientAndLogger(s.client, s.logger) //go s.leafWatch.RunWithClientAndLogger(s.client, s.logger) From 153808db7cfb8a0a20a206778c28a4c7f9e1400f Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 26 Apr 2018 18:06:26 +0100 Subject: [PATCH 177/539] Don't allow connect watches in agent/cli yet --- agent/agent.go | 10 ++++++++++ agent/agent_test.go | 12 ++++++++++++ command/watch/watch.go | 5 +++++ command/watch/watch_test.go | 20 ++++++++++++++++++++ 4 files changed, 47 insertions(+) diff --git a/agent/agent.go b/agent/agent.go index 8f7dd9043..f62495a01 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -621,6 +621,16 @@ func (a *Agent) reloadWatches(cfg *config.RuntimeConfig) error { return fmt.Errorf("Handler type '%s' not recognized", params["handler_type"]) } + // Don't let people use connect watches via this mechanism for now as it + // needs thought about how to do securely and shouldn't be necessary. Note + // that if the type assertion fails an type is not a string then + // ParseExample below will error so we don't need to handle that case. + if typ, ok := params["type"].(string); ok { + if strings.HasPrefix(typ, "connect_") { + return fmt.Errorf("Watch type %s is not allowed in agent config", typ) + } + } + // Parse the watches, excluding 'handler' and 'args' wp, err := watch.ParseExempt(params, []string{"handler", "args"}) if err != nil { diff --git a/agent/agent_test.go b/agent/agent_test.go index c22ce56ba..caa76a28d 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -2259,6 +2259,18 @@ func TestAgent_reloadWatches(t *testing.T) { t.Fatalf("bad: %s", err) } + // Should fail to reload with connect watches + newConf.Watches = []map[string]interface{}{ + { + "type": "connect_roots", + "key": "asdf", + "args": []interface{}{"ls"}, + }, + } + if err := a.reloadWatches(&newConf); err == nil || !strings.Contains(err.Error(), "not allowed in agent config") { + t.Fatalf("bad: %s", err) + } + // Should still succeed with only HTTPS addresses newConf.HTTPSAddrs = newConf.HTTPAddrs newConf.HTTPAddrs = make([]net.Addr, 0) diff --git a/command/watch/watch.go b/command/watch/watch.go index 3b8c67836..bf4691457 100644 --- a/command/watch/watch.go +++ b/command/watch/watch.go @@ -135,6 +135,11 @@ func (c *cmd) Run(args []string) int { return 1 } + if strings.HasPrefix(wp.Type, "connect_") { + c.UI.Error(fmt.Sprintf("Type %s is not supported in the CLI tool", wp.Type)) + return 1 + } + // Create and test the HTTP client client, err := c.http.APIClient() if err != nil { diff --git a/command/watch/watch_test.go b/command/watch/watch_test.go index 153377f65..b1fed48c9 100644 --- a/command/watch/watch_test.go +++ b/command/watch/watch_test.go @@ -33,3 +33,23 @@ func TestWatchCommand(t *testing.T) { t.Fatalf("bad: %#v", ui.OutputWriter.String()) } } + +func TestWatchCommandNoConnect(t *testing.T) { + t.Parallel() + a := agent.NewTestAgent(t.Name(), 
``) + defer a.Shutdown() + + ui := cli.NewMockUi() + c := New(ui, nil) + args := []string{"-http-addr=" + a.HTTPAddr(), "-type=connect_leaf"} + + code := c.Run(args) + if code != 1 { + t.Fatalf("bad: %d. %#v", code, ui.ErrorWriter.String()) + } + + if !strings.Contains(ui.ErrorWriter.String(), + "Type connect_leaf is not supported in the CLI tool") { + t.Fatalf("bad: %#v", ui.ErrorWriter.String()) + } +} From a29f3c6b96922dd038ad8a721c9e9c1203fce7c0 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Thu, 26 Apr 2018 20:14:37 -0700 Subject: [PATCH 178/539] Fix some inconsistencies around the CA provider code --- agent/connect/ca.go | 17 ++++++------ agent/consul/connect_ca_endpoint.go | 2 +- agent/consul/connect_ca_provider.go | 42 +++++++++++++++++------------ agent/consul/leader.go | 2 +- agent/structs/connect_ca.go | 8 +++--- 5 files changed, 39 insertions(+), 32 deletions(-) diff --git a/agent/connect/ca.go b/agent/connect/ca.go index 87b01994e..ff0f0813d 100644 --- a/agent/connect/ca.go +++ b/agent/connect/ca.go @@ -1,7 +1,6 @@ package connect import ( - "bytes" "crypto" "crypto/ecdsa" "crypto/rsa" @@ -28,21 +27,21 @@ func ParseCert(pemValue string) (*x509.Certificate, error) { return x509.ParseCertificate(block.Bytes) } -// ParseCertFingerprint parses the x509 certificate from a PEM-encoded value -// and returns the SHA-1 fingerprint. -func ParseCertFingerprint(pemValue string) (string, error) { +// CalculateCertFingerprint parses the x509 certificate from a PEM-encoded value +// and calculates the SHA-1 fingerprint. +func CalculateCertFingerprint(pemValue string) (string, error) { // The _ result below is not an error but the remaining PEM bytes. block, _ := pem.Decode([]byte(pemValue)) if block == nil { return "", fmt.Errorf("no PEM-encoded data found") } - hash := sha1.Sum(block.Bytes) - hexified := make([][]byte, len(hash)) - for i, data := range hash { - hexified[i] = []byte(fmt.Sprintf("%02X", data)) + if block.Type != "CERTIFICATE" { + return "", fmt.Errorf("first PEM-block should be CERTIFICATE type") } - return string(bytes.Join(hexified, []byte(":"))), nil + + hash := sha1.Sum(block.Bytes) + return HexString(hash[:]), nil } // ParseSigner parses a crypto.Signer from a PEM-encoded key. 
The private key diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index f52c9218e..72ac2adbc 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -98,7 +98,7 @@ func (s *ConnectCA) ConfigurationSet( return err } - id, err := connect.ParseCertFingerprint(newRootPEM) + id, err := connect.CalculateCertFingerprint(newRootPEM) if err != nil { return fmt.Errorf("error parsing root fingerprint: %v", err) } diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go index f9321138b..afb74fe78 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/consul/connect_ca_provider.go @@ -222,7 +222,7 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { // Cert template for generation sn := &big.Int{} - sn.SetUint64(providerState.LeafIndex + 1) + sn.SetUint64(providerState.SerialIndex + 1) template := x509.Certificate{ SerialNumber: sn, Subject: pkix.Name{CommonName: serviceId.Service}, @@ -240,6 +240,7 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth, }, + // todo(kyhavlov): add a way to set the cert lifetime here from the CA config NotAfter: time.Now().Add(3 * 24 * time.Hour), NotBefore: time.Now(), AuthorityKeyId: keyId, @@ -258,20 +259,7 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { return "", fmt.Errorf("error encoding private key: %s", err) } - // Increment the leaf cert index - newState := *providerState - newState.LeafIndex += 1 - args := &structs.CARequest{ - Op: structs.CAOpSetProviderState, - ProviderState: &newState, - } - resp, err := c.srv.raftApply(structs.ConnectCARequestType, args) - if err != nil { - return "", err - } - if respErr, ok := resp.(error); ok { - return "", respErr - } + c.incrementSerialIndex(providerState) // Set the response return buf.String(), nil @@ -306,10 +294,9 @@ func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { // Create the cross-signing template from the existing root CA serialNum := &big.Int{} - serialNum.SetUint64(providerState.LeafIndex + 1) + serialNum.SetUint64(providerState.SerialIndex + 1) template := *cert template.SerialNumber = serialNum - template.Subject = rootCA.Subject template.SignatureAlgorithm = rootCA.SignatureAlgorithm template.SubjectKeyId = keyId template.AuthorityKeyId = keyId @@ -326,9 +313,30 @@ func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { return "", fmt.Errorf("error encoding private key: %s", err) } + c.incrementSerialIndex(providerState) + return buf.String(), nil } +// incrementSerialIndex increments the cert serial number index in the provider state +func (c *ConsulCAProvider) incrementSerialIndex(providerState *structs.CAConsulProviderState) error { + newState := *providerState + newState.SerialIndex += 1 + args := &structs.CARequest{ + Op: structs.CAOpSetProviderState, + ProviderState: &newState, + } + resp, err := c.srv.raftApply(structs.ConnectCARequestType, args) + if err != nil { + return err + } + if respErr, ok := resp.(error); ok { + return respErr + } + + return nil +} + // generatePrivateKey returns a new private key func generatePrivateKey() (string, error) { var pk *ecdsa.PrivateKey diff --git a/agent/consul/leader.go b/agent/consul/leader.go index d9c3a83ea..2f01b8833 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -426,7 +426,7 @@ func (s *Server) initializeCA() error { 
return fmt.Errorf("error getting root cert: %v", err) } - id, err := connect.ParseCertFingerprint(rootPEM) + id, err := connect.CalculateCertFingerprint(rootPEM) if err != nil { return fmt.Errorf("error parsing root fingerprint: %v", err) } diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index c46db703a..0570057b6 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -168,10 +168,10 @@ type ConsulCAProviderConfig struct { // CAConsulProviderState is used to track the built-in Consul CA provider's state. type CAConsulProviderState struct { - ID string - PrivateKey string - RootCert string - LeafIndex uint64 + ID string + PrivateKey string + RootCert string + SerialIndex uint64 RaftIndex } From 19b9399f2fbef438fbbf4252da140507c0a12a7b Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Thu, 26 Apr 2018 23:02:18 -0700 Subject: [PATCH 179/539] Add more tests for built-in provider --- agent/consul/connect_ca_provider.go | 1 - agent/consul/connect_ca_provider_test.go | 150 +++++++++++++++++++++++ agent/consul/server_test.go | 2 + 3 files changed, 152 insertions(+), 1 deletion(-) diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go index afb74fe78..1c509e2b0 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/consul/connect_ca_provider.go @@ -341,7 +341,6 @@ func (c *ConsulCAProvider) incrementSerialIndex(providerState *structs.CAConsulP func generatePrivateKey() (string, error) { var pk *ecdsa.PrivateKey - // If we have no key, then create a new one. pk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) if err != nil { return "", fmt.Errorf("error generating private key: %s", err) diff --git a/agent/consul/connect_ca_provider_test.go b/agent/consul/connect_ca_provider_test.go index 583f91722..ead41309f 100644 --- a/agent/consul/connect_ca_provider_test.go +++ b/agent/consul/connect_ca_provider_test.go @@ -3,7 +3,9 @@ package consul import ( "os" "testing" + "time" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/testrpc" "github.com/stretchr/testify/assert" ) @@ -25,8 +27,156 @@ func TestCAProvider_Bootstrap(t *testing.T) { root, err := provider.ActiveRoot() assert.NoError(err) + // Intermediate should be the same cert. + inter, err := provider.ActiveIntermediate() + assert.NoError(err) + + // Make sure we initialize without errors and that the + // root cert gets set to the active cert. state := s1.fsm.State() _, activeRoot, err := state.CARootActive(nil) assert.NoError(err) assert.Equal(root, activeRoot.RootCert) + assert.Equal(inter, activeRoot.RootCert) +} + +func TestCAProvider_Bootstrap_WithCert(t *testing.T) { + t.Parallel() + + // Make sure setting a custom private key/root cert works. + assert := assert.New(t) + rootCA := connect.TestCA(t, nil) + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.CAConfig.Config["PrivateKey"] = rootCA.SigningKey + c.CAConfig.Config["RootCert"] = rootCA.RootCert + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + provider := s1.getCAProvider() + + root, err := provider.ActiveRoot() + assert.NoError(err) + + // Make sure we initialize without errors and that the + // root cert we provided gets set to the active cert. 
+ state := s1.fsm.State() + _, activeRoot, err := state.CARootActive(nil) + assert.NoError(err) + assert.Equal(root, activeRoot.RootCert) + assert.Equal(rootCA.RootCert, activeRoot.RootCert) +} + +func TestCAProvider_SignLeaf(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + provider := s1.getCAProvider() + + spiffeService := &connect.SpiffeIDService{ + Host: s1.config.NodeName, + Namespace: "default", + Datacenter: s1.config.Datacenter, + Service: "foo", + } + + // Generate a leaf cert for the service. + { + raw, _ := connect.TestCSR(t, spiffeService) + + csr, err := connect.ParseCSR(raw) + assert.NoError(err) + + cert, err := provider.Sign(csr) + assert.NoError(err) + + parsed, err := connect.ParseCert(cert) + assert.NoError(err) + assert.Equal(parsed.URIs[0], spiffeService.URI()) + assert.Equal(parsed.Subject.CommonName, "foo") + assert.Equal(parsed.SerialNumber.Uint64(), uint64(1)) + + // Ensure the cert is valid now and expires within the correct limit. + assert.True(parsed.NotAfter.Sub(time.Now()) < 3*24*time.Hour) + assert.True(parsed.NotBefore.Before(time.Now())) + } + + // Generate a new cert for another service and make sure + // the serial number is incremented. + spiffeService.Service = "bar" + { + raw, _ := connect.TestCSR(t, spiffeService) + + csr, err := connect.ParseCSR(raw) + assert.NoError(err) + + cert, err := provider.Sign(csr) + assert.NoError(err) + + parsed, err := connect.ParseCert(cert) + assert.NoError(err) + assert.Equal(parsed.URIs[0], spiffeService.URI()) + assert.Equal(parsed.Subject.CommonName, "bar") + assert.Equal(parsed.SerialNumber.Uint64(), uint64(2)) + + // Ensure the cert is valid now and expires within the correct limit. + assert.True(parsed.NotAfter.Sub(time.Now()) < 3*24*time.Hour) + assert.True(parsed.NotBefore.Before(time.Now())) + } +} + +func TestCAProvider_CrossSignCA(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + + // Make sure setting a custom private key/root cert works. + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + provider := s1.getCAProvider() + + rootCA := connect.TestCA(t, nil) + rootPEM, err := provider.ActiveRoot() + assert.NoError(err) + root, err := connect.ParseCert(rootPEM) + assert.NoError(err) + + // Have the provider cross sign our new CA cert. + cert, err := connect.ParseCert(rootCA.RootCert) + assert.NoError(err) + oldSubject := cert.Subject.CommonName + xcPEM, err := provider.CrossSignCA(cert) + assert.NoError(err) + + xc, err := connect.ParseCert(xcPEM) + assert.NoError(err) + + // AuthorityKeyID and SubjectKeyID should be the signing root's. + assert.Equal(root.AuthorityKeyId, xc.AuthorityKeyId) + assert.Equal(root.SubjectKeyId, xc.SubjectKeyId) + + // Subject name should not have changed. + assert.NotEqual(root.Subject.CommonName, xc.Subject.CommonName) + assert.Equal(oldSubject, xc.Subject.CommonName) + + // Issuer should be the signing root. + assert.Equal(root.Issuer.CommonName, xc.Issuer.CommonName) } diff --git a/agent/consul/server_test.go b/agent/consul/server_test.go index 3afbb6f07..84ec6743a 100644 --- a/agent/consul/server_test.go +++ b/agent/consul/server_test.go @@ -91,6 +91,8 @@ func testServerConfig(t *testing.T) (string, *Config) { // looks like several depend on it. 
config.RPCHoldTimeout = 5 * time.Second + config.ConnectEnabled = true + return dir, config } From 7c0976208d0b4881de41530861916fb831ce0ea0 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Thu, 26 Apr 2018 23:28:27 -0700 Subject: [PATCH 180/539] Add tests for the built in CA's state store table --- agent/consul/fsm/commands_oss_test.go | 39 ++++++++++ agent/consul/state/connect_ca.go | 41 +++------- agent/consul/state/connect_ca_test.go | 103 ++++++++++++++++++++++++++ 3 files changed, 154 insertions(+), 29 deletions(-) diff --git a/agent/consul/fsm/commands_oss_test.go b/agent/consul/fsm/commands_oss_test.go index a52e6d7b6..280bf5b38 100644 --- a/agent/consul/fsm/commands_oss_test.go +++ b/agent/consul/fsm/commands_oss_test.go @@ -1318,3 +1318,42 @@ func TestFSM_CARoots(t *testing.T) { assert.Len(roots, 2) } } + +func TestFSM_CABuiltinProvider(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + fsm, err := New(nil, os.Stderr) + assert.Nil(err) + + // Provider state. + expected := &structs.CAConsulProviderState{ + ID: "foo", + PrivateKey: "a", + RootCert: "b", + SerialIndex: 2, + RaftIndex: structs.RaftIndex{ + CreateIndex: 1, + ModifyIndex: 1, + }, + } + + // Create a new request. + req := structs.CARequest{ + Op: structs.CAOpSetProviderState, + ProviderState: expected, + } + + { + buf, err := structs.Encode(structs.ConnectCARequestType, req) + assert.Nil(err) + assert.True(fsm.Apply(makeLog(buf)).(bool)) + } + + // Verify it's in the state store. + { + _, state, err := fsm.state.CAProviderState("foo") + assert.Nil(err) + assert.Equal(expected, state) + } +} diff --git a/agent/consul/state/connect_ca.go b/agent/consul/state/connect_ca.go index 7c4cea294..a7f51a52a 100644 --- a/agent/consul/state/connect_ca.go +++ b/agent/consul/state/connect_ca.go @@ -319,19 +319,19 @@ func (s *Store) CARootSetCAS(idx, cidx uint64, rs []*structs.CARoot) (bool, erro return true, nil } -// CAProviderState is used to pull the built-in provider state from the snapshot. -func (s *Snapshot) CAProviderState() (*structs.CAConsulProviderState, error) { - c, err := s.tx.First(caBuiltinProviderTableName, "id") +// CAProviderState is used to pull the built-in provider states from the snapshot. +func (s *Snapshot) CAProviderState() ([]*structs.CAConsulProviderState, error) { + ixns, err := s.tx.Get(caBuiltinProviderTableName, "id") if err != nil { return nil, err } - state, ok := c.(*structs.CAConsulProviderState) - if !ok { - return nil, nil + var ret []*structs.CAConsulProviderState + for wrapped := ixns.Next(); wrapped != nil; wrapped = ixns.Next() { + ret = append(ret, wrapped.(*structs.CAConsulProviderState)) } - return state, nil + return ret, nil } // CAProviderState is used when restoring from a snapshot. @@ -339,6 +339,9 @@ func (s *Restore) CAProviderState(state *structs.CAConsulProviderState) error { if err := s.tx.Insert(caBuiltinProviderTableName, state); err != nil { return fmt.Errorf("failed restoring built-in CA state: %s", err) } + if err := indexUpdateMaxTxn(s.tx, state.ModifyIndex, caBuiltinProviderTableName); err != nil { + return fmt.Errorf("failed updating index: %s", err) + } return nil } @@ -365,27 +368,6 @@ func (s *Store) CAProviderState(id string) (uint64, *structs.CAConsulProviderSta return idx, state, nil } -// CAProviderStates is used to get the Consul CA provider state for the given ID. 
-func (s *Store) CAProviderStates() (uint64, []*structs.CAConsulProviderState, error) { - tx := s.db.Txn(false) - defer tx.Abort() - - // Get the index - idx := maxIndexTxn(tx, caBuiltinProviderTableName) - - // Get all - iter, err := tx.Get(caBuiltinProviderTableName, "id") - if err != nil { - return 0, nil, fmt.Errorf("failed CA provider state lookup: %s", err) - } - - var results []*structs.CAConsulProviderState - for v := iter.Next(); v != nil; v = iter.Next() { - results = append(results, v.(*structs.CAConsulProviderState)) - } - return idx, results, nil -} - // CASetProviderState is used to set the current built-in CA provider state. func (s *Store) CASetProviderState(idx uint64, state *structs.CAConsulProviderState) (bool, error) { tx := s.db.Txn(true) @@ -419,7 +401,8 @@ func (s *Store) CASetProviderState(idx uint64, state *structs.CAConsulProviderSt return true, nil } -// CADeleteProviderState is used to remove the Consul CA provider state for the given ID. +// CADeleteProviderState is used to remove the built-in Consul CA provider +// state for the given ID. func (s *Store) CADeleteProviderState(id string) error { tx := s.db.Txn(true) defer tx.Abort() diff --git a/agent/consul/state/connect_ca_test.go b/agent/consul/state/connect_ca_test.go index cd37f526b..4639c7f5a 100644 --- a/agent/consul/state/connect_ca_test.go +++ b/agent/consul/state/connect_ca_test.go @@ -349,3 +349,106 @@ func TestStore_CARoot_Snapshot_Restore(t *testing.T) { assert.Equal(roots, actual) }() } + +func TestStore_CABuiltinProvider(t *testing.T) { + assert := assert.New(t) + s := testStateStore(t) + + { + expected := &structs.CAConsulProviderState{ + ID: "foo", + PrivateKey: "a", + RootCert: "b", + SerialIndex: 1, + } + + ok, err := s.CASetProviderState(0, expected) + assert.NoError(err) + assert.True(ok) + + idx, state, err := s.CAProviderState(expected.ID) + assert.NoError(err) + assert.Equal(idx, uint64(0)) + assert.Equal(expected, state) + } + + { + expected := &structs.CAConsulProviderState{ + ID: "bar", + PrivateKey: "c", + RootCert: "d", + SerialIndex: 2, + } + + ok, err := s.CASetProviderState(1, expected) + assert.NoError(err) + assert.True(ok) + + idx, state, err := s.CAProviderState(expected.ID) + assert.NoError(err) + assert.Equal(idx, uint64(1)) + assert.Equal(expected, state) + } +} + +func TestStore_CABuiltinProvider_Snapshot_Restore(t *testing.T) { + assert := assert.New(t) + s := testStateStore(t) + + // Create multiple state entries. + before := []*structs.CAConsulProviderState{ + { + ID: "bar", + PrivateKey: "y", + RootCert: "z", + SerialIndex: 2, + }, + { + ID: "foo", + PrivateKey: "a", + RootCert: "b", + SerialIndex: 1, + }, + } + + for i, state := range before { + ok, err := s.CASetProviderState(uint64(98+i), state) + assert.NoError(err) + assert.True(ok) + } + + // Take a snapshot. + snap := s.Snapshot() + defer snap.Close() + + // Modify the state store. + after := &structs.CAConsulProviderState{ + ID: "foo", + PrivateKey: "c", + RootCert: "d", + SerialIndex: 1, + } + ok, err := s.CASetProviderState(100, after) + assert.NoError(err) + assert.True(ok) + + snapped, err := snap.CAProviderState() + assert.NoError(err) + assert.Equal(before, snapped) + + // Restore onto a new state store. + s2 := testStateStore(t) + restore := s2.Restore() + for _, entry := range snapped { + assert.NoError(restore.CAProviderState(entry)) + } + restore.Commit() + + // Verify the restored values match those from before the snapshot. 
+ for _, state := range before { + idx, res, err := s2.CAProviderState(state.ID) + assert.NoError(err) + assert.Equal(idx, uint64(99)) + assert.Equal(state, res) + } +} From 0e184f3f5b848bb37f6dd1aa95aaac225a60ca06 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 27 Apr 2018 20:10:15 -0700 Subject: [PATCH 181/539] Fix config tests --- agent/config/runtime_test.go | 3 +++ 1 file changed, 3 insertions(+) diff --git a/agent/config/runtime_test.go b/agent/config/runtime_test.go index 1db5ab207..9dff7733b 100644 --- a/agent/config/runtime_test.go +++ b/agent/config/runtime_test.go @@ -3415,6 +3415,7 @@ func TestFullConfig(t *testing.T) { }, CheckUpdateInterval: 16507 * time.Second, ClientAddrs: []*net.IPAddr{ipAddr("93.83.18.19")}, + ConnectEnabled: true, ConnectProxyBindMinPort: 2000, ConnectProxyBindMaxPort: 3000, ConnectCAProvider: "b8j4ynx9", @@ -4092,6 +4093,8 @@ func TestSanitize(t *testing.T) { } ], "ClientAddrs": [], + "ConnectCAConfig": {}, + "ConnectCAProvider": "", "ConnectEnabled": false, "ConnectProxyBindMaxPort": 0, "ConnectProxyBindMinPort": 0, From b28e11fdd318c7754a625b291a2f9f8997d416f1 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Sun, 29 Apr 2018 20:44:40 -0700 Subject: [PATCH 182/539] Fill out connect CA rpc endpoint tests --- agent/consul/connect_ca_endpoint.go | 4 +- agent/consul/connect_ca_endpoint_test.go | 207 +++++++++++++++++++++-- agent/consul/connect_ca_provider.go | 4 +- agent/testagent.go | 3 + 4 files changed, 196 insertions(+), 22 deletions(-) diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 72ac2adbc..35dbe46e8 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -130,7 +130,7 @@ func (s *ConnectCA) ConfigurationSet( // If the config has been committed, update the local provider instance s.srv.setCAProvider(newProvider) - s.srv.logger.Printf("[INFO] connect: provider config updated") + s.srv.logger.Printf("[INFO] connect: CA provider config updated") return nil } @@ -295,7 +295,7 @@ func (s *ConnectCA) Sign( } // Set the response - reply = &structs.IssuedCert{ + *reply = structs.IssuedCert{ SerialNumber: connect.HexString(cert.SerialNumber.Bytes()), CertPEM: pem, Service: serviceId.Service, diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go index f2404eb4c..321bcfcb4 100644 --- a/agent/consul/connect_ca_endpoint_test.go +++ b/agent/consul/connect_ca_endpoint_test.go @@ -4,6 +4,7 @@ import ( "crypto/x509" "os" "testing" + "time" "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/structs" @@ -12,6 +13,14 @@ import ( "github.com/stretchr/testify/assert" ) +func testParseCert(t *testing.T, pemValue string) *x509.Certificate { + cert, err := connect.ParseCert(pemValue) + if err != nil { + t.Fatal(err) + } + return cert +} + // Test listing root CAs. 
func TestConnectCARoots(t *testing.T) { t.Parallel() @@ -30,16 +39,18 @@ func TestConnectCARoots(t *testing.T) { ca1 := connect.TestCA(t, nil) ca2 := connect.TestCA(t, nil) ca2.Active = false - ok, err := state.CARootSetCAS(1, 0, []*structs.CARoot{ca1, ca2}) + idx, _, err := state.CARoots(nil) + assert.NoError(err) + ok, err := state.CARootSetCAS(idx, idx, []*structs.CARoot{ca1, ca2}) assert.True(ok) - assert.Nil(err) + assert.NoError(err) // Request args := &structs.DCSpecificRequest{ Datacenter: "dc1", } var reply structs.IndexedCARoots - assert.Nil(msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", args, &reply)) + assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", args, &reply)) // Verify assert.Equal(ca1.ID, reply.ActiveRootID) @@ -51,11 +62,173 @@ func TestConnectCARoots(t *testing.T) { } } +func TestConnectCAConfig_GetSet(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Get the starting config + { + args := &structs.DCSpecificRequest{ + Datacenter: "dc1", + } + var reply structs.CAConfiguration + assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationGet", args, &reply)) + + actual, err := ParseConsulCAConfig(reply.Config) + assert.NoError(err) + expected, err := ParseConsulCAConfig(s1.config.CAConfig.Config) + assert.NoError(err) + assert.Equal(reply.Provider, s1.config.CAConfig.Provider) + assert.Equal(actual, expected) + } + + // Update a config value + newConfig := &structs.CAConfiguration{ + Provider: "consul", + Config: map[string]interface{}{ + "PrivateKey": "", + "RootCert": "", + "RotationPeriod": 180 * 24 * time.Hour, + }, + } + { + args := &structs.CARequest{ + Datacenter: "dc1", + Config: newConfig, + } + var reply interface{} + + assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationSet", args, &reply)) + } + + // Verify the new config was set + { + args := &structs.DCSpecificRequest{ + Datacenter: "dc1", + } + var reply structs.CAConfiguration + assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationGet", args, &reply)) + + actual, err := ParseConsulCAConfig(reply.Config) + assert.NoError(err) + expected, err := ParseConsulCAConfig(newConfig.Config) + assert.NoError(err) + assert.Equal(reply.Provider, newConfig.Provider) + assert.Equal(actual, expected) + } +} + +func TestConnectCAConfig_TriggerRotation(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Store the current root + rootReq := &structs.DCSpecificRequest{ + Datacenter: "dc1", + } + var rootList structs.IndexedCARoots + assert.Nil(msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", rootReq, &rootList)) + assert.Len(rootList.Roots, 1) + oldRoot := rootList.Roots[0] + + // Update the provider config to use a new private key, which should + // cause a rotation. 
+ newKey, err := generatePrivateKey() + assert.NoError(err) + newConfig := &structs.CAConfiguration{ + Provider: "consul", + Config: map[string]interface{}{ + "PrivateKey": newKey, + "RootCert": "", + "RotationPeriod": 90 * 24 * time.Hour, + }, + } + { + args := &structs.CARequest{ + Datacenter: "dc1", + Config: newConfig, + } + var reply interface{} + + assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationSet", args, &reply)) + } + + // Make sure the new root has been added along with an intermediate + // cross-signed by the old root. + { + args := &structs.DCSpecificRequest{ + Datacenter: "dc1", + } + var reply structs.IndexedCARoots + assert.Nil(msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", args, &reply)) + assert.Len(reply.Roots, 2) + + for _, r := range reply.Roots { + if r.ID == oldRoot.ID { + // The old root should no longer be marked as the active root, + // and none of its other fields should have changed. + assert.False(r.Active) + assert.Equal(r.Name, oldRoot.Name) + assert.Equal(r.RootCert, oldRoot.RootCert) + assert.Equal(r.SigningCert, oldRoot.SigningCert) + assert.Equal(r.IntermediateCerts, oldRoot.IntermediateCerts) + } else { + // The new root should have a valid cross-signed cert from the old + // root as an intermediate. + assert.True(r.Active) + assert.Len(r.IntermediateCerts, 1) + + xc := testParseCert(t, r.IntermediateCerts[0]) + oldRootCert := testParseCert(t, oldRoot.RootCert) + newRootCert := testParseCert(t, r.RootCert) + + // Should have the authority/subject key IDs and signature algo of the + // (old) signing CA. + assert.Equal(xc.AuthorityKeyId, oldRootCert.AuthorityKeyId) + assert.Equal(xc.SubjectKeyId, oldRootCert.SubjectKeyId) + assert.Equal(xc.SignatureAlgorithm, oldRootCert.SignatureAlgorithm) + + // The common name and SAN should not have changed. + assert.Equal(xc.Subject.CommonName, newRootCert.Subject.CommonName) + assert.Equal(xc.URIs, newRootCert.URIs) + } + } + } + + // Verify the new config was set. + { + args := &structs.DCSpecificRequest{ + Datacenter: "dc1", + } + var reply structs.CAConfiguration + assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationGet", args, &reply)) + + actual, err := ParseConsulCAConfig(reply.Config) + assert.NoError(err) + expected, err := ParseConsulCAConfig(newConfig.Config) + assert.NoError(err) + assert.Equal(reply.Provider, newConfig.Provider) + assert.Equal(actual, expected) + } +} + // Test CA signing -// -// NOTE(mitchellh): Just testing the happy path and not all the other validation -// issues because the internals of this method will probably be gutted for the -// CA plugins then we can just test mocks. 
func TestConnectCASign(t *testing.T) { t.Parallel() @@ -68,32 +241,30 @@ func TestConnectCASign(t *testing.T) { testrpc.WaitForLeader(t, s1.RPC, "dc1") - // Insert a CA - state := s1.fsm.State() - ca := connect.TestCA(t, nil) - ok, err := state.CARootSetCAS(1, 0, []*structs.CARoot{ca}) - assert.True(ok) - assert.Nil(err) - // Generate a CSR and request signing spiffeId := connect.TestSpiffeIDService(t, "web") csr, _ := connect.TestCSR(t, spiffeId) args := &structs.CASignRequest{ - Datacenter: "dc01", + Datacenter: "dc1", CSR: csr, } var reply structs.IssuedCert - assert.Nil(msgpackrpc.CallWithCodec(codec, "ConnectCA.Sign", args, &reply)) + assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.Sign", args, &reply)) + + // Get the current CA + state := s1.fsm.State() + _, ca, err := state.CARootActive(nil) + assert.NoError(err) // Verify that the cert is signed by the CA roots := x509.NewCertPool() assert.True(roots.AppendCertsFromPEM([]byte(ca.RootCert))) leaf, err := connect.ParseCert(reply.CertPEM) - assert.Nil(err) + assert.NoError(err) _, err = leaf.Verify(x509.VerifyOptions{ Roots: roots, }) - assert.Nil(err) + assert.NoError(err) // Verify other fields assert.Equal("web", reply.Service) diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go index 1c509e2b0..cb2bcad57 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/consul/connect_ca_provider.go @@ -30,7 +30,7 @@ type ConsulCAProvider struct { // NewConsulCAProvider returns a new instance of the Consul CA provider, // bootstrapping its state in the state store necessary func NewConsulCAProvider(rawConfig map[string]interface{}, srv *Server) (*ConsulCAProvider, error) { - conf, err := decodeConfig(rawConfig) + conf, err := ParseConsulCAConfig(rawConfig) if err != nil { return nil, err } @@ -116,7 +116,7 @@ func NewConsulCAProvider(rawConfig map[string]interface{}, srv *Server) (*Consul return provider, nil } -func decodeConfig(raw map[string]interface{}) (*structs.ConsulCAProviderConfig, error) { +func ParseConsulCAConfig(raw map[string]interface{}) (*structs.ConsulCAProviderConfig, error) { var config *structs.ConsulCAProviderConfig if err := mapstructure.WeakDecode(raw, &config); err != nil { return nil, fmt.Errorf("error decoding config: %s", err) diff --git a/agent/testagent.go b/agent/testagent.go index 581143016..c2e4ddf01 100644 --- a/agent/testagent.go +++ b/agent/testagent.go @@ -334,6 +334,9 @@ func TestConfig(sources ...config.Source) *config.RuntimeConfig { server = true node_id = "` + nodeID + `" node_name = "Node ` + nodeID + `" + connect { + enabled = true + } performance { raft_multiplier = 1 } From 4c1b82834b48cf800368caff36e2db876daf1492 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Mon, 30 Apr 2018 17:35:02 +0100 Subject: [PATCH 183/539] Add support for measuring tx/rx packets through proxied connections. --- connect/proxy/conn.go | 45 ++++++++++++++++++++++++++++++++++---- connect/proxy/conn_test.go | 9 ++++++++ 2 files changed, 50 insertions(+), 4 deletions(-) diff --git a/connect/proxy/conn.go b/connect/proxy/conn.go index 70019e55c..fe52853f0 100644 --- a/connect/proxy/conn.go +++ b/connect/proxy/conn.go @@ -8,8 +8,9 @@ import ( // Conn represents a single proxied TCP connection. 
type Conn struct { - src, dst net.Conn - stopping int32 + src, dst net.Conn + srcW, dstW countWriter + stopping int32 } // NewConn returns a conn joining the two given net.Conn @@ -17,6 +18,8 @@ func NewConn(src, dst net.Conn) *Conn { return &Conn{ src: src, dst: dst, + srcW: countWriter{w: src}, + dstW: countWriter{w: dst}, stopping: 0, } } @@ -47,10 +50,10 @@ func (c *Conn) CopyBytes() error { // causing this goroutine to exit but not the outer one. See // TestConnSrcClosing which will fail if you comment the defer below. defer c.Close() - io.Copy(c.dst, c.src) + io.Copy(&c.dstW, c.src) }() - _, err := io.Copy(c.src, c.dst) + _, err := io.Copy(&c.srcW, c.dst) // Note that we don't wait for the other goroutine to finish because it either // already has due to it's src conn closing, or it will once our defer fires // and closes the source conn. No need for the extra coordination. @@ -59,3 +62,37 @@ func (c *Conn) CopyBytes() error { } return err } + +// Stats returns number of bytes transmitted and recieved. Transmit means bytes +// written to dst, receive means bytes written to src. +func (c *Conn) Stats() (txBytes, rxBytes uint64) { + return c.srcW.Written(), c.dstW.Written() +} + +// countWriter is an io.Writer that counts the number of bytes being written +// before passing them through. We use it to gather metrics for bytes +// sent/received. Note that since we are always copying between a net.TCPConn +// and a tls.Conn, none of the optimisations using syscalls like splice and +// ReaderTo/WriterFrom can be used anyway and io.Copy falls back to a generic +// buffered read/write loop. +// +// We use atomic updates to synchronize reads and writes here. It's the cheapest +// uncontended option based on +// https://gist.github.com/banks/e76b40c0cc4b01503f0a0e4e0af231d5. Further +// optimization can be made when if/when identified as a real overhead. +type countWriter struct { + written uint64 + w io.Writer +} + +// Write implements io.Writer +func (cw *countWriter) Write(p []byte) (n int, err error) { + n, err = cw.w.Write(p) + atomic.AddUint64(&cw.written, uint64(n)) + return +} + +// Written returns how many bytes have been written to w. +func (cw *countWriter) Written() uint64 { + return atomic.LoadUint64(&cw.written) +} diff --git a/connect/proxy/conn_test.go b/connect/proxy/conn_test.go index a37720ea0..4de428ad0 100644 --- a/connect/proxy/conn_test.go +++ b/connect/proxy/conn_test.go @@ -7,6 +7,7 @@ import ( "testing" "time" + "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) @@ -88,6 +89,10 @@ func TestConn(t *testing.T) { require.Nil(t, err) require.Equal(t, "ping 2\n", got) + tx, rx := c.Stats() + assert.Equal(t, uint64(7), tx) + assert.Equal(t, uint64(7), rx) + _, err = src.Write([]byte("pong 1\n")) require.Nil(t, err) _, err = dst.Write([]byte("pong 2\n")) @@ -101,6 +106,10 @@ func TestConn(t *testing.T) { require.Nil(t, err) require.Equal(t, "pong 2\n", got) + tx, rx = c.Stats() + assert.Equal(t, uint64(14), tx) + assert.Equal(t, uint64(14), rx) + c.Close() ret := <-retCh From 8b38cdaba190303f434238fb9692aefb2f9955f7 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Mon, 30 Apr 2018 18:17:39 +0100 Subject: [PATCH 184/539] Add TODO for false-sharing --- connect/proxy/conn.go | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/connect/proxy/conn.go b/connect/proxy/conn.go index fe52853f0..d55e861bf 100644 --- a/connect/proxy/conn.go +++ b/connect/proxy/conn.go @@ -8,7 +8,10 @@ import ( // Conn represents a single proxied TCP connection. 
type Conn struct { - src, dst net.Conn + src, dst net.Conn + // TODO(banks): benchmark and consider adding _ [8]uint64 padding between + // these to prevent false sharing between the rx and tx goroutines when + // running on separate cores. srcW, dstW countWriter stopping int32 } From 554f367dad772da7c18e506018908ce480f9a76e Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Mon, 30 Apr 2018 22:27:46 +0100 Subject: [PATCH 185/539] Fix build error introduced in bad merge of TLS stuff --- connect/service.go | 29 ++--------------------------- 1 file changed, 2 insertions(+), 27 deletions(-) diff --git a/connect/service.go b/connect/service.go index 18e6dd89e..4f38558a3 100644 --- a/connect/service.go +++ b/connect/service.go @@ -252,21 +252,7 @@ func (s *Service) rootsWatchHandler(blockParam watch.BlockingParamVal, raw inter roots.AppendCertsFromPEM([]byte(root.RootCertPEM)) } - // Note that SetTLSConfig takes care of adding a dynamic GetConfigForClient - // hook that will fetch this updated config for new incoming connections on a - // server. That means all future connections are validated against the new - // roots. On a client, we only expose Dial and we fetch the most recent config - // each time so all future Dials (direct or via an http.Client with our dial - // hook) will grab this new config. - newCfg := s.serverTLSCfg.TLSConfig() - // Server-side verification uses ClientCAs. - newCfg.ClientCAs = roots - s.serverTLSCfg.SetTLSConfig(newCfg) - - newCfg = s.clientTLSCfg.TLSConfig() - // Client-side verification uses RootCAs. - newCfg.RootCAs = roots - s.clientTLSCfg.SetTLSConfig(newCfg) + s.tlsCfg.SetRoots(roots) } func (s *Service) leafWatchHandler(blockParam watch.BlockingParamVal, raw interface{}) { @@ -286,16 +272,5 @@ func (s *Service) leafWatchHandler(blockParam watch.BlockingParamVal, raw interf return } - // Note that SetTLSConfig takes care of adding a dynamic GetClientCertificate - // hook that will fetch the first cert from the Certificates slice of the - // current config for each outbound client request even if the client is using - // an old version of the config struct so all we need to do it set that and - // all existing clients will start using the new cert. - newCfg := s.serverTLSCfg.TLSConfig() - newCfg.Certificates = []tls.Certificate{cert} - s.serverTLSCfg.SetTLSConfig(newCfg) - - newCfg = s.clientTLSCfg.TLSConfig() - newCfg.Certificates = []tls.Certificate{cert} - s.clientTLSCfg.SetTLSConfig(newCfg) + s.tlsCfg.SetLeaf(&cert) } From dcd277de8a49be524fa9801b8c0893b821ee14cf Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Mon, 30 Apr 2018 22:23:49 +0100 Subject: [PATCH 186/539] Wire up agent leaf endpoint to cache framework to support blocking. 
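
With this change the leaf endpoint behaves like the other blocking endpoints: the response carries an X-Consul-Index header, and repeating the request with ?index=<last seen> parks until the agent cache refreshes the certificate in the background (for example after a CA rotation). The sketch below is a hypothetical consumer-side watch loop, not code from this patch; the JSON field names are assumed to mirror structs.IssuedCert and may differ from the real struct tags, and 127.0.0.1:8500 is just the default agent address.

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // issuedCert mirrors the fields of structs.IssuedCert used here
    // (assumed JSON names; adjust if the real struct tags differ).
    type issuedCert struct {
        CertPEM       string
        PrivateKeyPEM string
        ModifyIndex   uint64
    }

    // watchLeaf long-polls the agent for the service's leaf cert and
    // reports every time a new one is issued.
    func watchLeaf(service string) error {
        var index uint64
        client := &http.Client{Timeout: 10 * time.Minute}
        for {
            url := fmt.Sprintf(
                "http://127.0.0.1:8500/v1/agent/connect/ca/leaf/%s?index=%d&wait=5m",
                service, index)
            resp, err := client.Get(url)
            if err != nil {
                return err
            }
            body, err := io.ReadAll(resp.Body)
            resp.Body.Close()
            if err != nil {
                return err
            }
            var cert issuedCert
            if err := json.Unmarshal(body, &cert); err != nil {
                return err
            }
            if cert.ModifyIndex > index {
                index = cert.ModifyIndex
                fmt.Printf("new leaf for %q at index %d\n", service, index)
                // Reload TLS state from cert.CertPEM / cert.PrivateKeyPEM here.
            }
        }
    }

    func main() {
        if err := watchLeaf("web"); err != nil {
            fmt.Println("watch error:", err)
        }
    }
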
--- agent/agent.go | 10 +++ agent/agent_endpoint.go | 45 ++++++---- agent/agent_endpoint_test.go | 116 +++++++++++++++++++------- agent/agent_test.go | 9 -- agent/cache-types/connect_ca_leaf.go | 14 ++-- agent/connect/testing_ca.go | 49 ++++++++++- agent/connect_ca_endpoint_test.go | 12 +-- agent/consul/connect_ca_endpoint.go | 33 ++++++-- agent/consul/connect_ca_provider.go | 17 ++-- agent/consul/testing.go | 26 ------ agent/consul/testing_endpoint.go | 43 ---------- agent/consul/testing_endpoint_test.go | 42 ---------- agent/consul/testing_test.go | 13 --- 13 files changed, 225 insertions(+), 204 deletions(-) delete mode 100644 agent/consul/testing.go delete mode 100644 agent/consul/testing_endpoint.go delete mode 100644 agent/consul/testing_endpoint_test.go delete mode 100644 agent/consul/testing_test.go diff --git a/agent/agent.go b/agent/agent.go index f62495a01..9dfe2abea 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2679,6 +2679,16 @@ func (a *Agent) registerCache() { RefreshTimeout: 10 * time.Minute, }) + a.cache.RegisterType(cachetype.ConnectCALeafName, &cachetype.ConnectCALeaf{ + RPC: a.delegate, + Cache: a.cache, + }, &cache.RegisterOptions{ + // Maintain a blocking query, retry dropped connections quickly + Refresh: true, + RefreshTimer: 0, + RefreshTimeout: 10 * time.Minute, + }) + a.cache.RegisterType(cachetype.IntentionMatchName, &cachetype.IntentionMatch{ RPC: a.delegate, }, &cache.RegisterOptions{ diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index d500b17ba..b13e6d076 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -28,9 +28,7 @@ import ( "github.com/hashicorp/serf/serf" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/promhttp" - // NOTE(mitcehllh): This is temporary while certs are stubbed out. - "github.com/mitchellh/go-testing-interface" ) type Self struct { @@ -918,24 +916,39 @@ func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http. return nil, fmt.Errorf("unknown service ID: %s", id) } - // Create a CSR. - // TODO(mitchellh): This is obviously not production ready! 
- csr, pk := connect.TestCSR(&testing.RuntimeT{}, &connect.SpiffeIDService{ - Host: "1234.consul", - Namespace: "default", - Datacenter: s.agent.config.Datacenter, - Service: service.Service, - }) + args := cachetype.ConnectCALeafRequest{ + Service: service.Service, // Need name not ID + } + var qOpts structs.QueryOptions + // Store DC in the ConnectCALeafRequest but query opts separately + if done := s.parse(resp, req, &args.Datacenter, &qOpts); done { + return nil, nil + } + args.MinQueryIndex = qOpts.MinQueryIndex - // Request signing - var reply structs.IssuedCert - args := structs.CASignRequest{CSR: csr} - if err := s.agent.RPC("ConnectCA.Sign", &args, &reply); err != nil { + // Validate token + // TODO(banks): support correct proxy token checking too + rule, err := s.agent.resolveToken(qOpts.Token) + if err != nil { return nil, err } - reply.PrivateKeyPEM = pk + if rule != nil && !rule.ServiceWrite(service.Service, nil) { + return nil, acl.ErrPermissionDenied + } - return &reply, nil + raw, err := s.agent.cache.Get(cachetype.ConnectCALeafName, &args) + if err != nil { + return nil, err + } + + reply, ok := raw.(*structs.IssuedCert) + if !ok { + // This should never happen, but we want to protect against panics + return nil, fmt.Errorf("internal error: response type not correct") + } + setIndex(resp, reply.ModifyIndex) + + return reply, nil } // GET /v1/agent/connect/proxy/:proxy_service_id diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index d5ea7305a..fad92cb9a 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2,6 +2,7 @@ package agent import ( "bytes" + "crypto/tls" "crypto/x509" "fmt" "io" @@ -2105,7 +2106,7 @@ func TestAgentConnectCARoots_empty(t *testing.T) { t.Parallel() assert := assert.New(t) - a := NewTestAgent(t.Name(), "") + a := NewTestAgent(t.Name(), "connect { enabled = false }") defer a.Shutdown() req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/roots", nil) @@ -2128,13 +2129,9 @@ func TestAgentConnectCARoots_list(t *testing.T) { // Grab the initial cache hit count cacheHits := a.cache.Hits() - // Set some CAs - var reply interface{} - ca1 := connect.TestCA(t, nil) - ca1.Active = false - ca2 := connect.TestCA(t, nil) - require.Nil(a.RPC("Test.ConnectCASetRoots", - []*structs.CARoot{ca1, ca2}, &reply)) + // Set some CAs. Note that NewTestAgent already bootstraps one CA so this just + // adds a second and makes it active. 
+ ca2 := connect.TestCAConfigSet(t, a, nil) // List req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/roots", nil) @@ -2152,7 +2149,7 @@ func TestAgentConnectCARoots_list(t *testing.T) { require.Equal("", r.SigningKey) } - // That should've been a cache miss, so not hit change + // That should've been a cache miss, so no hit change require.Equal(cacheHits, a.cache.Hits()) // Test caching @@ -2169,24 +2166,21 @@ func TestAgentConnectCARoots_list(t *testing.T) { // Test that caching is updated in the background { - // Set some new CAs - var reply interface{} - ca := connect.TestCA(t, nil) - require.Nil(a.RPC("Test.ConnectCASetRoots", - []*structs.CARoot{ca}, &reply)) + // Set a new CA + ca := connect.TestCAConfigSet(t, a, nil) retry.Run(t, func(r *retry.R) { // List it again obj, err := a.srv.AgentConnectCARoots(httptest.NewRecorder(), req) - if err != nil { - r.Fatal(err) - } + r.Check(err) value := obj.(structs.IndexedCARoots) if ca.ID != value.ActiveRootID { r.Fatalf("%s != %s", ca.ID, value.ActiveRootID) } - if len(value.Roots) != 1 { + // There are now 3 CAs because we didn't complete rotation on the original + // 2 + if len(value.Roots) != 3 { r.Fatalf("bad len: %d", len(value.Roots)) } }) @@ -2205,13 +2199,16 @@ func TestAgentConnectCALeafCert_good(t *testing.T) { t.Parallel() assert := assert.New(t) + require := require.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() - // Set CAs - var reply interface{} - ca1 := connect.TestCA(t, nil) - assert.Nil(a.RPC("Test.ConnectCASetRoots", []*structs.CARoot{ca1}, &reply)) + // CA already setup by default by NewTestAgent but force a new one so we can + // verify it was signed easily. + ca1 := connect.TestCAConfigSet(t, a, nil) + + // Grab the initial cache hit count + cacheHits := a.cache.Hits() { // Register a local service @@ -2227,7 +2224,7 @@ func TestAgentConnectCALeafCert_good(t *testing.T) { req, _ := http.NewRequest("PUT", "/v1/agent/service/register", jsonReader(args)) resp := httptest.NewRecorder() _, err := a.srv.AgentRegisterService(resp, req) - assert.Nil(err) + require.NoError(err) if !assert.Equal(200, resp.Code) { t.Log("Body: ", resp.Body.String()) } @@ -2237,23 +2234,86 @@ func TestAgentConnectCALeafCert_good(t *testing.T) { req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/foo", nil) resp := httptest.NewRecorder() obj, err := a.srv.AgentConnectCALeafCert(resp, req) - assert.Nil(err) + require.NoError(err) // Get the issued cert issued, ok := obj.(*structs.IssuedCert) assert.True(ok) // Verify that the cert is signed by the CA + requireLeafValidUnderCA(t, issued, ca1) + + // Verify blocking index + assert.True(issued.ModifyIndex > 0) + assert.Equal(fmt.Sprintf("%d", issued.ModifyIndex), + resp.Header().Get("X-Consul-Index")) + + // That should've been a cache miss, so no hit change + require.Equal(cacheHits, a.cache.Hits()) + + // Test caching + { + // Fetch it again + obj2, err := a.srv.AgentConnectCALeafCert(httptest.NewRecorder(), req) + require.NoError(err) + require.Equal(obj, obj2) + + // Should cache hit this time and not make request + require.Equal(cacheHits+1, a.cache.Hits()) + cacheHits++ + } + + // Test that caching is updated in the background + { + // Set a new CA + ca := connect.TestCAConfigSet(t, a, nil) + + retry.Run(t, func(r *retry.R) { + // Try and sign again (note no index/wait arg since cache should update in + // background even if we aren't actively blocking) + obj, err := a.srv.AgentConnectCALeafCert(httptest.NewRecorder(), req) + r.Check(err) + + issued2 := 
obj.(*structs.IssuedCert) + if issued.CertPEM == issued2.CertPEM { + r.Fatalf("leaf has not updated") + } + + // Got a new leaf. Sanity check it's a whole new key as well as differnt + // cert. + if issued.PrivateKeyPEM == issued2.PrivateKeyPEM { + r.Fatalf("new leaf has same private key as before") + } + + // Verify that the cert is signed by the new CA + requireLeafValidUnderCA(t, issued2, ca) + }) + + // Should be a cache hit! The data should've updated in the cache + // in the background so this should've been fetched directly from + // the cache. + if v := a.cache.Hits(); v < cacheHits+1 { + t.Fatalf("expected at least one more cache hit, still at %d", v) + } + cacheHits = a.cache.Hits() + } +} + +func requireLeafValidUnderCA(t *testing.T, issued *structs.IssuedCert, + ca *structs.CARoot) { + roots := x509.NewCertPool() - assert.True(roots.AppendCertsFromPEM([]byte(ca1.RootCert))) + require.True(t, roots.AppendCertsFromPEM([]byte(ca.RootCert))) leaf, err := connect.ParseCert(issued.CertPEM) - assert.Nil(err) + require.NoError(t, err) _, err = leaf.Verify(x509.VerifyOptions{ Roots: roots, }) - assert.Nil(err) + require.NoError(t, err) - // TODO(mitchellh): verify the private key matches the cert + // Verify the private key matches. tls.LoadX509Keypair does this for us! + _, err = tls.X509KeyPair([]byte(issued.CertPEM), []byte(issued.PrivateKeyPEM)) + require.NoError(t, err) } func TestAgentConnectProxy(t *testing.T) { diff --git a/agent/agent_test.go b/agent/agent_test.go index caa76a28d..730c10bc9 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -18,7 +18,6 @@ import ( "github.com/stretchr/testify/require" "github.com/hashicorp/consul/agent/checks" - "github.com/hashicorp/consul/agent/consul" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/testutil" @@ -28,14 +27,6 @@ import ( "github.com/pascaldekloe/goe/verify" ) -// TestMain is the main entrypoint for `go test`. -func TestMain(m *testing.M) { - // Enable the test RPC endpoints - consul.TestEndpoint() - - os.Exit(m.Run()) -} - func externalIP() (string, error) { addrs, err := net.InterfaceAddrs() if err != nil { diff --git a/agent/cache-types/connect_ca_leaf.go b/agent/cache-types/connect_ca_leaf.go index c6a2eee73..2c1cd156a 100644 --- a/agent/cache-types/connect_ca_leaf.go +++ b/agent/cache-types/connect_ca_leaf.go @@ -48,7 +48,7 @@ func (c *ConnectCALeaf) Fetch(opts cache.FetchOptions, req cache.Request) (cache // is so that the goroutine doesn't block forever if we return for other // reasons. newRootCACh := make(chan error, 1) - go c.waitNewRootCA(newRootCACh, opts.Timeout) + go c.waitNewRootCA(reqReal.Datacenter, newRootCACh, opts.Timeout) // Get our prior cert (if we had one) and use that to determine our // expiration time. If no cert exists, we expire immediately since we @@ -110,7 +110,10 @@ func (c *ConnectCALeaf) Fetch(opts cache.FetchOptions, req cache.Request) (cache // Request signing var reply structs.IssuedCert - args := structs.CASignRequest{CSR: csr} + args := structs.CASignRequest{ + Datacenter: reqReal.Datacenter, + CSR: csr, + } if err := c.RPC.RPC("ConnectCA.Sign", &args, &reply); err != nil { return result, err } @@ -139,11 +142,12 @@ func (c *ConnectCALeaf) Fetch(opts cache.FetchOptions, req cache.Request) (cache // waitNewRootCA blocks until a new root CA is available or the timeout is // reached (on timeout ErrTimeout is returned on the channel). 
-func (c *ConnectCALeaf) waitNewRootCA(ch chan<- error, timeout time.Duration) { +func (c *ConnectCALeaf) waitNewRootCA(datacenter string, ch chan<- error, + timeout time.Duration) { // Fetch some new roots. This will block until our MinQueryIndex is // matched or the timeout is reached. rawRoots, err := c.Cache.Get(ConnectCARootName, &structs.DCSpecificRequest{ - Datacenter: "", + Datacenter: datacenter, QueryOptions: structs.QueryOptions{ MinQueryIndex: atomic.LoadUint64(&c.caIndex), MaxQueryTime: timeout, @@ -186,7 +190,7 @@ func (c *ConnectCALeaf) waitNewRootCA(ch chan<- error, timeout time.Duration) { } // ConnectCALeafRequest is the cache.Request implementation for the -// COnnectCALeaf cache type. This is implemented here and not in structs +// ConnectCALeaf cache type. This is implemented here and not in structs // since this is only used for cache-related requests and not forwarded // directly to any Consul servers. type ConnectCALeafRequest struct { diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index e12372589..fbb5eed49 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -39,7 +39,6 @@ var testCACounter uint64 // SigningCert. func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot { var result structs.CARoot - result.ID = testUUID(t) result.Active = true result.Name = fmt.Sprintf("Test CA %d", atomic.AddUint64(&testCACounter, 1)) @@ -86,6 +85,10 @@ func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot { t.Fatalf("error encoding private key: %s", err) } result.RootCert = buf.String() + result.ID, err = CalculateCertFingerprint(result.RootCert) + if err != nil { + t.Fatalf("error generating CA ID fingerprint: %s", err) + } // If there is a prior CA to cross-sign with, then we need to create that // and set it as the signing cert. @@ -286,3 +289,47 @@ func testUUID(t testing.T) string { return ret } + +// TestAgentRPC is an interface that an RPC client must implement. This is a +// helper interface that is implemented by the agent delegate so that test +// helpers can make RPCs without introducing an import cycle on `agent`. +type TestAgentRPC interface { + RPC(method string, args interface{}, reply interface{}) error +} + +// TestCAConfigSet sets a CARoot returned by TestCA into the TestAgent state. It +// requires that TestAgent had connect enabled in it's config. If ca is nil, a +// new CA is created. +// +// It returns the CARoot passed or created. +// +// Note that we have to use an interface for the TestAgent.RPC method since we +// can't introduce an import cycle by importing `agent.TestAgent` here directly. +// It also means this will work in a few other places we mock that method. 
+func TestCAConfigSet(t testing.T, a TestAgentRPC, + ca *structs.CARoot) *structs.CARoot { + t.Helper() + + if ca == nil { + ca = TestCA(t, nil) + } + newConfig := &structs.CAConfiguration{ + Provider: "consul", + Config: map[string]interface{}{ + "PrivateKey": ca.SigningKey, + "RootCert": ca.RootCert, + "RotationPeriod": 180 * 24 * time.Hour, + }, + } + args := &structs.CARequest{ + Datacenter: "dc1", + Config: newConfig, + } + var reply interface{} + + err := a.RPC("ConnectCA.ConfigurationSet", args, &reply) + if err != nil { + t.Fatalf("failed to set test CA config: %s", err) + } + return ca +} diff --git a/agent/connect_ca_endpoint_test.go b/agent/connect_ca_endpoint_test.go index bcf209ffe..a9b355e0d 100644 --- a/agent/connect_ca_endpoint_test.go +++ b/agent/connect_ca_endpoint_test.go @@ -14,7 +14,7 @@ func TestConnectCARoots_empty(t *testing.T) { t.Parallel() assert := assert.New(t) - a := NewTestAgent(t.Name(), "") + a := NewTestAgent(t.Name(), "connect { enabled = false }") defer a.Shutdown() req, _ := http.NewRequest("GET", "/v1/connect/ca/roots", nil) @@ -34,13 +34,9 @@ func TestConnectCARoots_list(t *testing.T) { a := NewTestAgent(t.Name(), "") defer a.Shutdown() - // Set some CAs - var reply interface{} - ca1 := connect.TestCA(t, nil) - ca1.Active = false - ca2 := connect.TestCA(t, nil) - assert.Nil(a.RPC("Test.ConnectCASetRoots", - []*structs.CARoot{ca1, ca2}, &reply)) + // Set some CAs. Note that NewTestAgent already bootstraps one CA so this just + // adds a second and makes it active. + ca2 := connect.TestCAConfigSet(t, a, nil) // List req, _ := http.NewRequest("GET", "/v1/connect/ca/roots", nil) diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 35dbe46e8..136cbcb49 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -272,14 +272,6 @@ func (s *ConnectCA) Sign( return err } - provider := s.srv.getCAProvider() - - // todo(kyhavlov): more validation on the CSR before signing - pem, err := provider.Sign(csr) - if err != nil { - return err - } - // Parse the SPIFFE ID spiffeId, err := connect.ParseCertURI(csr.URIs[0]) if err != nil { @@ -289,6 +281,27 @@ func (s *ConnectCA) Sign( if !ok { return fmt.Errorf("SPIFFE ID in CSR must be a service ID") } + + provider := s.srv.getCAProvider() + + // todo(kyhavlov): more validation on the CSR before signing + pem, err := provider.Sign(csr) + if err != nil { + return err + } + + // TODO(banks): when we implement IssuedCerts table we can use the insert to + // that as the raft index to return in response. Right now we can rely on only + // the built-in provider being supported and the implementation detail that we + // have to write a SerialIndex update to the provider config table for every + // cert issued so in all cases this index will be higher than any previous + // sign response. This has to happen after the provider.Sign call to observe + // the index update. 
+ modIdx, _, err := s.srv.fsm.State().CAConfig() + if err != nil { + return err + } + cert, err := connect.ParseCert(pem) if err != nil { return err @@ -302,6 +315,10 @@ func (s *ConnectCA) Sign( ServiceURI: cert.URIs[0].String(), ValidAfter: cert.NotBefore, ValidBefore: cert.NotAfter, + RaftIndex: structs.RaftIndex{ + ModifyIndex: modIdx, + CreateIndex: modIdx, + }, } return nil diff --git a/agent/consul/connect_ca_provider.go b/agent/consul/connect_ca_provider.go index cb2bcad57..0d7d851b0 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/consul/connect_ca_provider.go @@ -250,7 +250,7 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { // Create the certificate, PEM encode it and return that value. var buf bytes.Buffer bs, err := x509.CreateCertificate( - rand.Reader, &template, caCert, signer.Public(), signer) + rand.Reader, &template, caCert, csr.PublicKey, signer) if err != nil { return "", fmt.Errorf("error generating certificate: %s", err) } @@ -259,7 +259,10 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { return "", fmt.Errorf("error encoding private key: %s", err) } - c.incrementSerialIndex(providerState) + err = c.incrementSerialIndex(providerState) + if err != nil { + return "", err + } // Set the response return buf.String(), nil @@ -313,15 +316,19 @@ func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { return "", fmt.Errorf("error encoding private key: %s", err) } - c.incrementSerialIndex(providerState) + err = c.incrementSerialIndex(providerState) + if err != nil { + return "", err + } return buf.String(), nil } -// incrementSerialIndex increments the cert serial number index in the provider state +// incrementSerialIndex increments the cert serial number index in the provider +// state. func (c *ConsulCAProvider) incrementSerialIndex(providerState *structs.CAConsulProviderState) error { newState := *providerState - newState.SerialIndex += 1 + newState.SerialIndex++ args := &structs.CARequest{ Op: structs.CAOpSetProviderState, ProviderState: &newState, diff --git a/agent/consul/testing.go b/agent/consul/testing.go deleted file mode 100644 index afae7c1a1..000000000 --- a/agent/consul/testing.go +++ /dev/null @@ -1,26 +0,0 @@ -package consul - -import ( - "sync" -) - -// testEndpointsOnce ensures that endpoints for testing are registered once. -var testEndpointsOnce sync.Once - -// TestEndpoints registers RPC endpoints specifically for testing. These -// endpoints enable some internal data access that we normally disallow, but -// are useful for modifying server state. -// -// To use this, modify TestMain to call this function prior to running tests. -// -// These should NEVER be registered outside of tests. -// -// NOTE(mitchellh): This was created so that the downstream agent tests can -// modify internal Connect CA state. When the CA plugin work comes in with -// a more complete CA API, this may no longer be necessary and we can remove it. -// That would be ideal. -func TestEndpoint() { - testEndpointsOnce.Do(func() { - registerEndpoint(func(s *Server) interface{} { return &Test{s} }) - }) -} diff --git a/agent/consul/testing_endpoint.go b/agent/consul/testing_endpoint.go deleted file mode 100644 index 6e3cec12f..000000000 --- a/agent/consul/testing_endpoint.go +++ /dev/null @@ -1,43 +0,0 @@ -package consul - -import ( - "github.com/hashicorp/consul/agent/structs" -) - -// Test is an RPC endpoint that is only available during `go test` when -// `TestEndpoint` is called. 
This is not and must not ever be available -// during a real running Consul agent, since it this endpoint bypasses -// critical ACL checks. -type Test struct { - // srv is a pointer back to the server. - srv *Server -} - -// ConnectCASetRoots sets the current CA roots state. -func (s *Test) ConnectCASetRoots( - args []*structs.CARoot, - reply *interface{}) error { - - // Get the highest index - state := s.srv.fsm.State() - idx, _, err := state.CARoots(nil) - if err != nil { - return err - } - - // Commit - resp, err := s.srv.raftApply(structs.ConnectCARequestType, &structs.CARequest{ - Op: structs.CAOpSetRoots, - Index: idx, - Roots: args, - }) - if err != nil { - s.srv.logger.Printf("[ERR] consul.test: Apply failed %v", err) - return err - } - if respErr, ok := resp.(error); ok { - return respErr - } - - return nil -} diff --git a/agent/consul/testing_endpoint_test.go b/agent/consul/testing_endpoint_test.go deleted file mode 100644 index e20213695..000000000 --- a/agent/consul/testing_endpoint_test.go +++ /dev/null @@ -1,42 +0,0 @@ -package consul - -import ( - "os" - "testing" - - "github.com/hashicorp/consul/agent/connect" - "github.com/hashicorp/consul/agent/structs" - "github.com/hashicorp/consul/testrpc" - "github.com/hashicorp/net-rpc-msgpackrpc" - "github.com/stretchr/testify/assert" -) - -// Test setting the CAs -func TestTestConnectCASetRoots(t *testing.T) { - t.Parallel() - - assert := assert.New(t) - dir1, s1 := testServer(t) - defer os.RemoveAll(dir1) - defer s1.Shutdown() - codec := rpcClient(t, s1) - defer codec.Close() - - testrpc.WaitForLeader(t, s1.RPC, "dc1") - - // Prepare - ca1 := connect.TestCA(t, nil) - ca2 := connect.TestCA(t, nil) - ca2.Active = false - - // Request - args := []*structs.CARoot{ca1, ca2} - var reply interface{} - assert.Nil(msgpackrpc.CallWithCodec(codec, "Test.ConnectCASetRoots", args, &reply)) - - // Verify they're there - state := s1.fsm.State() - _, actual, err := state.CARoots(nil) - assert.Nil(err) - assert.Len(actual, 2) -} diff --git a/agent/consul/testing_test.go b/agent/consul/testing_test.go deleted file mode 100644 index 98e8dd743..000000000 --- a/agent/consul/testing_test.go +++ /dev/null @@ -1,13 +0,0 @@ -package consul - -import ( - "os" - "testing" -) - -func TestMain(m *testing.M) { - // Register the test RPC endpoint - TestEndpoint() - - os.Exit(m.Run()) -} From 02ab461dae45a213255334396b8a81f79765939a Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 26 Apr 2018 14:01:20 +0100 Subject: [PATCH 187/539] TLS watching integrated into Service with some basic tests. There are also a lot of small bug fixes found when testing lots of things end-to-end for the first time and some cleanup now it's integrated with real CA code. 
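
The key mechanism is that the Service's root and leaf watch handlers update shared TLS state rather than rebuilding listeners or dialers. Below is a minimal, hypothetical sketch of that pattern under those assumptions; the real tlsCfg type in the connect package differs, the dynamicTLS name is invented for illustration, and only the server side (via GetConfigForClient) is shown.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "errors"
        "fmt"
        "sync"
    )

    // dynamicTLS is a stand-in for the shared TLS state the watches feed:
    // SetRoots/SetLeaf would be called by the roots and leaf watch handlers,
    // and the hook below consults that state on every new handshake.
    type dynamicTLS struct {
        mu    sync.RWMutex
        roots *x509.CertPool
        leaf  *tls.Certificate
    }

    func (d *dynamicTLS) SetRoots(roots *x509.CertPool) {
        d.mu.Lock()
        defer d.mu.Unlock()
        d.roots = roots
    }

    func (d *dynamicTLS) SetLeaf(leaf *tls.Certificate) {
        d.mu.Lock()
        defer d.mu.Unlock()
        d.leaf = leaf
    }

    // current snapshots the latest roots and leaf.
    func (d *dynamicTLS) current() (*x509.CertPool, *tls.Certificate, error) {
        d.mu.RLock()
        defer d.mu.RUnlock()
        if d.leaf == nil || d.roots == nil {
            return nil, nil, errors.New("TLS state not ready yet")
        }
        return d.roots, d.leaf, nil
    }

    // ServerConfig returns a tls.Config whose GetConfigForClient hook
    // re-reads the state per inbound handshake, so rotated certs apply to
    // all new connections without recreating the listener.
    func (d *dynamicTLS) ServerConfig() *tls.Config {
        return &tls.Config{
            GetConfigForClient: func(*tls.ClientHelloInfo) (*tls.Config, error) {
                roots, leaf, err := d.current()
                if err != nil {
                    return nil, err
                }
                return &tls.Config{
                    Certificates: []tls.Certificate{*leaf},
                    ClientCAs:    roots,
                    ClientAuth:   tls.RequireAndVerifyClientCert,
                }, nil
            },
        }
    }

    func main() {
        d := &dynamicTLS{}
        // Watch handlers would call d.SetRoots / d.SetLeaf as data arrives.
        fmt.Printf("server config ready: %v\n", d.ServerConfig() != nil)
    }
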
--- agent/agent_endpoint.go | 69 +++++++- agent/agent_endpoint_test.go | 242 +++++++++++++++++++++++++++- agent/config/builder.go | 203 ++++++++++++----------- agent/config/runtime.go | 6 +- agent/config/runtime_test.go | 18 ++- agent/connect/testing_ca.go | 2 +- agent/connect/testing_spiffe.go | 2 +- agent/http_oss.go | 2 +- agent/local/state.go | 7 +- agent/local/state_test.go | 15 ++ agent/structs/connect.go | 7 +- agent/structs/service_definition.go | 5 +- api/agent.go | 3 - api/agent_test.go | 75 ++++++++- connect/proxy/config.go | 20 +-- connect/proxy/config_test.go | 10 -- connect/proxy/listener.go | 18 ++- connect/proxy/proxy.go | 65 +++++--- connect/resolver.go | 8 +- connect/service.go | 51 ++++-- connect/service_test.go | 94 ++++++++++- connect/testing.go | 11 +- connect/tls.go | 61 ++++++- connect/tls_test.go | 42 +++++ testutil/server.go | 3 + watch/funcs.go | 6 +- watch/funcs_test.go | 93 ++++------- 27 files changed, 868 insertions(+), 270 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index b13e6d076..fde7ca5e2 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -28,7 +28,6 @@ import ( "github.com/hashicorp/serf/serf" "github.com/prometheus/client_golang/prometheus" "github.com/prometheus/client_golang/prometheus/promhttp" - // NOTE(mitcehllh): This is temporary while certs are stubbed out. ) type Self struct { @@ -1000,14 +999,71 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http } contentHash := fmt.Sprintf("%x", hash) + // Merge globals defaults + config := make(map[string]interface{}) + for k, v := range s.agent.config.ConnectProxyDefaultConfig { + if _, ok := config[k]; !ok { + config[k] = v + } + } + + execMode := "daemon" + // If there is a global default mode use that instead + if s.agent.config.ConnectProxyDefaultExecMode != "" { + execMode = s.agent.config.ConnectProxyDefaultExecMode + } + // If it's actually set though, use the one set + if proxy.Proxy.ExecMode != structs.ProxyExecModeUnspecified { + execMode = proxy.Proxy.ExecMode.String() + } + + // TODO(banks): default the binary to current binary. Probably needs to be + // done deeper though as it will be needed for actually managing proxy + // lifecycle. + command := proxy.Proxy.Command + if command == "" { + if execMode == "daemon" { + command = s.agent.config.ConnectProxyDefaultDaemonCommand + } + if execMode == "script" { + command = s.agent.config.ConnectProxyDefaultScriptCommand + } + } + // No global defaults set either... + if command == "" { + command = "consul connect proxy" + } + + // Set defaults for anything that is still not specified but required. + // Note that these are not included in the content hash. Since we expect + // them to be static in general but some like the default target service + // port might not be. In that edge case services can set that explicitly + // when they re-register which will be caught though. + for k, v := range proxy.Proxy.Config { + config[k] = v + } + if _, ok := config["bind_port"]; !ok { + config["bind_port"] = proxy.Proxy.ProxyService.Port + } + if _, ok := config["bind_address"]; !ok { + // Default to binding to the same address the agent is configured to + // bind to. 
+ config["bind_address"] = s.agent.config.BindAddr.String() + } + if _, ok := config["local_service_address"]; !ok { + // Default to localhost and the port the service registered with + config["local_service_address"] = fmt.Sprintf("127.0.0.1:%d", + target.Port) + } + reply := &api.ConnectProxyConfig{ ProxyServiceID: proxy.Proxy.ProxyService.ID, TargetServiceID: target.ID, TargetServiceName: target.Service, ContentHash: contentHash, - ExecMode: api.ProxyExecMode(proxy.Proxy.ExecMode.String()), - Command: proxy.Proxy.Command, - Config: proxy.Proxy.Config, + ExecMode: api.ProxyExecMode(execMode), + Command: command, + Config: config, } return contentHash, reply, nil }) @@ -1040,10 +1096,13 @@ func (s *HTTPServer) agentLocalBlockingQuery(resp http.ResponseWriter, hash stri // Apply a small amount of jitter to the request. wait += lib.RandomStagger(wait / 16) timeout = time.NewTimer(wait) - ws = memdb.NewWatchSet() } for { + // Must reset this every loop in case the Watch set is already closed but + // hash remains same. In that case we'll need to re-block on ws.Watch() + // again. + ws = memdb.NewWatchSet() curHash, curResp, err := fn(ws) if err != nil { return curResp, err diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index fad92cb9a..be97bf5a4 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2316,7 +2316,7 @@ func requireLeafValidUnderCA(t *testing.T, issued *structs.IssuedCert, require.NoError(t, err) } -func TestAgentConnectProxy(t *testing.T) { +func TestAgentConnectProxyConfig_Blocking(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), "") @@ -2354,7 +2354,7 @@ func TestAgentConnectProxy(t *testing.T) { TargetServiceName: "test", ContentHash: "84346af2031659c9", ExecMode: "daemon", - Command: "", + Command: "consul connect proxy", Config: map[string]interface{}{ "upstreams": []interface{}{ map[string]interface{}{ @@ -2362,15 +2362,17 @@ func TestAgentConnectProxy(t *testing.T) { "local_port": float64(3131), }, }, - "bind_port": float64(1234), - "connect_timeout_ms": float64(500), + "bind_address": "127.0.0.1", + "local_service_address": "127.0.0.1:8000", + "bind_port": float64(1234), + "connect_timeout_ms": float64(500), }, } ur, err := copystructure.Copy(expectedResponse) require.NoError(t, err) updatedResponse := ur.(*api.ConnectProxyConfig) - updatedResponse.ContentHash = "7d53473b0e9db5a" + updatedResponse.ContentHash = "e1e3395f0d00cd41" upstreams := updatedResponse.Config["upstreams"].([]interface{}) upstreams = append(upstreams, map[string]interface{}{ @@ -2431,6 +2433,41 @@ func TestAgentConnectProxy(t *testing.T) { wantErr: false, wantResp: updatedResponse, }, + { + // This test exercises a case that caused a busy loop to eat CPU for the + // entire duration of the blocking query. If a service gets re-registered + // wth same proxy config then the old proxy config chan is closed causing + // blocked watchset.Watch to return false indicating a change. But since + // the hash is the same when the blocking fn is re-called we should just + // keep blocking on the next iteration. The bug hit was that the WatchSet + // ws was not being reset in the loop and so when you try to `Watch` it + // the second time it just returns immediately making the blocking loop + // into a busy-poll! + // + // This test though doesn't catch that because busy poll still has the + // correct external behaviour. 
I don't want to instrument the loop to + // assert it's not executing too fast here as I can't think of a clean way + // and the issue is fixed now so this test doesn't actually catch the + // error, but does provide an easy way to verify the behaviour by hand: + // 1. Make this test fail e.g. change wantErr to true + // 2. Add a log.Println or similar into the blocking loop/function + // 3. See whether it's called just once or many times in a tight loop. + name: "blocking fetch interrupted with no change (same hash)", + url: "/v1/agent/connect/proxy/test-proxy?wait=200ms&hash=" + expectedResponse.ContentHash, + updateFunc: func() { + time.Sleep(100 * time.Millisecond) + // Re-register with _same_ proxy config + req, _ := http.NewRequest("PUT", "/v1/agent/service/register", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err = a.srv.AgentRegisterService(resp, req) + require.NoError(t, err) + require.Equal(t, 200, resp.Code, "body: %s", resp.Body.String()) + }, + wantWait: 200 * time.Millisecond, + wantCode: 200, + wantErr: false, + wantResp: expectedResponse, + }, } for _, tt := range tests { @@ -2479,6 +2516,201 @@ func TestAgentConnectProxy(t *testing.T) { } } +func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { + t.Parallel() + + // Define a local service with a managed proxy. It's registered in the test + // loop to make sure agent state is predictable whatever order tests execute + // since some alter this service config. + reg := &structs.ServiceDefinition{ + ID: "test-id", + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{}, + } + + tests := []struct { + name string + globalConfig string + proxy structs.ServiceDefinitionConnectProxy + wantMode api.ProxyExecMode + wantCommand string + wantConfig map[string]interface{} + }{ + { + name: "defaults", + globalConfig: ` + bind_addr = "0.0.0.0" + connect { + enabled = true + proxy_defaults = { + bind_min_port = 10000 + bind_max_port = 10000 + } + } + `, + proxy: structs.ServiceDefinitionConnectProxy{}, + wantMode: api.ProxyExecModeDaemon, + wantCommand: "consul connect proxy", + wantConfig: map[string]interface{}{ + "bind_address": "0.0.0.0", + "bind_port": 10000, // "randomly" chosen from our range of 1 + "local_service_address": "127.0.0.1:8000", // port from service reg + }, + }, + { + name: "global defaults - script", + globalConfig: ` + bind_addr = "0.0.0.0" + connect { + enabled = true + proxy_defaults = { + bind_min_port = 10000 + bind_max_port = 10000 + exec_mode = "script" + script_command = "script.sh" + } + } + `, + proxy: structs.ServiceDefinitionConnectProxy{}, + wantMode: api.ProxyExecModeScript, + wantCommand: "script.sh", + wantConfig: map[string]interface{}{ + "bind_address": "0.0.0.0", + "bind_port": 10000, // "randomly" chosen from our range of 1 + "local_service_address": "127.0.0.1:8000", // port from service reg + }, + }, + { + name: "global defaults - daemon", + globalConfig: ` + bind_addr = "0.0.0.0" + connect { + enabled = true + proxy_defaults = { + bind_min_port = 10000 + bind_max_port = 10000 + exec_mode = "daemon" + daemon_command = "daemon.sh" + } + } + `, + proxy: structs.ServiceDefinitionConnectProxy{}, + wantMode: api.ProxyExecModeDaemon, + wantCommand: "daemon.sh", + wantConfig: map[string]interface{}{ + "bind_address": "0.0.0.0", + "bind_port": 10000, // "randomly" chosen from our range of 1 + "local_service_address": "127.0.0.1:8000", // port from service reg + }, + }, + { + name: 
"global default config merge", + globalConfig: ` + bind_addr = "0.0.0.0" + connect { + enabled = true + proxy_defaults = { + bind_min_port = 10000 + bind_max_port = 10000 + config = { + connect_timeout_ms = 1000 + } + } + } + `, + proxy: structs.ServiceDefinitionConnectProxy{ + Config: map[string]interface{}{ + "foo": "bar", + }, + }, + wantMode: api.ProxyExecModeDaemon, + wantCommand: "consul connect proxy", + wantConfig: map[string]interface{}{ + "bind_address": "0.0.0.0", + "bind_port": 10000, // "randomly" chosen from our range of 1 + "local_service_address": "127.0.0.1:8000", // port from service reg + "connect_timeout_ms": 1000, + "foo": "bar", + }, + }, + { + name: "overrides in reg", + globalConfig: ` + bind_addr = "0.0.0.0" + connect { + enabled = true + proxy_defaults = { + bind_min_port = 10000 + bind_max_port = 10000 + exec_mode = "daemon" + daemon_command = "daemon.sh" + script_command = "script.sh" + config = { + connect_timeout_ms = 1000 + } + } + } + `, + proxy: structs.ServiceDefinitionConnectProxy{ + ExecMode: "script", + Command: "foo.sh", + Config: map[string]interface{}{ + "connect_timeout_ms": 2000, + "bind_address": "127.0.0.1", + "bind_port": 1024, + "local_service_address": "127.0.0.1:9191", + }, + }, + wantMode: api.ProxyExecModeScript, + wantCommand: "foo.sh", + wantConfig: map[string]interface{}{ + "bind_address": "127.0.0.1", + "bind_port": float64(1024), + "local_service_address": "127.0.0.1:9191", + "connect_timeout_ms": float64(2000), + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + assert := assert.New(t) + require := require.New(t) + + a := NewTestAgent(t.Name(), tt.globalConfig) + defer a.Shutdown() + + // Register the basic service with the required config + { + reg.Connect.Proxy = &tt.proxy + req, _ := http.NewRequest("PUT", "/v1/agent/service/register", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + req, _ := http.NewRequest("GET", "/v1/agent/connect/proxy/test-id-proxy", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentConnectProxyConfig(resp, req) + require.NoError(err) + + proxyCfg := obj.(*api.ConnectProxyConfig) + assert.Equal("test-id-proxy", proxyCfg.ProxyServiceID) + assert.Equal("test-id", proxyCfg.TargetServiceID) + assert.Equal("test", proxyCfg.TargetServiceName) + assert.Equal(tt.wantMode, proxyCfg.ExecMode) + assert.Equal(tt.wantCommand, proxyCfg.Command) + require.Equal(tt.wantConfig, proxyCfg.Config) + }) + } +} + func TestAgentConnectAuthorize_badBody(t *testing.T) { t.Parallel() diff --git a/agent/config/builder.go b/agent/config/builder.go index 6ad6c70b5..3d9818adc 100644 --- a/agent/config/builder.go +++ b/agent/config/builder.go @@ -531,6 +531,17 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) { connectCAConfig = c.Connect.CAConfig } + proxyDefaultExecMode := "" + proxyDefaultDaemonCommand := "" + proxyDefaultScriptCommand := "" + proxyDefaultConfig := make(map[string]interface{}) + if c.Connect != nil && c.Connect.ProxyDefaults != nil { + proxyDefaultExecMode = b.stringVal(c.Connect.ProxyDefaults.ExecMode) + proxyDefaultDaemonCommand = b.stringVal(c.Connect.ProxyDefaults.DaemonCommand) + proxyDefaultScriptCommand = b.stringVal(c.Connect.ProxyDefaults.ScriptCommand) + proxyDefaultConfig = c.Connect.ProxyDefaults.Config + } + // ---------------------------------------------------------------- // build runtime config // @@ 
-638,100 +649,104 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) { TelemetryStatsiteAddr: b.stringVal(c.Telemetry.StatsiteAddr), // Agent - AdvertiseAddrLAN: advertiseAddrLAN, - AdvertiseAddrWAN: advertiseAddrWAN, - BindAddr: bindAddr, - Bootstrap: b.boolVal(c.Bootstrap), - BootstrapExpect: b.intVal(c.BootstrapExpect), - CAFile: b.stringVal(c.CAFile), - CAPath: b.stringVal(c.CAPath), - CertFile: b.stringVal(c.CertFile), - CheckUpdateInterval: b.durationVal("check_update_interval", c.CheckUpdateInterval), - Checks: checks, - ClientAddrs: clientAddrs, - ConnectEnabled: connectEnabled, - ConnectProxyBindMinPort: proxyBindMinPort, - ConnectProxyBindMaxPort: proxyBindMaxPort, - ConnectCAProvider: connectCAProvider, - ConnectCAConfig: connectCAConfig, - DataDir: b.stringVal(c.DataDir), - Datacenter: strings.ToLower(b.stringVal(c.Datacenter)), - DevMode: b.boolVal(b.Flags.DevMode), - DisableAnonymousSignature: b.boolVal(c.DisableAnonymousSignature), - DisableCoordinates: b.boolVal(c.DisableCoordinates), - DisableHostNodeID: b.boolVal(c.DisableHostNodeID), - DisableKeyringFile: b.boolVal(c.DisableKeyringFile), - DisableRemoteExec: b.boolVal(c.DisableRemoteExec), - DisableUpdateCheck: b.boolVal(c.DisableUpdateCheck), - DiscardCheckOutput: b.boolVal(c.DiscardCheckOutput), - DiscoveryMaxStale: b.durationVal("discovery_max_stale", c.DiscoveryMaxStale), - EnableAgentTLSForChecks: b.boolVal(c.EnableAgentTLSForChecks), - EnableDebug: b.boolVal(c.EnableDebug), - EnableScriptChecks: b.boolVal(c.EnableScriptChecks), - EnableSyslog: b.boolVal(c.EnableSyslog), - EnableUI: b.boolVal(c.UI), - EncryptKey: b.stringVal(c.EncryptKey), - EncryptVerifyIncoming: b.boolVal(c.EncryptVerifyIncoming), - EncryptVerifyOutgoing: b.boolVal(c.EncryptVerifyOutgoing), - KeyFile: b.stringVal(c.KeyFile), - LeaveDrainTime: b.durationVal("performance.leave_drain_time", c.Performance.LeaveDrainTime), - LeaveOnTerm: leaveOnTerm, - LogLevel: b.stringVal(c.LogLevel), - NodeID: types.NodeID(b.stringVal(c.NodeID)), - NodeMeta: c.NodeMeta, - NodeName: b.nodeName(c.NodeName), - NonVotingServer: b.boolVal(c.NonVotingServer), - PidFile: b.stringVal(c.PidFile), - RPCAdvertiseAddr: rpcAdvertiseAddr, - RPCBindAddr: rpcBindAddr, - RPCHoldTimeout: b.durationVal("performance.rpc_hold_timeout", c.Performance.RPCHoldTimeout), - RPCMaxBurst: b.intVal(c.Limits.RPCMaxBurst), - RPCProtocol: b.intVal(c.RPCProtocol), - RPCRateLimit: rate.Limit(b.float64Val(c.Limits.RPCRate)), - RaftProtocol: b.intVal(c.RaftProtocol), - RaftSnapshotThreshold: b.intVal(c.RaftSnapshotThreshold), - RaftSnapshotInterval: b.durationVal("raft_snapshot_interval", c.RaftSnapshotInterval), - ReconnectTimeoutLAN: b.durationVal("reconnect_timeout", c.ReconnectTimeoutLAN), - ReconnectTimeoutWAN: b.durationVal("reconnect_timeout_wan", c.ReconnectTimeoutWAN), - RejoinAfterLeave: b.boolVal(c.RejoinAfterLeave), - RetryJoinIntervalLAN: b.durationVal("retry_interval", c.RetryJoinIntervalLAN), - RetryJoinIntervalWAN: b.durationVal("retry_interval_wan", c.RetryJoinIntervalWAN), - RetryJoinLAN: b.expandAllOptionalAddrs("retry_join", c.RetryJoinLAN), - RetryJoinMaxAttemptsLAN: b.intVal(c.RetryJoinMaxAttemptsLAN), - RetryJoinMaxAttemptsWAN: b.intVal(c.RetryJoinMaxAttemptsWAN), - RetryJoinWAN: b.expandAllOptionalAddrs("retry_join_wan", c.RetryJoinWAN), - SegmentName: b.stringVal(c.SegmentName), - Segments: segments, - SerfAdvertiseAddrLAN: serfAdvertiseAddrLAN, - SerfAdvertiseAddrWAN: serfAdvertiseAddrWAN, - SerfBindAddrLAN: serfBindAddrLAN, - SerfBindAddrWAN: serfBindAddrWAN, - 
SerfPortLAN: serfPortLAN, - SerfPortWAN: serfPortWAN, - ServerMode: b.boolVal(c.ServerMode), - ServerName: b.stringVal(c.ServerName), - ServerPort: serverPort, - Services: services, - SessionTTLMin: b.durationVal("session_ttl_min", c.SessionTTLMin), - SkipLeaveOnInt: skipLeaveOnInt, - StartJoinAddrsLAN: b.expandAllOptionalAddrs("start_join", c.StartJoinAddrsLAN), - StartJoinAddrsWAN: b.expandAllOptionalAddrs("start_join_wan", c.StartJoinAddrsWAN), - SyslogFacility: b.stringVal(c.SyslogFacility), - TLSCipherSuites: b.tlsCipherSuites("tls_cipher_suites", c.TLSCipherSuites), - TLSMinVersion: b.stringVal(c.TLSMinVersion), - TLSPreferServerCipherSuites: b.boolVal(c.TLSPreferServerCipherSuites), - TaggedAddresses: c.TaggedAddresses, - TranslateWANAddrs: b.boolVal(c.TranslateWANAddrs), - UIDir: b.stringVal(c.UIDir), - UnixSocketGroup: b.stringVal(c.UnixSocket.Group), - UnixSocketMode: b.stringVal(c.UnixSocket.Mode), - UnixSocketUser: b.stringVal(c.UnixSocket.User), - VerifyIncoming: b.boolVal(c.VerifyIncoming), - VerifyIncomingHTTPS: b.boolVal(c.VerifyIncomingHTTPS), - VerifyIncomingRPC: b.boolVal(c.VerifyIncomingRPC), - VerifyOutgoing: b.boolVal(c.VerifyOutgoing), - VerifyServerHostname: b.boolVal(c.VerifyServerHostname), - Watches: c.Watches, + AdvertiseAddrLAN: advertiseAddrLAN, + AdvertiseAddrWAN: advertiseAddrWAN, + BindAddr: bindAddr, + Bootstrap: b.boolVal(c.Bootstrap), + BootstrapExpect: b.intVal(c.BootstrapExpect), + CAFile: b.stringVal(c.CAFile), + CAPath: b.stringVal(c.CAPath), + CertFile: b.stringVal(c.CertFile), + CheckUpdateInterval: b.durationVal("check_update_interval", c.CheckUpdateInterval), + Checks: checks, + ClientAddrs: clientAddrs, + ConnectEnabled: connectEnabled, + ConnectCAProvider: connectCAProvider, + ConnectCAConfig: connectCAConfig, + ConnectProxyBindMinPort: proxyBindMinPort, + ConnectProxyBindMaxPort: proxyBindMaxPort, + ConnectProxyDefaultExecMode: proxyDefaultExecMode, + ConnectProxyDefaultDaemonCommand: proxyDefaultDaemonCommand, + ConnectProxyDefaultScriptCommand: proxyDefaultScriptCommand, + ConnectProxyDefaultConfig: proxyDefaultConfig, + DataDir: b.stringVal(c.DataDir), + Datacenter: strings.ToLower(b.stringVal(c.Datacenter)), + DevMode: b.boolVal(b.Flags.DevMode), + DisableAnonymousSignature: b.boolVal(c.DisableAnonymousSignature), + DisableCoordinates: b.boolVal(c.DisableCoordinates), + DisableHostNodeID: b.boolVal(c.DisableHostNodeID), + DisableKeyringFile: b.boolVal(c.DisableKeyringFile), + DisableRemoteExec: b.boolVal(c.DisableRemoteExec), + DisableUpdateCheck: b.boolVal(c.DisableUpdateCheck), + DiscardCheckOutput: b.boolVal(c.DiscardCheckOutput), + DiscoveryMaxStale: b.durationVal("discovery_max_stale", c.DiscoveryMaxStale), + EnableAgentTLSForChecks: b.boolVal(c.EnableAgentTLSForChecks), + EnableDebug: b.boolVal(c.EnableDebug), + EnableScriptChecks: b.boolVal(c.EnableScriptChecks), + EnableSyslog: b.boolVal(c.EnableSyslog), + EnableUI: b.boolVal(c.UI), + EncryptKey: b.stringVal(c.EncryptKey), + EncryptVerifyIncoming: b.boolVal(c.EncryptVerifyIncoming), + EncryptVerifyOutgoing: b.boolVal(c.EncryptVerifyOutgoing), + KeyFile: b.stringVal(c.KeyFile), + LeaveDrainTime: b.durationVal("performance.leave_drain_time", c.Performance.LeaveDrainTime), + LeaveOnTerm: leaveOnTerm, + LogLevel: b.stringVal(c.LogLevel), + NodeID: types.NodeID(b.stringVal(c.NodeID)), + NodeMeta: c.NodeMeta, + NodeName: b.nodeName(c.NodeName), + NonVotingServer: b.boolVal(c.NonVotingServer), + PidFile: b.stringVal(c.PidFile), + RPCAdvertiseAddr: rpcAdvertiseAddr, + RPCBindAddr: 
rpcBindAddr, + RPCHoldTimeout: b.durationVal("performance.rpc_hold_timeout", c.Performance.RPCHoldTimeout), + RPCMaxBurst: b.intVal(c.Limits.RPCMaxBurst), + RPCProtocol: b.intVal(c.RPCProtocol), + RPCRateLimit: rate.Limit(b.float64Val(c.Limits.RPCRate)), + RaftProtocol: b.intVal(c.RaftProtocol), + RaftSnapshotThreshold: b.intVal(c.RaftSnapshotThreshold), + RaftSnapshotInterval: b.durationVal("raft_snapshot_interval", c.RaftSnapshotInterval), + ReconnectTimeoutLAN: b.durationVal("reconnect_timeout", c.ReconnectTimeoutLAN), + ReconnectTimeoutWAN: b.durationVal("reconnect_timeout_wan", c.ReconnectTimeoutWAN), + RejoinAfterLeave: b.boolVal(c.RejoinAfterLeave), + RetryJoinIntervalLAN: b.durationVal("retry_interval", c.RetryJoinIntervalLAN), + RetryJoinIntervalWAN: b.durationVal("retry_interval_wan", c.RetryJoinIntervalWAN), + RetryJoinLAN: b.expandAllOptionalAddrs("retry_join", c.RetryJoinLAN), + RetryJoinMaxAttemptsLAN: b.intVal(c.RetryJoinMaxAttemptsLAN), + RetryJoinMaxAttemptsWAN: b.intVal(c.RetryJoinMaxAttemptsWAN), + RetryJoinWAN: b.expandAllOptionalAddrs("retry_join_wan", c.RetryJoinWAN), + SegmentName: b.stringVal(c.SegmentName), + Segments: segments, + SerfAdvertiseAddrLAN: serfAdvertiseAddrLAN, + SerfAdvertiseAddrWAN: serfAdvertiseAddrWAN, + SerfBindAddrLAN: serfBindAddrLAN, + SerfBindAddrWAN: serfBindAddrWAN, + SerfPortLAN: serfPortLAN, + SerfPortWAN: serfPortWAN, + ServerMode: b.boolVal(c.ServerMode), + ServerName: b.stringVal(c.ServerName), + ServerPort: serverPort, + Services: services, + SessionTTLMin: b.durationVal("session_ttl_min", c.SessionTTLMin), + SkipLeaveOnInt: skipLeaveOnInt, + StartJoinAddrsLAN: b.expandAllOptionalAddrs("start_join", c.StartJoinAddrsLAN), + StartJoinAddrsWAN: b.expandAllOptionalAddrs("start_join_wan", c.StartJoinAddrsWAN), + SyslogFacility: b.stringVal(c.SyslogFacility), + TLSCipherSuites: b.tlsCipherSuites("tls_cipher_suites", c.TLSCipherSuites), + TLSMinVersion: b.stringVal(c.TLSMinVersion), + TLSPreferServerCipherSuites: b.boolVal(c.TLSPreferServerCipherSuites), + TaggedAddresses: c.TaggedAddresses, + TranslateWANAddrs: b.boolVal(c.TranslateWANAddrs), + UIDir: b.stringVal(c.UIDir), + UnixSocketGroup: b.stringVal(c.UnixSocket.Group), + UnixSocketMode: b.stringVal(c.UnixSocket.Mode), + UnixSocketUser: b.stringVal(c.UnixSocket.User), + VerifyIncoming: b.boolVal(c.VerifyIncoming), + VerifyIncomingHTTPS: b.boolVal(c.VerifyIncomingHTTPS), + VerifyIncomingRPC: b.boolVal(c.VerifyIncomingRPC), + VerifyOutgoing: b.boolVal(c.VerifyOutgoing), + VerifyServerHostname: b.boolVal(c.VerifyServerHostname), + Watches: c.Watches, } if rt.BootstrapExpect == 1 { diff --git a/agent/config/runtime.go b/agent/config/runtime.go index 15a7ac2ba..ea04d5aa0 100644 --- a/agent/config/runtime.go +++ b/agent/config/runtime.go @@ -633,15 +633,15 @@ type RuntimeConfig struct { // ConnectProxyDefaultExecMode is used where a registration doesn't include an // exec_mode. Defaults to daemon. - ConnectProxyDefaultExecMode *string + ConnectProxyDefaultExecMode string // ConnectProxyDefaultDaemonCommand is used to start proxy in exec_mode = // daemon if not specified at registration time. - ConnectProxyDefaultDaemonCommand *string + ConnectProxyDefaultDaemonCommand string // ConnectProxyDefaultScriptCommand is used to start proxy in exec_mode = // script if not specified at registration time. 
- ConnectProxyDefaultScriptCommand *string + ConnectProxyDefaultScriptCommand string // ConnectProxyDefaultConfig is merged with any config specified at // registration time to allow global control of defaults. diff --git a/agent/config/runtime_test.go b/agent/config/runtime_test.go index 9dff7733b..36fffe16a 100644 --- a/agent/config/runtime_test.go +++ b/agent/config/runtime_test.go @@ -2830,7 +2830,9 @@ func TestFullConfig(t *testing.T) { script_command = "proxyctl.sh" config = { foo = "bar" - connect_timeout_ms = 1000 + # hack float since json parses numbers as float and we have to + # assert against the same thing + connect_timeout_ms = 1000.0 pedantic_mode = true } } @@ -3423,6 +3425,14 @@ func TestFullConfig(t *testing.T) { "g4cvJyys": "IRLXE9Ds", "hyMy9Oxn": "XeBp4Sis", }, + ConnectProxyDefaultExecMode: "script", + ConnectProxyDefaultDaemonCommand: "consul connect proxy", + ConnectProxyDefaultScriptCommand: "proxyctl.sh", + ConnectProxyDefaultConfig: map[string]interface{}{ + "foo": "bar", + "connect_timeout_ms": float64(1000), + "pedantic_mode": true, + }, DNSAddrs: []net.Addr{tcpAddr("93.95.95.81:7001"), udpAddr("93.95.95.81:7001")}, DNSARecordLimit: 29907, DNSAllowStale: true, @@ -4099,9 +4109,9 @@ func TestSanitize(t *testing.T) { "ConnectProxyBindMaxPort": 0, "ConnectProxyBindMinPort": 0, "ConnectProxyDefaultConfig": {}, - "ConnectProxyDefaultDaemonCommand": null, - "ConnectProxyDefaultExecMode": null, - "ConnectProxyDefaultScriptCommand": null, + "ConnectProxyDefaultDaemonCommand": "", + "ConnectProxyDefaultExecMode": "", + "ConnectProxyDefaultScriptCommand": "", "ConsulCoordinateUpdateBatchSize": 0, "ConsulCoordinateUpdateMaxBatches": 0, "ConsulCoordinateUpdatePeriod": "15s", diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index fbb5eed49..552c57535 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -150,7 +150,7 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) (string, string spiffeId := &SpiffeIDService{ Host: fmt.Sprintf("%s.consul", testClusterID), Namespace: "default", - Datacenter: "dc01", + Datacenter: "dc1", Service: service, } diff --git a/agent/connect/testing_spiffe.go b/agent/connect/testing_spiffe.go index e2e7a470f..d6a70cb81 100644 --- a/agent/connect/testing_spiffe.go +++ b/agent/connect/testing_spiffe.go @@ -9,7 +9,7 @@ func TestSpiffeIDService(t testing.T, service string) *SpiffeIDService { return &SpiffeIDService{ Host: testClusterID + ".consul", Namespace: "default", - Datacenter: "dc01", + Datacenter: "dc1", Service: service, } } diff --git a/agent/http_oss.go b/agent/http_oss.go index d9b8068ef..9b9857e40 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -48,7 +48,7 @@ func init() { registerEndpoint("/v1/connect/ca/roots", []string{"GET"}, (*HTTPServer).ConnectCARoots) registerEndpoint("/v1/connect/intentions", []string{"GET", "POST"}, (*HTTPServer).IntentionEndpoint) registerEndpoint("/v1/connect/intentions/match", []string{"GET"}, (*HTTPServer).IntentionMatch) - registerEndpoint("/v1/connect/intentions/", []string{"GET"}, (*HTTPServer).IntentionSpecific) + registerEndpoint("/v1/connect/intentions/", []string{"GET", "PUT", "DELETE"}, (*HTTPServer).IntentionSpecific) registerEndpoint("/v1/coordinate/datacenters", []string{"GET"}, (*HTTPServer).CoordinateDatacenters) registerEndpoint("/v1/coordinate/nodes", []string{"GET"}, (*HTTPServer).CoordinateNodes) registerEndpoint("/v1/coordinate/node/", []string{"GET"}, (*HTTPServer).CoordinateNode) diff --git a/agent/local/state.go 
b/agent/local/state.go
index 839b3cdb2..8df600b32 100644
--- a/agent/local/state.go
+++ b/agent/local/state.go
@@ -608,7 +608,12 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*str
 	l.Lock()
 	defer l.Unlock()
 
-	// Allocate port if needed (min and max inclusive)
+	// Does this proxy instance already exist?
+	if existing, ok := l.managedProxies[svc.ID]; ok {
+		svc.Port = existing.Proxy.ProxyService.Port
+	}
+
+	// Allocate port if needed (min and max inclusive).
 	rangeLen := l.config.ProxyBindMaxPort - l.config.ProxyBindMinPort + 1
 	if svc.Port < 1 && l.config.ProxyBindMinPort > 0 && rangeLen > 0 {
 		// This should be a really short list so don't bother optimising lookup yet.
diff --git a/agent/local/state_test.go b/agent/local/state_test.go
index 16975d963..dd887ccb1 100644
--- a/agent/local/state_test.go
+++ b/agent/local/state_test.go
@@ -1721,6 +1721,21 @@ func TestStateProxyManagement(t *testing.T) {
 	// Port is non-deterministic but could be either of 20000 or 20001
 	assert.Contains([]int{20000, 20001}, svc.Port)
 
+	{
+		// Re-registering same proxy again should not pick a random port but re-use
+		// the assigned one.
+		svcDup, err := state.AddProxy(&p1, "fake-token")
+		require.NoError(err)
+
+		assert.Equal("web-proxy", svcDup.ID)
+		assert.Equal("web-proxy", svcDup.Service)
+		assert.Equal(structs.ServiceKindConnectProxy, svcDup.Kind)
+		assert.Equal("web", svcDup.ProxyDestination)
+		assert.Equal("", svcDup.Address, "should have empty address by default")
+		// Port must be same as before
+		assert.Equal(svc.Port, svcDup.Port)
+	}
+
 	// Second proxy should claim other port
 	p2 := p1
 	p2.TargetServiceID = "cache"
diff --git a/agent/structs/connect.go b/agent/structs/connect.go
index 20970c1bf..90513ae8c 100644
--- a/agent/structs/connect.go
+++ b/agent/structs/connect.go
@@ -24,8 +24,11 @@ type ConnectAuthorizeRequest struct {
 type ProxyExecMode int
 
 const (
+	// ProxyExecModeUnspecified uses the global default proxy mode.
+	ProxyExecModeUnspecified ProxyExecMode = iota
+
 	// ProxyExecModeDaemon executes a proxy process as a supervised daemon.
-	ProxyExecModeDaemon ProxyExecMode = iota
+	ProxyExecModeDaemon
 
 	// ProxyExecModeScript executes a proxy config script on each change to it's
 	// config.
@@ -35,6 +38,8 @@ const (
 // String implements Stringer
 func (m ProxyExecMode) String() string {
 	switch m {
+	case ProxyExecModeUnspecified:
+		return "global_default"
 	case ProxyExecModeDaemon:
 		return "daemon"
 	case ProxyExecModeScript:
diff --git a/agent/structs/service_definition.go b/agent/structs/service_definition.go
index 2ed424178..d4dc21414 100644
--- a/agent/structs/service_definition.go
+++ b/agent/structs/service_definition.go
@@ -55,10 +55,11 @@ func (s *ServiceDefinition) ConnectManagedProxy() (*ConnectManagedProxy, error)
 	// which we shouldn't hard code ourselves here...
 	ns := s.NodeService()
 
-	execMode := ProxyExecModeDaemon
+	execMode := ProxyExecModeUnspecified
 	switch s.Connect.Proxy.ExecMode {
 	case "":
-		execMode = ProxyExecModeDaemon
+		// Use default
+		break
 	case "daemon":
 		execMode = ProxyExecModeDaemon
 	case "script":
diff --git a/api/agent.go b/api/agent.go
index a81fd96f8..b8125c91e 100644
--- a/api/agent.go
+++ b/api/agent.go
@@ -609,9 +609,6 @@ func (a *Agent) ConnectCARoots(q *QueryOptions) (*CARootList, *QueryMeta, error)
 }
 
 // ConnectCALeaf gets the leaf certificate for the given service ID.
-//
-// TODO(mitchellh): we need to test this better once we have a way to
-// configure CAs from the API package (when the CA work is done).
func (a *Agent) ConnectCALeaf(serviceID string, q *QueryOptions) (*LeafCert, *QueryMeta, error) { r := a.c.newRequest("GET", "/v1/agent/connect/ca/leaf/"+serviceID) r.setQueryOptions(q) diff --git a/api/agent_test.go b/api/agent_test.go index 8cc58e012..1f816c23a 100644 --- a/api/agent_test.go +++ b/api/agent_test.go @@ -1049,17 +1049,71 @@ func TestAPI_AgentConnectCARoots_empty(t *testing.T) { agent := c.Agent() list, meta, err := agent.ConnectCARoots(nil) - require.Nil(err) + require.NoError(err) require.Equal(uint64(0), meta.LastIndex) require.Len(list.Roots, 0) } +func TestAPI_AgentConnectCARoots_list(t *testing.T) { + t.Parallel() + + require := require.New(t) + c, s := makeClientWithConfig(t, nil, func(c *testutil.TestServerConfig) { + // Force auto port range to 1 port so we have deterministic response. + c.Connect = map[string]interface{}{ + "enabled": true, + } + }) + defer s.Stop() + + agent := c.Agent() + list, meta, err := agent.ConnectCARoots(nil) + require.NoError(err) + require.True(meta.LastIndex > 0) + require.Len(list.Roots, 1) +} + +func TestAPI_AgentConnectCALeaf(t *testing.T) { + t.Parallel() + + require := require.New(t) + c, s := makeClientWithConfig(t, nil, func(c *testutil.TestServerConfig) { + // Force auto port range to 1 port so we have deterministic response. + c.Connect = map[string]interface{}{ + "enabled": true, + } + }) + defer s.Stop() + + agent := c.Agent() + // Setup service + reg := &AgentServiceRegistration{ + Name: "foo", + Tags: []string{"bar", "baz"}, + Port: 8000, + } + require.NoError(agent.ServiceRegister(reg)) + + leaf, meta, err := agent.ConnectCALeaf("foo", nil) + require.NoError(err) + require.True(meta.LastIndex > 0) + // Sanity checks here as we have actual certificate validation checks at many + // other levels. + require.NotEmpty(leaf.SerialNumber) + require.NotEmpty(leaf.CertPEM) + require.NotEmpty(leaf.PrivateKeyPEM) + require.Equal("foo", leaf.Service) + require.True(strings.HasSuffix(leaf.ServiceURI, "/svc/foo")) + require.True(leaf.ModifyIndex > 0) + require.True(leaf.ValidAfter.Before(time.Now())) + require.True(leaf.ValidBefore.After(time.Now())) +} + // TODO(banks): once we have CA stuff setup properly we can probably make this // much more complete. This is just a sanity check that the agent code basically // works. func TestAPI_AgentConnectAuthorize(t *testing.T) { t.Parallel() - require := require.New(t) c, s := makeClient(t) defer s.Stop() @@ -1079,7 +1133,15 @@ func TestAPI_AgentConnectAuthorize(t *testing.T) { func TestAPI_AgentConnectProxyConfig(t *testing.T) { t.Parallel() - c, s := makeClient(t) + c, s := makeClientWithConfig(t, nil, func(c *testutil.TestServerConfig) { + // Force auto port range to 1 port so we have deterministic response. 
+ c.Connect = map[string]interface{}{ + "proxy_defaults": map[string]interface{}{ + "bind_min_port": 20000, + "bind_max_port": 20000, + }, + } + }) defer s.Stop() agent := c.Agent() @@ -1107,9 +1169,12 @@ func TestAPI_AgentConnectProxyConfig(t *testing.T) { TargetServiceName: "foo", ContentHash: "e662ea8600d84cf0", ExecMode: "daemon", - Command: "", + Command: "consul connect proxy", Config: map[string]interface{}{ - "foo": "bar", + "bind_address": "127.0.0.1", + "bind_port": float64(20000), + "foo": "bar", + "local_service_address": "127.0.0.1:8000", }, } require.Equal(t, expectConfig, config) diff --git a/connect/proxy/config.go b/connect/proxy/config.go index 6fad0bd55..840afa896 100644 --- a/connect/proxy/config.go +++ b/connect/proxy/config.go @@ -52,11 +52,6 @@ type Config struct { // private key to be used in development instead of the ones supplied by // Connect. DevServiceKeyFile string `json:"dev_service_key_file" hcl:"dev_service_key_file"` - - // service is a connect.Service instance representing the proxied service. It - // is created internally by the code responsible for setting up config as it - // may depend on other external dependencies - service *connect.Service } // PublicListenerConfig contains the parameters needed for the incoming mTLS @@ -89,6 +84,9 @@ func (plc *PublicListenerConfig) applyDefaults() { if plc.HandshakeTimeoutMs == 0 { plc.HandshakeTimeoutMs = 10000 } + if plc.BindAddress == "" { + plc.BindAddress = "0.0.0.0" + } } // UpstreamConfig configures an upstream (outgoing) listener. @@ -258,7 +256,6 @@ func NewAgentConfigWatcher(client *api.Client, proxyID string, func (w *AgentConfigWatcher) handler(blockVal watch.BlockingParamVal, val interface{}) { - log.Printf("DEBUG: got hash %s", blockVal.(watch.WaitHashVal)) resp, ok := val.(*api.ConnectProxyConfig) if !ok { @@ -266,25 +263,16 @@ func (w *AgentConfigWatcher) handler(blockVal watch.BlockingParamVal, return } - // Setup Service instance now we know target ID etc - service, err := connect.NewService(resp.TargetServiceID, w.client) - if err != nil { - w.logger.Printf("[WARN] proxy config watch failed to initialize"+ - " service: %s", err) - return - } - // Create proxy config from the response cfg := &Config{ ProxyID: w.proxyID, // Token should be already setup in the client ProxiedServiceID: resp.TargetServiceID, ProxiedServiceNamespace: "default", - service: service, } // Unmarshal configs - err = mapstructure.Decode(resp.Config, &cfg.PublicListener) + err := mapstructure.Decode(resp.Config, &cfg.PublicListener) if err != nil { w.logger.Printf("[ERR] proxy config watch public listener config "+ "couldn't be parsed: %s", err) diff --git a/connect/proxy/config_test.go b/connect/proxy/config_test.go index e576d5f82..1473e8fea 100644 --- a/connect/proxy/config_test.go +++ b/connect/proxy/config_test.go @@ -175,11 +175,6 @@ func TestAgentConfigWatcher(t *testing.T) { }, } - // nil this out as comparisons are problematic, we'll explicitly sanity check - // it's reasonable later. - assert.NotNil(t, cfg.service) - cfg.service = nil - assert.Equal(t, expectCfg, cfg) // TODO(banks): Sanity check the service is viable and gets TLS certs eventually from @@ -213,11 +208,6 @@ func TestAgentConfigWatcher(t *testing.T) { }) expectCfg.PublicListener.LocalConnectTimeoutMs = 444 - // nil this out as comparisons are problematic, we'll explicitly sanity check - // it's reasonable later. 
- assert.NotNil(t, cfg.service) - cfg.service = nil - assert.Equal(t, expectCfg, cfg) } diff --git a/connect/proxy/listener.go b/connect/proxy/listener.go index 12134f840..33f1f5292 100644 --- a/connect/proxy/listener.go +++ b/connect/proxy/listener.go @@ -20,8 +20,10 @@ type Listener struct { // Service is the connect service instance to use. Service *connect.Service + // listenFunc, dialFunc and bindAddr are set by type-specific constructors listenFunc func() (net.Listener, error) dialFunc func() (net.Conn, error) + bindAddr string stopFlag int32 stopChan chan struct{} @@ -42,17 +44,17 @@ type Listener struct { // connections and proxy them to the configured local application over TCP. func NewPublicListener(svc *connect.Service, cfg PublicListenerConfig, logger *log.Logger) *Listener { + bindAddr := fmt.Sprintf("%s:%d", cfg.BindAddress, cfg.BindPort) return &Listener{ Service: svc, listenFunc: func() (net.Listener, error) { - return tls.Listen("tcp", - fmt.Sprintf("%s:%d", cfg.BindAddress, cfg.BindPort), - svc.ServerTLSConfig()) + return tls.Listen("tcp", bindAddr, svc.ServerTLSConfig()) }, dialFunc: func() (net.Conn, error) { return net.DialTimeout("tcp", cfg.LocalServiceAddress, time.Duration(cfg.LocalConnectTimeoutMs)*time.Millisecond) }, + bindAddr: bindAddr, stopChan: make(chan struct{}), listeningChan: make(chan struct{}), logger: logger, @@ -63,11 +65,11 @@ func NewPublicListener(svc *connect.Service, cfg PublicListenerConfig, // connections that are proxied to a discovered Connect service instance. func NewUpstreamListener(svc *connect.Service, cfg UpstreamConfig, logger *log.Logger) *Listener { + bindAddr := fmt.Sprintf("%s:%d", cfg.LocalBindAddress, cfg.LocalBindPort) return &Listener{ Service: svc, listenFunc: func() (net.Listener, error) { - return net.Listen("tcp", - fmt.Sprintf("%s:%d", cfg.LocalBindAddress, cfg.LocalBindPort)) + return net.Listen("tcp", bindAddr) }, dialFunc: func() (net.Conn, error) { if cfg.resolver == nil { @@ -78,6 +80,7 @@ func NewUpstreamListener(svc *connect.Service, cfg UpstreamConfig, defer cancel() return svc.Dial(ctx, cfg.resolver) }, + bindAddr: bindAddr, stopChan: make(chan struct{}), listeningChan: make(chan struct{}), logger: logger, @@ -142,3 +145,8 @@ func (l *Listener) Close() error { func (l *Listener) Wait() { <-l.listeningChan } + +// BindAddr returns the address the listen is bound to. +func (l *Listener) BindAddr() string { + return l.bindAddr +} diff --git a/connect/proxy/proxy.go b/connect/proxy/proxy.go index bda6f3afb..717d45ae6 100644 --- a/connect/proxy/proxy.go +++ b/connect/proxy/proxy.go @@ -1,6 +1,8 @@ package proxy import ( + "bytes" + "crypto/x509" "log" "github.com/hashicorp/consul/api" @@ -14,6 +16,7 @@ type Proxy struct { cfgWatcher ConfigWatcher stopChan chan struct{} logger *log.Logger + service *connect.Service } // NewFromConfigFile returns a Proxy instance configured just from a local file. 
@@ -27,12 +30,11 @@ func NewFromConfigFile(client *api.Client, filename string, } service, err := connect.NewDevServiceFromCertFiles(cfg.ProxiedServiceID, - client, logger, cfg.DevCAFile, cfg.DevServiceCertFile, + logger, cfg.DevCAFile, cfg.DevServiceCertFile, cfg.DevServiceKeyFile) if err != nil { return nil, err } - cfg.service = service p := &Proxy{ proxyID: cfg.ProxyID, @@ -40,6 +42,7 @@ func NewFromConfigFile(client *api.Client, filename string, cfgWatcher: NewStaticConfigWatcher(cfg), stopChan: make(chan struct{}), logger: logger, + service: service, } return p, nil } @@ -47,16 +50,18 @@ func NewFromConfigFile(client *api.Client, filename string, // New returns a Proxy with the given id, consuming the provided (configured) // agent. It is ready to Run(). func New(client *api.Client, proxyID string, logger *log.Logger) (*Proxy, error) { + cw, err := NewAgentConfigWatcher(client, proxyID, logger) + if err != nil { + return nil, err + } p := &Proxy{ - proxyID: proxyID, - client: client, - cfgWatcher: &AgentConfigWatcher{ - client: client, - proxyID: proxyID, - logger: logger, - }, - stopChan: make(chan struct{}), - logger: logger, + proxyID: proxyID, + client: client, + cfgWatcher: cw, + stopChan: make(chan struct{}), + logger: logger, + // Can't load service yet as we only have the proxy's ID not the service's + // until initial config fetch happens. } return p, nil } @@ -71,16 +76,29 @@ func (p *Proxy) Serve() error { select { case newCfg := <-p.cfgWatcher.Watch(): p.logger.Printf("[DEBUG] got new config") - if newCfg.service == nil { - p.logger.Printf("[ERR] new config has nil service") - continue - } + if cfg == nil { // Initial setup + // Setup Service instance now we know target ID etc + service, err := connect.NewService(newCfg.ProxiedServiceID, p.client) + if err != nil { + return err + } + p.service = service + + go func() { + <-service.ReadyWait() + p.logger.Printf("[INFO] proxy loaded config and ready to serve") + tcfg := service.ServerTLSConfig() + cert, _ := tcfg.GetCertificate(nil) + leaf, _ := x509.ParseCertificate(cert.Certificate[0]) + p.logger.Printf("[DEBUG] leaf: %s roots: %s", leaf.URIs[0], bytes.Join(tcfg.RootCAs.Subjects(), []byte(","))) + }() + newCfg.PublicListener.applyDefaults() - l := NewPublicListener(newCfg.service, newCfg.PublicListener, p.logger) - err := p.startListener("public listener", l) + l := NewPublicListener(p.service, newCfg.PublicListener, p.logger) + err = p.startListener("public listener", l) if err != nil { return err } @@ -93,7 +111,13 @@ func (p *Proxy) Serve() error { uc.applyDefaults() uc.resolver = UpstreamResolverFromClient(p.client, uc) - l := NewUpstreamListener(newCfg.service, uc, p.logger) + if uc.LocalBindPort < 1 { + p.logger.Printf("[ERR] upstream %s has no local_bind_port. 
"+ + "Can't start upstream.", uc.String()) + continue + } + + l := NewUpstreamListener(p.service, uc, p.logger) err := p.startListener(uc.String(), l) if err != nil { p.logger.Printf("[ERR] failed to start upstream %s: %s", uc.String(), @@ -110,6 +134,7 @@ func (p *Proxy) Serve() error { // startPublicListener is run from the internal state machine loop func (p *Proxy) startListener(name string, l *Listener) error { + p.logger.Printf("[INFO] %s starting on %s", name, l.BindAddr()) go func() { err := l.Serve() if err != nil { @@ -122,6 +147,7 @@ func (p *Proxy) startListener(name string, l *Listener) error { go func() { <-p.stopChan l.Close() + }() return nil @@ -131,4 +157,7 @@ func (p *Proxy) startListener(name string, l *Listener) error { // called only once. func (p *Proxy) Close() { close(p.stopChan) + if p.service != nil { + p.service.Close() + } } diff --git a/connect/resolver.go b/connect/resolver.go index 9873fcdf1..98d8c88d3 100644 --- a/connect/resolver.go +++ b/connect/resolver.go @@ -7,7 +7,6 @@ import ( "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/api" - testing "github.com/mitchellh/go-testing-interface" ) // Resolver is the interface implemented by a service discovery mechanism to get @@ -122,7 +121,12 @@ func (cr *ConsulResolver) resolveService(ctx context.Context) (string, connect.C // propagating these trust domains we need to actually fetch the trust domain // somehow. We also need to implement namespaces. Use of test function here is // temporary pending the work on trust domains. - certURI := connect.TestSpiffeIDService(&testing.RuntimeT{}, cr.Name) + certURI := &connect.SpiffeIDService{ + Host: "11111111-2222-3333-4444-555555555555.consul", + Namespace: "default", + Datacenter: svcs[idx].Node.Datacenter, + Service: svcs[idx].Service.ProxyDestination, + } return fmt.Sprintf("%s:%d", addr, port), certURI, nil } diff --git a/connect/service.go b/connect/service.go index 4f38558a3..af9fbfcb7 100644 --- a/connect/service.go +++ b/connect/service.go @@ -41,7 +41,8 @@ type Service struct { // fetch certificates and print a loud error message. It will not Close() or // kill the process since that could lead to a crash loop in every service if // ACL token was revoked. All attempts to dial will error and any incoming - // connections will fail to verify. + // connections will fail to verify. It may be nil if the Service is being + // configured from local files for development or testing. client *api.Client // tlsCfg is the dynamic TLS config @@ -63,6 +64,10 @@ type Service struct { // NewService creates and starts a Service. The caller must close the returned // service to free resources and allow the program to exit normally. This is // typically called in a signal handler. +// +// Caller must provide client which is already configured to speak to the local +// Consul agent, and with an ACL token that has `service:write` privileges for +// the serviceID specified. 
func NewService(serviceID string, client *api.Client) (*Service, error) {
 	return NewServiceWithLogger(serviceID, client,
 		log.New(os.Stderr, "", log.LstdFlags))
@@ -89,7 +94,8 @@ func NewServiceWithLogger(serviceID string, client *api.Client,
 	s.rootsWatch.HybridHandler = s.rootsWatchHandler
 
 	p, err = watch.Parse(map[string]interface{}{
-		"type": "connect_leaf",
+		"type":       "connect_leaf",
+		"service_id": s.serviceID,
 	})
 	if err != nil {
 		return nil, err
@@ -97,26 +103,33 @@ func NewServiceWithLogger(serviceID string, client *api.Client,
 	s.leafWatch = p
 	s.leafWatch.HybridHandler = s.leafWatchHandler
 
-	//go s.rootsWatch.RunWithClientAndLogger(s.client, s.logger)
-	//go s.leafWatch.RunWithClientAndLogger(s.client, s.logger)
+	go s.rootsWatch.RunWithClientAndLogger(client, s.logger)
+	go s.leafWatch.RunWithClientAndLogger(client, s.logger)
 
 	return s, nil
 }
 
 // NewDevServiceFromCertFiles creates a Service using certificate and key files
 // passed instead of fetching them from the client.
-func NewDevServiceFromCertFiles(serviceID string, client *api.Client,
-	logger *log.Logger, caFile, certFile, keyFile string) (*Service, error) {
-	s := &Service{
-		serviceID: serviceID,
-		client:    client,
-		logger:    logger,
-	}
+func NewDevServiceFromCertFiles(serviceID string, logger *log.Logger,
+	caFile, certFile, keyFile string) (*Service, error) {
+
 	tlsCfg, err := devTLSConfigFromFiles(caFile, certFile, keyFile)
 	if err != nil {
 		return nil, err
 	}
-	s.tlsCfg = newDynamicTLSConfig(tlsCfg)
+	return NewDevServiceWithTLSConfig(serviceID, logger, tlsCfg)
+}
+
+// NewDevServiceWithTLSConfig creates a Service using static TLS config passed.
+// It's mostly useful for testing.
+func NewDevServiceWithTLSConfig(serviceID string, logger *log.Logger,
+	tlsCfg *tls.Config) (*Service, error) {
+	s := &Service{
+		serviceID: serviceID,
+		logger:    logger,
+		tlsCfg:    newDynamicTLSConfig(tlsCfg),
+	}
 	return s, nil
 }
@@ -274,3 +287,17 @@ func (s *Service) leafWatchHandler(blockParam watch.BlockingParamVal, raw interf
 
 	s.tlsCfg.SetLeaf(&cert)
 }
+
+// Ready returns whether or not both roots and a leaf certificate are
+// configured. If both are non-nil, they are assumed to be valid and usable.
+func (s *Service) Ready() bool {
+	return s.tlsCfg.Ready()
+}
+
+// ReadyWait returns a chan that is closed when the Service becomes ready
+// for use. Note that if the Service is ready when it is called it returns a nil
+// chan. Ready means that it has root and leaf certificates configured which we
+// assume are valid.
+func (s *Service) ReadyWait() <-chan struct{} {
+	return s.tlsCfg.ReadyWait()
+}
diff --git a/connect/service_test.go b/connect/service_test.go
index 20433d1f5..64ca28fc7 100644
--- a/connect/service_test.go
+++ b/connect/service_test.go
@@ -1,16 +1,21 @@
 package connect
 
 import (
+	"bytes"
 	"context"
 	"crypto/tls"
+	"crypto/x509"
 	"fmt"
 	"io"
 	"io/ioutil"
 	"net/http"
+	"strings"
 	"testing"
 	"time"
 
+	"github.com/hashicorp/consul/agent"
 	"github.com/hashicorp/consul/agent/connect"
+	"github.com/hashicorp/consul/api"
 	"github.com/hashicorp/consul/testutil/retry"
 	"github.com/stretchr/testify/require"
 )
@@ -111,10 +116,91 @@ func TestService_Dial(t *testing.T) {
 }
 
 func TestService_ServerTLSConfig(t *testing.T) {
-	// TODO(banks): it's mostly meaningless to test this now since we directly set
-	// the tlsCfg in our TestService helper which is all we'd be asserting on here
-	// not the actual implementation. Once agent tls fetching is built, it becomes
-	// more meaningful to actually verify it's returning the correct config.
+ require := require.New(t) + + a := agent.NewTestAgent("007", "") + defer a.Shutdown() + client := a.Client() + agent := client.Agent() + + // NewTestAgent setup a CA already by default + + // Register a local agent service with a managed proxy + reg := &api.AgentServiceRegistration{ + Name: "web", + Port: 8080, + } + err := agent.ServiceRegister(reg) + require.NoError(err) + + // Now we should be able to create a service that will eventually get it's TLS + // all by itself! + service, err := NewService("web", client) + require.NoError(err) + + // Wait for it to be ready + select { + case <-service.ReadyWait(): + // continue with test case below + case <-time.After(1 * time.Second): + t.Fatalf("timeout waiting for Service.ReadyWait after 1s") + } + + tlsCfg := service.ServerTLSConfig() + + // Sanity check it has a leaf with the right ServiceID and that validates with + // the given roots. + require.NotNil(tlsCfg.GetCertificate) + leaf, err := tlsCfg.GetCertificate(&tls.ClientHelloInfo{}) + require.NoError(err) + cert, err := x509.ParseCertificate(leaf.Certificate[0]) + require.NoError(err) + require.Len(cert.URIs, 1) + require.True(strings.HasSuffix(cert.URIs[0].String(), "/svc/web")) + + // Verify it as a client would + err = clientSideVerifier(tlsCfg, leaf.Certificate) + require.NoError(err) + + // Now test that rotating the root updates + { + // Setup a new generated CA + connect.TestCAConfigSet(t, a, nil) + } + + // After some time, both root and leaves should be different but both should + // still be correct. + oldRootSubjects := bytes.Join(tlsCfg.RootCAs.Subjects(), []byte(", ")) + //oldLeafSerial := connect.HexString(cert.SerialNumber.Bytes()) + oldLeafKeyID := connect.HexString(cert.SubjectKeyId) + retry.Run(t, func(r *retry.R) { + updatedCfg := service.ServerTLSConfig() + + // Wait until roots are different + rootSubjects := bytes.Join(updatedCfg.RootCAs.Subjects(), []byte(", ")) + if bytes.Equal(oldRootSubjects, rootSubjects) { + r.Fatalf("root certificates should have changed, got %s", + rootSubjects) + } + + leaf, err := updatedCfg.GetCertificate(&tls.ClientHelloInfo{}) + r.Check(err) + cert, err := x509.ParseCertificate(leaf.Certificate[0]) + r.Check(err) + + // TODO(banks): Current CA implementation resets the serial index when CA + // config changes which means same serial is issued by new CA config failing + // this test. Re-enable once the CA is changed to fix that. + + // if oldLeafSerial == connect.HexString(cert.SerialNumber.Bytes()) { + // r.Fatalf("leaf certificate should have changed, got serial %s", + // oldLeafSerial) + // } + if oldLeafKeyID == connect.HexString(cert.SubjectKeyId) { + r.Fatalf("leaf should have a different key, got matching SubjectKeyID = %s", + oldLeafKeyID) + } + }) } func TestService_HTTPClient(t *testing.T) { diff --git a/connect/testing.go b/connect/testing.go index 491036aaf..073134b12 100644 --- a/connect/testing.go +++ b/connect/testing.go @@ -8,6 +8,7 @@ import ( "log" "net" "net/http" + "os" "sync/atomic" "github.com/hashicorp/consul/agent/connect" @@ -20,16 +21,12 @@ import ( func TestService(t testing.T, service string, ca *structs.CARoot) *Service { t.Helper() - // Don't need to talk to client since we are setting TLSConfig locally. This - // will cause server verification to skip AuthZ too. 
- svc, err := NewService(service, nil) + // Don't need to talk to client since we are setting TLSConfig locally + svc, err := NewDevServiceWithTLSConfig(service, + log.New(os.Stderr, "", log.LstdFlags), TestTLSConfig(t, service, ca)) if err != nil { t.Fatal(err) } - - // Override the tlsConfig hackily. - svc.tlsCfg = newDynamicTLSConfig(TestTLSConfig(t, service, ca)) - return svc } diff --git a/connect/tls.go b/connect/tls.go index f5cb95a75..6f14cd787 100644 --- a/connect/tls.go +++ b/connect/tls.go @@ -4,7 +4,9 @@ import ( "crypto/tls" "crypto/x509" "errors" + "fmt" "io/ioutil" + "log" "sync" "github.com/hashicorp/consul/agent/connect" @@ -104,7 +106,8 @@ func verifyServerCertMatchesURI(certs []*x509.Certificate, if cert.URIs[0].String() == expectedStr { return nil } - return errors.New("peer certificate mismatch") + return fmt.Errorf("peer certificate mismatch got %s, want %s", + cert.URIs[0].String(), expectedStr) } // newServerSideVerifier returns a verifierFunc that wraps the provided @@ -115,21 +118,25 @@ func newServerSideVerifier(client *api.Client, serviceID string) verifierFunc { return func(tlsCfg *tls.Config, rawCerts [][]byte) error { leaf, err := verifyChain(tlsCfg, rawCerts, false) if err != nil { + log.Printf("connect: failed TLS verification: %s", err) return err } // Check leaf is a cert we understand if len(leaf.URIs) < 1 { + log.Printf("connect: invalid leaf certificate") return errors.New("connect: invalid leaf certificate") } certURI, err := connect.ParseCertURI(leaf.URIs[0]) if err != nil { + log.Printf("connect: invalid leaf certificate URI") return errors.New("connect: invalid leaf certificate URI") } // No AuthZ if there is no client. if client == nil { + log.Printf("connect: nil client") return nil } @@ -148,9 +155,11 @@ func newServerSideVerifier(client *api.Client, serviceID string) verifierFunc { } resp, err := client.Agent().ConnectAuthorize(req) if err != nil { + log.Printf("connect: authz call failed: %s", err) return errors.New("connect: authz call failed: " + err.Error()) } if !resp.Authorized { + log.Printf("connect: authz call denied: %s", resp.Reason) return errors.New("connect: authz denied: " + resp.Reason) } return nil @@ -217,9 +226,17 @@ func verifyChain(tlsCfg *tls.Config, rawCerts [][]byte, client bool) (*x509.Cert type dynamicTLSConfig struct { base *tls.Config - sync.Mutex + sync.RWMutex leaf *tls.Certificate roots *x509.CertPool + // readyCh is closed when the config first gets both leaf and roots set. + // Watchers can wait on this via ReadyWait. + readyCh chan struct{} +} + +type tlsCfgUpdate struct { + ch chan struct{} + next *tlsCfgUpdate } // newDynamicTLSConfig returns a dynamicTLSConfig constructed from base. @@ -235,6 +252,9 @@ func newDynamicTLSConfig(base *tls.Config) *dynamicTLSConfig { if base.RootCAs != nil { cfg.roots = base.RootCAs } + if !cfg.Ready() { + cfg.readyCh = make(chan struct{}) + } return cfg } @@ -246,8 +266,8 @@ func newDynamicTLSConfig(base *tls.Config) *dynamicTLSConfig { // client can use this config for a long time and will still verify against the // latest roots even though the roots in the struct is has can't change. 
func (cfg *dynamicTLSConfig) Get(v verifierFunc) *tls.Config {
-	cfg.Lock()
-	defer cfg.Unlock()
+	cfg.RLock()
+	defer cfg.RUnlock()
 	copy := cfg.base.Clone()
 	copy.RootCAs = cfg.roots
 	copy.ClientCAs = cfg.roots
@@ -281,6 +301,7 @@ func (cfg *dynamicTLSConfig) SetRoots(roots *x509.CertPool) error {
 	cfg.Lock()
 	defer cfg.Unlock()
 	cfg.roots = roots
+	cfg.notify()
 	return nil
 }
 
@@ -289,19 +310,43 @@ func (cfg *dynamicTLSConfig) SetLeaf(leaf *tls.Certificate) error {
 	cfg.Lock()
 	defer cfg.Unlock()
 	cfg.leaf = leaf
+	cfg.notify()
 	return nil
 }
 
+// notify is called under lock during an update to check if we are now ready.
+func (cfg *dynamicTLSConfig) notify() {
+	if cfg.readyCh != nil && cfg.leaf != nil && cfg.roots != nil {
+		close(cfg.readyCh)
+		cfg.readyCh = nil
+	}
+}
+
 // Roots returns the current CA root CertPool.
 func (cfg *dynamicTLSConfig) Roots() *x509.CertPool {
-	cfg.Lock()
-	defer cfg.Unlock()
+	cfg.RLock()
+	defer cfg.RUnlock()
 	return cfg.roots
 }
 
 // Leaf returns the current Leaf certificate.
 func (cfg *dynamicTLSConfig) Leaf() *tls.Certificate {
-	cfg.Lock()
-	defer cfg.Unlock()
+	cfg.RLock()
+	defer cfg.RUnlock()
 	return cfg.leaf
 }
+
+// Ready returns whether or not both roots and a leaf certificate are
+// configured. If both are non-nil, they are assumed to be valid and usable.
+func (cfg *dynamicTLSConfig) Ready() bool {
+	cfg.RLock()
+	defer cfg.RUnlock()
+	return cfg.leaf != nil && cfg.roots != nil
+}
+
+// ReadyWait returns a chan that is closed when the tlsConfig becomes Ready
+// for use. Note that if the config is ready when it is called it returns a nil
+// chan.
+func (cfg *dynamicTLSConfig) ReadyWait() <-chan struct{} {
+	return cfg.readyCh
+}
diff --git a/connect/tls_test.go b/connect/tls_test.go
index aa1063f3e..a9fd6fe8c 100644
--- a/connect/tls_test.go
+++ b/connect/tls_test.go
@@ -358,3 +358,45 @@ func TestDynamicTLSConfig(t *testing.T) {
 	requireCorrectVerifier(t, newCfg, gotBefore, v1Ch)
 	requireCorrectVerifier(t, newCfg, gotAfter, v2Ch)
 }
+
+func TestDynamicTLSConfig_Ready(t *testing.T) {
+	require := require.New(t)
+
+	ca1 := connect.TestCA(t, nil)
+	baseCfg := TestTLSConfig(t, "web", ca1)
+
+	c := newDynamicTLSConfig(defaultTLSConfig())
+	readyCh := c.ReadyWait()
+	assertBlocked(t, readyCh)
+	require.False(c.Ready(), "no roots or leaf, should not be ready")
+
+	err := c.SetLeaf(&baseCfg.Certificates[0])
+	require.NoError(err)
+	assertBlocked(t, readyCh)
+	require.False(c.Ready(), "no roots, should not be ready")
+
+	err = c.SetRoots(baseCfg.RootCAs)
+	require.NoError(err)
+	assertNotBlocked(t, readyCh)
+	require.True(c.Ready(), "should be ready")
+}
+
+func assertBlocked(t *testing.T, ch <-chan struct{}) {
+	t.Helper()
+	select {
+	case <-ch:
+		t.Fatalf("want blocked chan")
+	default:
+		return
+	}
+}
+
+func assertNotBlocked(t *testing.T, ch <-chan struct{}) {
+	t.Helper()
+	select {
+	case <-ch:
+		return
+	default:
+		t.Fatalf("want unblocked chan but it blocked")
+	}
+}
diff --git a/testutil/server.go b/testutil/server.go
index 06c0fdfd2..f188079d7 100644
--- a/testutil/server.go
+++ b/testutil/server.go
@@ -17,6 +17,7 @@ import (
 	"fmt"
 	"io"
 	"io/ioutil"
+	"log"
 	"net"
 	"net/http"
 	"os"
@@ -94,6 +95,7 @@ type TestServerConfig struct {
 	VerifyIncomingHTTPS bool `json:"verify_incoming_https,omitempty"`
 	VerifyOutgoing      bool `json:"verify_outgoing,omitempty"`
 	EnableScriptChecks  bool `json:"enable_script_checks,omitempty"`
+	Connect             map[string]interface{} `json:"connect,omitempty"`
 	ReadyTimeout        time.Duration `json:"-"`
 	Stdout, Stderr      io.Writer `json:"-"`
 	Args                []string
`json:"-"` @@ -211,6 +213,7 @@ func newTestServerConfigT(t *testing.T, cb ServerConfigCallback) (*TestServer, e return nil, errors.Wrap(err, "failed marshaling json") } + log.Printf("CONFIG JSON: %s", string(b)) configFile := filepath.Join(tmpdir, "config.json") if err := ioutil.WriteFile(configFile, b, 0644); err != nil { defer os.RemoveAll(tmpdir) diff --git a/watch/funcs.go b/watch/funcs.go index 5e72e40a6..3b1b854ed 100644 --- a/watch/funcs.go +++ b/watch/funcs.go @@ -236,8 +236,7 @@ func eventWatch(params map[string]interface{}) (WatcherFunc, error) { // connectRootsWatch is used to watch for changes to Connect Root certificates. func connectRootsWatch(params map[string]interface{}) (WatcherFunc, error) { - // We don't support stale since roots are likely to be cached locally in the - // agent anyway. + // We don't support stale since roots are cached locally in the agent. fn := func(p *Plan) (BlockingParamVal, interface{}, error) { agent := p.client.Agent() @@ -257,8 +256,7 @@ func connectRootsWatch(params map[string]interface{}) (WatcherFunc, error) { // connectLeafWatch is used to watch for changes to Connect Leaf certificates // for given local service id. func connectLeafWatch(params map[string]interface{}) (WatcherFunc, error) { - // We don't support stale since certs are likely to be cached locally in the - // agent anyway. + // We don't support stale since certs are cached locally in the agent. var serviceID string if err := assignValue(params, "service_id", &serviceID); err != nil { diff --git a/watch/funcs_test.go b/watch/funcs_test.go index d5253de44..b304a803f 100644 --- a/watch/funcs_test.go +++ b/watch/funcs_test.go @@ -7,8 +7,10 @@ import ( "testing" "time" + "github.com/stretchr/testify/assert" + "github.com/hashicorp/consul/agent" - "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/consul/agent/connect" consulapi "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/watch" "github.com/stretchr/testify/require" @@ -526,14 +528,12 @@ func TestEventWatch(t *testing.T) { } func TestConnectRootsWatch(t *testing.T) { - // TODO(banks) enable and make it work once this is supported. Note that this - // test actually passes currently just by busy-polling the roots endpoint - // until it changes. - t.Skip("CA and Leaf implementation don't actually support blocking yet") t.Parallel() - a := agent.NewTestAgent(t.Name(), ``) + // NewTestAgent will bootstrap a new CA + a := agent.NewTestAgent(t.Name(), "") defer a.Shutdown() + var originalCAID string invoke := makeInvokeCh() plan := mustParse(t, `{"type":"connect_roots"}`) plan.Handler = func(idx uint64, raw interface{}) { @@ -544,7 +544,14 @@ func TestConnectRootsWatch(t *testing.T) { if !ok || v == nil { return // ignore } - // TODO(banks): verify the right roots came back. + // Only 1 CA is the bootstrapped state (i.e. first response). Ignore this + // state and wait for the new CA to show up too. + if len(v.Roots) == 1 { + originalCAID = v.ActiveRootID + return + } + assert.NotEmpty(t, originalCAID) + assert.NotEqual(t, originalCAID, v.ActiveRootID) invoke <- nil } @@ -553,22 +560,8 @@ func TestConnectRootsWatch(t *testing.T) { go func() { defer wg.Done() time.Sleep(20 * time.Millisecond) - // TODO(banks): this is a hack since CA config is in flux. We _did_ expose a - // temporary agent endpoint for PUTing config, but didn't expose it in `api` - // package intentionally. If we are going to hack around with temporary API, - // we can might as well drop right down to the RPC level... 
- args := structs.CAConfiguration{ - Provider: "static", - Config: map[string]interface{}{ - "Name": "test-1", - "Generate": true, - }, - } - var reply interface{} - if err := a.RPC("ConnectCA.ConfigurationSet", &args, &reply); err != nil { - t.Fatalf("err: %v", err) - } - + // Set a new CA + connect.TestCAConfigSet(t, a, nil) }() wg.Add(1) @@ -588,9 +581,8 @@ func TestConnectRootsWatch(t *testing.T) { } func TestConnectLeafWatch(t *testing.T) { - // TODO(banks) enable and make it work once this is supported. - t.Skip("CA and Leaf implementation don't actually support blocking yet") t.Parallel() + // NewTestAgent will bootstrap a new CA a := agent.NewTestAgent(t.Name(), ``) defer a.Shutdown() @@ -606,25 +598,10 @@ func TestConnectLeafWatch(t *testing.T) { require.Nil(t, err) } - // Setup a new generated CA - // - // TODO(banks): this is a hack since CA config is in flux. We _did_ expose a - // temporary agent endpoint for PUTing config, but didn't expose it in `api` - // package intentionally. If we are going to hack around with temporary API, - // we can might as well drop right down to the RPC level... - args := structs.CAConfiguration{ - Provider: "static", - Config: map[string]interface{}{ - "Name": "test-1", - "Generate": true, - }, - } - var reply interface{} - if err := a.RPC("ConnectCA.ConfigurationSet", &args, &reply); err != nil { - t.Fatalf("err: %v", err) - } + var lastCert *consulapi.LeafCert - invoke := makeInvokeCh() + //invoke := makeInvokeCh() + invoke := make(chan error) plan := mustParse(t, `{"type":"connect_leaf", "service_id":"web"}`) plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { @@ -634,7 +611,18 @@ func TestConnectLeafWatch(t *testing.T) { if !ok || v == nil { return // ignore } - // TODO(banks): verify the right leaf came back. + if lastCert == nil { + // Initial fetch, just store the cert and return + lastCert = v + return + } + // TODO(banks): right now the root rotation actually causes Serial numbers + // to reset so these end up all being the same. That needs fixing but it's + // a bigger task than I want to bite off for this PR. + //assert.NotEqual(t, lastCert.SerialNumber, v.SerialNumber) + assert.NotEqual(t, lastCert.CertPEM, v.CertPEM) + assert.NotEqual(t, lastCert.PrivateKeyPEM, v.PrivateKeyPEM) + assert.NotEqual(t, lastCert.ModifyIndex, v.ModifyIndex) invoke <- nil } @@ -643,20 +631,8 @@ func TestConnectLeafWatch(t *testing.T) { go func() { defer wg.Done() time.Sleep(20 * time.Millisecond) - - // Change the CA which should eventually trigger a leaf change but probably - // won't now so this test has no way to succeed yet. 
- args := structs.CAConfiguration{ - Provider: "static", - Config: map[string]interface{}{ - "Name": "test-2", - "Generate": true, - }, - } - var reply interface{} - if err := a.RPC("ConnectCA.ConfigurationSet", &args, &reply); err != nil { - t.Fatalf("err: %v", err) - } + // Change the CA to trigger a leaf change + connect.TestCAConfigSet(t, a, nil) }() wg.Add(1) @@ -740,6 +716,7 @@ func TestConnectProxyConfigWatch(t *testing.T) { } func mustParse(t *testing.T, q string) *watch.Plan { + t.Helper() var params map[string]interface{} if err := json.Unmarshal([]byte(q), ¶ms); err != nil { t.Fatal(err) From c47ad68f25f594e98396a5d487633051fe9d9e9c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 25 Apr 2018 14:21:03 -0700 Subject: [PATCH 188/539] wip --- agent/proxy/daemon.go | 24 ++++++++++++++++++++++++ agent/proxy/manager.go | 1 + agent/proxy/proxy.go | 12 ++++++++++++ 3 files changed, 37 insertions(+) create mode 100644 agent/proxy/daemon.go create mode 100644 agent/proxy/manager.go create mode 100644 agent/proxy/proxy.go diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go new file mode 100644 index 000000000..f432271b6 --- /dev/null +++ b/agent/proxy/daemon.go @@ -0,0 +1,24 @@ +package proxy + +import ( + "os/exec" +) + +// Daemon is a long-running proxy process. It is expected to keep running +// and to use blocking queries to detect changes in configuration, certs, +// and more. +// +// Consul will ensure that if the daemon crashes, that it is restarted. +type Daemon struct { + // Command is the command to execute to start this daemon. This must + // be a Cmd that isn't yet started. + Command *exec.Cmd + + // ProxyToken is the special local-only ACL token that allows a proxy + // to communicate to the Connect-specific endpoints. + ProxyToken string +} + +// Start starts the daemon and keeps it running. +func (p *Daemon) Start() error { +} diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go new file mode 100644 index 000000000..943b369ff --- /dev/null +++ b/agent/proxy/manager.go @@ -0,0 +1 @@ +package proxy diff --git a/agent/proxy/proxy.go b/agent/proxy/proxy.go new file mode 100644 index 000000000..d42ec3903 --- /dev/null +++ b/agent/proxy/proxy.go @@ -0,0 +1,12 @@ +// Package proxy contains logic for agent interaction with proxies, +// primarily "managed" proxies. Managed proxies are proxy processes for +// Connect-compatible endpoints that Consul owns and controls the lifecycle +// for. +// +// This package does not contain the built-in proxy for Connect. The source +// for that is available in the "connect/proxy" package. +package proxy + +// EnvProxyToken is the name of the environment variable that is passed +// to managed proxies containing the proxy token. 
+const EnvProxyToken = "CONNECT_PROXY_TOKEN" From c2f50f1688b1c00ee27185e7d53364dfce100b82 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 25 Apr 2018 16:54:00 -0700 Subject: [PATCH 189/539] agent/proxy: Daemon works, tests cover it too --- agent/proxy/daemon.go | 128 +++++++++++++++++++++++++++++++++++++ agent/proxy/daemon_test.go | 91 ++++++++++++++++++++++++++ agent/proxy/proxy_test.go | 116 +++++++++++++++++++++++++++++++++ 3 files changed, 335 insertions(+) create mode 100644 agent/proxy/daemon_test.go create mode 100644 agent/proxy/proxy_test.go diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index f432271b6..74fa62d44 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -1,7 +1,11 @@ package proxy import ( + "fmt" + "log" + "os" "os/exec" + "sync" ) // Daemon is a long-running proxy process. It is expected to keep running @@ -17,8 +21,132 @@ type Daemon struct { // ProxyToken is the special local-only ACL token that allows a proxy // to communicate to the Connect-specific endpoints. ProxyToken string + + // Logger is where logs will be sent around the management of this + // daemon. The actual logs for the daemon itself will be sent to + // a file. + Logger *log.Logger + + // process is the started process + lock sync.Mutex + process *os.Process + stopCh chan struct{} } // Start starts the daemon and keeps it running. +// +// This function returns after the process is successfully started. func (p *Daemon) Start() error { + p.lock.Lock() + defer p.lock.Unlock() + + // If the daemon is already started, return no error. + if p.stopCh != nil { + return nil + } + + // Start it for the first time + process, err := p.start() + if err != nil { + return err + } + + // Create the stop channel we use to notify when we've gracefully stopped + stopCh := make(chan struct{}) + p.stopCh = stopCh + + // Store the process so that we can signal it later + p.process = process + + go p.keepAlive(stopCh) + + return nil +} + +func (p *Daemon) keepAlive(stopCh chan struct{}) { + p.lock.Lock() + process := p.process + p.lock.Unlock() + + for { + if process == nil { + p.lock.Lock() + + // If we gracefully stopped (stopCh is closed) then don't restart. We + // check stopCh and not p.stopCh because the latter could reference + // a new process. + select { + case <-stopCh: + p.lock.Unlock() + return + default: + } + + // Process isn't started currently. We're restarting. + var err error + process, err = p.start() + if err != nil { + p.Logger.Printf("[ERR] agent/proxy: error restarting daemon: %s", err) + } + + p.process = process + p.lock.Unlock() + } + + _, err := process.Wait() + process = nil + p.Logger.Printf("[INFO] agent/proxy: daemon exited: %s", err) + } +} + +// start starts and returns the process. This will create a copy of the +// configured *exec.Command with the modifications documented on Daemon +// such as setting the proxy token environmental variable. +func (p *Daemon) start() (*os.Process, error) { + cmd := *p.Command + + // Add the proxy token to the environment. We first copy the env because + // it is a slice and therefore the "copy" above will only copy the slice + // reference. We allocate an exactly sized slice. + cmd.Env = make([]string, len(p.Command.Env), len(p.Command.Env)+1) + copy(cmd.Env, p.Command.Env) + cmd.Env = append(cmd.Env, fmt.Sprintf("%s=%s", EnvProxyToken, p.ProxyToken)) + + // Start it + err := cmd.Start() + return cmd.Process, err +} + +// Stop stops the daemon. 
+//
+// This will attempt a graceful stop (SIGINT) before force killing the
+// process (SIGKILL). In either case, the process won't be automatically
+// restarted unless Start is called again.
+//
+// This is safe to call multiple times. If the daemon is already stopped,
+// then this returns no error.
+func (p *Daemon) Stop() error {
+	p.lock.Lock()
+	defer p.lock.Unlock()
+
+	// If we don't have a stopCh then we never even started yet.
+	if p.stopCh == nil {
+		return nil
+	}
+
+	// If stopCh is closed, then we're already stopped
+	select {
+	case <-p.stopCh:
+		return nil
+	default:
+	}
+
+	err := p.process.Signal(os.Interrupt)
+
+	// This signals that we've stopped and therefore don't want to restart
+	close(p.stopCh)
+	p.stopCh = nil
+
+	return err
+	//return p.Command.Process.Kill()
 }
diff --git a/agent/proxy/daemon_test.go b/agent/proxy/daemon_test.go
new file mode 100644
index 000000000..0af971b93
--- /dev/null
+++ b/agent/proxy/daemon_test.go
@@ -0,0 +1,91 @@
+package proxy
+
+import (
+	"io/ioutil"
+	"os"
+	"path/filepath"
+	"testing"
+
+	"github.com/hashicorp/consul/testutil/retry"
+	"github.com/hashicorp/go-uuid"
+	"github.com/stretchr/testify/require"
+)
+
+func TestDaemonStartStop(t *testing.T) {
+	require := require.New(t)
+	td, closer := testTempDir(t)
+	defer closer()
+
+	path := filepath.Join(td, "file")
+	uuid, err := uuid.GenerateUUID()
+	require.NoError(err)
+
+	d := &Daemon{
+		Command:    helperProcess("start-stop", path),
+		ProxyToken: uuid,
+		Logger:     testLogger,
+	}
+	require.NoError(d.Start())
+
+	// Wait for the file to exist
+	retry.Run(t, func(r *retry.R) {
+		_, err := os.Stat(path)
+		if err == nil {
+			return
+		}
+
+		r.Fatalf("error: %s", err)
+	})
+
+	// Verify that the contents of the file is the token. This verifies
+	// that we properly passed the token as an env var.
+	data, err := ioutil.ReadFile(path)
+	require.NoError(err)
+	require.Equal(uuid, string(data))
+
+	// Stop the process
+	require.NoError(d.Stop())
+
+	// File should no longer exist.
+	retry.Run(t, func(r *retry.R) {
+		_, err := os.Stat(path)
+		if os.IsNotExist(err) {
+			return
+		}
+
+		// err might be nil here but that's okay
+		r.Fatalf("should not exist: %s", err)
+	})
+}
+
+func TestDaemonRestart(t *testing.T) {
+	require := require.New(t)
+	td, closer := testTempDir(t)
+	defer closer()
+	path := filepath.Join(td, "file")
+
+	d := &Daemon{
+		Command: helperProcess("restart", path),
+		Logger:  testLogger,
+	}
+	require.NoError(d.Start())
+	defer d.Stop()
+
+	// Wait for the file to exist. We save the func so we can reuse the test.
+	waitFile := func() {
+		retry.Run(t, func(r *retry.R) {
+			_, err := os.Stat(path)
+			if err == nil {
+				return
+			}
+			r.Fatalf("error waiting for path: %s", err)
+		})
+	}
+	waitFile()
+
+	// Delete the file
+	require.NoError(os.Remove(path))
+
+	// File should re-appear because the process is restarted
+	waitFile()
+}
diff --git a/agent/proxy/proxy_test.go b/agent/proxy/proxy_test.go
new file mode 100644
index 000000000..fa8eef128
--- /dev/null
+++ b/agent/proxy/proxy_test.go
@@ -0,0 +1,116 @@
+package proxy
+
+import (
+	"fmt"
+	"io/ioutil"
+	"log"
+	"os"
+	"os/exec"
+	"os/signal"
+	"testing"
+	"time"
+)
+
+// testLogger is a logger that can be used by tests that require a
+// *log.Logger instance.
+var testLogger = log.New(os.Stderr, "logger: ", log.LstdFlags)
+
+// testTempDir returns a temporary directory and a cleanup function.
+func testTempDir(t *testing.T) (string, func()) {
+	t.Helper()
+
+	td, err := ioutil.TempDir("", "test-agent-proxy")
+	if err != nil {
+		t.Fatalf("err: %s", err)
+	}
+
+	return td, func() {
+		if err := os.RemoveAll(td); err != nil {
+			t.Fatalf("err: %s", err)
+		}
+	}
+}
+
+// helperProcess returns an *exec.Cmd that can be used to execute the
+// TestHelperProcess function below. This can be used to test multi-process
+// interactions.
+func helperProcess(s ...string) *exec.Cmd {
+	cs := []string{"-test.run=TestHelperProcess", "--"}
+	cs = append(cs, s...)
+	env := []string{"GO_WANT_HELPER_PROCESS=1"}
+
+	cmd := exec.Command(os.Args[0], cs...)
+	cmd.Env = append(env, os.Environ()...)
+	cmd.Stdout = os.Stdout
+	cmd.Stderr = os.Stderr
+	return cmd
+}
+
+// This is not a real test. This is just a helper process kicked off by tests
+// using the helperProcess helper function.
+func TestHelperProcess(t *testing.T) {
+	if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
+		return
+	}
+
+	defer os.Exit(0)
+
+	args := os.Args
+	for len(args) > 0 {
+		if args[0] == "--" {
+			args = args[1:]
+			break
+		}
+
+		args = args[1:]
+	}
+
+	if len(args) == 0 {
+		fmt.Fprintf(os.Stderr, "No command\n")
+		os.Exit(2)
+	}
+
+	cmd, args := args[0], args[1:]
+	switch cmd {
+	// While running, this creates a file in the given directory (args[0])
+	// and deletes it only when it is stopped.
+	case "start-stop":
+		ch := make(chan os.Signal, 1)
+		signal.Notify(ch, os.Interrupt)
+		defer signal.Stop(ch)
+
+		path := args[0]
+		data := []byte(os.Getenv(EnvProxyToken))
+
+		if err := ioutil.WriteFile(path, data, 0644); err != nil {
+			t.Fatalf("err: %s", err)
+		}
+		defer os.Remove(path)
+
+		<-ch
+
+	// Restart writes to a file and keeps running while that file still
+	// exists. When that file is removed, this process exits. This can be
+	// used to test restarting.
+	case "restart":
+		// Write the file
+		path := args[0]
+		if err := ioutil.WriteFile(path, []byte("hello"), 0644); err != nil {
+			fmt.Fprintf(os.Stderr, "Error: %s\n", err)
+			os.Exit(1)
+		}
+
+		// While the file still exists, do nothing. When the file no longer
+		// exists, we exit.
+		for {
+			time.Sleep(25 * time.Millisecond)
+			if _, err := os.Stat(path); os.IsNotExist(err) {
+				break
+			}
+		}
+
+	default:
+		fmt.Fprintf(os.Stderr, "Unknown command: %q\n", cmd)
+		os.Exit(2)
+	}
+}

From 659ab7ee2d00e4c7e16bdcea9cd7337c798ac91c Mon Sep 17 00:00:00 2001
From: Mitchell Hashimoto
Date: Wed, 25 Apr 2018 17:39:32 -0700
Subject: [PATCH 190/539] agent/proxy: exponential backoff on restarts

---
 agent/proxy/daemon.go  | 40 ++++++++++++++++++++++++++++++++++++++++
 agent/proxy/manager.go |  1 -
 2 files changed, 40 insertions(+), 1 deletion(-)
 delete mode 100644 agent/proxy/manager.go

diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go
index 74fa62d44..3a8c1b11b 100644
--- a/agent/proxy/daemon.go
+++ b/agent/proxy/daemon.go
@@ -6,6 +6,16 @@ import (
 	"os"
 	"os/exec"
 	"sync"
+	"time"
+)
+
+// Constants related to restart timers with the daemon mode proxies. At some
+// point we will probably want to expose these knobs to an end user, but
+// reasonable defaults are chosen.
+const (
+	DaemonRestartHealthy    = 10 * time.Second // time before considering healthy
+	DaemonRestartBackoffMin = 3                // 3 attempts before backing off
+	DaemonRestartMaxWait    = 1 * time.Minute  // maximum backoff wait time
 )
 
 // Daemon is a long-running proxy process.
It is expected to keep running @@ -68,8 +78,38 @@ func (p *Daemon) keepAlive(stopCh chan struct{}) { process := p.process p.lock.Unlock() + // attemptsDeadline is the time at which we consider the daemon to have + // been alive long enough that we can reset the attempt counter. + // + // attempts keeps track of the number of restart attempts we've had and + // is used to calculate the wait time using an exponential backoff. + var attemptsDeadline time.Time + var attempts uint + for { if process == nil { + // If we're passed the attempt deadline then reset the attempts + if !attemptsDeadline.IsZero() && time.Now().After(attemptsDeadline) { + attempts = 0 + } + attemptsDeadline = time.Now().Add(DaemonRestartHealthy) + attempts++ + + // Calculate the exponential backoff and wait if we have to + if attempts > DaemonRestartBackoffMin { + waitTime := (1 << (attempts - DaemonRestartBackoffMin)) * time.Second + if waitTime > DaemonRestartMaxWait { + waitTime = DaemonRestartMaxWait + } + + if waitTime > 0 { + p.Logger.Printf( + "[WARN] agent/proxy: waiting %s before restarting daemon", + waitTime) + time.Sleep(waitTime) + } + } + p.lock.Lock() // If we gracefully stopped (stopCh is closed) then don't restart. We diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go deleted file mode 100644 index 943b369ff..000000000 --- a/agent/proxy/manager.go +++ /dev/null @@ -1 +0,0 @@ -package proxy From 76c6849ffee86423108997fb4c46e839f3bcff46 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 26 Apr 2018 21:11:56 -0700 Subject: [PATCH 191/539] agent/local: store proxy on local state, wip, not working yet --- agent/local/proxy.go | 26 ++++++++++++++++++++++++++ agent/local/state.go | 19 ++++++++++++++++--- agent/proxy/daemon_test.go | 4 ++++ agent/proxy/proxy.go | 16 ++++++++++++++++ 4 files changed, 62 insertions(+), 3 deletions(-) create mode 100644 agent/local/proxy.go diff --git a/agent/local/proxy.go b/agent/local/proxy.go new file mode 100644 index 000000000..7f004a7ab --- /dev/null +++ b/agent/local/proxy.go @@ -0,0 +1,26 @@ +package local + +import ( + "fmt" + "os/exec" + + "github.com/hashicorp/consul/agent/proxy" + "github.com/hashicorp/consul/agent/structs" +) + +// newProxyProcess returns the proxy.Proxy for the given ManagedProxy +// state entry. proxy.Proxy is the actual managed process. The returned value +// is the initialized struct but isn't explicitly started. +func (s *State) newProxyProcess(p *structs.ConnectManagedProxy, pToken string) (proxy.Proxy, error) { + switch p.ExecMode { + case structs.ProxyExecModeDaemon: + return &proxy.Daemon{ + Command: exec.Command(p.Command), + ProxyToken: pToken, + Logger: s.logger, + }, nil + + default: + return nil, fmt.Errorf("unsupported managed proxy type: %q", p.ExecMode) + } +} diff --git a/agent/local/state.go b/agent/local/state.go index 8df600b32..ccb4d77e1 100644 --- a/agent/local/state.go +++ b/agent/local/state.go @@ -14,6 +14,7 @@ import ( "github.com/hashicorp/go-uuid" "github.com/hashicorp/consul/acl" + "github.com/hashicorp/consul/agent/proxy" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/agent/token" "github.com/hashicorp/consul/api" @@ -126,6 +127,9 @@ type ManagedProxy struct { // use service-scoped ACL tokens distributed externally. ProxyToken string + // ManagedProxy is the managed proxy itself that is running. + ManagedProxy proxy.Proxy + // WatchCh is a close-only chan that is closed when the proxy is removed or // updated. 
WatchCh chan struct{} @@ -603,6 +607,14 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*str return nil, err } + // Initialize the managed proxy process. This doesn't start anything, + // it only sets up the structures we'll use. To start the proxy, the + // caller should call Proxy and use the returned ManagedProxy instance. + proxyProcess, err := l.newProxyProcess(proxy, pToken) + if err != nil { + return nil, err + } + // Lock now. We can't lock earlier as l.Service would deadlock and shouldn't // anyway to minimise the critical section. l.Lock() @@ -650,9 +662,10 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*str close(old.WatchCh) } l.managedProxies[svc.ID] = &ManagedProxy{ - Proxy: proxy, - ProxyToken: pToken, - WatchCh: make(chan struct{}), + Proxy: proxy, + ProxyToken: pToken, + ManagedProxy: proxyProcess, + WatchCh: make(chan struct{}), } // No need to trigger sync as proxy state is local only. diff --git a/agent/proxy/daemon_test.go b/agent/proxy/daemon_test.go index 0af971b93..dbd2099bc 100644 --- a/agent/proxy/daemon_test.go +++ b/agent/proxy/daemon_test.go @@ -11,6 +11,10 @@ import ( "github.com/stretchr/testify/require" ) +func TestDaemon_impl(t *testing.T) { + var _ Proxy = new(Daemon) +} + func TestDaemonStartStop(t *testing.T) { require := require.New(t) td, closer := testTempDir(t) diff --git a/agent/proxy/proxy.go b/agent/proxy/proxy.go index d42ec3903..44228b521 100644 --- a/agent/proxy/proxy.go +++ b/agent/proxy/proxy.go @@ -10,3 +10,19 @@ package proxy // EnvProxyToken is the name of the environment variable that is passed // to managed proxies containing the proxy token. const EnvProxyToken = "CONNECT_PROXY_TOKEN" + +// Proxy is the interface implemented by all types of managed proxies. +// +// Calls to all the functions on this interface must be concurrency safe. +// Please read the documentation carefully on top of each function for expected +// behavior. +type Proxy interface { + // Start starts the proxy. If an error is returned then the managed + // proxy registration is rejected. Therefore, this should only fail if + // the configuration of the proxy itself is irrecoverable, and should + // retry starting for other failures. + Start() error + + // Stop stops the proxy. + Stop() error +} From 536f31571b2436258930b5114ac9da1d3aff985d Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 26 Apr 2018 22:16:21 -0700 Subject: [PATCH 192/539] agent: change connect command paths to be slices, not strings This matches other executable configuration and allows us to cleanly separate executable from arguments without trying to emulate shell parsing. 
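
For illustration only (not part of the patch itself), this is how a registration looks against the updated `api` package once the proxy command is a slice: the first element is the executable and the remaining elements are its arguments, so nothing has to be split or quoted. The service name, port, and command below are placeholder values borrowed from this change's test data.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Command is now a []string: the executable and its arguments are
	// separate elements, so no shell-style parsing happens in the agent.
	reg := &api.AgentServiceRegistration{
		Name: "web",
		Port: 8080,
		Connect: &api.AgentServiceConnect{
			Proxy: &api.AgentServiceConnectProxy{
				ExecMode: "daemon",
				Command:  []string{"consul", "connect", "proxy"},
			},
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
}
```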
--- agent/agent_endpoint_test.go | 6 +++--- agent/agent_test.go | 5 +++-- agent/config/builder.go | 10 +++++----- agent/config/config.go | 6 +++--- agent/config/runtime.go | 4 ++-- agent/config/runtime_test.go | 24 ++++++++++++------------ agent/structs/connect.go | 2 +- agent/structs/service_definition.go | 2 +- api/agent.go | 4 ++-- 9 files changed, 32 insertions(+), 31 deletions(-) diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index be97bf5a4..6cff0fa59 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -63,7 +63,7 @@ func TestAgent_Services(t *testing.T) { // Add a managed proxy for that service prxy1 := &structs.ConnectManagedProxy{ ExecMode: structs.ProxyExecModeScript, - Command: "proxy.sh", + Command: []string{"proxy.sh"}, Config: map[string]interface{}{ "bind_port": 1234, "foo": "bar", @@ -1404,7 +1404,7 @@ func TestAgent_RegisterService_ManagedConnectProxy(t *testing.T) { Connect: &api.AgentServiceConnect{ Proxy: &api.AgentServiceConnectProxy{ ExecMode: "script", - Command: "proxy.sh", + Command: []string{"proxy.sh"}, Config: map[string]interface{}{ "foo": "bar", }, @@ -2354,7 +2354,7 @@ func TestAgentConnectProxyConfig_Blocking(t *testing.T) { TargetServiceName: "test", ContentHash: "84346af2031659c9", ExecMode: "daemon", - Command: "consul connect proxy", + Command: nil, Config: map[string]interface{}{ "upstreams": []interface{}{ map[string]interface{}{ diff --git a/agent/agent_test.go b/agent/agent_test.go index 730c10bc9..022c25f8e 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -2333,7 +2333,7 @@ func TestAgent_AddProxy(t *testing.T) { desc: "basic proxy adding, unregistered service", proxy: &structs.ConnectManagedProxy{ ExecMode: structs.ProxyExecModeDaemon, - Command: "consul connect proxy", + Command: []string{"consul", "connect", "proxy"}, Config: map[string]interface{}{ "foo": "bar", }, @@ -2346,7 +2346,7 @@ func TestAgent_AddProxy(t *testing.T) { desc: "basic proxy adding, unregistered service", proxy: &structs.ConnectManagedProxy{ ExecMode: structs.ProxyExecModeDaemon, - Command: "consul connect proxy", + Command: []string{"consul", "connect", "proxy"}, Config: map[string]interface{}{ "foo": "bar", }, @@ -2392,6 +2392,7 @@ func TestAgent_RemoveProxy(t *testing.T) { // Add a proxy for web pReg := &structs.ConnectManagedProxy{ TargetServiceID: "web", + Command: []string{"foo"}, } require.NoError(a.AddProxy(pReg, false)) diff --git a/agent/config/builder.go b/agent/config/builder.go index 3d9818adc..cd851aaf3 100644 --- a/agent/config/builder.go +++ b/agent/config/builder.go @@ -532,13 +532,13 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) { } proxyDefaultExecMode := "" - proxyDefaultDaemonCommand := "" - proxyDefaultScriptCommand := "" + var proxyDefaultDaemonCommand []string + var proxyDefaultScriptCommand []string proxyDefaultConfig := make(map[string]interface{}) if c.Connect != nil && c.Connect.ProxyDefaults != nil { proxyDefaultExecMode = b.stringVal(c.Connect.ProxyDefaults.ExecMode) - proxyDefaultDaemonCommand = b.stringVal(c.Connect.ProxyDefaults.DaemonCommand) - proxyDefaultScriptCommand = b.stringVal(c.Connect.ProxyDefaults.ScriptCommand) + proxyDefaultDaemonCommand = c.Connect.ProxyDefaults.DaemonCommand + proxyDefaultScriptCommand = c.Connect.ProxyDefaults.ScriptCommand proxyDefaultConfig = c.Connect.ProxyDefaults.Config } @@ -1051,7 +1051,7 @@ func (b *Builder) serviceConnectVal(v *ServiceConnect) *structs.ServiceDefinitio if v.Proxy != nil { proxy = 
&structs.ServiceDefinitionConnectProxy{ ExecMode: b.stringVal(v.Proxy.ExecMode), - Command: b.stringVal(v.Proxy.Command), + Command: v.Proxy.Command, Config: v.Proxy.Config, } } diff --git a/agent/config/config.go b/agent/config/config.go index 5eb231472..6dc652aed 100644 --- a/agent/config/config.go +++ b/agent/config/config.go @@ -359,7 +359,7 @@ type ServiceConnect struct { } type ServiceConnectProxy struct { - Command *string `json:"command,omitempty" hcl:"command" mapstructure:"command"` + Command []string `json:"command,omitempty" hcl:"command" mapstructure:"command"` ExecMode *string `json:"exec_mode,omitempty" hcl:"exec_mode" mapstructure:"exec_mode"` Config map[string]interface{} `json:"config,omitempty" hcl:"config" mapstructure:"config"` } @@ -386,10 +386,10 @@ type ConnectProxyDefaults struct { ExecMode *string `json:"exec_mode,omitempty" hcl:"exec_mode" mapstructure:"exec_mode"` // DaemonCommand is used to start proxy in exec_mode = daemon if not specified // at registration time. - DaemonCommand *string `json:"daemon_command,omitempty" hcl:"daemon_command" mapstructure:"daemon_command"` + DaemonCommand []string `json:"daemon_command,omitempty" hcl:"daemon_command" mapstructure:"daemon_command"` // ScriptCommand is used to start proxy in exec_mode = script if not specified // at registration time. - ScriptCommand *string `json:"script_command,omitempty" hcl:"script_command" mapstructure:"script_command"` + ScriptCommand []string `json:"script_command,omitempty" hcl:"script_command" mapstructure:"script_command"` // Config is merged into an Config specified at registration time. Config map[string]interface{} `json:"config,omitempty" hcl:"config" mapstructure:"config"` } diff --git a/agent/config/runtime.go b/agent/config/runtime.go index ea04d5aa0..8baf02ab2 100644 --- a/agent/config/runtime.go +++ b/agent/config/runtime.go @@ -637,11 +637,11 @@ type RuntimeConfig struct { // ConnectProxyDefaultDaemonCommand is used to start proxy in exec_mode = // daemon if not specified at registration time. - ConnectProxyDefaultDaemonCommand string + ConnectProxyDefaultDaemonCommand []string // ConnectProxyDefaultScriptCommand is used to start proxy in exec_mode = // script if not specified at registration time. - ConnectProxyDefaultScriptCommand string + ConnectProxyDefaultScriptCommand []string // ConnectProxyDefaultConfig is merged with any config specified at // registration time to allow global control of defaults. 
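
The slice-valued defaults above map directly onto `os/exec` with no shell involved. A minimal sketch (the command value is just the example used elsewhere in this change; error handling is abbreviated):

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	// For example, the configured ConnectProxyDefaultDaemonCommand.
	command := []string{"consul", "connect", "proxy"}

	// Element 0 is the executable and the rest are its arguments;
	// nothing is quoted, split, or otherwise shell-parsed.
	cmd := exec.Command(command[0], command[1:]...)
	if err := cmd.Start(); err != nil {
		log.Fatalf("failed to start proxy: %s", err)
	}
}
```
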
diff --git a/agent/config/runtime_test.go b/agent/config/runtime_test.go index 36fffe16a..3268c6c70 100644 --- a/agent/config/runtime_test.go +++ b/agent/config/runtime_test.go @@ -2364,8 +2364,8 @@ func TestFullConfig(t *testing.T) { "bind_min_port": 2000, "bind_max_port": 3000, "exec_mode": "script", - "daemon_command": "consul connect proxy", - "script_command": "proxyctl.sh", + "daemon_command": ["consul", "connect", "proxy"], + "script_command": ["proxyctl.sh"], "config": { "foo": "bar", "connect_timeout_ms": 1000, @@ -2637,7 +2637,7 @@ func TestFullConfig(t *testing.T) { "connect": { "proxy": { "exec_mode": "daemon", - "command": "awesome-proxy", + "command": ["awesome-proxy"], "config": { "foo": "qux" } @@ -2826,13 +2826,13 @@ func TestFullConfig(t *testing.T) { bind_min_port = 2000 bind_max_port = 3000 exec_mode = "script" - daemon_command = "consul connect proxy" - script_command = "proxyctl.sh" + daemon_command = ["consul", "connect", "proxy"] + script_command = ["proxyctl.sh"] config = { foo = "bar" # hack float since json parses numbers as float and we have to # assert against the same thing - connect_timeout_ms = 1000.0 + connect_timeout_ms = 1000.0 pedantic_mode = true } } @@ -3101,7 +3101,7 @@ func TestFullConfig(t *testing.T) { connect { proxy { exec_mode = "daemon" - command = "awesome-proxy" + command = ["awesome-proxy"] config = { foo = "qux" } @@ -3426,8 +3426,8 @@ func TestFullConfig(t *testing.T) { "hyMy9Oxn": "XeBp4Sis", }, ConnectProxyDefaultExecMode: "script", - ConnectProxyDefaultDaemonCommand: "consul connect proxy", - ConnectProxyDefaultScriptCommand: "proxyctl.sh", + ConnectProxyDefaultDaemonCommand: []string{"consul", "connect", "proxy"}, + ConnectProxyDefaultScriptCommand: []string{"proxyctl.sh"}, ConnectProxyDefaultConfig: map[string]interface{}{ "foo": "bar", "connect_timeout_ms": float64(1000), @@ -3608,7 +3608,7 @@ func TestFullConfig(t *testing.T) { Connect: &structs.ServiceDefinitionConnect{ Proxy: &structs.ServiceDefinitionConnectProxy{ ExecMode: "daemon", - Command: "awesome-proxy", + Command: []string{"awesome-proxy"}, Config: map[string]interface{}{ "foo": "qux", }, @@ -4109,9 +4109,9 @@ func TestSanitize(t *testing.T) { "ConnectProxyBindMaxPort": 0, "ConnectProxyBindMinPort": 0, "ConnectProxyDefaultConfig": {}, - "ConnectProxyDefaultDaemonCommand": "", + "ConnectProxyDefaultDaemonCommand": [], "ConnectProxyDefaultExecMode": "", - "ConnectProxyDefaultScriptCommand": "", + "ConnectProxyDefaultScriptCommand": [], "ConsulCoordinateUpdateBatchSize": 0, "ConsulCoordinateUpdateMaxBatches": 0, "ConsulCoordinateUpdatePeriod": "15s", diff --git a/agent/structs/connect.go b/agent/structs/connect.go index 90513ae8c..29330a652 100644 --- a/agent/structs/connect.go +++ b/agent/structs/connect.go @@ -64,7 +64,7 @@ type ConnectManagedProxy struct { // Command is the command to execute. Empty defaults to self-invoking the same // consul binary with proxy subcomand for ProxyExecModeDaemon and is an error // for ProxyExecModeScript. - Command string + Command []string // Config is the arbitrary configuration data provided with the registration. Config map[string]interface{} diff --git a/agent/structs/service_definition.go b/agent/structs/service_definition.go index d4dc21414..7163b5549 100644 --- a/agent/structs/service_definition.go +++ b/agent/structs/service_definition.go @@ -110,7 +110,7 @@ type ServiceDefinitionConnect struct { // registration. Note this is duplicated in config.ServiceConnectProxy and needs // to be kept in sync. 
type ServiceDefinitionConnectProxy struct { - Command string + Command []string ExecMode string Config map[string]interface{} } diff --git a/api/agent.go b/api/agent.go index b8125c91e..16241c6f9 100644 --- a/api/agent.go +++ b/api/agent.go @@ -76,7 +76,7 @@ type AgentServiceConnect struct { // service. type AgentServiceConnectProxy struct { ExecMode ProxyExecMode - Command string + Command []string Config map[string]interface{} } @@ -225,7 +225,7 @@ type ConnectProxyConfig struct { TargetServiceName string ContentHash string ExecMode ProxyExecMode - Command string + Command []string Config map[string]interface{} } From 93cdd3f2061de88ed86e2b4c13b115c63f4e8d35 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 27 Apr 2018 10:44:16 -0700 Subject: [PATCH 193/539] agent/proxy: clean up usage, can't be restarted --- agent/proxy/daemon.go | 80 ++++++++++++++++++++++++------------------- agent/proxy/proxy.go | 8 ++++- 2 files changed, 51 insertions(+), 37 deletions(-) diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index 3a8c1b11b..231f25cb8 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -39,8 +39,9 @@ type Daemon struct { // process is the started process lock sync.Mutex - process *os.Process + stopped bool stopCh chan struct{} + process *os.Process } // Start starts the daemon and keeps it running. @@ -50,30 +51,29 @@ func (p *Daemon) Start() error { p.lock.Lock() defer p.lock.Unlock() - // If the daemon is already started, return no error. - if p.stopCh != nil { + // A stopped proxy cannot be restarted + if p.stopped { + return fmt.Errorf("stopped") + } + + // If we're already running, that is okay + if p.process != nil { return nil } - // Start it for the first time - process, err := p.start() - if err != nil { - return err - } - - // Create the stop channel we use to notify when we've gracefully stopped + // Setup our stop channel stopCh := make(chan struct{}) p.stopCh = stopCh - // Store the process so that we can signal it later - p.process = process - + // Start the loop. go p.keepAlive(stopCh) return nil } -func (p *Daemon) keepAlive(stopCh chan struct{}) { +// keepAlive starts and keeps the configured process alive until it +// is stopped via Stop. +func (p *Daemon) keepAlive(stopCh <-chan struct{}) { p.lock.Lock() process := p.process p.lock.Unlock() @@ -106,31 +106,43 @@ func (p *Daemon) keepAlive(stopCh chan struct{}) { p.Logger.Printf( "[WARN] agent/proxy: waiting %s before restarting daemon", waitTime) - time.Sleep(waitTime) + + timer := time.NewTimer(waitTime) + select { + case <-timer.C: + // Timer is up, good! + + case <-stopCh: + // During our backoff wait, we've been signalled to + // quit, so just quit. + timer.Stop() + return + } } } p.lock.Lock() - // If we gracefully stopped (stopCh is closed) then don't restart. We - // check stopCh and not p.stopCh because the latter could reference - // a new process. - select { - case <-stopCh: + // If we gracefully stopped then don't restart. + if p.stopped { p.lock.Unlock() return - default: } - // Process isn't started currently. We're restarting. + // Process isn't started currently. We're restarting. Start it + // and save the process if we have it. 
var err error process, err = p.start() + if err == nil { + p.process = process + } + p.lock.Unlock() + if err != nil { p.Logger.Printf("[ERR] agent/proxy: error restarting daemon: %s", err) + continue } - p.process = process - p.lock.Unlock() } _, err := process.Wait() @@ -169,24 +181,20 @@ func (p *Daemon) Stop() error { p.lock.Lock() defer p.lock.Unlock() - // If we don't have a stopCh then we never even started yet. - if p.stopCh == nil { + // If we're already stopped or never started, then no problem. + if p.stopped || p.process == nil { + // In the case we never even started, calling Stop makes it so + // that we can't ever start in the future, either, so mark this. + p.stopped = true return nil } - // If stopCh is closed, then we're already stopped - select { - case <-p.stopCh: - return nil - default: - } + // Note that we've stopped + p.stopped = true + close(p.stopCh) err := p.process.Signal(os.Interrupt) - // This signals that we've stopped and therefore don't want to restart - close(p.stopCh) - p.stopCh = nil - return err //return p.Command.Process.Kill() } diff --git a/agent/proxy/proxy.go b/agent/proxy/proxy.go index 44228b521..a07bb5681 100644 --- a/agent/proxy/proxy.go +++ b/agent/proxy/proxy.go @@ -21,8 +21,14 @@ type Proxy interface { // proxy registration is rejected. Therefore, this should only fail if // the configuration of the proxy itself is irrecoverable, and should // retry starting for other failures. + // + // Starting an already-started proxy should not return an error. Start() error - // Stop stops the proxy. + // Stop stops the proxy and disallows it from ever being started again. + // + // If the proxy is not started yet, this should not return an error, but + // it should disallow Start from working again. If the proxy is already + // stopped, this should not return an error. Stop() error } From f64a002f68a662f584c45f4d1f71e88951335bc3 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 27 Apr 2018 11:24:49 -0700 Subject: [PATCH 194/539] agent: start/stop proxies --- agent/agent.go | 71 +++++++++++++++++++++++++++++++++++- agent/agent_endpoint_test.go | 8 ++-- agent/local/proxy.go | 22 ++++++++++- agent/local/state.go | 36 ++++++++++-------- agent/local/state_test.go | 21 +++++++---- agent/proxy/noop.go | 7 ++++ agent/proxy/noop_test.go | 9 +++++ agent/structs/connect.go | 3 ++ 8 files changed, 147 insertions(+), 30 deletions(-) create mode 100644 agent/proxy/noop.go create mode 100644 agent/proxy/noop_test.go diff --git a/agent/agent.go b/agent/agent.go index 9dfe2abea..f70c16379 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -40,6 +40,7 @@ import ( "github.com/hashicorp/memberlist" "github.com/hashicorp/raft" "github.com/hashicorp/serf/serf" + "github.com/kardianos/osext" "github.com/shirou/gopsutil/host" "golang.org/x/net/http2" ) @@ -1268,6 +1269,11 @@ func (a *Agent) ShutdownAgent() error { chk.Stop() } + // Unload all our proxies so that we stop the running processes. + if err := a.unloadProxies(); err != nil { + a.logger.Printf("[WARN] agent: error stopping managed proxies: %s", err) + } + var err error if a.delegate != nil { err = a.delegate.Shutdown() @@ -2032,19 +2038,58 @@ func (a *Agent) AddProxy(proxy *structs.ConnectManagedProxy, persist bool) error // Lookup the target service token in state if there is one. 
 	token := a.State.ServiceToken(proxy.TargetServiceID)
 
+	// Determine if we need to default the command
+	if proxy.ExecMode == structs.ProxyExecModeDaemon && len(proxy.Command) == 0 {
+		// We use the globally configured default command. If it is empty
+		// then we need to determine the subcommand for this agent.
+		cmd := a.config.ConnectProxyDefaultDaemonCommand
+		if len(cmd) == 0 {
+			var err error
+			cmd, err = a.defaultProxyCommand()
+			if err != nil {
+				return err
+			}
+		}
+
+		proxy.CommandDefault = cmd
+	}
+
 	// Add the proxy to local state first since we may need to assign a port which
 	// needs to be coordinate under state lock. AddProxy will generate the
 	// NodeService for the proxy populated with the allocated (or configured) port
 	// and an ID, but it doesn't add it to the agent directly since that could
 	// deadlock and we may need to coordinate adding it and persisting etc.
-	proxyService, err := a.State.AddProxy(proxy, token)
+	proxyState, oldProxy, err := a.State.AddProxy(proxy, token)
 	if err != nil {
 		return err
 	}
+	proxyService := proxyState.Proxy.ProxyService
+
+	// If we replaced an existing proxy, stop that process.
+	if oldProxy != nil {
+		if err := oldProxy.ProxyProcess.Stop(); err != nil {
+			a.logger.Printf(
+				"[ERR] error stopping managed proxy, may still be running: %s",
+				err)
+		}
+	}
+
+	// Start the proxy process
+	if err := proxyState.ProxyProcess.Start(); err != nil {
+		a.State.RemoveProxy(proxyService.ID)
+		return fmt.Errorf("error starting managed proxy: %s", err)
+	}
 
 	// TODO(banks): register proxy health checks.
 	err = a.AddService(proxyService, nil, persist, token)
 	if err != nil {
+		// Stop the proxy process if it was started
+		if err := proxyState.ProxyProcess.Stop(); err != nil {
+			a.logger.Printf(
+				"[ERR] error stopping managed proxy, may still be running: %s",
+				err)
+		}
+
 		// Remove the state too
 		a.State.RemoveProxy(proxyService.ID)
 		return err
@@ -2061,15 +2106,37 @@ func (a *Agent) RemoveProxy(proxyID string, persist bool) error {
 		return fmt.Errorf("proxyID missing")
 	}
 
-	if err := a.State.RemoveProxy(proxyID); err != nil {
+	// Remove the proxy from the local state
+	proxyState, err := a.State.RemoveProxy(proxyID)
+	if err != nil {
 		return err
 	}
 
+	// Stop the process. The proxy implementation is expected to perform
+	// retries so if this fails then retries have already been performed and
+	// the most we can do is just error.
+	if err := proxyState.ProxyProcess.Stop(); err != nil {
+		return fmt.Errorf("error stopping managed proxy process: %s", err)
+	}
+
 	// TODO(banks): unpersist proxy
 
 	return nil
 }
 
+// defaultProxyCommand returns the default Connect managed proxy command.
+func (a *Agent) defaultProxyCommand() ([]string, error) {
+	// Get the path to the current executable. This is cached once by the
+	// library so this is effectively just a variable read.
+ execPath, err := osext.Executable() + if err != nil { + return nil, err + } + + // "consul connect proxy" default value for managed daemon proxy + return []string{execPath, "connect", "proxy"}, nil +} + func (a *Agent) cancelCheckMonitors(checkID types.CheckID) { // Stop any monitors delete(a.checkReapAfter, checkID) diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 6cff0fa59..26a04dddd 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -70,7 +70,7 @@ func TestAgent_Services(t *testing.T) { }, TargetServiceID: "mysql", } - _, err := a.State.AddProxy(prxy1, "") + _, _, err := a.State.AddProxy(prxy1, "") require.NoError(t, err) req, _ := http.NewRequest("GET", "/v1/agent/services", nil) @@ -1435,7 +1435,7 @@ func TestAgent_RegisterService_ManagedConnectProxy(t *testing.T) { proxy := a.State.Proxy("web-proxy") require.NotNil(proxy) assert.Equal(structs.ProxyExecModeScript, proxy.Proxy.ExecMode) - assert.Equal("proxy.sh", proxy.Proxy.Command) + assert.Equal([]string{"proxy.sh"}, proxy.Proxy.Command) assert.Equal(args.Connect.Proxy.Config, proxy.Proxy.Config) // Ensure the token was configured @@ -2352,7 +2352,7 @@ func TestAgentConnectProxyConfig_Blocking(t *testing.T) { ProxyServiceID: "test-proxy", TargetServiceID: "test", TargetServiceName: "test", - ContentHash: "84346af2031659c9", + ContentHash: "365a50cbb9a748b6", ExecMode: "daemon", Command: nil, Config: map[string]interface{}{ @@ -2372,7 +2372,7 @@ func TestAgentConnectProxyConfig_Blocking(t *testing.T) { ur, err := copystructure.Copy(expectedResponse) require.NoError(t, err) updatedResponse := ur.(*api.ConnectProxyConfig) - updatedResponse.ContentHash = "e1e3395f0d00cd41" + updatedResponse.ContentHash = "b5bb0e4a0a58ca25" upstreams := updatedResponse.Config["upstreams"].([]interface{}) upstreams = append(upstreams, map[string]interface{}{ diff --git a/agent/local/proxy.go b/agent/local/proxy.go index 7f004a7ab..37484a32f 100644 --- a/agent/local/proxy.go +++ b/agent/local/proxy.go @@ -14,12 +14,32 @@ import ( func (s *State) newProxyProcess(p *structs.ConnectManagedProxy, pToken string) (proxy.Proxy, error) { switch p.ExecMode { case structs.ProxyExecModeDaemon: + command := p.Command + if len(command) == 0 { + command = p.CommandDefault + } + + // This should never happen since validation should happen upstream + // but verify it because the alternative is to panic below. + if len(command) == 0 { + return nil, fmt.Errorf("daemon mode managed proxy requires command") + } + + // Build the command to execute. + var cmd exec.Cmd + cmd.Path = command[0] + cmd.Args = command[1:] + + // Build the daemon structure return &proxy.Daemon{ - Command: exec.Command(p.Command), + Command: &cmd, ProxyToken: pToken, Logger: s.logger, }, nil + case structs.ProxyExecModeScript: + return &proxy.Noop{}, nil + default: return nil, fmt.Errorf("unsupported managed proxy type: %q", p.ExecMode) } diff --git a/agent/local/state.go b/agent/local/state.go index ccb4d77e1..ecd3299fd 100644 --- a/agent/local/state.go +++ b/agent/local/state.go @@ -127,8 +127,8 @@ type ManagedProxy struct { // use service-scoped ACL tokens distributed externally. ProxyToken string - // ManagedProxy is the managed proxy itself that is running. - ManagedProxy proxy.Proxy + // ProxyProcess is the managed proxy itself that is running. + ProxyProcess proxy.Proxy // WatchCh is a close-only chan that is closed when the proxy is removed or // updated. 
@@ -573,22 +573,26 @@ func (l *State) CriticalCheckStates() map[types.CheckID]*CheckState { // (since that has to do other book keeping). The token passed here is the ACL // token the service used to register itself so must have write on service // record. -func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*structs.NodeService, error) { +// +// AddProxy returns the newly added proxy, any replaced proxy, and an error. +// The second return value (replaced proxy) can be used to determine if +// the process needs to be updated or not. +func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*ManagedProxy, *ManagedProxy, error) { if proxy == nil { - return nil, fmt.Errorf("no proxy") + return nil, nil, fmt.Errorf("no proxy") } // Lookup the local service target := l.Service(proxy.TargetServiceID) if target == nil { - return nil, fmt.Errorf("target service ID %s not registered", + return nil, nil, fmt.Errorf("target service ID %s not registered", proxy.TargetServiceID) } // Get bind info from config cfg, err := proxy.ParseConfig() if err != nil { - return nil, err + return nil, nil, err } // Construct almost all of the NodeService that needs to be registered by the @@ -604,7 +608,7 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*str pToken, err := uuid.GenerateUUID() if err != nil { - return nil, err + return nil, nil, err } // Initialize the managed proxy process. This doesn't start anything, @@ -612,7 +616,7 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*str // caller should call Proxy and use the returned ManagedProxy instance. proxyProcess, err := l.newProxyProcess(proxy, pToken) if err != nil { - return nil, err + return nil, nil, err } // Lock now. We can't lock earlier as l.Service would deadlock and shouldn't @@ -646,7 +650,7 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*str } // If no ports left (or auto ports disabled) fail if svc.Port < 1 { - return nil, fmt.Errorf("no port provided for proxy bind_port and none "+ + return nil, nil, fmt.Errorf("no port provided for proxy bind_port and none "+ " left in the allocated range [%d, %d]", l.config.ProxyBindMinPort, l.config.ProxyBindMaxPort) } @@ -654,7 +658,8 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*str proxy.ProxyService = svc // All set, add the proxy and return the service - if old, ok := l.managedProxies[svc.ID]; ok { + old, ok := l.managedProxies[svc.ID] + if ok { // Notify watchers of the existing proxy config that it's changing. Note // this is safe here even before the map is updated since we still hold the // state lock and the watcher can't re-read the new config until we return @@ -664,22 +669,23 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*str l.managedProxies[svc.ID] = &ManagedProxy{ Proxy: proxy, ProxyToken: pToken, - ManagedProxy: proxyProcess, + ProxyProcess: proxyProcess, WatchCh: make(chan struct{}), } // No need to trigger sync as proxy state is local only. - return svc, nil + return l.managedProxies[svc.ID], old, nil } // RemoveProxy is used to remove a proxy entry from the local state. -func (l *State) RemoveProxy(id string) error { +// This returns the proxy that was removed. 
+func (l *State) RemoveProxy(id string) (*ManagedProxy, error) { l.Lock() defer l.Unlock() p := l.managedProxies[id] if p == nil { - return fmt.Errorf("Proxy %s does not exist", id) + return nil, fmt.Errorf("Proxy %s does not exist", id) } delete(l.managedProxies, id) @@ -687,7 +693,7 @@ func (l *State) RemoveProxy(id string) error { close(p.WatchCh) // No need to trigger sync as proxy state is local only. - return nil + return p, nil } // Proxy returns the local proxy state. diff --git a/agent/local/state_test.go b/agent/local/state_test.go index dd887ccb1..f79249a73 100644 --- a/agent/local/state_test.go +++ b/agent/local/state_test.go @@ -1684,7 +1684,7 @@ func TestStateProxyManagement(t *testing.T) { p1 := structs.ConnectManagedProxy{ ExecMode: structs.ProxyExecModeDaemon, - Command: "consul connect proxy", + Command: []string{"consul", "connect", "proxy"}, TargetServiceID: "web", } @@ -1710,9 +1710,10 @@ func TestStateProxyManagement(t *testing.T) { require.NoError(err) // Should work now - svc, err := state.AddProxy(&p1, "fake-token") + pstate, err := state.AddProxy(&p1, "fake-token") require.NoError(err) + svc := pstate.Proxy.ProxyService assert.Equal("web-proxy", svc.ID) assert.Equal("web-proxy", svc.Service) assert.Equal(structs.ServiceKindConnectProxy, svc.Kind) @@ -1739,8 +1740,9 @@ func TestStateProxyManagement(t *testing.T) { // Second proxy should claim other port p2 := p1 p2.TargetServiceID = "cache" - svc2, err := state.AddProxy(&p2, "fake-token") + pstate2, err := state.AddProxy(&p2, "fake-token") require.NoError(err) + svc2 := pstate2.Proxy.ProxyService assert.Contains([]int{20000, 20001}, svc2.Port) assert.NotEqual(svc.Port, svc2.Port) @@ -1758,8 +1760,9 @@ func TestStateProxyManagement(t *testing.T) { "bind_port": 1234, "bind_address": "0.0.0.0", } - svc3, err := state.AddProxy(&p3, "fake-token") + pstate3, err := state.AddProxy(&p3, "fake-token") require.NoError(err) + svc3 := pstate3.Proxy.ProxyService require.Equal("0.0.0.0", svc3.Address) require.Equal(1234, svc3.Port) @@ -1771,8 +1774,9 @@ func TestStateProxyManagement(t *testing.T) { require.NotNil(gotP3) var ws memdb.WatchSet ws.Add(gotP3.WatchCh) - svc3, err = state.AddProxy(&p3updated, "fake-token") + pstate3, err = state.AddProxy(&p3updated, "fake-token") require.NoError(err) + svc3 = pstate3.Proxy.ProxyService require.Equal("0.0.0.0", svc3.Address) require.Equal(1234, svc3.Port) gotProxy3 := state.Proxy(svc3.ID) @@ -1782,19 +1786,20 @@ func TestStateProxyManagement(t *testing.T) { "watch should have fired so ws.Watch should not timeout") // Remove one of the auto-assigned proxies - err = state.RemoveProxy(svc2.ID) + _, err = state.RemoveProxy(svc2.ID) require.NoError(err) // Should be able to create a new proxy for that service with the port (it // should have been "freed"). p4 := p2 - svc4, err := state.AddProxy(&p4, "fake-token") + pstate4, err := state.AddProxy(&p4, "fake-token") require.NoError(err) + svc4 := pstate4.Proxy.ProxyService assert.Contains([]int{20000, 20001}, svc2.Port) assert.Equal(svc4.Port, svc2.Port, "should get the same port back that we freed") // Remove a proxy that doesn't exist should error - err = state.RemoveProxy("nope") + _, err = state.RemoveProxy("nope") require.Error(err) assert.Equal(&p4, state.Proxy(p4.ProxyService.ID).Proxy, diff --git a/agent/proxy/noop.go b/agent/proxy/noop.go new file mode 100644 index 000000000..9b35a2427 --- /dev/null +++ b/agent/proxy/noop.go @@ -0,0 +1,7 @@ +package proxy + +// Noop implements Proxy and does nothing. 
+type Noop struct{} + +func (p *Noop) Start() error { return nil } +func (p *Noop) Stop() error { return nil } diff --git a/agent/proxy/noop_test.go b/agent/proxy/noop_test.go new file mode 100644 index 000000000..77513ad29 --- /dev/null +++ b/agent/proxy/noop_test.go @@ -0,0 +1,9 @@ +package proxy + +import ( + "testing" +) + +func TestNoop_impl(t *testing.T) { + var _ Proxy = new(Noop) +} diff --git a/agent/structs/connect.go b/agent/structs/connect.go index 29330a652..b40091adf 100644 --- a/agent/structs/connect.go +++ b/agent/structs/connect.go @@ -66,6 +66,9 @@ type ConnectManagedProxy struct { // for ProxyExecModeScript. Command []string + // CommandDefault is the default command to execute if Command is empty. + CommandDefault []string `json:"-" hash:"ignore"` + // Config is the arbitrary configuration data provided with the registration. Config map[string]interface{} From fae8dc895117522601202146c943b33da5a7400c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 30 Apr 2018 21:12:55 -0700 Subject: [PATCH 195/539] agent/local: add Notify mechanism for proxy changes --- agent/local/proxy.go | 46 ------------------- agent/local/state.go | 96 +++++++++++++++++++++++++-------------- agent/local/state_test.go | 35 ++++++++++++++ 3 files changed, 98 insertions(+), 79 deletions(-) delete mode 100644 agent/local/proxy.go diff --git a/agent/local/proxy.go b/agent/local/proxy.go deleted file mode 100644 index 37484a32f..000000000 --- a/agent/local/proxy.go +++ /dev/null @@ -1,46 +0,0 @@ -package local - -import ( - "fmt" - "os/exec" - - "github.com/hashicorp/consul/agent/proxy" - "github.com/hashicorp/consul/agent/structs" -) - -// newProxyProcess returns the proxy.Proxy for the given ManagedProxy -// state entry. proxy.Proxy is the actual managed process. The returned value -// is the initialized struct but isn't explicitly started. -func (s *State) newProxyProcess(p *structs.ConnectManagedProxy, pToken string) (proxy.Proxy, error) { - switch p.ExecMode { - case structs.ProxyExecModeDaemon: - command := p.Command - if len(command) == 0 { - command = p.CommandDefault - } - - // This should never happen since validation should happen upstream - // but verify it because the alternative is to panic below. - if len(command) == 0 { - return nil, fmt.Errorf("daemon mode managed proxy requires command") - } - - // Build the command to execute. - var cmd exec.Cmd - cmd.Path = command[0] - cmd.Args = command[1:] - - // Build the daemon structure - return &proxy.Daemon{ - Command: &cmd, - ProxyToken: pToken, - Logger: s.logger, - }, nil - - case structs.ProxyExecModeScript: - return &proxy.Noop{}, nil - - default: - return nil, fmt.Errorf("unsupported managed proxy type: %q", p.ExecMode) - } -} diff --git a/agent/local/state.go b/agent/local/state.go index ecd3299fd..03dbbd96c 100644 --- a/agent/local/state.go +++ b/agent/local/state.go @@ -14,7 +14,6 @@ import ( "github.com/hashicorp/go-uuid" "github.com/hashicorp/consul/acl" - "github.com/hashicorp/consul/agent/proxy" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/agent/token" "github.com/hashicorp/consul/api" @@ -127,9 +126,6 @@ type ManagedProxy struct { // use service-scoped ACL tokens distributed externally. ProxyToken string - // ProxyProcess is the managed proxy itself that is running. - ProxyProcess proxy.Proxy - // WatchCh is a close-only chan that is closed when the proxy is removed or // updated. 
WatchCh chan struct{} @@ -187,19 +183,24 @@ type State struct { // registration) do not appear here as the agent doesn't need to manage their // process nor config. The _do_ still exist in services above though as // services with Kind == connect-proxy. - managedProxies map[string]*ManagedProxy + // + // managedProxyHandlers is a map of registered channel listeners that + // are sent a message each time a proxy changes via Add or RemoveProxy. + managedProxies map[string]*ManagedProxy + managedProxyHandlers map[chan<- struct{}]struct{} } // NewState creates a new local state for the agent. func NewState(c Config, lg *log.Logger, tokens *token.Store) *State { l := &State{ - config: c, - logger: lg, - services: make(map[string]*ServiceState), - checks: make(map[types.CheckID]*CheckState), - metadata: make(map[string]string), - tokens: tokens, - managedProxies: make(map[string]*ManagedProxy), + config: c, + logger: lg, + services: make(map[string]*ServiceState), + checks: make(map[types.CheckID]*CheckState), + metadata: make(map[string]string), + tokens: tokens, + managedProxies: make(map[string]*ManagedProxy), + managedProxyHandlers: make(map[chan<- struct{}]struct{}), } l.SetDiscardCheckOutput(c.DiscardCheckOutput) return l @@ -577,22 +578,22 @@ func (l *State) CriticalCheckStates() map[types.CheckID]*CheckState { // AddProxy returns the newly added proxy, any replaced proxy, and an error. // The second return value (replaced proxy) can be used to determine if // the process needs to be updated or not. -func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*ManagedProxy, *ManagedProxy, error) { +func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*ManagedProxy, error) { if proxy == nil { - return nil, nil, fmt.Errorf("no proxy") + return nil, fmt.Errorf("no proxy") } // Lookup the local service target := l.Service(proxy.TargetServiceID) if target == nil { - return nil, nil, fmt.Errorf("target service ID %s not registered", + return nil, fmt.Errorf("target service ID %s not registered", proxy.TargetServiceID) } // Get bind info from config cfg, err := proxy.ParseConfig() if err != nil { - return nil, nil, err + return nil, err } // Construct almost all of the NodeService that needs to be registered by the @@ -608,15 +609,7 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*Man pToken, err := uuid.GenerateUUID() if err != nil { - return nil, nil, err - } - - // Initialize the managed proxy process. This doesn't start anything, - // it only sets up the structures we'll use. To start the proxy, the - // caller should call Proxy and use the returned ManagedProxy instance. - proxyProcess, err := l.newProxyProcess(proxy, pToken) - if err != nil { - return nil, nil, err + return nil, err } // Lock now. 
We can't lock earlier as l.Service would deadlock and shouldn't @@ -650,7 +643,7 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*Man } // If no ports left (or auto ports disabled) fail if svc.Port < 1 { - return nil, nil, fmt.Errorf("no port provided for proxy bind_port and none "+ + return nil, fmt.Errorf("no port provided for proxy bind_port and none "+ " left in the allocated range [%d, %d]", l.config.ProxyBindMinPort, l.config.ProxyBindMaxPort) } @@ -658,8 +651,7 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*Man proxy.ProxyService = svc // All set, add the proxy and return the service - old, ok := l.managedProxies[svc.ID] - if ok { + if old, ok := l.managedProxies[svc.ID]; ok { // Notify watchers of the existing proxy config that it's changing. Note // this is safe here even before the map is updated since we still hold the // state lock and the watcher can't re-read the new config until we return @@ -667,14 +659,22 @@ func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*Man close(old.WatchCh) } l.managedProxies[svc.ID] = &ManagedProxy{ - Proxy: proxy, - ProxyToken: pToken, - ProxyProcess: proxyProcess, - WatchCh: make(chan struct{}), + Proxy: proxy, + ProxyToken: pToken, + WatchCh: make(chan struct{}), + } + + // Notify + for ch := range l.managedProxyHandlers { + // Do not block + select { + case ch <- struct{}{}: + default: + } } // No need to trigger sync as proxy state is local only. - return l.managedProxies[svc.ID], old, nil + return l.managedProxies[svc.ID], nil } // RemoveProxy is used to remove a proxy entry from the local state. @@ -692,6 +692,15 @@ func (l *State) RemoveProxy(id string) (*ManagedProxy, error) { // Notify watchers of the existing proxy config that it's changed. close(p.WatchCh) + // Notify + for ch := range l.managedProxyHandlers { + // Do not block + select { + case ch <- struct{}{}: + default: + } + } + // No need to trigger sync as proxy state is local only. return p, nil } @@ -715,6 +724,27 @@ func (l *State) Proxies() map[string]*ManagedProxy { return m } +// NotifyProxy will register a channel to receive messages when the +// configuration or set of proxies changes. This will not block on +// channel send so ensure the channel has a large enough buffer. +// +// NOTE(mitchellh): This could be more generalized but for my use case I +// only needed proxy events. In the future if it were to be generalized I +// would add a new Notify method and remove the proxy-specific ones. +func (l *State) NotifyProxy(ch chan<- struct{}) { + l.Lock() + defer l.Unlock() + l.managedProxyHandlers[ch] = struct{}{} +} + +// StopNotifyProxy will deregister a channel receiving proxy notifications. +// Pair this with all calls to NotifyProxy to clean up state. 
+func (l *State) StopNotifyProxy(ch chan<- struct{}) { + l.Lock() + defer l.Unlock() + delete(l.managedProxyHandlers, ch) +} + // Metadata returns the local node metadata fields that the // agent is aware of and are being kept in sync with the server func (l *State) Metadata() map[string]string { diff --git a/agent/local/state_test.go b/agent/local/state_test.go index f79249a73..800c017d6 100644 --- a/agent/local/state_test.go +++ b/agent/local/state_test.go @@ -1737,6 +1737,13 @@ func TestStateProxyManagement(t *testing.T) { assert.Equal(svc.Port, svcDup.Port) } + // Let's register a notifier now + notifyCh := make(chan struct{}, 1) + state.NotifyProxy(notifyCh) + defer state.StopNotifyProxy(notifyCh) + assert.Empty(notifyCh) + drainCh(notifyCh) + // Second proxy should claim other port p2 := p1 p2.TargetServiceID = "cache" @@ -1746,6 +1753,10 @@ func TestStateProxyManagement(t *testing.T) { assert.Contains([]int{20000, 20001}, svc2.Port) assert.NotEqual(svc.Port, svc2.Port) + // Should have a notification + assert.NotEmpty(notifyCh) + drainCh(notifyCh) + // Store this for later p2token := state.Proxy(svc2.ID).ProxyToken @@ -1755,6 +1766,9 @@ func TestStateProxyManagement(t *testing.T) { _, err = state.AddProxy(&p3, "fake-token") require.Error(err) + // Should have a notification but we'll do nothing so that the next + // receive should block (we set cap == 1 above) + // But if we set a port explicitly it should be OK p3.Config = map[string]interface{}{ "bind_port": 1234, @@ -1766,6 +1780,10 @@ func TestStateProxyManagement(t *testing.T) { require.Equal("0.0.0.0", svc3.Address) require.Equal(1234, svc3.Port) + // Should have a notification + assert.NotEmpty(notifyCh) + drainCh(notifyCh) + // Update config of an already registered proxy should work p3updated := p3 p3updated.Config["foo"] = "bar" @@ -1785,10 +1803,16 @@ func TestStateProxyManagement(t *testing.T) { assert.False(ws.Watch(time.After(500*time.Millisecond)), "watch should have fired so ws.Watch should not timeout") + drainCh(notifyCh) + // Remove one of the auto-assigned proxies _, err = state.RemoveProxy(svc2.ID) require.NoError(err) + // Should have a notification + assert.NotEmpty(notifyCh) + drainCh(notifyCh) + // Should be able to create a new proxy for that service with the port (it // should have been "freed"). p4 := p2 @@ -1829,3 +1853,14 @@ func TestStateProxyManagement(t *testing.T) { } } } + +// drainCh drains a channel by reading messages until it would block. 
+func drainCh(ch chan struct{}) { + for { + select { + case <-ch: + default: + return + } + } +} From a2167a7fd10ff20490109ef6989c5ac5bdab9919 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Mon, 30 Apr 2018 23:35:23 -0700 Subject: [PATCH 196/539] agent/proxy: manager and basic tests, not great coverage yet coming soon --- agent/agent.go | 62 ++------ agent/local/testing.go | 19 +++ agent/proxy/daemon_test.go | 4 + agent/proxy/manager.go | 300 ++++++++++++++++++++++++++++++++++++ agent/proxy/manager_test.go | 79 ++++++++++ agent/proxy/proxy_test.go | 24 ++- agent/proxy/test.go | 13 ++ agent/structs/connect.go | 5 + 8 files changed, 448 insertions(+), 58 deletions(-) create mode 100644 agent/local/testing.go create mode 100644 agent/proxy/manager.go create mode 100644 agent/proxy/manager_test.go create mode 100644 agent/proxy/test.go diff --git a/agent/agent.go b/agent/agent.go index f70c16379..365a76af0 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2038,58 +2038,38 @@ func (a *Agent) AddProxy(proxy *structs.ConnectManagedProxy, persist bool) error // Lookup the target service token in state if there is one. token := a.State.ServiceToken(proxy.TargetServiceID) - // Determine if we need to default the command - if proxy.ExecMode == structs.ProxyExecModeDaemon && len(proxy.Command) == 0 { - // We use the globally configured default command. If it is empty - // then we need to determine the subcommand for this agent. - cmd := a.config.ConnectProxyDefaultDaemonCommand - if len(cmd) == 0 { - var err error - cmd, err = a.defaultProxyCommand() - if err != nil { - return err + /* + // Determine if we need to default the command + if proxy.ExecMode == structs.ProxyExecModeDaemon && len(proxy.Command) == 0 { + // We use the globally configured default command. If it is empty + // then we need to determine the subcommand for this agent. + cmd := a.config.ConnectProxyDefaultDaemonCommand + if len(cmd) == 0 { + var err error + cmd, err = a.defaultProxyCommand() + if err != nil { + return err + } } - } - proxy.CommandDefault = cmd - } + proxy.CommandDefault = cmd + } + */ // Add the proxy to local state first since we may need to assign a port which // needs to be coordinate under state lock. AddProxy will generate the // NodeService for the proxy populated with the allocated (or configured) port // and an ID, but it doesn't add it to the agent directly since that could // deadlock and we may need to coordinate adding it and persisting etc. - proxyState, oldProxy, err := a.State.AddProxy(proxy, token) + proxyState, err := a.State.AddProxy(proxy, token) if err != nil { return err } proxyService := proxyState.Proxy.ProxyService - // If we replaced an existing proxy, stop that process. - if oldProxy != nil { - if err := oldProxy.ProxyProcess.Stop(); err != nil { - a.logger.Printf( - "[ERR] error stopping managed proxy, may still be running: %s", - err) - } - } - - // Start the proxy process - if err := proxyState.ProxyProcess.Start(); err != nil { - a.State.RemoveProxy(proxyService.ID) - return fmt.Errorf("error starting managed proxy: %s", err) - } - // TODO(banks): register proxy health checks. 
 	err = a.AddService(proxyService, nil, persist, token)
 	if err != nil {
-		// Stop the proxy process if it was started
-		if err := proxyState.ProxyProcess.Stop(); err != nil {
-			a.logger.Printf(
-				"[ERR] error stopping managed proxy, may still be running: %s",
-				err)
-		}
-
 		// Remove the state too
 		a.State.RemoveProxy(proxyService.ID)
 		return err
@@ -2107,18 +2087,10 @@ func (a *Agent) RemoveProxy(proxyID string, persist bool) error {
 	}
 
 	// Remove the proxy from the local state
-	proxyState, err := a.State.RemoveProxy(proxyID)
-	if err != nil {
+	if _, err := a.State.RemoveProxy(proxyID); err != nil {
 		return err
 	}
 
-	// Stop the process. The proxy implementation is expected to perform
-	// retries so if this fails then retries have already been performed and
-	// the most we can do is just error.
-	if err := proxyState.ProxyProcess.Stop(); err != nil {
-		return fmt.Errorf("error stopping managed proxy process: %s", err)
-	}
-
 	// TODO(banks): unpersist proxy
 
 	return nil
diff --git a/agent/local/testing.go b/agent/local/testing.go
new file mode 100644
index 000000000..6ca9d12ae
--- /dev/null
+++ b/agent/local/testing.go
@@ -0,0 +1,19 @@
+package local
+
+import (
+	"log"
+	"os"
+
+	"github.com/hashicorp/consul/agent/token"
+	"github.com/mitchellh/go-testing-interface"
+)
+
+// TestState returns a configured *State for testing.
+func TestState(t testing.T) *State {
+	result := NewState(Config{
+		ProxyBindMinPort: 20000,
+		ProxyBindMaxPort: 20500,
+	}, log.New(os.Stderr, "", log.LstdFlags), &token.Store{})
+	result.TriggerSyncChanges = func() {}
+	return result
+}
diff --git a/agent/proxy/daemon_test.go b/agent/proxy/daemon_test.go
index dbd2099bc..22948bdaf 100644
--- a/agent/proxy/daemon_test.go
+++ b/agent/proxy/daemon_test.go
@@ -16,6 +16,8 @@ func TestDaemon_impl(t *testing.T) {
 }
 
 func TestDaemonStartStop(t *testing.T) {
+	t.Parallel()
+
 	require := require.New(t)
 	td, closer := testTempDir(t)
 	defer closer()
@@ -63,6 +65,8 @@ func TestDaemonStartStop(t *testing.T) {
 }
 
 func TestDaemonRestart(t *testing.T) {
+	t.Parallel()
+
 	require := require.New(t)
 	td, closer := testTempDir(t)
 	defer closer()
diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go
new file mode 100644
index 000000000..05445d93a
--- /dev/null
+++ b/agent/proxy/manager.go
@@ -0,0 +1,300 @@
+package proxy
+
+import (
+	"fmt"
+	"log"
+	"os"
+	"os/exec"
+	"sync"
+
+	"github.com/hashicorp/consul/agent/local"
+	"github.com/hashicorp/consul/agent/structs"
+	"github.com/hashicorp/go-multierror"
+)
+
+// Manager starts, stops, snapshots, and restores managed proxies.
+//
+// The manager will not start or stop any processes until Start is called.
+// Prior to this, any configuration, snapshot loading, etc. can be done.
+// Even if a process is no longer running after loading the snapshot, it
+// will not be restarted until Start is called.
+//
+// The Manager works by subscribing to change notifications on a local.State
+// structure. Whenever a change is detected, the Manager syncs its internal
+// state with the local.State and starts/stops any necessary proxies. The
+// manager never holds a lock on local.State (except to read the proxies)
+// and state updates may occur while the Manager is syncing. This is okay,
+// since a change notification will be queued to trigger another sync.
+//
+// NOTE(mitchellh): Change notifications are not coalesced currently. Under
+// conditions where managed proxy configurations are changing in a hot
+// loop, it is possible for the manager to constantly attempt to sync. This
+// is unlikely, but it's also easy to introduce basic coalescing (even over
+// millisecond intervals) to prevent wasted compute cycles.
+type Manager struct {
+	// State is the local state that is the source of truth for all
+	// configured managed proxies.
+	State *local.State
+
+	// Logger is the logger for information about manager behavior.
+	// Output for proxies will not go here generally but varies by proxy
+	// implementation type.
+	Logger *log.Logger
+
+	// lock is held while reading/writing any internal state of the manager.
+	// cond is a condition variable on lock that is broadcasted for runState
+	// changes.
+	lock *sync.Mutex
+	cond *sync.Cond
+
+	// runState is the current state of the manager. To read this the
+	// lock must be held. The condition variable cond can be waited on
+	// for changes to this value.
+	runState managerRunState
+
+	proxies map[string]Proxy
+}
+
+// defaultLogger is the default logger for NewManager so that it is never nil
+var defaultLogger = log.New(os.Stderr, "", log.LstdFlags)
+
+// NewManager initializes a Manager. After initialization, the exported
+// fields should be configured as desired. To start the Manager, execute
+// Run in a goroutine.
+func NewManager() *Manager {
+	var lock sync.Mutex
+	return &Manager{
+		Logger:  defaultLogger,
+		lock:    &lock,
+		cond:    sync.NewCond(&lock),
+		proxies: make(map[string]Proxy),
+	}
+}
+
+// managerRunState is the state of the Manager.
+//
+// This is a basic state machine with the following transitions:
+//
+// * idle => running, stopped
+// * running => stopping, stopped
+// * stopping => stopped
+// * stopped => <>
+//
+type managerRunState uint8
+
+const (
+	managerStateIdle managerRunState = iota
+	managerStateRunning
+	managerStateStopping
+	managerStateStopped
+)
+
+// Close stops the manager. Managed processes are NOT stopped.
+func (m *Manager) Close() error {
+	m.lock.Lock()
+	defer m.lock.Unlock()
+
+	for {
+		switch m.runState {
+		case managerStateIdle:
+			// Idle so just set it to stopped and return. We notify
+			// the condition variable in case others are waiting.
+			m.runState = managerStateStopped
+			m.cond.Broadcast()
+			return nil
+
+		case managerStateRunning:
+			// Set the state to stopping and broadcast to all waiters,
+			// since Run is sitting on cond.Wait.
+			m.runState = managerStateStopping
+			m.cond.Broadcast()
+			m.cond.Wait() // Wait on the stopping event
+
+		case managerStateStopping:
+			// Still stopping, wait...
+			m.cond.Wait()
+
+		case managerStateStopped:
+			// Stopped, target state reached
+			return nil
+		}
+	}
+}
+
+// Kill will Close the manager and Kill all proxies that were being managed.
+//
+// This is safe to call with Close already called since Close is idempotent.
+func (m *Manager) Kill() error {
+	// Close first so that we aren't getting changes in proxies
+	if err := m.Close(); err != nil {
+		return err
+	}
+
+	m.lock.Lock()
+	defer m.lock.Unlock()
+
+	var err error
+	for id, proxy := range m.proxies {
+		if err := proxy.Stop(); err != nil {
+			err = multierror.Append(
+				err, fmt.Errorf("failed to stop proxy %q: %s", id, err))
+			continue
+		}
+
+		// Remove it since it is already stopped successfully
+		delete(m.proxies, id)
+	}
+
+	return err
+}
+
+// Run syncs with the local state and supervises existing proxies.
+//
+// This blocks and should be run in a goroutine. If another Run is already
+// executing, this will do nothing and return.
+func (m *Manager) Run() { + m.lock.Lock() + if m.runState != managerStateIdle { + m.lock.Unlock() + return + } + + // Set the state to running + m.runState = managerStateRunning + m.lock.Unlock() + + // Start a goroutine that just waits for a stop request + stopCh := make(chan struct{}) + go func() { + defer close(stopCh) + m.lock.Lock() + defer m.lock.Unlock() + + // We wait for anything not running, just so we're more resilient + // in the face of state machine issues. Basically any state change + // will cause us to quit. + for m.runState != managerStateRunning { + m.cond.Wait() + } + }() + + // When we exit, we set the state to stopped and broadcast to any + // waiting Close functions that they can return. + defer func() { + m.lock.Lock() + m.runState = managerStateStopped + m.cond.Broadcast() + m.lock.Unlock() + }() + + // Register for proxy catalog change notifications + notifyCh := make(chan struct{}, 1) + m.State.NotifyProxy(notifyCh) + defer m.State.StopNotifyProxy(notifyCh) + + for { + // Sync first, before waiting on further notifications so that + // we can start with a known-current state. + m.sync() + + select { + case <-notifyCh: + // Changes exit select so we can reloop and reconfigure proxies + + case <-stopCh: + // Stop immediately, no cleanup + return + } + } +} + +// sync syncs data with the local state store to update the current manager +// state and start/stop necessary proxies. +func (m *Manager) sync() { + m.lock.Lock() + defer m.lock.Unlock() + + // Get the current set of proxies + state := m.State.Proxies() + + // Go through our existing proxies that we're currently managing to + // determine if they're still in the state or not. If they're in the + // state, we need to diff to determine if we're starting a new proxy + // If they're not in the state, then we need to stop the proxy since it + // is now orphaned. + for id, proxy := range m.proxies { + // Get the proxy. + stateProxy, ok := state[id] + if !ok { + // Proxy is deregistered. Remove it from our map and stop it + delete(m.proxies, id) + if err := proxy.Stop(); err != nil { + m.Logger.Printf("[ERROR] agent/proxy: failed to stop deregistered proxy for %q: %s", id, err) + } + + continue + } + + // Proxy is in the state. Always delete it so that the remainder + // are NEW proxies that we start after this loop. + delete(state, id) + + // TODO: diff and restart if necessary + println(stateProxy) + } + + // Remaining entries in state are new proxies. Start them! + for id, stateProxy := range state { + proxy, err := m.newProxy(stateProxy) + if err != nil { + m.Logger.Printf("[ERROR] agent/proxy: failed to initialize proxy for %q: %s", id, err) + continue + } + + if err := proxy.Start(); err != nil { + m.Logger.Printf("[ERROR] agent/proxy: failed to start proxy for %q: %s", id, err) + continue + } + + m.proxies[id] = proxy + } +} + +// newProxy creates the proper Proxy implementation for the configured +// local managed proxy. +func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { + // Defensive because the alternative is to panic which is not desired + if mp == nil || mp.Proxy == nil { + return nil, fmt.Errorf("internal error: nil *local.ManagedProxy or Proxy field") + } + + p := mp.Proxy + switch p.ExecMode { + case structs.ProxyExecModeDaemon: + command := p.Command + if len(command) == 0 { + command = p.CommandDefault + } + + // This should never happen since validation should happen upstream + // but verify it because the alternative is to panic below. 
+ if len(command) == 0 { + return nil, fmt.Errorf("daemon mode managed proxy requires command") + } + + // Build the command to execute. + var cmd exec.Cmd + cmd.Path = command[0] + cmd.Args = command[1:] + + // Build the daemon structure + return &Daemon{ + Command: &cmd, + ProxyToken: mp.ProxyToken, + Logger: m.Logger, + }, nil + + default: + return nil, fmt.Errorf("unsupported managed proxy type: %q", p.ExecMode) + } +} diff --git a/agent/proxy/manager_test.go b/agent/proxy/manager_test.go new file mode 100644 index 000000000..13a57d7b7 --- /dev/null +++ b/agent/proxy/manager_test.go @@ -0,0 +1,79 @@ +package proxy + +import ( + "os" + "os/exec" + "path/filepath" + "testing" + + "github.com/hashicorp/consul/agent/local" + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/consul/testutil/retry" + "github.com/stretchr/testify/require" +) + +func TestManagerClose_noRun(t *testing.T) { + t.Parallel() + + // Really we're testing that it doesn't deadlock here. + m := NewManager() + require.NoError(t, m.Close()) + + // Close again for sanity + require.NoError(t, m.Close()) +} + +// Test that Run performs an initial sync (if local.State is already set) +// rather than waiting for a notification from the local state. +func TestManagerRun_initialSync(t *testing.T) { + t.Parallel() + + state := testState(t) + m := NewManager() + m.State = state + defer m.Kill() + + // Add the proxy before we start the manager to verify initial sync + td, closer := testTempDir(t) + defer closer() + path := filepath.Join(td, "file") + testStateProxy(t, state, helperProcess("restart", path)) + + // Start the manager + go m.Run() + + // We should see the path appear shortly + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + return + } + r.Fatalf("error waiting for path: %s", err) + }) +} + +func testState(t *testing.T) *local.State { + state := local.TestState(t) + require.NoError(t, state.AddService(&structs.NodeService{ + Service: "web", + }, "web")) + + return state +} + +// testStateProxy registers a proxy with the given local state and the command +// (expected to be from the helperProcess function call). It returns the +// ID for deregistration. +func testStateProxy(t *testing.T, state *local.State, cmd *exec.Cmd) string { + command := []string{cmd.Path} + command = append(command, cmd.Args...) + + p, err := state.AddProxy(&structs.ConnectManagedProxy{ + ExecMode: structs.ProxyExecModeDaemon, + Command: command, + TargetServiceID: "web", + }, "web") + require.NoError(t, err) + + return p.Proxy.ProxyService.ID +} diff --git a/agent/proxy/proxy_test.go b/agent/proxy/proxy_test.go index fa8eef128..1d6df99ef 100644 --- a/agent/proxy/proxy_test.go +++ b/agent/proxy/proxy_test.go @@ -31,16 +31,19 @@ func testTempDir(t *testing.T) (string, func()) { } } +// helperProcessSentinel is a sentinel value that is put as the first +// argument following "--" and is used to determine if TestHelperProcess +// should run. +const helperProcessSentinel = "WANT_HELPER_PROCESS" + // helperProcess returns an *exec.Cmd that can be used to execute the // TestHelperProcess function below. This can be used to test multi-process // interactions. func helperProcess(s ...string) *exec.Cmd { - cs := []string{"-test.run=TestHelperProcess", "--"} + cs := []string{"-test.run=TestHelperProcess", "--", helperProcessSentinel} cs = append(cs, s...) - env := []string{"GO_WANT_HELPER_PROCESS=1"} cmd := exec.Command(os.Args[0], cs...) - cmd.Env = append(env, os.Environ()...) 
cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr return cmd @@ -49,12 +52,6 @@ func helperProcess(s ...string) *exec.Cmd { // This is not a real test. This is just a helper process kicked off by tests // using the helperProcess helper function. func TestHelperProcess(t *testing.T) { - if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" { - return - } - - defer os.Exit(0) - args := os.Args for len(args) > 0 { if args[0] == "--" { @@ -65,15 +62,16 @@ func TestHelperProcess(t *testing.T) { args = args[1:] } - if len(args) == 0 { - fmt.Fprintf(os.Stderr, "No command\n") - os.Exit(2) + if len(args) == 0 || args[0] != helperProcessSentinel { + return } + defer os.Exit(0) + args = args[1:] // strip sentinel value cmd, args := args[0], args[1:] switch cmd { // While running, this creates a file in the given directory (args[0]) - // and deletes it only whe nit is stopped. + // and deletes it only when it is stopped. case "start-stop": ch := make(chan os.Signal, 1) signal.Notify(ch, os.Interrupt) diff --git a/agent/proxy/test.go b/agent/proxy/test.go new file mode 100644 index 000000000..b6b35bb04 --- /dev/null +++ b/agent/proxy/test.go @@ -0,0 +1,13 @@ +package proxy + +// defaultTestProxy is the test proxy that is instantiated for proxies with +// an execution mode of ProxyExecModeTest. +var defaultTestProxy = testProxy{} + +// testProxy is a Proxy implementation that stores state in-memory and +// is only used for unit testing. It is in a non _test.go file because the +// factory for initializing it is exported (newProxy). +type testProxy struct { + Start uint32 + Stop uint32 +} diff --git a/agent/structs/connect.go b/agent/structs/connect.go index b40091adf..02b5ba1fa 100644 --- a/agent/structs/connect.go +++ b/agent/structs/connect.go @@ -33,6 +33,11 @@ const ( // ProxyExecModeScript executes a proxy config script on each change to it's // config. ProxyExecModeScript + + // ProxyExecModeTest tracks the start/stop of the proxy in-memory + // and is only used for tests. This shouldn't be set outside of tests, + // but even if it is it has no external effect. + ProxyExecModeTest ) // String implements Stringer From 8ce3deac5d1bee9424dd36fc0b3eeea421f09bb6 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 09:57:06 -0700 Subject: [PATCH 197/539] agent/proxy: test removing proxies and stopping them --- agent/proxy/manager.go | 4 +- agent/proxy/manager_test.go | 99 ++++++++++++++++++++++++++++++++----- agent/proxy/proxy_test.go | 13 +++++ 3 files changed, 103 insertions(+), 13 deletions(-) diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index 05445d93a..765e9f022 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -173,7 +173,7 @@ func (m *Manager) Run() { // We wait for anything not running, just so we're more resilient // in the face of state machine issues. Basically any state change // will cause us to quit. - for m.runState != managerStateRunning { + for m.runState == managerStateRunning { m.cond.Wait() } }() @@ -240,7 +240,7 @@ func (m *Manager) sync() { delete(state, id) // TODO: diff and restart if necessary - println(stateProxy) + println("DIFF", id, stateProxy) } // Remaining entries in state are new proxies. Start them! 
diff --git a/agent/proxy/manager_test.go b/agent/proxy/manager_test.go index 13a57d7b7..eba7a674a 100644 --- a/agent/proxy/manager_test.go +++ b/agent/proxy/manager_test.go @@ -5,6 +5,7 @@ import ( "os/exec" "path/filepath" "testing" + "time" "github.com/hashicorp/consul/agent/local" "github.com/hashicorp/consul/agent/structs" @@ -28,7 +29,7 @@ func TestManagerClose_noRun(t *testing.T) { func TestManagerRun_initialSync(t *testing.T) { t.Parallel() - state := testState(t) + state := local.TestState(t) m := NewManager() m.State = state defer m.Kill() @@ -37,7 +38,7 @@ func TestManagerRun_initialSync(t *testing.T) { td, closer := testTempDir(t) defer closer() path := filepath.Join(td, "file") - testStateProxy(t, state, helperProcess("restart", path)) + testStateProxy(t, state, "web", helperProcess("restart", path)) // Start the manager go m.Run() @@ -52,27 +53,103 @@ func TestManagerRun_initialSync(t *testing.T) { }) } -func testState(t *testing.T) *local.State { - state := local.TestState(t) - require.NoError(t, state.AddService(&structs.NodeService{ - Service: "web", - }, "web")) +func TestManagerRun_syncNew(t *testing.T) { + t.Parallel() - return state + state := local.TestState(t) + m := NewManager() + m.State = state + defer m.Kill() + + // Start the manager + go m.Run() + + // Sleep a bit, this is just an attempt for Run to already be running. + // Its not a big deal if this sleep doesn't happen (slow CI). + time.Sleep(100 * time.Millisecond) + + // Add the first proxy + td, closer := testTempDir(t) + defer closer() + path := filepath.Join(td, "file") + testStateProxy(t, state, "web", helperProcess("restart", path)) + + // We should see the path appear shortly + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + return + } + r.Fatalf("error waiting for path: %s", err) + }) + + // Add another proxy + path = path + "2" + testStateProxy(t, state, "db", helperProcess("restart", path)) + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + return + } + r.Fatalf("error waiting for path: %s", err) + }) +} + +func TestManagerRun_syncDelete(t *testing.T) { + t.Parallel() + + state := local.TestState(t) + m := NewManager() + m.State = state + defer m.Kill() + + // Start the manager + go m.Run() + + // Add the first proxy + td, closer := testTempDir(t) + defer closer() + path := filepath.Join(td, "file") + id := testStateProxy(t, state, "web", helperProcess("restart", path)) + + // We should see the path appear shortly + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + return + } + r.Fatalf("error waiting for path: %s", err) + }) + + // Remove the proxy + _, err := state.RemoveProxy(id) + require.NoError(t, err) + + // File should disappear as process is killed + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + r.Fatalf("path exists") + } + }) } // testStateProxy registers a proxy with the given local state and the command // (expected to be from the helperProcess function call). It returns the // ID for deregistration. -func testStateProxy(t *testing.T, state *local.State, cmd *exec.Cmd) string { +func testStateProxy(t *testing.T, state *local.State, service string, cmd *exec.Cmd) string { command := []string{cmd.Path} command = append(command, cmd.Args...) 
+ require.NoError(t, state.AddService(&structs.NodeService{ + Service: service, + }, "token")) + p, err := state.AddProxy(&structs.ConnectManagedProxy{ ExecMode: structs.ProxyExecModeDaemon, Command: command, - TargetServiceID: "web", - }, "web") + TargetServiceID: service, + }, "token") require.NoError(t, err) return p.Proxy.ProxyService.ID diff --git a/agent/proxy/proxy_test.go b/agent/proxy/proxy_test.go index 1d6df99ef..11994b1bf 100644 --- a/agent/proxy/proxy_test.go +++ b/agent/proxy/proxy_test.go @@ -91,6 +91,10 @@ func TestHelperProcess(t *testing.T) { // exists. When that file is removed, this process exits. This can be // used to test restarting. case "restart": + ch := make(chan os.Signal, 1) + signal.Notify(ch, os.Interrupt) + defer signal.Stop(ch) + // Write the file path := args[0] if err := ioutil.WriteFile(path, []byte("hello"), 0644); err != nil { @@ -105,6 +109,15 @@ func TestHelperProcess(t *testing.T) { if _, err := os.Stat(path); os.IsNotExist(err) { break } + + select { + case <-ch: + // We received an interrupt, clean exit + os.Remove(path) + break + + default: + } } default: From 6884654c9da8515044701341d0ab90d09e27f9df Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 11:02:58 -0700 Subject: [PATCH 198/539] agent/proxy: detect config change to stop/start proxies --- agent/proxy/daemon.go | 16 +++++++ agent/proxy/daemon_test.go | 91 +++++++++++++++++++++++++++++++++++++ agent/proxy/manager.go | 37 ++++++++++----- agent/proxy/manager_test.go | 47 +++++++++++++++++++ agent/proxy/noop.go | 5 +- agent/proxy/proxy.go | 8 ++++ 6 files changed, 190 insertions(+), 14 deletions(-) diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index 231f25cb8..c43eb48a1 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -5,6 +5,7 @@ import ( "log" "os" "os/exec" + "reflect" "sync" "time" ) @@ -198,3 +199,18 @@ func (p *Daemon) Stop() error { return err //return p.Command.Process.Kill() } + +// Equal implements Proxy to check for equality. 
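+//
+// The manager relies on this to decide whether a changed registration
+// requires restarting the daemon process. Only a subset of the command
+// configuration is compared. A small sketch (illustrative values):
+//
+//    d1 := &Daemon{Command: &exec.Cmd{Path: "/bin/proxy"}, ProxyToken: "a"}
+//    d2 := &Daemon{Command: &exec.Cmd{Path: "/bin/proxy"}, ProxyToken: "b"}
+//    d1.Equal(d2) // false: the proxy tokens differ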
+func (p *Daemon) Equal(raw Proxy) bool { + p2, ok := raw.(*Daemon) + if !ok { + return false + } + + // We compare equality on a subset of the command configuration + return p.ProxyToken == p2.ProxyToken && + p.Command.Path == p2.Command.Path && + p.Command.Dir == p2.Command.Dir && + reflect.DeepEqual(p.Command.Args, p2.Command.Args) && + reflect.DeepEqual(p.Command.Env, p2.Command.Env) +} diff --git a/agent/proxy/daemon_test.go b/agent/proxy/daemon_test.go index 22948bdaf..a1638b266 100644 --- a/agent/proxy/daemon_test.go +++ b/agent/proxy/daemon_test.go @@ -3,6 +3,7 @@ package proxy import ( "io/ioutil" "os" + "os/exec" "path/filepath" "testing" @@ -97,3 +98,93 @@ func TestDaemonRestart(t *testing.T) { // File should re-appear because the process is restart waitFile() } + +func TestDaemonEqual(t *testing.T) { + cases := []struct { + Name string + D1, D2 Proxy + Expected bool + }{ + { + "Different type", + &Daemon{ + Command: &exec.Cmd{}, + }, + &Noop{}, + false, + }, + + { + "Nil", + &Daemon{ + Command: &exec.Cmd{}, + }, + nil, + false, + }, + + { + "Equal", + &Daemon{ + Command: &exec.Cmd{}, + }, + &Daemon{ + Command: &exec.Cmd{}, + }, + true, + }, + + { + "Different path", + &Daemon{ + Command: &exec.Cmd{Path: "/foo"}, + }, + &Daemon{ + Command: &exec.Cmd{Path: "/bar"}, + }, + false, + }, + + { + "Different dir", + &Daemon{ + Command: &exec.Cmd{Dir: "/foo"}, + }, + &Daemon{ + Command: &exec.Cmd{Dir: "/bar"}, + }, + false, + }, + + { + "Different args", + &Daemon{ + Command: &exec.Cmd{Args: []string{"foo"}}, + }, + &Daemon{ + Command: &exec.Cmd{Args: []string{"bar"}}, + }, + false, + }, + + { + "Different token", + &Daemon{ + Command: &exec.Cmd{}, + ProxyToken: "one", + }, + &Daemon{ + Command: &exec.Cmd{}, + ProxyToken: "two", + }, + false, + }, + } + + for _, tc := range cases { + t.Run(tc.Name, func(t *testing.T) { + actual := tc.D1.Equal(tc.D2) + require.Equal(t, tc.Expected, actual) + }) + } +} diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index 765e9f022..d2ab8a106 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -225,22 +225,35 @@ func (m *Manager) sync() { for id, proxy := range m.proxies { // Get the proxy. stateProxy, ok := state[id] - if !ok { - // Proxy is deregistered. Remove it from our map and stop it - delete(m.proxies, id) - if err := proxy.Stop(); err != nil { - m.Logger.Printf("[ERROR] agent/proxy: failed to stop deregistered proxy for %q: %s", id, err) + if ok { + // Remove the proxy from the state so we don't start it new. + delete(state, id) + + // Make the proxy so we can compare. This does not start it. + proxy2, err := m.newProxy(stateProxy) + if err != nil { + m.Logger.Printf("[ERROR] agent/proxy: failed to initialize proxy for %q: %s", id, err) + continue } - continue + // If the proxies are equal, then do nothing + if proxy.Equal(proxy2) { + continue + } + + // Proxies are not equal, so we should stop it. We add it + // back to the state here (unlikely case) so the loop below starts + // the new one. + state[id] = stateProxy + + // Continue out of `if` as if proxy didn't exist so we stop it } - // Proxy is in the state. Always delete it so that the remainder - // are NEW proxies that we start after this loop. - delete(state, id) - - // TODO: diff and restart if necessary - println("DIFF", id, stateProxy) + // Proxy is deregistered. 
Remove it from our map and stop it + delete(m.proxies, id) + if err := proxy.Stop(); err != nil { + m.Logger.Printf("[ERROR] agent/proxy: failed to stop deregistered proxy for %q: %s", id, err) + } } // Remaining entries in state are new proxies. Start them! diff --git a/agent/proxy/manager_test.go b/agent/proxy/manager_test.go index eba7a674a..97086d491 100644 --- a/agent/proxy/manager_test.go +++ b/agent/proxy/manager_test.go @@ -134,6 +134,53 @@ func TestManagerRun_syncDelete(t *testing.T) { }) } +func TestManagerRun_syncUpdate(t *testing.T) { + t.Parallel() + + state := local.TestState(t) + m := NewManager() + m.State = state + defer m.Kill() + + // Start the manager + go m.Run() + + // Add the first proxy + td, closer := testTempDir(t) + defer closer() + path := filepath.Join(td, "file") + testStateProxy(t, state, "web", helperProcess("restart", path)) + + // We should see the path appear shortly + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + return + } + r.Fatalf("error waiting for path: %s", err) + }) + + // Update the proxy with a new path + oldPath := path + path = path + "2" + testStateProxy(t, state, "web", helperProcess("restart", path)) + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + return + } + r.Fatalf("error waiting for path: %s", err) + }) + + // Old path should be gone + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(oldPath) + if err == nil { + r.Fatalf("old path exists") + } + }) +} + // testStateProxy registers a proxy with the given local state and the command // (expected to be from the helperProcess function call). It returns the // ID for deregistration. diff --git a/agent/proxy/noop.go b/agent/proxy/noop.go index 9b35a2427..9ce013554 100644 --- a/agent/proxy/noop.go +++ b/agent/proxy/noop.go @@ -3,5 +3,6 @@ package proxy // Noop implements Proxy and does nothing. type Noop struct{} -func (p *Noop) Start() error { return nil } -func (p *Noop) Stop() error { return nil } +func (p *Noop) Start() error { return nil } +func (p *Noop) Stop() error { return nil } +func (p *Noop) Equal(Proxy) bool { return true } diff --git a/agent/proxy/proxy.go b/agent/proxy/proxy.go index a07bb5681..549a6ee26 100644 --- a/agent/proxy/proxy.go +++ b/agent/proxy/proxy.go @@ -31,4 +31,12 @@ type Proxy interface { // it should disallow Start from working again. If the proxy is already // stopped, this should not return an error. Stop() error + + // Equal returns true if the argument is equal to the proxy being called. + // This is called by the manager to determine if a change in configuration + // results in a proxy that needs to be restarted or not. If Equal returns + // false, then the manager will stop the old proxy and start the new one. + // If Equal returns true, the old proxy will remain running and the new + // one will be ignored. 
+ Equal(Proxy) bool } From 669268f85c670027b991c7262362b1a5e2bae3f1 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 11:38:18 -0700 Subject: [PATCH 199/539] agent: start proxy manager --- agent/agent.go | 48 ++++++++++++++++++++++-------------- agent/agent_endpoint_test.go | 2 +- agent/proxy/daemon.go | 16 ++++++++++-- agent/proxy/manager.go | 4 ++- agent/structs/connect.go | 2 ++ 5 files changed, 50 insertions(+), 22 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 365a76af0..79c9eb112 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -27,6 +27,7 @@ import ( "github.com/hashicorp/consul/agent/config" "github.com/hashicorp/consul/agent/consul" "github.com/hashicorp/consul/agent/local" + "github.com/hashicorp/consul/agent/proxy" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/agent/systemd" "github.com/hashicorp/consul/agent/token" @@ -200,6 +201,9 @@ type Agent struct { // be updated at runtime, so should always be used instead of going to // the configuration directly. tokens *token.Store + + // proxyManager is the proxy process manager for managed Connect proxies. + proxyManager *proxy.Manager } func New(c *config.RuntimeConfig) (*Agent, error) { @@ -353,6 +357,14 @@ func (a *Agent) Start() error { return err } + // create the proxy process manager and start it. This is purposely + // done here after the local state above is loaded in so we can have + // a more accurate initial state view. + a.proxyManager = proxy.NewManager() + a.proxyManager.State = a.State + a.proxyManager.Logger = a.logger + go a.proxyManager.Run() + // Start watching for critical services to deregister, based on their // checks. go a.reapServices() @@ -1269,9 +1281,11 @@ func (a *Agent) ShutdownAgent() error { chk.Stop() } - // Unload all our proxies so that we stop the running processes. - if err := a.unloadProxies(); err != nil { - a.logger.Printf("[WARN] agent: error stopping managed proxies: %s", err) + // Stop the proxy manager + // NOTE(mitchellh): we use Kill for now to kill the processes since + // snapshotting isn't implemented. This should change to Close later. + if err := a.proxyManager.Kill(); err != nil { + a.logger.Printf("[WARN] agent: error shutting down proxy manager: %s", err) } var err error @@ -2038,23 +2052,21 @@ func (a *Agent) AddProxy(proxy *structs.ConnectManagedProxy, persist bool) error // Lookup the target service token in state if there is one. token := a.State.ServiceToken(proxy.TargetServiceID) - /* - // Determine if we need to default the command - if proxy.ExecMode == structs.ProxyExecModeDaemon && len(proxy.Command) == 0 { - // We use the globally configured default command. If it is empty - // then we need to determine the subcommand for this agent. - cmd := a.config.ConnectProxyDefaultDaemonCommand - if len(cmd) == 0 { - var err error - cmd, err = a.defaultProxyCommand() - if err != nil { - return err - } + // Determine if we need to default the command + if proxy.ExecMode == structs.ProxyExecModeDaemon && len(proxy.Command) == 0 { + // We use the globally configured default command. If it is empty + // then we need to determine the subcommand for this agent. + cmd := a.config.ConnectProxyDefaultDaemonCommand + if len(cmd) == 0 { + var err error + cmd, err = a.defaultProxyCommand() + if err != nil { + return err } - - proxy.CommandDefault = cmd } - */ + + proxy.CommandDefault = cmd + } // Add the proxy to local state first since we may need to assign a port which // needs to be coordinate under state lock. 
AddProxy will generate the diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 26a04dddd..e6a47cbaa 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -70,7 +70,7 @@ func TestAgent_Services(t *testing.T) { }, TargetServiceID: "mysql", } - _, _, err := a.State.AddProxy(prxy1, "") + _, err := a.State.AddProxy(prxy1, "") require.NoError(t, err) req, _ := http.NewRequest("GET", "/v1/agent/services", nil) diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index c43eb48a1..1d716950b 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -7,6 +7,7 @@ import ( "os/exec" "reflect" "sync" + "syscall" "time" ) @@ -146,9 +147,15 @@ func (p *Daemon) keepAlive(stopCh <-chan struct{}) { } - _, err := process.Wait() + ps, err := process.Wait() process = nil - p.Logger.Printf("[INFO] agent/proxy: daemon exited: %s", err) + if err != nil { + p.Logger.Printf("[INFO] agent/proxy: daemon exited with error: %s", err) + } else if status, ok := ps.Sys().(syscall.WaitStatus); ok { + p.Logger.Printf( + "[INFO] agent/proxy: daemon exited with exit code: %d", + status.ExitStatus()) + } } } @@ -165,7 +172,12 @@ func (p *Daemon) start() (*os.Process, error) { copy(cmd.Env, p.Command.Env) cmd.Env = append(cmd.Env, fmt.Sprintf("%s=%s", EnvProxyToken, p.ProxyToken)) + // TODO(mitchellh): temporary until we introduce the file based logging + cmd.Stdout = os.Stdout + cmd.Stderr = os.Stderr + // Start it + p.Logger.Printf("[DEBUG] agent/proxy: starting proxy: %q %#v", cmd.Path, cmd.Args) err := cmd.Start() return cmd.Process, err } diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index d2ab8a106..dedc6f737 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -192,6 +192,7 @@ func (m *Manager) Run() { m.State.NotifyProxy(notifyCh) defer m.State.StopNotifyProxy(notifyCh) + m.Logger.Println("[DEBUG] agent/proxy: managed Connect proxy manager started") for { // Sync first, before waiting on further notifications so that // we can start with a known-current state. @@ -203,6 +204,7 @@ func (m *Manager) Run() { case <-stopCh: // Stop immediately, no cleanup + m.Logger.Println("[DEBUG] agent/proxy: Stopping managed Connect proxy manager") return } } @@ -298,7 +300,7 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { // Build the command to execute. 
var cmd exec.Cmd cmd.Path = command[0] - cmd.Args = command[1:] + cmd.Args = command // idx 0 is path but preserved since it should be // Build the daemon structure return &Daemon{ diff --git a/agent/structs/connect.go b/agent/structs/connect.go index 02b5ba1fa..aca9764fa 100644 --- a/agent/structs/connect.go +++ b/agent/structs/connect.go @@ -49,6 +49,8 @@ func (m ProxyExecMode) String() string { return "daemon" case ProxyExecModeScript: return "script" + case ProxyExecModeTest: + return "test" default: return "unknown" } From 10fe87bd4ac1361dca26c0f2098e63de49ddafb0 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 11:47:57 -0700 Subject: [PATCH 200/539] agent/proxy: pull exit status extraction to constrained file --- agent/proxy/daemon.go | 16 ++++++++++------ agent/proxy/exitstatus_other.go | 10 ++++++++++ agent/proxy/exitstatus_syscall.go | 18 ++++++++++++++++++ 3 files changed, 38 insertions(+), 6 deletions(-) create mode 100644 agent/proxy/exitstatus_other.go create mode 100644 agent/proxy/exitstatus_syscall.go diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index 1d716950b..d5fd30256 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -7,7 +7,6 @@ import ( "os/exec" "reflect" "sync" - "syscall" "time" ) @@ -151,10 +150,8 @@ func (p *Daemon) keepAlive(stopCh <-chan struct{}) { process = nil if err != nil { p.Logger.Printf("[INFO] agent/proxy: daemon exited with error: %s", err) - } else if status, ok := ps.Sys().(syscall.WaitStatus); ok { - p.Logger.Printf( - "[INFO] agent/proxy: daemon exited with exit code: %d", - status.ExitStatus()) + } else if status, ok := exitStatus(ps); ok { + p.Logger.Printf("[INFO] agent/proxy: daemon exited with exit code: %d", status) } } } @@ -176,8 +173,15 @@ func (p *Daemon) start() (*os.Process, error) { cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr + // Args must always contain a 0 entry which is usually the executed binary. + // To be safe and a bit more robust we default this, but only to prevent + // a panic below. + if len(cmd.Args) == 0 { + cmd.Args = []string{cmd.Path} + } + // Start it - p.Logger.Printf("[DEBUG] agent/proxy: starting proxy: %q %#v", cmd.Path, cmd.Args) + p.Logger.Printf("[DEBUG] agent/proxy: starting proxy: %q %#v", cmd.Path, cmd.Args[1:]) err := cmd.Start() return cmd.Process, err } diff --git a/agent/proxy/exitstatus_other.go b/agent/proxy/exitstatus_other.go new file mode 100644 index 000000000..84dd88867 --- /dev/null +++ b/agent/proxy/exitstatus_other.go @@ -0,0 +1,10 @@ +// +build !darwin,!linux,!windows + +package proxy + +import "os" + +// exitStatus for other platforms where we don't know how to extract it. +func exitStatus(ps *os.ProcessState) (int, bool) { + return 0, false +} diff --git a/agent/proxy/exitstatus_syscall.go b/agent/proxy/exitstatus_syscall.go new file mode 100644 index 000000000..1caeda4bf --- /dev/null +++ b/agent/proxy/exitstatus_syscall.go @@ -0,0 +1,18 @@ +// +build darwin linux windows + +package proxy + +import ( + "os" + "syscall" +) + +// exitStatus for platforms with syscall.WaitStatus which are listed +// at the top of this file in the build constraints. 
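+//
+// Callers should check the boolean since not every platform can report
+// an exit status. A minimal sketch of the call site (as used when a
+// daemon process exits):
+//
+//    ps, _ := process.Wait()
+//    if code, ok := exitStatus(ps); ok {
+//        log.Printf("daemon exited with exit code: %d", code)
+//    }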
+func exitStatus(ps *os.ProcessState) (int, bool) { + if status, ok := ps.Sys().(syscall.WaitStatus); ok { + return status.ExitStatus(), true + } + + return 0, false +} From 4722e3ef763ad1f6a124e378c7fb7b74e65313b5 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 11:51:47 -0700 Subject: [PATCH 201/539] agent: fix crash that could happen if proxy was nil on load --- agent/agent.go | 3 +++ agent/agent_test.go | 24 ++++++++++++++++++++++-- 2 files changed, 25 insertions(+), 2 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 79c9eb112..212672ea9 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2523,6 +2523,9 @@ func (a *Agent) loadProxies(conf *config.RuntimeConfig) error { if err != nil { return fmt.Errorf("failed adding proxy: %s", err) } + if proxy == nil { + continue + } if err := a.AddProxy(proxy, false); err != nil { return fmt.Errorf("failed adding proxy: %s", err) } diff --git a/agent/agent_test.go b/agent/agent_test.go index 022c25f8e..768d1c951 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -15,8 +15,6 @@ import ( "testing" "time" - "github.com/stretchr/testify/require" - "github.com/hashicorp/consul/agent/checks" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/api" @@ -25,6 +23,7 @@ import ( "github.com/hashicorp/consul/types" uuid "github.com/hashicorp/go-uuid" "github.com/pascaldekloe/goe/verify" + "github.com/stretchr/testify/require" ) func externalIP() (string, error) { @@ -1669,6 +1668,27 @@ func TestAgent_loadProxies(t *testing.T) { } } +func TestAgent_loadProxies_nilProxy(t *testing.T) { + t.Parallel() + a := NewTestAgent(t.Name(), ` + service = { + id = "rabbitmq" + name = "rabbitmq" + port = 5672 + token = "abc123" + connect { + } + } + `) + defer a.Shutdown() + + services := a.State.Services() + require.Contains(t, services, "rabbitmq") + require.Equal(t, "abc123", a.State.ServiceToken("rabbitmq")) + require.NotContains(t, services, "rabbitme-proxy") + require.Empty(t, a.State.Proxies()) +} + func TestAgent_unloadProxies(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), ` From 16f529a13c13bee9c296e90720b49f022323e5cc Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 12:25:00 -0700 Subject: [PATCH 202/539] agent/proxy: implement force kill of unresponsive proxy process --- agent/proxy/daemon.go | 47 ++++++++++++++++++++++++++++++-------- agent/proxy/daemon_test.go | 43 ++++++++++++++++++++++++++++++++++ agent/proxy/proxy_test.go | 18 +++++++++++++++ 3 files changed, 98 insertions(+), 10 deletions(-) diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index d5fd30256..a930c978b 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -38,11 +38,16 @@ type Daemon struct { // a file. Logger *log.Logger + // For tests, they can set this to change the default duration to wait + // for a graceful quit. + gracefulWait time.Duration + // process is the started process - lock sync.Mutex - stopped bool - stopCh chan struct{} - process *os.Process + lock sync.Mutex + stopped bool + stopCh chan struct{} + exitedCh chan struct{} + process *os.Process } // Start starts the daemon and keeps it running. @@ -64,17 +69,21 @@ func (p *Daemon) Start() error { // Setup our stop channel stopCh := make(chan struct{}) + exitedCh := make(chan struct{}) p.stopCh = stopCh + p.exitedCh = exitedCh // Start the loop. 
- go p.keepAlive(stopCh) + go p.keepAlive(stopCh, exitedCh) return nil } // keepAlive starts and keeps the configured process alive until it // is stopped via Stop. -func (p *Daemon) keepAlive(stopCh <-chan struct{}) { +func (p *Daemon) keepAlive(stopCh <-chan struct{}, exitedCh chan<- struct{}) { + defer close(exitedCh) + p.lock.Lock() process := p.process p.lock.Unlock() @@ -196,24 +205,42 @@ func (p *Daemon) start() (*os.Process, error) { // then this returns no error. func (p *Daemon) Stop() error { p.lock.Lock() - defer p.lock.Unlock() // If we're already stopped or never started, then no problem. if p.stopped || p.process == nil { // In the case we never even started, calling Stop makes it so // that we can't ever start in the future, either, so mark this. p.stopped = true + p.lock.Unlock() return nil } // Note that we've stopped p.stopped = true close(p.stopCh) + process := p.process + p.lock.Unlock() - err := p.process.Signal(os.Interrupt) + gracefulWait := p.gracefulWait + if gracefulWait == 0 { + gracefulWait = 5 * time.Second + } - return err - //return p.Command.Process.Kill() + // First, try a graceful stop + err := process.Signal(os.Interrupt) + if err == nil { + select { + case <-p.exitedCh: + // Success! + return nil + + case <-time.After(gracefulWait): + // Interrupt didn't work + } + } + + // Graceful didn't work, forcibly kill + return process.Kill() } // Equal implements Proxy to check for equality. diff --git a/agent/proxy/daemon_test.go b/agent/proxy/daemon_test.go index a1638b266..32acde636 100644 --- a/agent/proxy/daemon_test.go +++ b/agent/proxy/daemon_test.go @@ -6,6 +6,7 @@ import ( "os/exec" "path/filepath" "testing" + "time" "github.com/hashicorp/consul/testutil/retry" "github.com/hashicorp/go-uuid" @@ -99,6 +100,48 @@ func TestDaemonRestart(t *testing.T) { waitFile() } +func TestDaemonStop_kill(t *testing.T) { + t.Parallel() + + require := require.New(t) + td, closer := testTempDir(t) + defer closer() + + path := filepath.Join(td, "file") + + d := &Daemon{ + Command: helperProcess("stop-kill", path), + ProxyToken: "hello", + Logger: testLogger, + gracefulWait: 200 * time.Millisecond, + } + require.NoError(d.Start()) + + // Wait for the file to exist + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + return + } + + r.Fatalf("error: %s", err) + }) + + // Stop the process + require.NoError(d.Stop()) + + // State the file so that we can get the mtime + fi, err := os.Stat(path) + require.NoError(err) + mtime := fi.ModTime() + + // The mtime shouldn't change + time.Sleep(100 * time.Millisecond) + fi, err = os.Stat(path) + require.NoError(err) + require.Equal(mtime, fi.ModTime()) +} + func TestDaemonEqual(t *testing.T) { cases := []struct { Name string diff --git a/agent/proxy/proxy_test.go b/agent/proxy/proxy_test.go index 11994b1bf..71cfd4ebc 100644 --- a/agent/proxy/proxy_test.go +++ b/agent/proxy/proxy_test.go @@ -120,6 +120,24 @@ func TestHelperProcess(t *testing.T) { } } + case "stop-kill": + // Setup listeners so it is ignored + ch := make(chan os.Signal, 1) + signal.Notify(ch, os.Interrupt) + defer signal.Stop(ch) + + path := args[0] + data := []byte(os.Getenv(EnvProxyToken)) + for { + if err := ioutil.WriteFile(path, data, 0644); err != nil { + t.Fatalf("err: %s", err) + } + time.Sleep(25 * time.Millisecond) + } + + // Run forever + <-make(chan struct{}) + default: fmt.Fprintf(os.Stderr, "Unknown command: %q\n", cmd) os.Exit(2) From 6a78ecea57465becc809b7b9bf6780ce8f808a96 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: 
Wed, 2 May 2018 13:38:24 -0700 Subject: [PATCH 203/539] agent/proxy: local state event coalescing --- agent/proxy/manager.go | 83 ++++++++++++++++++++++++++++--------- agent/proxy/manager_test.go | 20 ++++++--- 2 files changed, 79 insertions(+), 24 deletions(-) diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index dedc6f737..4e45d22a8 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -6,12 +6,26 @@ import ( "os" "os/exec" "sync" + "time" "github.com/hashicorp/consul/agent/local" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-multierror" ) +const ( + // ManagerCoalescePeriod and ManagerQuiescentPeriod relate to how + // notifications in updates from the local state are colaesced to prevent + // lots of churn in the manager. + // + // When the local state updates, the manager will wait for quiescence. + // For each update, the quiscence timer is reset. If the coalesce period + // is reached, the manager will update proxies regardless of the frequent + // changes. Then the whole cycle resets. + ManagerCoalescePeriod = 5 * time.Second + ManagerQuiescentPeriod = 500 * time.Millisecond +) + // Manager starts, stops, snapshots, and restores managed proxies. // // The manager will not start or stop any processes until Start is called. @@ -26,11 +40,9 @@ import ( // and state updates may occur while the Manger is syncing. This is okay, // since a change notification will be queued to trigger another sync. // -// NOTE(mitchellh): Change notifications are not coalesced currently. Under -// conditions where managed proxy configurations are changing in a hot -// loop, it is possible for the manager to constantly attempt to sync. This -// is unlikely, but its also easy to introduce basic coalescing (even over -// millisecond intervals) to prevent total waste compute cycles. +// The change notifications from the local state are coalesced (see +// ManagerCoalescePeriod) so that frequent changes within the local state +// do not trigger dozens of proxy resyncs. type Manager struct { // State is the local state that is the source of truth for all // configured managed proxies. @@ -41,6 +53,13 @@ type Manager struct { // implementation type. Logger *log.Logger + // CoalescePeriod and QuiescencePeriod control the timers for coalescing + // updates from the local state. See the defaults at the top of this + // file for more documentation. These will be set to those defaults + // by NewManager. + CoalescePeriod time.Duration + QuiescentPeriod time.Duration + // lock is held while reading/writing any internal state of the manager. // cond is a condition variable on lock that is broadcasted for runState // changes. @@ -55,22 +74,24 @@ type Manager struct { proxies map[string]Proxy } -// defaultLogger is the defaultLogger for NewManager so there it is never nil -var defaultLogger = log.New(os.Stderr, "", log.LstdFlags) - // NewManager initializes a Manager. After initialization, the exported // fields should be configured as desired. To start the Manager, execute // Run in a goroutine. 
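+//
+// The coalescing behavior can be tuned after initialization; for example,
+// tests may lower the periods to speed things up (values illustrative):
+//
+//    m := NewManager()
+//    m.CoalescePeriod = 1 * time.Millisecond
+//    m.QuiescentPeriod = 1 * time.Millisecond
+//    go m.Run()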
func NewManager() *Manager { var lock sync.Mutex return &Manager{ - Logger: defaultLogger, - lock: &lock, - cond: sync.NewCond(&lock), - proxies: make(map[string]Proxy), + Logger: defaultLogger, + CoalescePeriod: ManagerCoalescePeriod, + QuiescentPeriod: ManagerQuiescentPeriod, + lock: &lock, + cond: sync.NewCond(&lock), + proxies: make(map[string]Proxy), } } +// defaultLogger is the defaultLogger for NewManager so there it is never nil +var defaultLogger = log.New(os.Stderr, "", log.LstdFlags) + // managerRunState is the state of the Manager. // // This is a basic state machine with the following transitions: @@ -193,19 +214,43 @@ func (m *Manager) Run() { defer m.State.StopNotifyProxy(notifyCh) m.Logger.Println("[DEBUG] agent/proxy: managed Connect proxy manager started") +SYNC: for { // Sync first, before waiting on further notifications so that // we can start with a known-current state. m.sync() - select { - case <-notifyCh: - // Changes exit select so we can reloop and reconfigure proxies + // Note for these variables we don't use a time.Timer because both + // periods are relatively short anyways so they end up being eligible + // for GC very quickly, so overhead is not a concern. + var quiescent, quantum <-chan time.Time - case <-stopCh: - // Stop immediately, no cleanup - m.Logger.Println("[DEBUG] agent/proxy: Stopping managed Connect proxy manager") - return + // Start a loop waiting for events from the local state store. This + // loops rather than just `select` so we can coalesce many state + // updates over a period of time. + for { + select { + case <-notifyCh: + // If this is our first notification since the last sync, + // reset the quantum timer which is the max time we'll wait. + if quantum == nil { + quantum = time.After(m.CoalescePeriod) + } + + // Always reset the quiescent timer + quiescent = time.After(m.QuiescentPeriod) + + case <-quantum: + continue SYNC + + case <-quiescent: + continue SYNC + + case <-stopCh: + // Stop immediately, no cleanup + m.Logger.Println("[DEBUG] agent/proxy: Stopping managed Connect proxy manager") + return + } } } } diff --git a/agent/proxy/manager_test.go b/agent/proxy/manager_test.go index 97086d491..4ee84f56a 100644 --- a/agent/proxy/manager_test.go +++ b/agent/proxy/manager_test.go @@ -17,7 +17,7 @@ func TestManagerClose_noRun(t *testing.T) { t.Parallel() // Really we're testing that it doesn't deadlock here. 
- m := NewManager() + m := testManager(t) require.NoError(t, m.Close()) // Close again for sanity @@ -30,7 +30,7 @@ func TestManagerRun_initialSync(t *testing.T) { t.Parallel() state := local.TestState(t) - m := NewManager() + m := testManager(t) m.State = state defer m.Kill() @@ -57,7 +57,7 @@ func TestManagerRun_syncNew(t *testing.T) { t.Parallel() state := local.TestState(t) - m := NewManager() + m := testManager(t) m.State = state defer m.Kill() @@ -99,7 +99,7 @@ func TestManagerRun_syncDelete(t *testing.T) { t.Parallel() state := local.TestState(t) - m := NewManager() + m := testManager(t) m.State = state defer m.Kill() @@ -138,7 +138,7 @@ func TestManagerRun_syncUpdate(t *testing.T) { t.Parallel() state := local.TestState(t) - m := NewManager() + m := testManager(t) m.State = state defer m.Kill() @@ -181,6 +181,16 @@ func TestManagerRun_syncUpdate(t *testing.T) { }) } +func testManager(t *testing.T) *Manager { + m := NewManager() + + // Set these periods low to speed up tests + m.CoalescePeriod = 1 * time.Millisecond + m.QuiescentPeriod = 1 * time.Millisecond + + return m +} + // testStateProxy registers a proxy with the given local state and the command // (expected to be from the helperProcess function call). It returns the // ID for deregistration. From bae428326ac45ae8fa3146db51db190a0c5c25ae Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 14:05:43 -0700 Subject: [PATCH 204/539] agent: use os.Executable --- agent/agent.go | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 212672ea9..fb4d6c4a7 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -41,7 +41,6 @@ import ( "github.com/hashicorp/memberlist" "github.com/hashicorp/raft" "github.com/hashicorp/serf/serf" - "github.com/kardianos/osext" "github.com/shirou/gopsutil/host" "golang.org/x/net/http2" ) @@ -2112,7 +2111,7 @@ func (a *Agent) RemoveProxy(proxyID string, persist bool) error { func (a *Agent) defaultProxyCommand() ([]string, error) { // Get the path to the current exectuable. This is cached once by the // library so this is effectively just a variable read. - execPath, err := osext.Executable() + execPath, err := os.Executable() if err != nil { return nil, err } From 31b09c0674110abde42f7a7ef268e2a6275c7f4c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 14:28:29 -0700 Subject: [PATCH 205/539] agent/local: remove outdated comment --- agent/local/state.go | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/agent/local/state.go b/agent/local/state.go index 03dbbd96c..a9c3ebade 100644 --- a/agent/local/state.go +++ b/agent/local/state.go @@ -573,11 +573,7 @@ func (l *State) CriticalCheckStates() map[types.CheckID]*CheckState { // assumes the proxy's NodeService is already registered via Agent.AddService // (since that has to do other book keeping). The token passed here is the ACL // token the service used to register itself so must have write on service -// record. -// -// AddProxy returns the newly added proxy, any replaced proxy, and an error. -// The second return value (replaced proxy) can be used to determine if -// the process needs to be updated or not. +// record. AddProxy returns the newly added proxy and an error. 
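+//
+// A minimal registration, assuming the target service (here "web") was
+// already added and token is its ACL token (values illustrative):
+//
+//    p, err := l.AddProxy(&structs.ConnectManagedProxy{
+//        ExecMode:        structs.ProxyExecModeDaemon,
+//        Command:         []string{"/bin/proxy"},
+//        TargetServiceID: "web",
+//    }, token)
+//    // p.Proxy.ProxyService.ID is the ID of the registered proxy service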
func (l *State) AddProxy(proxy *structs.ConnectManagedProxy, token string) (*ManagedProxy, error) { if proxy == nil { return nil, fmt.Errorf("no proxy") From 657c09133ae2413cd672e3da0975b9c20eaee929 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 14:31:03 -0700 Subject: [PATCH 206/539] agent/local: clarify the non-risk of a full buffer --- agent/local/state.go | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/agent/local/state.go b/agent/local/state.go index a9c3ebade..22d654af4 100644 --- a/agent/local/state.go +++ b/agent/local/state.go @@ -722,7 +722,11 @@ func (l *State) Proxies() map[string]*ManagedProxy { // NotifyProxy will register a channel to receive messages when the // configuration or set of proxies changes. This will not block on -// channel send so ensure the channel has a large enough buffer. +// channel send so ensure the channel has a buffer. Note that any buffer +// size is generally fine since actual data is not sent over the channel, +// so a dropped send due to a full buffer does not result in any loss of +// data. The fact that a buffer already contains a notification means that +// the receiver will still be notified that changes occurred. // // NOTE(mitchellh): This could be more generalized but for my use case I // only needed proxy events. In the future if it were to be generalized I From ed14e9edf8cac2d0e38d3763add0e1a798703038 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 08:52:36 -0700 Subject: [PATCH 207/539] agent: resolve some conflicts and fix tests --- agent/agent_endpoint.go | 6 +++--- agent/agent_endpoint_test.go | 26 +++++++++++++------------- agent/local/state_test.go | 3 ++- 3 files changed, 18 insertions(+), 17 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index fde7ca5e2..ea81ba0b1 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -1021,7 +1021,7 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http // done deeper though as it will be needed for actually managing proxy // lifecycle. command := proxy.Proxy.Command - if command == "" { + if len(command) == 0 { if execMode == "daemon" { command = s.agent.config.ConnectProxyDefaultDaemonCommand } @@ -1030,8 +1030,8 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http } } // No global defaults set either... - if command == "" { - command = "consul connect proxy" + if len(command) == 0 { + command = []string{"consul", "connect", "proxy"} } // Set defaults for anything that is still not specified but required. 
diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index e6a47cbaa..98f95e69e 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2354,7 +2354,7 @@ func TestAgentConnectProxyConfig_Blocking(t *testing.T) { TargetServiceName: "test", ContentHash: "365a50cbb9a748b6", ExecMode: "daemon", - Command: nil, + Command: []string{"consul", "connect", "proxy"}, Config: map[string]interface{}{ "upstreams": []interface{}{ map[string]interface{}{ @@ -2372,7 +2372,7 @@ func TestAgentConnectProxyConfig_Blocking(t *testing.T) { ur, err := copystructure.Copy(expectedResponse) require.NoError(t, err) updatedResponse := ur.(*api.ConnectProxyConfig) - updatedResponse.ContentHash = "b5bb0e4a0a58ca25" + updatedResponse.ContentHash = "538d0366b7b1dc3e" upstreams := updatedResponse.Config["upstreams"].([]interface{}) upstreams = append(upstreams, map[string]interface{}{ @@ -2538,7 +2538,7 @@ func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { globalConfig string proxy structs.ServiceDefinitionConnectProxy wantMode api.ProxyExecMode - wantCommand string + wantCommand []string wantConfig map[string]interface{} }{ { @@ -2555,7 +2555,7 @@ func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { `, proxy: structs.ServiceDefinitionConnectProxy{}, wantMode: api.ProxyExecModeDaemon, - wantCommand: "consul connect proxy", + wantCommand: []string{"consul", "connect", "proxy"}, wantConfig: map[string]interface{}{ "bind_address": "0.0.0.0", "bind_port": 10000, // "randomly" chosen from our range of 1 @@ -2572,13 +2572,13 @@ func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { bind_min_port = 10000 bind_max_port = 10000 exec_mode = "script" - script_command = "script.sh" + script_command = ["script.sh"] } } `, proxy: structs.ServiceDefinitionConnectProxy{}, wantMode: api.ProxyExecModeScript, - wantCommand: "script.sh", + wantCommand: []string{"script.sh"}, wantConfig: map[string]interface{}{ "bind_address": "0.0.0.0", "bind_port": 10000, // "randomly" chosen from our range of 1 @@ -2595,13 +2595,13 @@ func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { bind_min_port = 10000 bind_max_port = 10000 exec_mode = "daemon" - daemon_command = "daemon.sh" + daemon_command = ["daemon.sh"] } } `, proxy: structs.ServiceDefinitionConnectProxy{}, wantMode: api.ProxyExecModeDaemon, - wantCommand: "daemon.sh", + wantCommand: []string{"daemon.sh"}, wantConfig: map[string]interface{}{ "bind_address": "0.0.0.0", "bind_port": 10000, // "randomly" chosen from our range of 1 @@ -2629,7 +2629,7 @@ func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { }, }, wantMode: api.ProxyExecModeDaemon, - wantCommand: "consul connect proxy", + wantCommand: []string{"consul", "connect", "proxy"}, wantConfig: map[string]interface{}{ "bind_address": "0.0.0.0", "bind_port": 10000, // "randomly" chosen from our range of 1 @@ -2648,8 +2648,8 @@ func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { bind_min_port = 10000 bind_max_port = 10000 exec_mode = "daemon" - daemon_command = "daemon.sh" - script_command = "script.sh" + daemon_command = ["daemon.sh"] + script_command = ["script.sh"] config = { connect_timeout_ms = 1000 } @@ -2658,7 +2658,7 @@ func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { `, proxy: structs.ServiceDefinitionConnectProxy{ ExecMode: "script", - Command: "foo.sh", + Command: []string{"foo.sh"}, Config: map[string]interface{}{ "connect_timeout_ms": 2000, "bind_address": "127.0.0.1", @@ -2667,7 +2667,7 @@ func 
TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { }, }, wantMode: api.ProxyExecModeScript, - wantCommand: "foo.sh", + wantCommand: []string{"foo.sh"}, wantConfig: map[string]interface{}{ "bind_address": "127.0.0.1", "bind_port": float64(1024), diff --git a/agent/local/state_test.go b/agent/local/state_test.go index 800c017d6..3f416fbb3 100644 --- a/agent/local/state_test.go +++ b/agent/local/state_test.go @@ -1725,8 +1725,9 @@ func TestStateProxyManagement(t *testing.T) { { // Re-registering same proxy again should not pick a random port but re-use // the assigned one. - svcDup, err := state.AddProxy(&p1, "fake-token") + pstateDup, err := state.AddProxy(&p1, "fake-token") require.NoError(err) + svcDup := pstateDup.Proxy.ProxyService assert.Equal("web-proxy", svcDup.ID) assert.Equal("web-proxy", svcDup.Service) From 917db9770da409e1e6c708c2dfe51bdba1c7c31d Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 09:24:20 -0700 Subject: [PATCH 208/539] Remove temporary hacks from Makefile --- GNUmakefile | 3 --- 1 file changed, 3 deletions(-) diff --git a/GNUmakefile b/GNUmakefile index 660a82725..40366e317 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -40,9 +40,6 @@ bin: tools dev: changelogfmt vendorfmt dev-build dev-build: - @echo "--> TEMPORARY HACK: installing hashstructure to make CI pass until we vendor it upstream" - go get github.com/mitchellh/hashstructure - go get github.com/stretchr/testify/mock @echo "--> Building consul" mkdir -p pkg/$(GOOS)_$(GOARCH)/ bin/ go install -ldflags '$(GOLDFLAGS)' -tags '$(GOTAGS)' From 52665f7d2366de632894c5b56e4e1ba825fadd91 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 10:44:10 -0700 Subject: [PATCH 209/539] agent: clean up defaulting of proxy configuration This cleans up and unifies how proxy settings defaults are applied. --- agent/agent.go | 86 ++++++++++++++++++++--------- agent/agent_endpoint.go | 31 +---------- agent/agent_endpoint_test.go | 15 +++-- agent/agent_test.go | 3 + agent/proxy/manager.go | 3 - agent/structs/connect.go | 19 ++++++- agent/structs/service_definition.go | 18 +----- 7 files changed, 93 insertions(+), 82 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index fb4d6c4a7..6f77547e6 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2051,20 +2051,11 @@ func (a *Agent) AddProxy(proxy *structs.ConnectManagedProxy, persist bool) error // Lookup the target service token in state if there is one. token := a.State.ServiceToken(proxy.TargetServiceID) - // Determine if we need to default the command - if proxy.ExecMode == structs.ProxyExecModeDaemon && len(proxy.Command) == 0 { - // We use the globally configured default command. If it is empty - // then we need to determine the subcommand for this agent. - cmd := a.config.ConnectProxyDefaultDaemonCommand - if len(cmd) == 0 { - var err error - cmd, err = a.defaultProxyCommand() - if err != nil { - return err - } - } - - proxy.CommandDefault = cmd + // Copy the basic proxy structure so it isn't modified w/ defaults + proxyCopy := *proxy + proxy = &proxyCopy + if err := a.applyProxyDefaults(proxy); err != nil { + return err } // Add the proxy to local state first since we may need to assign a port which @@ -2090,6 +2081,47 @@ func (a *Agent) AddProxy(proxy *structs.ConnectManagedProxy, persist bool) error return nil } +// applyProxyDefaults modifies the given proxy by applying any configured +// defaults, such as the default execution mode, command, etc. 
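+//
+// Roughly: values set on the registration itself win, then the agent's
+// configured connect proxy defaults, then a built-in fallback command
+// ("consul connect proxy" via the current executable). A sketch of the
+// effect for a registration with nothing specified (illustrative):
+//
+//    p := &structs.ConnectManagedProxy{TargetServiceID: "web"}
+//    _ = a.applyProxyDefaults(p)
+//    // p.ExecMode == structs.ProxyExecModeDaemon (absent configured defaults)
+//    // p.Command == the defaultProxyCommand() fallback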
+func (a *Agent) applyProxyDefaults(proxy *structs.ConnectManagedProxy) error { + // Set the default exec mode + if proxy.ExecMode == structs.ProxyExecModeUnspecified { + mode, err := structs.NewProxyExecMode(a.config.ConnectProxyDefaultExecMode) + if err != nil { + return err + } + + proxy.ExecMode = mode + } + if proxy.ExecMode == structs.ProxyExecModeUnspecified { + proxy.ExecMode = structs.ProxyExecModeDaemon + } + + // Set the default command to the globally configured default + if len(proxy.Command) == 0 { + switch proxy.ExecMode { + case structs.ProxyExecModeDaemon: + proxy.Command = a.config.ConnectProxyDefaultDaemonCommand + + case structs.ProxyExecModeScript: + proxy.Command = a.config.ConnectProxyDefaultScriptCommand + } + } + + // If there is no globally configured default we need to get the + // default command so we can do "consul connect proxy" + if len(proxy.Command) == 0 { + command, err := defaultProxyCommand() + if err != nil { + return err + } + + proxy.Command = command + } + + return nil +} + // RemoveProxy stops and removes a local proxy instance. func (a *Agent) RemoveProxy(proxyID string, persist bool) error { // Validate proxyID @@ -2107,19 +2139,6 @@ func (a *Agent) RemoveProxy(proxyID string, persist bool) error { return nil } -// defaultProxyCommand returns the default Connect managed proxy command. -func (a *Agent) defaultProxyCommand() ([]string, error) { - // Get the path to the current exectuable. This is cached once by the - // library so this is effectively just a variable read. - execPath, err := os.Executable() - if err != nil { - return nil, err - } - - // "consul connect proxy" default value for managed daemon proxy - return []string{execPath, "connect", "proxy"}, nil -} - func (a *Agent) cancelCheckMonitors(checkID types.CheckID) { // Stop any monitors delete(a.checkReapAfter, checkID) @@ -2751,3 +2770,16 @@ func (a *Agent) registerCache() { RefreshTimeout: 10 * time.Minute, }) } + +// defaultProxyCommand returns the default Connect managed proxy command. +func defaultProxyCommand() ([]string, error) { + // Get the path to the current exectuable. This is cached once by the + // library so this is effectively just a variable read. + execPath, err := os.Executable() + if err != nil { + return nil, err + } + + // "consul connect proxy" default value for managed daemon proxy + return []string{execPath, "connect", "proxy"}, nil +} diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index ea81ba0b1..8f080ea7a 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -1007,33 +1007,6 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http } } - execMode := "daemon" - // If there is a global default mode use that instead - if s.agent.config.ConnectProxyDefaultExecMode != "" { - execMode = s.agent.config.ConnectProxyDefaultExecMode - } - // If it's actually set though, use the one set - if proxy.Proxy.ExecMode != structs.ProxyExecModeUnspecified { - execMode = proxy.Proxy.ExecMode.String() - } - - // TODO(banks): default the binary to current binary. Probably needs to be - // done deeper though as it will be needed for actually managing proxy - // lifecycle. - command := proxy.Proxy.Command - if len(command) == 0 { - if execMode == "daemon" { - command = s.agent.config.ConnectProxyDefaultDaemonCommand - } - if execMode == "script" { - command = s.agent.config.ConnectProxyDefaultScriptCommand - } - } - // No global defaults set either... 
- if len(command) == 0 { - command = []string{"consul", "connect", "proxy"} - } - // Set defaults for anything that is still not specified but required. // Note that these are not included in the content hash. Since we expect // them to be static in general but some like the default target service @@ -1061,8 +1034,8 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http TargetServiceID: target.ID, TargetServiceName: target.Service, ContentHash: contentHash, - ExecMode: api.ProxyExecMode(execMode), - Command: command, + ExecMode: api.ProxyExecMode(proxy.Proxy.ExecMode.String()), + Command: proxy.Proxy.Command, Config: config, } return contentHash, reply, nil diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 98f95e69e..fa01eab89 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2334,6 +2334,7 @@ func TestAgentConnectProxyConfig_Blocking(t *testing.T) { }, Connect: &structs.ServiceDefinitionConnect{ Proxy: &structs.ServiceDefinitionConnectProxy{ + Command: []string{"tubes.sh"}, Config: map[string]interface{}{ "bind_port": 1234, "connect_timeout_ms": 500, @@ -2352,9 +2353,9 @@ func TestAgentConnectProxyConfig_Blocking(t *testing.T) { ProxyServiceID: "test-proxy", TargetServiceID: "test", TargetServiceName: "test", - ContentHash: "365a50cbb9a748b6", + ContentHash: "4662e51e78609569", ExecMode: "daemon", - Command: []string{"consul", "connect", "proxy"}, + Command: []string{"tubes.sh"}, Config: map[string]interface{}{ "upstreams": []interface{}{ map[string]interface{}{ @@ -2372,7 +2373,7 @@ func TestAgentConnectProxyConfig_Blocking(t *testing.T) { ur, err := copystructure.Copy(expectedResponse) require.NoError(t, err) updatedResponse := ur.(*api.ConnectProxyConfig) - updatedResponse.ContentHash = "538d0366b7b1dc3e" + updatedResponse.ContentHash = "23b5b6b3767601e1" upstreams := updatedResponse.Config["upstreams"].([]interface{}) upstreams = append(upstreams, map[string]interface{}{ @@ -2519,6 +2520,10 @@ func TestAgentConnectProxyConfig_Blocking(t *testing.T) { func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { t.Parallel() + // Get the default command to compare below + defaultCommand, err := defaultProxyCommand() + require.NoError(t, err) + // Define a local service with a managed proxy. It's registered in the test // loop to make sure agent state is predictable whatever order tests execute // since some alter this service config. @@ -2555,7 +2560,7 @@ func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { `, proxy: structs.ServiceDefinitionConnectProxy{}, wantMode: api.ProxyExecModeDaemon, - wantCommand: []string{"consul", "connect", "proxy"}, + wantCommand: defaultCommand, wantConfig: map[string]interface{}{ "bind_address": "0.0.0.0", "bind_port": 10000, // "randomly" chosen from our range of 1 @@ -2629,7 +2634,7 @@ func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { }, }, wantMode: api.ProxyExecModeDaemon, - wantCommand: []string{"consul", "connect", "proxy"}, + wantCommand: defaultCommand, wantConfig: map[string]interface{}{ "bind_address": "0.0.0.0", "bind_port": 10000, // "randomly" chosen from our range of 1 diff --git a/agent/agent_test.go b/agent/agent_test.go index 768d1c951..d0fcbff5e 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -2389,6 +2389,7 @@ func TestAgent_AddProxy(t *testing.T) { // Test the ID was created as we expect. 
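The package-level defaultProxyCommand above derives the fallback command from the agent's own binary so a managed daemon re-invokes the same executable. In isolation the technique is just:

```go
package main

import (
	"fmt"
	"os"
)

// selfCommand builds a command line that re-invokes the current binary
// with the given subcommand, e.g. "connect proxy" in the agent above.
func selfCommand(sub ...string) ([]string, error) {
	exe, err := os.Executable()
	if err != nil {
		return nil, err
	}
	return append([]string{exe}, sub...), nil
}

func main() {
	cmd, err := selfCommand("connect", "proxy")
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd)
}
```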
got := a.State.Proxy("web-proxy") + tt.proxy.ProxyService = got.Proxy.ProxyService require.Equal(tt.proxy, got.Proxy) }) } @@ -2412,12 +2413,14 @@ func TestAgent_RemoveProxy(t *testing.T) { // Add a proxy for web pReg := &structs.ConnectManagedProxy{ TargetServiceID: "web", + ExecMode: structs.ProxyExecModeDaemon, Command: []string{"foo"}, } require.NoError(a.AddProxy(pReg, false)) // Test the ID was created as we expect. gotProxy := a.State.Proxy("web-proxy") + gotProxy.Proxy.ProxyService = nil require.Equal(pReg, gotProxy.Proxy) err := a.RemoveProxy("web-proxy", false) diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index 4e45d22a8..4b9971d4c 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -332,9 +332,6 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { switch p.ExecMode { case structs.ProxyExecModeDaemon: command := p.Command - if len(command) == 0 { - command = p.CommandDefault - } // This should never happen since validation should happen upstream // but verify it because the alternative is to panic below. diff --git a/agent/structs/connect.go b/agent/structs/connect.go index aca9764fa..e08d01646 100644 --- a/agent/structs/connect.go +++ b/agent/structs/connect.go @@ -1,6 +1,8 @@ package structs import ( + "fmt" + "github.com/mitchellh/mapstructure" ) @@ -40,6 +42,20 @@ const ( ProxyExecModeTest ) +// NewProxyExecMode returns the proper ProxyExecMode for the given string value. +func NewProxyExecMode(raw string) (ProxyExecMode, error) { + switch raw { + case "": + return ProxyExecModeUnspecified, nil + case "daemon": + return ProxyExecModeDaemon, nil + case "script": + return ProxyExecModeScript, nil + default: + return 0, fmt.Errorf("invalid exec mode: %s", raw) + } +} + // String implements Stringer func (m ProxyExecMode) String() string { switch m { @@ -73,9 +89,6 @@ type ConnectManagedProxy struct { // for ProxyExecModeScript. Command []string - // CommandDefault is the default command to execute if Command is empty. - CommandDefault []string `json:"-" hash:"ignore"` - // Config is the arbitrary configuration data provided with the registration. Config map[string]interface{} diff --git a/agent/structs/service_definition.go b/agent/structs/service_definition.go index 7163b5549..be69a7b57 100644 --- a/agent/structs/service_definition.go +++ b/agent/structs/service_definition.go @@ -1,9 +1,5 @@ package structs -import ( - "fmt" -) - // ServiceDefinition is used to JSON decode the Service definitions. For // documentation on specific fields see NodeService which is better documented. type ServiceDefinition struct { @@ -55,17 +51,9 @@ func (s *ServiceDefinition) ConnectManagedProxy() (*ConnectManagedProxy, error) // which we shouldn't hard code ourselves here... 
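NewProxyExecMode above becomes the single translation point from registration strings to ProxyExecMode values, with the existing String method covering the reverse direction. A quick illustration of the round trip using the real package; only "", "daemon" and "script" are accepted:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/consul/agent/structs"
)

func main() {
	for _, raw := range []string{"", "daemon", "script", "bogus"} {
		mode, err := structs.NewProxyExecMode(raw)
		if err != nil {
			// "bogus" lands here: unknown modes are rejected, not defaulted.
			fmt.Printf("%q -> error: %s\n", raw, err)
			continue
		}
		fmt.Printf("%q -> %s\n", raw, mode)
	}
}
```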
ns := s.NodeService() - execMode := ProxyExecModeUnspecified - switch s.Connect.Proxy.ExecMode { - case "": - // Use default - break - case "daemon": - execMode = ProxyExecModeDaemon - case "script": - execMode = ProxyExecModeScript - default: - return nil, fmt.Errorf("invalid exec mode: %s", s.Connect.Proxy.ExecMode) + execMode, err := NewProxyExecMode(s.Connect.Proxy.ExecMode) + if err != nil { + return nil, err } p := &ConnectManagedProxy{ From 515c47be7d82cdbaecf08ae98d016a7d5c815568 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 12:51:47 -0700 Subject: [PATCH 210/539] agent: add additional tests for defaulting in AddProxy --- agent/agent_test.go | 61 +++++++++++++++++++++++++++++++++++++++++---- 1 file changed, 56 insertions(+), 5 deletions(-) diff --git a/agent/agent_test.go b/agent/agent_test.go index d0fcbff5e..6219b70da 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -2334,6 +2334,14 @@ func TestAgent_AddProxy(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), ` node_name = "node1" + + connect { + proxy_defaults { + exec_mode = "script" + daemon_command = ["foo", "bar"] + script_command = ["bar", "foo"] + } + } `) defer a.Shutdown() @@ -2345,9 +2353,9 @@ func TestAgent_AddProxy(t *testing.T) { require.NoError(t, a.AddService(reg, nil, false, "")) tests := []struct { - desc string - proxy *structs.ConnectManagedProxy - wantErr bool + desc string + proxy, wantProxy *structs.ConnectManagedProxy + wantErr bool }{ { desc: "basic proxy adding, unregistered service", @@ -2374,6 +2382,45 @@ func TestAgent_AddProxy(t *testing.T) { }, wantErr: false, }, + { + desc: "default global exec mode", + proxy: &structs.ConnectManagedProxy{ + Command: []string{"consul", "connect", "proxy"}, + TargetServiceID: "web", + }, + wantProxy: &structs.ConnectManagedProxy{ + ExecMode: structs.ProxyExecModeScript, + Command: []string{"consul", "connect", "proxy"}, + TargetServiceID: "web", + }, + wantErr: false, + }, + { + desc: "default daemon command", + proxy: &structs.ConnectManagedProxy{ + ExecMode: structs.ProxyExecModeDaemon, + TargetServiceID: "web", + }, + wantProxy: &structs.ConnectManagedProxy{ + ExecMode: structs.ProxyExecModeDaemon, + Command: []string{"foo", "bar"}, + TargetServiceID: "web", + }, + wantErr: false, + }, + { + desc: "default script command", + proxy: &structs.ConnectManagedProxy{ + ExecMode: structs.ProxyExecModeScript, + TargetServiceID: "web", + }, + wantProxy: &structs.ConnectManagedProxy{ + ExecMode: structs.ProxyExecModeScript, + Command: []string{"bar", "foo"}, + TargetServiceID: "web", + }, + wantErr: false, + }, } for _, tt := range tests { @@ -2389,8 +2436,12 @@ func TestAgent_AddProxy(t *testing.T) { // Test the ID was created as we expect. 
got := a.State.Proxy("web-proxy") - tt.proxy.ProxyService = got.Proxy.ProxyService - require.Equal(tt.proxy, got.Proxy) + wantProxy := tt.wantProxy + if wantProxy == nil { + wantProxy = tt.proxy + } + wantProxy.ProxyService = got.Proxy.ProxyService + require.Equal(wantProxy, got.Proxy) }) } } From 49bc7181a4b94997e97a0a0821184d95604a1083 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 20:11:58 -0700 Subject: [PATCH 211/539] agent/proxy: send logs to the correct location for daemon proxies --- agent/agent.go | 1 + agent/proxy/daemon.go | 4 --- agent/proxy/manager.go | 56 ++++++++++++++++++++++++++++++ agent/proxy/manager_test.go | 69 +++++++++++++++++++++++++++++++++---- agent/proxy/proxy_test.go | 5 +++ 5 files changed, 124 insertions(+), 11 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 6f77547e6..128a40a4a 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -362,6 +362,7 @@ func (a *Agent) Start() error { a.proxyManager = proxy.NewManager() a.proxyManager.State = a.State a.proxyManager.Logger = a.logger + a.proxyManager.LogDir = filepath.Join(a.config.DataDir, "proxy", "logs") go a.proxyManager.Run() // Start watching for critical services to deregister, based on their diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index a930c978b..d6b68ad3a 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -178,10 +178,6 @@ func (p *Daemon) start() (*os.Process, error) { copy(cmd.Env, p.Command.Env) cmd.Env = append(cmd.Env, fmt.Sprintf("%s=%s", EnvProxyToken, p.ProxyToken)) - // TODO(mitchellh): temporary until we introduce the file based logging - cmd.Stdout = os.Stdout - cmd.Stderr = os.Stderr - // Args must always contain a 0 entry which is usually the executed binary. // To be safe and a bit more robust we default this, but only to prevent // a panic below. diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index 4b9971d4c..01c0b142c 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -5,6 +5,7 @@ import ( "log" "os" "os/exec" + "path/filepath" "sync" "time" @@ -53,6 +54,12 @@ type Manager struct { // implementation type. Logger *log.Logger + // LogDir is the path to the directory where logs will be written + // for daemon mode proxies. This directory will be created if it does + // not exist. If this is empty then logs will be dumped into the + // working directory. + LogDir string + // CoalescePeriod and QuiescencePeriod control the timers for coalescing // updates from the local state. See the defaults at the top of this // file for more documentation. 
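The daemon changes above stop inheriting the agent's own stdout/stderr and instead hand the child process opened log files. Outside the manager, the mechanism is simply assigning files to an exec.Cmd before starting it. A small sketch; the file names are placeholders, and a real supervisor would also create the log directory and rely on external rotation as the manager does:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	flags := os.O_APPEND | os.O_WRONLY | os.O_CREATE
	stdout, err := os.OpenFile("child-stdout.log", flags, 0600)
	if err != nil {
		panic(err)
	}
	defer stdout.Close()
	stderr, err := os.OpenFile("child-stderr.log", flags, 0600)
	if err != nil {
		panic(err)
	}
	defer stderr.Close()

	cmd := exec.Command("echo", "hello")
	cmd.Stdout = stdout // everything the child prints goes to the files,
	cmd.Stderr = stderr // not to the parent's terminal
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```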
These will be set to those defaults @@ -328,6 +335,13 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { return nil, fmt.Errorf("internal error: nil *local.ManagedProxy or Proxy field") } + // Attempt to create the log directory now that we have a proxy + if m.LogDir != "" { + if err := os.MkdirAll(m.LogDir, 0700); err != nil { + m.Logger.Printf("[ERROR] agent/proxy: failed to create log directory: %s", err) + } + } + p := mp.Proxy switch p.ExecMode { case structs.ProxyExecModeDaemon: @@ -343,6 +357,9 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { var cmd exec.Cmd cmd.Path = command[0] cmd.Args = command // idx 0 is path but preserved since it should be + if err := m.configureLogDir(p.ProxyService.ID, &cmd); err != nil { + return nil, fmt.Errorf("error configuring proxy logs: %s", err) + } // Build the daemon structure return &Daemon{ @@ -355,3 +372,42 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { return nil, fmt.Errorf("unsupported managed proxy type: %q", p.ExecMode) } } + +// configureLogDir sets up the file descriptors to stdout/stderr so that +// they log to the proper file path for the given service ID. +func (m *Manager) configureLogDir(id string, cmd *exec.Cmd) error { + // Create the log directory + if m.LogDir != "" { + if err := os.MkdirAll(m.LogDir, 0700); err != nil { + return err + } + } + + // Configure the stdout, stderr paths + stdoutPath := logPath(m.LogDir, id, "stdout") + stderrPath := logPath(m.LogDir, id, "stderr") + + // Open the files. We want to append to each. We expect these files + // to be rotated by some external process. + stdoutF, err := os.OpenFile(stdoutPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600) + if err != nil { + return fmt.Errorf("error creating stdout file: %s", err) + } + stderrF, err := os.OpenFile(stderrPath, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600) + if err != nil { + // Don't forget to close stdoutF which successfully opened + stdoutF.Close() + + return fmt.Errorf("error creating stderr file: %s", err) + } + + cmd.Stdout = stdoutF + cmd.Stderr = stderrF + return nil +} + +// logPath is a helper to return the path to the log file for the given +// directory, service ID, and stream type (stdout or stderr). +func logPath(dir, id, stream string) string { + return filepath.Join(dir, fmt.Sprintf("%s-%s.log", id, stream)) +} diff --git a/agent/proxy/manager_test.go b/agent/proxy/manager_test.go index 4ee84f56a..0b5640241 100644 --- a/agent/proxy/manager_test.go +++ b/agent/proxy/manager_test.go @@ -1,6 +1,7 @@ package proxy import ( + "io/ioutil" "os" "os/exec" "path/filepath" @@ -17,7 +18,8 @@ func TestManagerClose_noRun(t *testing.T) { t.Parallel() // Really we're testing that it doesn't deadlock here. 
- m := testManager(t) + m, closer := testManager(t) + defer closer() require.NoError(t, m.Close()) // Close again for sanity @@ -30,7 +32,8 @@ func TestManagerRun_initialSync(t *testing.T) { t.Parallel() state := local.TestState(t) - m := testManager(t) + m, closer := testManager(t) + defer closer() m.State = state defer m.Kill() @@ -57,7 +60,8 @@ func TestManagerRun_syncNew(t *testing.T) { t.Parallel() state := local.TestState(t) - m := testManager(t) + m, closer := testManager(t) + defer closer() m.State = state defer m.Kill() @@ -99,7 +103,8 @@ func TestManagerRun_syncDelete(t *testing.T) { t.Parallel() state := local.TestState(t) - m := testManager(t) + m, closer := testManager(t) + defer closer() m.State = state defer m.Kill() @@ -138,7 +143,8 @@ func TestManagerRun_syncUpdate(t *testing.T) { t.Parallel() state := local.TestState(t) - m := testManager(t) + m, closer := testManager(t) + defer closer() m.State = state defer m.Kill() @@ -181,14 +187,63 @@ func TestManagerRun_syncUpdate(t *testing.T) { }) } -func testManager(t *testing.T) *Manager { +func TestManagerRun_daemonLogs(t *testing.T) { + t.Parallel() + + require := require.New(t) + state := local.TestState(t) + m, closer := testManager(t) + defer closer() + m.State = state + defer m.Kill() + + // Configure a log dir so that we can read the logs + td, closer := testTempDir(t) + defer closer() + m.LogDir = filepath.Join(td, "logs") + + // Create the service and calculate the log paths + id := testStateProxy(t, state, "web", helperProcess("output")) + stdoutPath := logPath(m.LogDir, id, "stdout") + stderrPath := logPath(m.LogDir, id, "stderr") + + // Start the manager + go m.Run() + + // We should see the path appear shortly + retry.Run(t, func(r *retry.R) { + if _, err := os.Stat(stdoutPath); err != nil { + r.Fatalf("error waiting for stdout path: %s", err) + } + + if _, err := os.Stat(stderrPath); err != nil { + r.Fatalf("error waiting for stderr path: %s", err) + } + }) + + expectedOut := "hello stdout\n" + actual, err := ioutil.ReadFile(stdoutPath) + require.NoError(err) + require.Equal([]byte(expectedOut), actual) + + expectedErr := "hello stderr\n" + actual, err = ioutil.ReadFile(stderrPath) + require.NoError(err) + require.Equal([]byte(expectedErr), actual) +} + +func testManager(t *testing.T) (*Manager, func()) { m := NewManager() // Set these periods low to speed up tests m.CoalescePeriod = 1 * time.Millisecond m.QuiescentPeriod = 1 * time.Millisecond - return m + // Setup a temporary directory for logs + td, closer := testTempDir(t) + m.LogDir = td + + return m, func() { closer() } } // testStateProxy registers a proxy with the given local state and the command diff --git a/agent/proxy/proxy_test.go b/agent/proxy/proxy_test.go index 71cfd4ebc..d79ea5bc8 100644 --- a/agent/proxy/proxy_test.go +++ b/agent/proxy/proxy_test.go @@ -138,6 +138,11 @@ func TestHelperProcess(t *testing.T) { // Run forever <-make(chan struct{}) + case "output": + fmt.Fprintf(os.Stdout, "hello stdout\n") + fmt.Fprintf(os.Stderr, "hello stderr\n") + <-make(chan struct{}) + default: fmt.Fprintf(os.Stderr, "Unknown command: %q\n", cmd) os.Exit(2) From d019d33bc61ce4f6e749f5b3ebc08f9da7143ea0 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 20:13:40 -0700 Subject: [PATCH 212/539] agent/proxy: don't create the directory in newProxy --- agent/proxy/manager.go | 7 ------- 1 file changed, 7 deletions(-) diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index 01c0b142c..bc2a8e728 100644 --- a/agent/proxy/manager.go +++ 
b/agent/proxy/manager.go @@ -335,13 +335,6 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { return nil, fmt.Errorf("internal error: nil *local.ManagedProxy or Proxy field") } - // Attempt to create the log directory now that we have a proxy - if m.LogDir != "" { - if err := os.MkdirAll(m.LogDir, 0700); err != nil { - m.Logger.Printf("[ERROR] agent/proxy: failed to create log directory: %s", err) - } - } - p := mp.Proxy switch p.ExecMode { case structs.ProxyExecModeDaemon: From e2133fd39140a622dfabf0bb2045a077350b0175 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 20:16:29 -0700 Subject: [PATCH 213/539] agent/proxy: make the logs test a bit more robust by waiting for file --- agent/proxy/manager_test.go | 9 +++------ agent/proxy/proxy_test.go | 12 ++++++++++++ 2 files changed, 15 insertions(+), 6 deletions(-) diff --git a/agent/proxy/manager_test.go b/agent/proxy/manager_test.go index 0b5640241..3bf200805 100644 --- a/agent/proxy/manager_test.go +++ b/agent/proxy/manager_test.go @@ -203,7 +203,8 @@ func TestManagerRun_daemonLogs(t *testing.T) { m.LogDir = filepath.Join(td, "logs") // Create the service and calculate the log paths - id := testStateProxy(t, state, "web", helperProcess("output")) + path := filepath.Join(td, "notify") + id := testStateProxy(t, state, "web", helperProcess("output", path)) stdoutPath := logPath(m.LogDir, id, "stdout") stderrPath := logPath(m.LogDir, id, "stderr") @@ -212,13 +213,9 @@ func TestManagerRun_daemonLogs(t *testing.T) { // We should see the path appear shortly retry.Run(t, func(r *retry.R) { - if _, err := os.Stat(stdoutPath); err != nil { + if _, err := os.Stat(path); err != nil { r.Fatalf("error waiting for stdout path: %s", err) } - - if _, err := os.Stat(stderrPath); err != nil { - r.Fatalf("error waiting for stderr path: %s", err) - } }) expectedOut := "hello stdout\n" diff --git a/agent/proxy/proxy_test.go b/agent/proxy/proxy_test.go index d79ea5bc8..d0812fc07 100644 --- a/agent/proxy/proxy_test.go +++ b/agent/proxy/proxy_test.go @@ -141,6 +141,18 @@ func TestHelperProcess(t *testing.T) { case "output": fmt.Fprintf(os.Stdout, "hello stdout\n") fmt.Fprintf(os.Stderr, "hello stderr\n") + + // Sync to be sure it is written out of buffers + os.Stdout.Sync() + os.Stderr.Sync() + + // Output a file to signal we've written to stdout/err + path := args[0] + if err := ioutil.WriteFile(path, []byte("hello"), 0644); err != nil { + fmt.Fprintf(os.Stderr, "Error: %s\n", err) + os.Exit(1) + } + <-make(chan struct{}) default: From 09093a1a1aaa15893b71767b689989ee3613457f Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 2 May 2018 20:34:23 -0700 Subject: [PATCH 214/539] agent/proxy: change LogDir to DataDir to reuse for other things --- agent/proxy/manager.go | 28 +++++++++++++++++++--------- agent/proxy/manager_test.go | 12 +++++------- 2 files changed, 24 insertions(+), 16 deletions(-) diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index bc2a8e728..7ed278037 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -54,11 +54,19 @@ type Manager struct { // implementation type. Logger *log.Logger - // LogDir is the path to the directory where logs will be written - // for daemon mode proxies. This directory will be created if it does - // not exist. If this is empty then logs will be dumped into the - // working directory. - LogDir string + // DataDir is the path to the directory where data for proxies is + // written, including snapshots for any state changes in the manager. 
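The tests above exercise real child processes through the usual os/exec helper-process trick: helperProcess re-runs the test binary, and TestHelperProcess plays the role of the child's main, dispatching on cases such as the "output" one added here. The full wiring lives in agent/proxy/proxy_test.go and is not shown in these hunks, so the sketch below is an approximation of the pattern rather than the exact code:

```go
package helperdemo

import (
	"fmt"
	"os"
	"os/exec"
	"testing"
)

// helperProcess returns a command that re-invokes the running test binary
// so TestHelperProcess can act as a tiny child program.
func helperProcess(args ...string) *exec.Cmd {
	cmdArgs := append([]string{"-test.run=TestHelperProcess", "--"}, args...)
	cmd := exec.Command(os.Args[0], cmdArgs...)
	cmd.Env = append(os.Environ(), "GO_WANT_HELPER_PROCESS=1")
	return cmd
}

// TestHelperProcess is not a real test: it exits immediately unless the
// marker variable is set, then behaves like the requested sub-command.
func TestHelperProcess(t *testing.T) {
	if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
		return
	}
	args := os.Args
	for i, arg := range args {
		if arg == "--" {
			args = args[i+1:]
			break
		}
	}
	if len(args) == 0 {
		fmt.Fprintln(os.Stderr, "no command given")
		os.Exit(2)
	}
	fmt.Println("child command:", args[0])
	os.Exit(0)
}
```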
+ // Within the data dir, files will be written in the following locatins: + // + // * logs/ - log files named -std{out|err}.log + // * pids/ - pid files for daemons named .pid + // * state.ext - the state of the manager + // + DataDir string + + // SnapshotDir is the path to the directory where snapshots will + // be written + SnapshotDir string // CoalescePeriod and QuiescencePeriod control the timers for coalescing // updates from the local state. See the defaults at the top of this @@ -370,15 +378,17 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { // they log to the proper file path for the given service ID. func (m *Manager) configureLogDir(id string, cmd *exec.Cmd) error { // Create the log directory - if m.LogDir != "" { - if err := os.MkdirAll(m.LogDir, 0700); err != nil { + logDir := "" + if m.DataDir != "" { + logDir = filepath.Join(m.DataDir, "logs") + if err := os.MkdirAll(logDir, 0700); err != nil { return err } } // Configure the stdout, stderr paths - stdoutPath := logPath(m.LogDir, id, "stdout") - stderrPath := logPath(m.LogDir, id, "stderr") + stdoutPath := logPath(logDir, id, "stdout") + stderrPath := logPath(logDir, id, "stderr") // Open the files. We want to append to each. We expect these files // to be rotated by some external process. diff --git a/agent/proxy/manager_test.go b/agent/proxy/manager_test.go index 3bf200805..d14b71afa 100644 --- a/agent/proxy/manager_test.go +++ b/agent/proxy/manager_test.go @@ -198,15 +198,13 @@ func TestManagerRun_daemonLogs(t *testing.T) { defer m.Kill() // Configure a log dir so that we can read the logs - td, closer := testTempDir(t) - defer closer() - m.LogDir = filepath.Join(td, "logs") + logDir := filepath.Join(m.DataDir, "logs") // Create the service and calculate the log paths - path := filepath.Join(td, "notify") + path := filepath.Join(m.DataDir, "notify") id := testStateProxy(t, state, "web", helperProcess("output", path)) - stdoutPath := logPath(m.LogDir, id, "stdout") - stderrPath := logPath(m.LogDir, id, "stderr") + stdoutPath := logPath(logDir, id, "stdout") + stderrPath := logPath(logDir, id, "stderr") // Start the manager go m.Run() @@ -238,7 +236,7 @@ func testManager(t *testing.T) (*Manager, func()) { // Setup a temporary directory for logs td, closer := testTempDir(t) - m.LogDir = td + m.DataDir = td return m, func() { closer() } } From 5e0f0ba1785f1a67ccea7d010785180a266fda13 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 13:56:42 -0700 Subject: [PATCH 215/539] agent/proxy: write pid file whenever the daemon process changes --- agent/agent.go | 44 ++------------------ agent/proxy/daemon.go | 26 +++++++++++- agent/proxy/daemon_test.go | 85 ++++++++++++++++++++++++++++++++++++++ lib/file/atomic.go | 46 +++++++++++++++++++++ 4 files changed, 159 insertions(+), 42 deletions(-) create mode 100644 lib/file/atomic.go diff --git a/agent/agent.go b/agent/agent.go index 128a40a4a..2c18c08d3 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -34,6 +34,7 @@ import ( "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/ipaddr" "github.com/hashicorp/consul/lib" + "github.com/hashicorp/consul/lib/file" "github.com/hashicorp/consul/logger" "github.com/hashicorp/consul/types" "github.com/hashicorp/consul/watch" @@ -362,7 +363,7 @@ func (a *Agent) Start() error { a.proxyManager = proxy.NewManager() a.proxyManager.State = a.State a.proxyManager.Logger = a.logger - a.proxyManager.LogDir = filepath.Join(a.config.DataDir, "proxy", "logs") + a.proxyManager.DataDir = 
filepath.Join(a.config.DataDir, "proxy") go a.proxyManager.Run() // Start watching for critical services to deregister, based on their @@ -1557,7 +1558,7 @@ func (a *Agent) persistService(service *structs.NodeService) error { return err } - return writeFileAtomic(svcPath, encoded) + return file.WriteAtomic(svcPath, encoded) } // purgeService removes a persisted service definition file from the data dir @@ -1585,7 +1586,7 @@ func (a *Agent) persistCheck(check *structs.HealthCheck, chkType *structs.CheckT return err } - return writeFileAtomic(checkPath, encoded) + return file.WriteAtomic(checkPath, encoded) } // purgeCheck removes a persisted check definition file from the data dir @@ -1597,43 +1598,6 @@ func (a *Agent) purgeCheck(checkID types.CheckID) error { return nil } -// writeFileAtomic writes the given contents to a temporary file in the same -// directory, does an fsync and then renames the file to its real path -func writeFileAtomic(path string, contents []byte) error { - uuid, err := uuid.GenerateUUID() - if err != nil { - return err - } - tempPath := fmt.Sprintf("%s-%s.tmp", path, uuid) - - if err := os.MkdirAll(filepath.Dir(path), 0700); err != nil { - return err - } - fh, err := os.OpenFile(tempPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0600) - if err != nil { - return err - } - if _, err := fh.Write(contents); err != nil { - fh.Close() - os.Remove(tempPath) - return err - } - if err := fh.Sync(); err != nil { - fh.Close() - os.Remove(tempPath) - return err - } - if err := fh.Close(); err != nil { - os.Remove(tempPath) - return err - } - if err := os.Rename(tempPath, path); err != nil { - os.Remove(tempPath) - return err - } - return nil -} - // AddService is used to add a service entry. // This entry is persistent and the agent will make a best effort to // ensure it is registered diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index d6b68ad3a..e3b376c05 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -6,8 +6,11 @@ import ( "os" "os/exec" "reflect" + "strconv" "sync" "time" + + "github.com/hashicorp/consul/lib/file" ) // Constants related to restart timers with the daemon mode proxies. At some @@ -38,6 +41,12 @@ type Daemon struct { // a file. Logger *log.Logger + // PidPath is the path where a pid file will be created storing the + // pid of the active process. If this is empty then a pid-file won't + // be created. Under erroneous conditions, the pid file may not be + // created but the error will be logged to the Logger. + PidPath string + // For tests, they can set this to change the default duration to wait // for a graceful quit. gracefulWait time.Duration @@ -187,8 +196,21 @@ func (p *Daemon) start() (*os.Process, error) { // Start it p.Logger.Printf("[DEBUG] agent/proxy: starting proxy: %q %#v", cmd.Path, cmd.Args[1:]) - err := cmd.Start() - return cmd.Process, err + if err := cmd.Start(); err != nil { + return nil, err + } + + // Write the pid file. This might error and that's okay. + if p.PidPath != "" { + pid := strconv.FormatInt(int64(cmd.Process.Pid), 10) + if err := file.WriteAtomic(p.PidPath, []byte(pid)); err != nil { + p.Logger.Printf( + "[DEBUG] agent/proxy: error writing pid file %q: %s", + p.PidPath, err) + } + } + + return cmd.Process, nil } // Stop stops the daemon. 
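With the change above, every managed daemon gets a plain-text pid file containing the child's decimal PID, written atomically so external tooling never observes a partial write. A hedged sketch of how an outside script or operator tool might consume such a file; the path is hypothetical, since the manager derives the real one from its data directory and the proxy's service ID:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"strconv"
	"strings"
)

func main() {
	raw, err := ioutil.ReadFile("web-proxy.pid") // hypothetical pid file path
	if err != nil {
		panic(err)
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(raw)))
	if err != nil {
		panic(err)
	}
	// FindProcess only errors on some platforms; a liveness probe (for
	// example signal 0 on Unix) is still needed to know the PID is current.
	proc, err := os.FindProcess(pid)
	if err != nil {
		panic(err)
	}
	fmt.Println("managed proxy pid:", proc.Pid)
}
```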
diff --git a/agent/proxy/daemon_test.go b/agent/proxy/daemon_test.go index 32acde636..652364c5e 100644 --- a/agent/proxy/daemon_test.go +++ b/agent/proxy/daemon_test.go @@ -142,6 +142,91 @@ func TestDaemonStop_kill(t *testing.T) { require.Equal(mtime, fi.ModTime()) } +func TestDaemonStart_pidFile(t *testing.T) { + t.Parallel() + + require := require.New(t) + td, closer := testTempDir(t) + defer closer() + + path := filepath.Join(td, "file") + pidPath := filepath.Join(td, "pid") + uuid, err := uuid.GenerateUUID() + require.NoError(err) + + d := &Daemon{ + Command: helperProcess("start-once", path), + ProxyToken: uuid, + Logger: testLogger, + PidPath: pidPath, + } + require.NoError(d.Start()) + defer d.Stop() + + // Wait for the file to exist + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(pidPath) + if err == nil { + return + } + + r.Fatalf("error: %s", err) + }) + + // Check the pid file + pidRaw, err := ioutil.ReadFile(pidPath) + require.NoError(err) + require.NotEmpty(pidRaw) +} + +// Verify the pid file changes on restart +func TestDaemonRestart_pidFile(t *testing.T) { + t.Parallel() + + require := require.New(t) + td, closer := testTempDir(t) + defer closer() + path := filepath.Join(td, "file") + pidPath := filepath.Join(td, "pid") + + d := &Daemon{ + Command: helperProcess("restart", path), + Logger: testLogger, + PidPath: pidPath, + } + require.NoError(d.Start()) + defer d.Stop() + + // Wait for the file to exist. We save the func so we can reuse the test. + waitFile := func() { + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + return + } + r.Fatalf("error waiting for path: %s", err) + }) + } + waitFile() + + // Check the pid file + pidRaw, err := ioutil.ReadFile(pidPath) + require.NoError(err) + require.NotEmpty(pidRaw) + + // Delete the file + require.NoError(os.Remove(path)) + + // File should re-appear because the process is restart + waitFile() + + // Check the pid file and it should not equal + pidRaw2, err := ioutil.ReadFile(pidPath) + require.NoError(err) + require.NotEmpty(pidRaw2) + require.NotEqual(pidRaw, pidRaw2) +} + func TestDaemonEqual(t *testing.T) { cases := []struct { Name string diff --git a/lib/file/atomic.go b/lib/file/atomic.go new file mode 100644 index 000000000..e1d6e6693 --- /dev/null +++ b/lib/file/atomic.go @@ -0,0 +1,46 @@ +package file + +import ( + "fmt" + "os" + "path/filepath" + + "github.com/hashicorp/go-uuid" +) + +// WriteAtomic writes the given contents to a temporary file in the same +// directory, does an fsync and then renames the file to its real path +func WriteAtomic(path string, contents []byte) error { + uuid, err := uuid.GenerateUUID() + if err != nil { + return err + } + tempPath := fmt.Sprintf("%s-%s.tmp", path, uuid) + + if err := os.MkdirAll(filepath.Dir(path), 0700); err != nil { + return err + } + fh, err := os.OpenFile(tempPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0600) + if err != nil { + return err + } + if _, err := fh.Write(contents); err != nil { + fh.Close() + os.Remove(tempPath) + return err + } + if err := fh.Sync(); err != nil { + fh.Close() + os.Remove(tempPath) + return err + } + if err := fh.Close(); err != nil { + os.Remove(tempPath) + return err + } + if err := os.Rename(tempPath, path); err != nil { + os.Remove(tempPath) + return err + } + return nil +} From 9675ed626d9fdbc308f977d132b39faf8add5e69 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 14:09:30 -0700 Subject: [PATCH 216/539] agent/proxy: manager configures the daemon pid path to write pids --- 
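lib/file.WriteAtomic, added above, packages the write-a-temp-file, fsync, then rename pattern as a helper, so readers only ever see the old contents or the complete new contents. Using it elsewhere in this repository is a one-liner:

```go
package main

import (
	"github.com/hashicorp/consul/lib/file"
)

func main() {
	// Parent directories are created as needed; the temp file lives next to
	// the destination so the final rename stays on the same filesystem.
	if err := file.WriteAtomic("data/proxy/pids/web-proxy.pid", []byte("12345")); err != nil {
		panic(err)
	}
}
```

Keeping the rename within one directory, and therefore one filesystem, is what makes the swap atomic on POSIX systems.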
agent/proxy/manager.go | 17 ++++++++++++++++- agent/proxy/manager_test.go | 34 ++++++++++++++++++++++++++++++++++ 2 files changed, 50 insertions(+), 1 deletion(-) diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index 7ed278037..f5c2d996e 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -354,11 +354,14 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { return nil, fmt.Errorf("daemon mode managed proxy requires command") } + // We reuse the service ID a few times + id := p.ProxyService.ID + // Build the command to execute. var cmd exec.Cmd cmd.Path = command[0] cmd.Args = command // idx 0 is path but preserved since it should be - if err := m.configureLogDir(p.ProxyService.ID, &cmd); err != nil { + if err := m.configureLogDir(id, &cmd); err != nil { return nil, fmt.Errorf("error configuring proxy logs: %s", err) } @@ -367,6 +370,7 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { Command: &cmd, ProxyToken: mp.ProxyToken, Logger: m.Logger, + PidPath: pidPath(filepath.Join(m.DataDir, "pids"), id), }, nil default: @@ -414,3 +418,14 @@ func (m *Manager) configureLogDir(id string, cmd *exec.Cmd) error { func logPath(dir, id, stream string) string { return filepath.Join(dir, fmt.Sprintf("%s-%s.log", id, stream)) } + +// pidPath is a helper to return the path to the pid file for the given +// directory and service ID. +func pidPath(dir, id string) string { + // If no directory is given we do not write a pid + if dir == "" { + return "" + } + + return filepath.Join(dir, fmt.Sprintf("%s.pid", id)) +} diff --git a/agent/proxy/manager_test.go b/agent/proxy/manager_test.go index d14b71afa..d9a817af3 100644 --- a/agent/proxy/manager_test.go +++ b/agent/proxy/manager_test.go @@ -227,6 +227,40 @@ func TestManagerRun_daemonLogs(t *testing.T) { require.Equal([]byte(expectedErr), actual) } +func TestManagerRun_daemonPid(t *testing.T) { + t.Parallel() + + require := require.New(t) + state := local.TestState(t) + m, closer := testManager(t) + defer closer() + m.State = state + defer m.Kill() + + // Configure a log dir so that we can read the logs + pidDir := filepath.Join(m.DataDir, "pids") + + // Create the service and calculate the log paths + path := filepath.Join(m.DataDir, "notify") + id := testStateProxy(t, state, "web", helperProcess("output", path)) + pidPath := pidPath(pidDir, id) + + // Start the manager + go m.Run() + + // We should see the path appear shortly + retry.Run(t, func(r *retry.R) { + if _, err := os.Stat(path); err != nil { + r.Fatalf("error waiting for stdout path: %s", err) + } + }) + + // Verify the pid file is not empty + pidRaw, err := ioutil.ReadFile(pidPath) + require.NoError(err) + require.NotEmpty(pidRaw) +} + func testManager(t *testing.T) (*Manager, func()) { m := NewManager() From a3a0bc7b13358a0b842d7d62bdd8148a1f6c1e13 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 15:46:00 -0700 Subject: [PATCH 217/539] agent/proxy: implement snapshotting for daemons --- agent/proxy/daemon.go | 99 ++++++++++++++++++++++++++++++++++++++ agent/proxy/daemon_test.go | 92 +++++++++++++++++++++++++++++++++++ agent/proxy/manager.go | 2 +- agent/proxy/noop.go | 8 +-- agent/proxy/proxy.go | 16 ++++++ agent/proxy/snapshot.go | 31 ++++++++++++ 6 files changed, 244 insertions(+), 4 deletions(-) create mode 100644 agent/proxy/snapshot.go diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index e3b376c05..e1ec2e1b0 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -11,6 +11,7 @@ 
import ( "time" "github.com/hashicorp/consul/lib/file" + "github.com/mitchellh/mapstructure" ) // Constants related to restart timers with the daemon mode proxies. At some @@ -261,6 +262,26 @@ func (p *Daemon) Stop() error { return process.Kill() } +// stopKeepAlive is like Stop but keeps the process running. This is +// used only for tests. +func (p *Daemon) stopKeepAlive() error { + p.lock.Lock() + + // If we're already stopped or never started, then no problem. + if p.stopped || p.process == nil { + p.stopped = true + p.lock.Unlock() + return nil + } + + // Note that we've stopped + p.stopped = true + close(p.stopCh) + p.lock.Unlock() + + return nil +} + // Equal implements Proxy to check for equality. func (p *Daemon) Equal(raw Proxy) bool { p2, ok := raw.(*Daemon) @@ -275,3 +296,81 @@ func (p *Daemon) Equal(raw Proxy) bool { reflect.DeepEqual(p.Command.Args, p2.Command.Args) && reflect.DeepEqual(p.Command.Env, p2.Command.Env) } + +// MarshalSnapshot implements Proxy +func (p *Daemon) MarshalSnapshot() map[string]interface{} { + p.lock.Lock() + defer p.lock.Unlock() + + // If we're stopped or have no process, then nothing to snapshot. + if p.stopped || p.process == nil { + return nil + } + + return map[string]interface{}{ + "Pid": p.process.Pid, + "CommandPath": p.Command.Path, + "CommandArgs": p.Command.Args, + "CommandDir": p.Command.Dir, + "CommandEnv": p.Command.Env, + "ProxyToken": p.ProxyToken, + } +} + +// UnmarshalSnapshot implements Proxy +func (p *Daemon) UnmarshalSnapshot(m map[string]interface{}) error { + var s daemonSnapshot + if err := mapstructure.Decode(m, &s); err != nil { + return err + } + + p.lock.Lock() + defer p.lock.Unlock() + + // Set the basic fields + p.ProxyToken = s.ProxyToken + p.Command = &exec.Cmd{ + Path: s.CommandPath, + Args: s.CommandArgs, + Dir: s.CommandDir, + Env: s.CommandEnv, + } + + // For the pid, we want to find the process. + proc, err := os.FindProcess(s.Pid) + if err != nil { + return err + } + + // TODO(mitchellh): we should check if proc refers to a process that + // is currently alive. If not, we should return here and not manage the + // process. + + // "Start it" + stopCh := make(chan struct{}) + exitedCh := make(chan struct{}) + p.stopCh = stopCh + p.exitedCh = exitedCh + p.process = proc + go p.keepAlive(stopCh, exitedCh) + + return nil +} + +// daemonSnapshot is the structure of the marshalled data for snapshotting. +type daemonSnapshot struct { + // Pid of the process. This is the only value actually required to + // regain mangement control. The remainder values are for Equal. + Pid int + + // Command information + CommandPath string + CommandArgs []string + CommandDir string + CommandEnv []string + + // NOTE(mitchellh): longer term there are discussions/plans to only + // store the hash of the token but for now we need the full token in + // case the process dies and has to be restarted. 
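UnmarshalSnapshot above relies on mapstructure to turn the loosely typed map[string]interface{} read back from disk into a typed struct. The decode step in isolation, with a trimmed-down stand-in for the daemon snapshot fields and invented values:

```go
package main

import (
	"fmt"

	"github.com/mitchellh/mapstructure"
)

// snap is a stand-in with a subset of the fields snapshotted above.
type snap struct {
	Pid         int
	CommandPath string
	CommandArgs []string
	ProxyToken  string
}

func main() {
	// This is the kind of value MarshalSnapshot produces: primitives and
	// containers only, keyed by field name.
	raw := map[string]interface{}{
		"Pid":         42,
		"CommandPath": "/usr/local/bin/consul",
		"CommandArgs": []string{"consul", "connect", "proxy"},
		"ProxyToken":  "3a7f...", // placeholder token
	}

	var s snap
	if err := mapstructure.Decode(raw, &s); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", s)
}
```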
+ ProxyToken string +} diff --git a/agent/proxy/daemon_test.go b/agent/proxy/daemon_test.go index 652364c5e..6e74cdf88 100644 --- a/agent/proxy/daemon_test.go +++ b/agent/proxy/daemon_test.go @@ -316,3 +316,95 @@ func TestDaemonEqual(t *testing.T) { }) } } + +func TestDaemonMarshalSnapshot(t *testing.T) { + cases := []struct { + Name string + Proxy Proxy + Expected map[string]interface{} + }{ + { + "stopped daemon", + &Daemon{ + Command: &exec.Cmd{Path: "/foo"}, + }, + nil, + }, + + { + "basic", + &Daemon{ + Command: &exec.Cmd{Path: "/foo"}, + process: &os.Process{Pid: 42}, + }, + map[string]interface{}{ + "Pid": 42, + "CommandPath": "/foo", + "CommandArgs": []string(nil), + "CommandDir": "", + "CommandEnv": []string(nil), + "ProxyToken": "", + }, + }, + } + + for _, tc := range cases { + t.Run(tc.Name, func(t *testing.T) { + actual := tc.Proxy.MarshalSnapshot() + require.Equal(t, tc.Expected, actual) + }) + } +} + +func TestDaemonUnmarshalSnapshot(t *testing.T) { + t.Parallel() + + require := require.New(t) + td, closer := testTempDir(t) + defer closer() + + path := filepath.Join(td, "file") + uuid, err := uuid.GenerateUUID() + require.NoError(err) + + d := &Daemon{ + Command: helperProcess("start-stop", path), + ProxyToken: uuid, + Logger: testLogger, + } + require.NoError(d.Start()) + + // Wait for the file to exist + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + return + } + + r.Fatalf("error: %s", err) + }) + + // Snapshot + snap := d.MarshalSnapshot() + + // Stop the original daemon but keep it alive + require.NoError(d.stopKeepAlive()) + + // Restore the second daemon + d2 := &Daemon{Logger: testLogger} + require.NoError(d2.UnmarshalSnapshot(snap)) + + // Stop the process + require.NoError(d2.Stop()) + + // File should no longer exist. + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if os.IsNotExist(err) { + return + } + + // err might be nil here but that's okay + r.Fatalf("should not exist: %s", err) + }) +} diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index f5c2d996e..c9c63311f 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -60,7 +60,7 @@ type Manager struct { // // * logs/ - log files named -std{out|err}.log // * pids/ - pid files for daemons named .pid - // * state.ext - the state of the manager + // * snapshot.json - the state of the manager // DataDir string diff --git a/agent/proxy/noop.go b/agent/proxy/noop.go index 9ce013554..a96425d84 100644 --- a/agent/proxy/noop.go +++ b/agent/proxy/noop.go @@ -3,6 +3,8 @@ package proxy // Noop implements Proxy and does nothing. type Noop struct{} -func (p *Noop) Start() error { return nil } -func (p *Noop) Stop() error { return nil } -func (p *Noop) Equal(Proxy) bool { return true } +func (p *Noop) Start() error { return nil } +func (p *Noop) Stop() error { return nil } +func (p *Noop) Equal(Proxy) bool { return true } +func (p *Noop) MarshalSnapshot() map[string]interface{} { return nil } +func (p *Noop) UnmarshalSnapshot(map[string]interface{}) error { return nil } diff --git a/agent/proxy/proxy.go b/agent/proxy/proxy.go index 549a6ee26..e1bad92c0 100644 --- a/agent/proxy/proxy.go +++ b/agent/proxy/proxy.go @@ -39,4 +39,20 @@ type Proxy interface { // If Equal returns true, the old proxy will remain running and the new // one will be ignored. Equal(Proxy) bool + + // MarshalSnapshot returns the state that will be stored in a snapshot + // so that Consul can recover the proxy process after a restart. 
The + // result should only contain primitive values and containers (lists/maps). + // + // UnmarshalSnapshot is called to restore the receiving Proxy from its + // marshalled state. If UnmarshalSnapshot returns an error, the snapshot + // is ignored and the marshalled snapshot will be lost. The manager will + // log. + // + // This should save/restore enough state to be able to regain management + // of a proxy process as well as to perform the Equal method above. The + // Equal method will be called when a local state sync happens to determine + // if the recovered process should be restarted or not. + MarshalSnapshot() map[string]interface{} + UnmarshalSnapshot(map[string]interface{}) error } diff --git a/agent/proxy/snapshot.go b/agent/proxy/snapshot.go new file mode 100644 index 000000000..b119bfddf --- /dev/null +++ b/agent/proxy/snapshot.go @@ -0,0 +1,31 @@ +package proxy + +import ( + "github.com/hashicorp/consul/agent/structs" +) + +// snapshot is the structure of the snapshot file. This is unexported because +// we don't want this being a public API. +// +// The snapshot doesn't contain any configuration for the manager. We only +// want to restore the proxies that we're managing, and we use the config +// set at runtime to sync and reconcile what proxies we should start, +// restart, stop, or have already running. +type snapshot struct { + // Version is the version of the snapshot format and can be used + // to safely update the format in the future if necessary. + Version int + + // Proxies are the set of proxies that the manager has. + Proxies []snapshotProxy +} + +// snapshotProxy represents a single proxy. +type snapshotProxy struct { + // Mode corresponds to the type of proxy running. + Mode structs.ProxyExecMode + + // Config is an opaque mapping of primitive values that the proxy + // implementation uses to restore state. + Config map[string]interface{} +} From 64fc9e021824ba540a6e92f78a43f8f76d0bc3fc Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 15:55:49 -0700 Subject: [PATCH 218/539] agent/proxy: check if process is alive --- agent/proxy/daemon.go | 8 ++++--- agent/proxy/daemon_test.go | 41 ++++++++++++++++++++++++++++++++++ agent/proxy/process_unix.go | 23 +++++++++++++++++++ agent/proxy/process_windows.go | 19 ++++++++++++++++ 4 files changed, 88 insertions(+), 3 deletions(-) create mode 100644 agent/proxy/process_unix.go create mode 100644 agent/proxy/process_windows.go diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index e1ec2e1b0..15950c196 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -342,9 +342,11 @@ func (p *Daemon) UnmarshalSnapshot(m map[string]interface{}) error { return err } - // TODO(mitchellh): we should check if proc refers to a process that - // is currently alive. If not, we should return here and not manage the - // process. + // FindProcess on many systems returns no error even if the process + // is now dead. We perform an extra check that the process is alive. 
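The snapshot structures introduced above serialize to a small JSON document: a format version plus, per proxy, its exec mode and the opaque config map supplied by the proxy implementation. The sketch below only illustrates that shape with local mirror types and invented values; the real types are unexported in agent/proxy, and the Mode integer depends on the ProxyExecMode constant ordering:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local mirrors of the unexported snapshot types above, for illustration only.
type snapshotProxy struct {
	Mode   int                    // structs.ProxyExecMode marshals as an integer
	Config map[string]interface{} // opaque, produced by Proxy.MarshalSnapshot
}

type snapshot struct {
	Version int
	Proxies []snapshotProxy
}

func main() {
	s := snapshot{
		Version: 1,
		Proxies: []snapshotProxy{{
			Mode: 1, // hypothetical value for daemon mode
			Config: map[string]interface{}{
				"Pid":         42,
				"CommandPath": "/usr/local/bin/consul",
			},
		}},
	}
	out, err := json.MarshalIndent(&s, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```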
+ if err := processAlive(proc); err != nil { + return err + } // "Start it" stopCh := make(chan struct{}) diff --git a/agent/proxy/daemon_test.go b/agent/proxy/daemon_test.go index 6e74cdf88..a96e90699 100644 --- a/agent/proxy/daemon_test.go +++ b/agent/proxy/daemon_test.go @@ -372,6 +372,7 @@ func TestDaemonUnmarshalSnapshot(t *testing.T) { ProxyToken: uuid, Logger: testLogger, } + defer d.Stop() require.NoError(d.Start()) // Wait for the file to exist @@ -408,3 +409,43 @@ func TestDaemonUnmarshalSnapshot(t *testing.T) { r.Fatalf("should not exist: %s", err) }) } + +func TestDaemonUnmarshalSnapshot_notRunning(t *testing.T) { + t.Parallel() + + require := require.New(t) + td, closer := testTempDir(t) + defer closer() + + path := filepath.Join(td, "file") + uuid, err := uuid.GenerateUUID() + require.NoError(err) + + d := &Daemon{ + Command: helperProcess("start-stop", path), + ProxyToken: uuid, + Logger: testLogger, + } + defer d.Stop() + require.NoError(d.Start()) + + // Wait for the file to exist + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + return + } + + r.Fatalf("error: %s", err) + }) + + // Snapshot + snap := d.MarshalSnapshot() + + // Stop the original daemon + require.NoError(d.Stop()) + + // Restore the second daemon + d2 := &Daemon{Logger: testLogger} + require.Error(d2.UnmarshalSnapshot(snap)) +} diff --git a/agent/proxy/process_unix.go b/agent/proxy/process_unix.go new file mode 100644 index 000000000..6db64c59c --- /dev/null +++ b/agent/proxy/process_unix.go @@ -0,0 +1,23 @@ +// +build !windows + +package proxy + +import ( + "os" + "syscall" +) + +// processAlive for non-Windows. Note that this very likely doesn't +// work for all non-Windows platforms Go supports and we should expand +// support as we experience it. +func processAlive(p *os.Process) error { + // On Unix-like systems, we can verify a process is alive by sending + // a 0 signal. This will do nothing to the process but will still + // return errors if the process is gone. + err := p.Signal(syscall.Signal(0)) + if err == nil || err == syscall.EPERM { + return nil + } + + return err +} diff --git a/agent/proxy/process_windows.go b/agent/proxy/process_windows.go new file mode 100644 index 000000000..0268958e9 --- /dev/null +++ b/agent/proxy/process_windows.go @@ -0,0 +1,19 @@ +// +build windows + +package proxy + +import ( + "fmt" + "os" +) + +func processAlive(p *os.Process) error { + // On Windows, os.FindProcess will error if the process is not alive, + // so we don't have to do any further checking. The nature of it being + // non-nil means it seems to be healthy. + if p == nil { + return fmt.Errof("process no longer alive") + } + + return nil +} From eb31827fac680cf60187e24f624bea3995ad9efd Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 17:44:54 -0700 Subject: [PATCH 219/539] agent/proxy: implement periodic snapshotting in the manager --- agent/proxy/manager.go | 83 +++++++++++++++++----- agent/proxy/manager_test.go | 89 ++++++++++++++++++++++++ agent/proxy/proxy.go | 21 ++++++ agent/proxy/snapshot.go | 134 +++++++++++++++++++++++++++++++++++- 4 files changed, 310 insertions(+), 17 deletions(-) diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index c9c63311f..7a8cd5d4c 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -25,6 +25,11 @@ const ( // changes. Then the whole cycle resets. 
ManagerCoalescePeriod = 5 * time.Second ManagerQuiescentPeriod = 500 * time.Millisecond + + // ManagerSnapshotPeriod is the interval that snapshots are taken. + // The last snapshot state is preserved and if it matches a file isn't + // written, so its safe for this to be reasonably frequent. + ManagerSnapshotPeriod = 1 * time.Second ) // Manager starts, stops, snapshots, and restores managed proxies. @@ -64,9 +69,14 @@ type Manager struct { // DataDir string - // SnapshotDir is the path to the directory where snapshots will - // be written - SnapshotDir string + // SnapshotPeriod is the duration between snapshots. This can be set + // relatively low to ensure accuracy, because if the new snapshot matches + // the last snapshot taken, no file will be written. Therefore, setting + // this low causes only slight CPU/memory usage but doesn't result in + // disk IO. If this isn't set, ManagerSnapshotPeriod will be the default. + // + // This only has an effect if snapshots are enabled (DataDir is set). + SnapshotPeriod time.Duration // CoalescePeriod and QuiescencePeriod control the timers for coalescing // updates from the local state. See the defaults at the top of this @@ -86,7 +96,8 @@ type Manager struct { // for changes to this value. runState managerRunState - proxies map[string]Proxy + proxies map[string]Proxy + lastSnapshot *snapshot } // NewManager initializes a Manager. After initialization, the exported @@ -96,6 +107,7 @@ func NewManager() *Manager { var lock sync.Mutex return &Manager{ Logger: defaultLogger, + SnapshotPeriod: ManagerSnapshotPeriod, CoalescePeriod: ManagerCoalescePeriod, QuiescentPeriod: ManagerQuiescentPeriod, lock: &lock, @@ -228,6 +240,12 @@ func (m *Manager) Run() { m.State.NotifyProxy(notifyCh) defer m.State.StopNotifyProxy(notifyCh) + // Start the timer for snapshots. We don't use a ticker because disk + // IO can be slow and we don't want overlapping notifications. So we only + // reset the timer once the snapshot is complete rather than continously. + snapshotTimer := time.NewTimer(m.SnapshotPeriod) + defer snapshotTimer.Stop() + m.Logger.Println("[DEBUG] agent/proxy: managed Connect proxy manager started") SYNC: for { @@ -261,6 +279,17 @@ SYNC: case <-quiescent: continue SYNC + case <-snapshotTimer.C: + // Perform a snapshot + if path := m.SnapshotPath(); path != "" { + if err := m.snapshot(path, true); err != nil { + m.Logger.Printf("[WARN] agent/proxy: failed to snapshot state: %s", err) + } + } + + // Reset + snapshotTimer.Reset(m.SnapshotPeriod) + case <-stopCh: // Stop immediately, no cleanup m.Logger.Println("[DEBUG] agent/proxy: Stopping managed Connect proxy manager") @@ -342,10 +371,22 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { if mp == nil || mp.Proxy == nil { return nil, fmt.Errorf("internal error: nil *local.ManagedProxy or Proxy field") } - p := mp.Proxy - switch p.ExecMode { - case structs.ProxyExecModeDaemon: + + // We reuse the service ID a few times + id := p.ProxyService.ID + + // Create the Proxy. We could just as easily switch on p.ExecMode + // but I wanted there to be only location where ExecMode => Proxy so + // it lowers the chance that is wrong. 
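The snapshot loop above deliberately uses a one-shot time.Timer instead of a time.Ticker: the timer is only re-armed after the snapshot write finishes, so slow disk IO can never cause overlapping or queued-up snapshot work. The pattern on its own:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	period := 100 * time.Millisecond
	timer := time.NewTimer(period)
	defer timer.Stop()

	done := time.After(time.Second)
	for {
		select {
		case <-timer.C:
			// Do the periodic work; while it runs, no new tick can fire
			// because the timer has not been re-armed yet.
			fmt.Println("snapshot tick at", time.Now().Format("15:04:05.000"))
			timer.Reset(period)

		case <-done:
			return
		}
	}
}
```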
+ proxy, err := m.newProxyFromMode(p.ExecMode, id) + if err != nil { + return nil, err + } + + // Depending on the proxy type we configure the rest from our ManagedProxy + switch proxy := proxy.(type) { + case *Daemon: command := p.Command // This should never happen since validation should happen upstream @@ -354,9 +395,6 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { return nil, fmt.Errorf("daemon mode managed proxy requires command") } - // We reuse the service ID a few times - id := p.ProxyService.ID - // Build the command to execute. var cmd exec.Cmd cmd.Path = command[0] @@ -366,18 +404,31 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { } // Build the daemon structure - return &Daemon{ - Command: &cmd, - ProxyToken: mp.ProxyToken, - Logger: m.Logger, - PidPath: pidPath(filepath.Join(m.DataDir, "pids"), id), - }, nil + proxy.Command = &cmd + proxy.ProxyToken = mp.ProxyToken + return proxy, nil default: return nil, fmt.Errorf("unsupported managed proxy type: %q", p.ExecMode) } } +// newProxyFromMode just initializes the proxy structure from only the mode +// and the service ID. This is a shared method between newProxy and Restore +// so that we only have one location where we turn ExecMode into a Proxy. +func (m *Manager) newProxyFromMode(mode structs.ProxyExecMode, id string) (Proxy, error) { + switch mode { + case structs.ProxyExecModeDaemon: + return &Daemon{ + Logger: m.Logger, + PidPath: pidPath(filepath.Join(m.DataDir, "pids"), id), + }, nil + + default: + return nil, fmt.Errorf("unsupported managed proxy type: %q", mode) + } +} + // configureLogDir sets up the file descriptors to stdout/stderr so that // they log to the proper file path for the given service ID. func (m *Manager) configureLogDir(id string, cmd *exec.Cmd) error { diff --git a/agent/proxy/manager_test.go b/agent/proxy/manager_test.go index d9a817af3..28922cbfa 100644 --- a/agent/proxy/manager_test.go +++ b/agent/proxy/manager_test.go @@ -261,9 +261,98 @@ func TestManagerRun_daemonPid(t *testing.T) { require.NotEmpty(pidRaw) } +// Test the Snapshot/Restore works. +func TestManagerRun_snapshotRestore(t *testing.T) { + t.Parallel() + + require := require.New(t) + state := local.TestState(t) + m, closer := testManager(t) + defer closer() + m.State = state + defer m.Kill() + + // Add the proxy + td, closer := testTempDir(t) + defer closer() + path := filepath.Join(td, "file") + testStateProxy(t, state, "web", helperProcess("start-stop", path)) + + // Set a low snapshot period so we get a snapshot + m.SnapshotPeriod = 10 * time.Millisecond + + // Start the manager + go m.Run() + + // We should see the path appear shortly + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err == nil { + return + } + r.Fatalf("error waiting for path: %s", err) + }) + + // Wait for the snapshot + snapPath := m.SnapshotPath() + retry.Run(t, func(r *retry.R) { + raw, err := ioutil.ReadFile(snapPath) + if err != nil { + r.Fatalf("error waiting for path: %s", err) + } + if len(raw) < 30 { + r.Fatalf("snapshot too small") + } + }) + + // Stop the sync + require.NoError(m.Close()) + + // File should still exist + _, err := os.Stat(path) + require.NoError(err) + + // Restore a manager from a snapshot + m2, closer := testManager(t) + m2.State = state + defer closer() + defer m2.Kill() + require.NoError(m2.Restore(snapPath)) + + // Start + go m2.Run() + + // Add a second proxy so that we can determine when we're up + // and running. 
+ path2 := filepath.Join(td, "file") + testStateProxy(t, state, "db", helperProcess("start-stop", path2)) + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path2) + if err == nil { + return + } + r.Fatalf("error waiting for path: %s", err) + }) + + // Kill m2, which should kill our main process + require.NoError(m2.Kill()) + + // File should no longer exist + retry.Run(t, func(r *retry.R) { + _, err := os.Stat(path) + if err != nil { + return + } + r.Fatalf("file still exists") + }) +} + func testManager(t *testing.T) (*Manager, func()) { m := NewManager() + // Setup a default state + m.State = local.TestState(t) + // Set these periods low to speed up tests m.CoalescePeriod = 1 * time.Millisecond m.QuiescentPeriod = 1 * time.Millisecond diff --git a/agent/proxy/proxy.go b/agent/proxy/proxy.go index e1bad92c0..1bb88da8e 100644 --- a/agent/proxy/proxy.go +++ b/agent/proxy/proxy.go @@ -7,6 +7,10 @@ // for that is available in the "connect/proxy" package. package proxy +import ( + "github.com/hashicorp/consul/agent/structs" +) + // EnvProxyToken is the name of the environment variable that is passed // to managed proxies containing the proxy token. const EnvProxyToken = "CONNECT_PROXY_TOKEN" @@ -16,6 +20,9 @@ const EnvProxyToken = "CONNECT_PROXY_TOKEN" // Calls to all the functions on this interface must be concurrency safe. // Please read the documentation carefully on top of each function for expected // behavior. +// +// Whenever a new proxy type is implemented, please also update proxyExecMode +// and newProxyFromMode and newProxy to support the new proxy. type Proxy interface { // Start starts the proxy. If an error is returned then the managed // proxy registration is rejected. Therefore, this should only fail if @@ -56,3 +63,17 @@ type Proxy interface { MarshalSnapshot() map[string]interface{} UnmarshalSnapshot(map[string]interface{}) error } + +// proxyExecMode returns the ProxyExecMode for a Proxy instance. +func proxyExecMode(p Proxy) structs.ProxyExecMode { + switch p.(type) { + case *Daemon: + return structs.ProxyExecModeDaemon + + case *Noop: + return structs.ProxyExecModeTest + + default: + return structs.ProxyExecModeUnspecified + } +} diff --git a/agent/proxy/snapshot.go b/agent/proxy/snapshot.go index b119bfddf..973a2f083 100644 --- a/agent/proxy/snapshot.go +++ b/agent/proxy/snapshot.go @@ -1,7 +1,15 @@ package proxy import ( + "encoding/json" + "fmt" + "io/ioutil" + "os" + "path/filepath" + "reflect" + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/consul/lib/file" ) // snapshot is the structure of the snapshot file. This is unexported because @@ -17,7 +25,7 @@ type snapshot struct { Version int // Proxies are the set of proxies that the manager has. - Proxies []snapshotProxy + Proxies map[string]snapshotProxy } // snapshotProxy represents a single proxy. @@ -29,3 +37,127 @@ type snapshotProxy struct { // implementation uses to restore state. Config map[string]interface{} } + +// snapshotVersion is the current version to encode within the snapshot. +const snapshotVersion = 1 + +// SnapshotPath returns the default snapshot path for this manager. This +// will return empty if DataDir is not set. This file may not exist yet. +func (m *Manager) SnapshotPath() string { + if m.DataDir == "" { + return "" + } + + return filepath.Join(m.DataDir, "snapshot.json") +} + +// Snapshot will persist a snapshot of the proxy manager state that +// can be restored with Restore. 
+// +// If DataDir is non-empty, then the Manager will automatically snapshot +// whenever the set of managed proxies changes. This method generally doesn't +// need to be called manually. +func (m *Manager) Snapshot(path string) error { + m.lock.Lock() + defer m.lock.Unlock() + return m.snapshot(path, false) +} + +// snapshot is the internal function analogous to Snapshot but expects +// a lock to already be held. +// +// checkDup when set will store the snapshot on lastSnapshot and use +// reflect.DeepEqual to verify that its not writing an identical snapshot. +func (m *Manager) snapshot(path string, checkDup bool) error { + // Build the snapshot + s := snapshot{ + Version: snapshotVersion, + Proxies: make(map[string]snapshotProxy, len(m.proxies)), + } + for id, p := range m.proxies { + // Get the snapshot configuration. If the configuration is nil or + // empty then we don't persist this proxy. + config := p.MarshalSnapshot() + if len(config) == 0 { + continue + } + + s.Proxies[id] = snapshotProxy{ + Mode: proxyExecMode(p), + Config: config, + } + } + + // Dup detection, if the snapshot is identical to the last, do nothing + if checkDup && reflect.DeepEqual(m.lastSnapshot, &s) { + return nil + } + + // Encode as JSON + encoded, err := json.Marshal(&s) + if err != nil { + return err + } + + // Write the file + err = file.WriteAtomic(path, encoded) + if err == nil && checkDup { + m.lastSnapshot = &s + } + return err +} + +// Restore restores the manager state from a snapshot at path. If path +// doesn't exist, this does nothing and no error is returned. +// +// This restores proxy state but does not restore any Manager configuration +// such as DataDir, Logger, etc. All of those should be set _before_ Restore +// is called. +// +// Restore must be called before Run. Restore will immediately start +// supervising the restored processes but will not sync with the local +// state store until Run is called. +// +// If an error is returned the manager state is left untouched. +func (m *Manager) Restore(path string) error { + buf, err := ioutil.ReadFile(path) + if err != nil { + if os.IsNotExist(err) { + return nil + } + + return err + } + + var s snapshot + if err := json.Unmarshal(buf, &s); err != nil { + return err + } + + // Verify the version matches so we can be more confident that we're + // decoding a structure that we expect. + if s.Version != snapshotVersion { + return fmt.Errorf("unknown snapshot version, expecting %d", snapshotVersion) + } + + // Build the proxies from the snapshot + proxies := make(map[string]Proxy, len(s.Proxies)) + for id, sp := range s.Proxies { + p, err := m.newProxyFromMode(sp.Mode, id) + if err != nil { + return err + } + + if err := p.UnmarshalSnapshot(sp.Config); err != nil { + return err + } + + proxies[id] = p + } + + // Overwrite the proxies. The documentation notes that this will happen. + m.lock.Lock() + defer m.lock.Unlock() + m.proxies = proxies + return nil +} From e3be9f7a029ca9f402fa07a84fdc8650dcd911d3 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 17:47:38 -0700 Subject: [PATCH 220/539] agent/proxy: improve comments on snapshotting --- agent/proxy/manager.go | 8 +++++++- agent/proxy/snapshot.go | 5 ++++- 2 files changed, 11 insertions(+), 2 deletions(-) diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index 7a8cd5d4c..5bb871c65 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -96,8 +96,14 @@ type Manager struct { // for changes to this value. 
runState managerRunState - proxies map[string]Proxy + // lastSnapshot stores a pointer to the last snapshot that successfully + // wrote to disk. This is used for dup detection to prevent rewriting + // the same snapshot multiple times. snapshots should never be that + // large so keeping it in-memory should be cheap even for thousands of + // proxies (unlikely scenario). lastSnapshot *snapshot + + proxies map[string]Proxy } // NewManager initializes a Manager. After initialization, the exported diff --git a/agent/proxy/snapshot.go b/agent/proxy/snapshot.go index 973a2f083..8f81a182a 100644 --- a/agent/proxy/snapshot.go +++ b/agent/proxy/snapshot.go @@ -101,7 +101,10 @@ func (m *Manager) snapshot(path string, checkDup bool) error { // Write the file err = file.WriteAtomic(path, encoded) - if err == nil && checkDup { + + // If we are checking for dups and we had a successful write, store + // it so we don't rewrite the same value. + if checkDup && err == nil { m.lastSnapshot = &s } return err From 4301f7f1f53ec412b5235c4796aba3e2edf30327 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 17:51:49 -0700 Subject: [PATCH 221/539] agent: only set the proxy manager data dir if its set --- agent/agent.go | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/agent/agent.go b/agent/agent.go index 2c18c08d3..6f7531ed9 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -363,7 +363,12 @@ func (a *Agent) Start() error { a.proxyManager = proxy.NewManager() a.proxyManager.State = a.State a.proxyManager.Logger = a.logger - a.proxyManager.DataDir = filepath.Join(a.config.DataDir, "proxy") + if a.config.DataDir != "" { + // DataDir is required for all non-dev mode agents, but we want + // to allow setting the data dir for demos and so on for the agent, + // so do the check above instead. + a.proxyManager.DataDir = filepath.Join(a.config.DataDir, "proxy") + } go a.proxyManager.Run() // Start watching for critical services to deregister, based on their From 1d24df38276204b732d92c47b1c2ac5da56f63da Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 18:25:11 -0700 Subject: [PATCH 222/539] agent/proxy: check if process is alive in addition to Wait --- agent/proxy/daemon.go | 27 ++++++++++++++++++++------- agent/proxy/process_unix.go | 17 ++++++++++++----- agent/proxy/process_windows.go | 9 ++------- agent/proxy/snapshot.go | 7 ++++++- 4 files changed, 40 insertions(+), 20 deletions(-) diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index 15950c196..e9bf318a2 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -165,7 +165,25 @@ func (p *Daemon) keepAlive(stopCh <-chan struct{}, exitedCh chan<- struct{}) { } + // Wait for the process to exit. Note that if we restored this proxy + // then Wait will always fail because we likely aren't the parent + // process. Therefore, we do an extra sanity check after to use other + // syscalls to verify the process is truly dead. ps, err := process.Wait() + if _, err := findProcess(process.Pid); err == nil { + select { + case <-time.After(1 * time.Second): + // We want a busy loop, but not too busy. 1 second between + // detecting a process death seems reasonable. + + case <-stopCh: + // If we receive a stop request we want to exit immediately. 
+ return + } + + continue + } + process = nil if err != nil { p.Logger.Printf("[INFO] agent/proxy: daemon exited with error: %s", err) @@ -336,15 +354,10 @@ func (p *Daemon) UnmarshalSnapshot(m map[string]interface{}) error { Env: s.CommandEnv, } - // For the pid, we want to find the process. - proc, err := os.FindProcess(s.Pid) - if err != nil { - return err - } - // FindProcess on many systems returns no error even if the process // is now dead. We perform an extra check that the process is alive. - if err := processAlive(proc); err != nil { + proc, err := findProcess(s.Pid) + if err != nil { return err } diff --git a/agent/proxy/process_unix.go b/agent/proxy/process_unix.go index 6db64c59c..deb45d080 100644 --- a/agent/proxy/process_unix.go +++ b/agent/proxy/process_unix.go @@ -3,21 +3,28 @@ package proxy import ( + "fmt" "os" "syscall" ) -// processAlive for non-Windows. Note that this very likely doesn't +// findProcess for non-Windows. Note that this very likely doesn't // work for all non-Windows platforms Go supports and we should expand // support as we experience it. -func processAlive(p *os.Process) error { +func findProcess(pid int) (*os.Process, error) { + // FindProcess never fails on unix-like systems. + p, err := os.FindProcess(pid) + if err != nil { + return nil, err + } + // On Unix-like systems, we can verify a process is alive by sending // a 0 signal. This will do nothing to the process but will still // return errors if the process is gone. - err := p.Signal(syscall.Signal(0)) + err = p.Signal(syscall.Signal(0)) if err == nil || err == syscall.EPERM { - return nil + return p, nil } - return err + return nil, fmt.Errorf("process is dead") } diff --git a/agent/proxy/process_windows.go b/agent/proxy/process_windows.go index 0268958e9..0a00d81ee 100644 --- a/agent/proxy/process_windows.go +++ b/agent/proxy/process_windows.go @@ -3,17 +3,12 @@ package proxy import ( - "fmt" "os" ) -func processAlive(p *os.Process) error { +func findProcess(pid int) (*os.Process, error) { // On Windows, os.FindProcess will error if the process is not alive, // so we don't have to do any further checking. The nature of it being // non-nil means it seems to be healthy. - if p == nil { - return fmt.Errof("process no longer alive") - } - - return nil + return os.FindProcess(pid) } diff --git a/agent/proxy/snapshot.go b/agent/proxy/snapshot.go index 8f81a182a..dbe03fd83 100644 --- a/agent/proxy/snapshot.go +++ b/agent/proxy/snapshot.go @@ -151,8 +151,13 @@ func (m *Manager) Restore(path string) error { return err } + // Unmarshal the proxy. If there is an error we just continue on and + // ignore it. Errors restoring proxies should be exceptionally rare + // and only under scenarios where the proxy isn't running anymore or + // we won't have permission to access it. We log and continue. 
if err := p.UnmarshalSnapshot(sp.Config); err != nil { - return err + m.Logger.Printf("[WARN] agent/proxy: error restoring proxy %q: %s", id, err) + continue } proxies[id] = p From 147b066c676e14525fdf467acbe211939653ac6a Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 18:25:32 -0700 Subject: [PATCH 223/539] agent: restore proxy snapshot but still Kill proxies --- agent/agent.go | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/agent/agent.go b/agent/agent.go index 6f7531ed9..2d431be92 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -368,6 +368,11 @@ func (a *Agent) Start() error { // to allow setting the data dir for demos and so on for the agent, // so do the check above instead. a.proxyManager.DataDir = filepath.Join(a.config.DataDir, "proxy") + + // Restore from our snapshot (if it exists) + if err := a.proxyManager.Restore(a.proxyManager.SnapshotPath()); err != nil { + a.logger.Printf("[WARN] agent: error restoring proxy state: %s", err) + } } go a.proxyManager.Run() @@ -1289,7 +1294,7 @@ func (a *Agent) ShutdownAgent() error { // Stop the proxy manager // NOTE(mitchellh): we use Kill for now to kill the processes since - // snapshotting isn't implemented. This should change to Close later. + // there isn't a clean way to cleanup the managed proxies. This is coming if err := a.proxyManager.Kill(); err != nil { a.logger.Printf("[WARN] agent: error shutting down proxy manager: %s", err) } From 1cb9046ad5d05842f3d33c6d43e0d30ddeaa0d6c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 21:58:07 -0700 Subject: [PATCH 224/539] lib/file: add tests for WriteAtomic --- lib/file/atomic_test.go | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) create mode 100644 lib/file/atomic_test.go diff --git a/lib/file/atomic_test.go b/lib/file/atomic_test.go new file mode 100644 index 000000000..5be41f245 --- /dev/null +++ b/lib/file/atomic_test.go @@ -0,0 +1,32 @@ +package file + +import ( + "io/ioutil" + "os" + "path/filepath" + "testing" + + "github.com/stretchr/testify/require" +) + +// This doesn't really test the "atomic" part of this function. It really +// tests that it just writes the file properly. I would love to test this +// better but I'm not sure how. -mitchellh +func TestWriteAtomic(t *testing.T) { + require := require.New(t) + td, err := ioutil.TempDir("", "lib-file") + require.NoError(err) + defer os.RemoveAll(td) + + // Create a subdir that doesn't exist to test that it is created + path := filepath.Join(td, "subdir", "file") + + // Write + expected := []byte("hello") + require.NoError(WriteAtomic(path, expected)) + + // Read and verify + actual, err := ioutil.ReadFile(path) + require.NoError(err) + require.Equal(expected, actual) +} From 7bb13246a8be175c227258a440c8a659e0fe8cef Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 3 May 2018 22:13:18 -0700 Subject: [PATCH 225/539] agent: clarify why we Kill still --- agent/agent.go | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/agent/agent.go b/agent/agent.go index 2d431be92..e5c35c94e 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -1294,7 +1294,8 @@ func (a *Agent) ShutdownAgent() error { // Stop the proxy manager // NOTE(mitchellh): we use Kill for now to kill the processes since - // there isn't a clean way to cleanup the managed proxies. This is coming + // the local state isn't snapshotting meaning the proxy tokens are + // regenerated each time forcing the processes to restart anyways. 
if err := a.proxyManager.Kill(); err != nil { a.logger.Printf("[WARN] agent: error shutting down proxy manager: %s", err) } From a89238a9d360639dedf11d5ae0d9752b24307aec Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 4 May 2018 08:09:44 -0700 Subject: [PATCH 226/539] agent/proxy: address PR feedback --- agent/proxy/daemon.go | 2 +- agent/proxy/process_unix.go | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index e9bf318a2..7a15b12ab 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -284,6 +284,7 @@ func (p *Daemon) Stop() error { // used only for tests. func (p *Daemon) stopKeepAlive() error { p.lock.Lock() + defer p.lock.Unlock() // If we're already stopped or never started, then no problem. if p.stopped || p.process == nil { @@ -295,7 +296,6 @@ func (p *Daemon) stopKeepAlive() error { // Note that we've stopped p.stopped = true close(p.stopCh) - p.lock.Unlock() return nil } diff --git a/agent/proxy/process_unix.go b/agent/proxy/process_unix.go index deb45d080..9bca07c2b 100644 --- a/agent/proxy/process_unix.go +++ b/agent/proxy/process_unix.go @@ -22,9 +22,9 @@ func findProcess(pid int) (*os.Process, error) { // a 0 signal. This will do nothing to the process but will still // return errors if the process is gone. err = p.Signal(syscall.Signal(0)) - if err == nil || err == syscall.EPERM { + if err == nil { return p, nil } - return nil, fmt.Errorf("process is dead") + return nil, fmt.Errorf("process %d is dead or running as another user", pid) } From 1dfb4762f59e7a574f99620d6d5030d45a080d6f Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 4 May 2018 09:14:40 -0700 Subject: [PATCH 227/539] agent: increase timer for blocking cache endpoints --- agent/agent.go | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index e5c35c94e..9c74b3760 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2723,7 +2723,7 @@ func (a *Agent) registerCache() { }, &cache.RegisterOptions{ // Maintain a blocking query, retry dropped connections quickly Refresh: true, - RefreshTimer: 0, + RefreshTimer: 3 * time.Second, RefreshTimeout: 10 * time.Minute, }) @@ -2733,7 +2733,7 @@ func (a *Agent) registerCache() { }, &cache.RegisterOptions{ // Maintain a blocking query, retry dropped connections quickly Refresh: true, - RefreshTimer: 0, + RefreshTimer: 3 * time.Second, RefreshTimeout: 10 * time.Minute, }) @@ -2742,7 +2742,7 @@ func (a *Agent) registerCache() { }, &cache.RegisterOptions{ // Maintain a blocking query, retry dropped connections quickly Refresh: true, - RefreshTimer: 0, + RefreshTimer: 3 * time.Second, RefreshTimeout: 10 * time.Minute, }) } From 257d31e319d79445697b8fb3525b1bd42fa553be Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 4 May 2018 09:44:59 -0700 Subject: [PATCH 228/539] agent/proxy: delete pid file on Stop --- agent/proxy/daemon.go | 14 ++++++++++++++ agent/proxy/daemon_test.go | 7 +++++++ 2 files changed, 21 insertions(+) diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index 7a15b12ab..c0ae7fdee 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -263,6 +263,20 @@ func (p *Daemon) Stop() error { gracefulWait = 5 * time.Second } + // Defer removing the pid file. Even under error conditions we + // delete the pid file since Stop means that the manager is no + // longer managing this proxy and therefore nothing else will ever + // clean it up. 
+ if p.PidPath != "" { + defer func() { + if err := os.Remove(p.PidPath); err != nil && !os.IsNotExist(err) { + p.Logger.Printf( + "[DEBUG] agent/proxy: error removing pid file %q: %s", + p.PidPath, err) + } + }() + } + // First, try a graceful stop err := process.Signal(os.Interrupt) if err == nil { diff --git a/agent/proxy/daemon_test.go b/agent/proxy/daemon_test.go index a96e90699..17e821a44 100644 --- a/agent/proxy/daemon_test.go +++ b/agent/proxy/daemon_test.go @@ -177,6 +177,13 @@ func TestDaemonStart_pidFile(t *testing.T) { pidRaw, err := ioutil.ReadFile(pidPath) require.NoError(err) require.NotEmpty(pidRaw) + + // Stop + require.NoError(d.Stop()) + + // Pid file should be gone + _, err = os.Stat(pidPath) + require.True(os.IsNotExist(err)) } // Verify the pid file changes on restart From 954d286d73d7c03b7f66f574a5714a0dfd4511cc Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Fri, 4 May 2018 22:31:06 +0100 Subject: [PATCH 229/539] Make CSR work with jank domain --- agent/cache-types/connect_ca_leaf.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/agent/cache-types/connect_ca_leaf.go b/agent/cache-types/connect_ca_leaf.go index 2c1cd156a..1058ec26a 100644 --- a/agent/cache-types/connect_ca_leaf.go +++ b/agent/cache-types/connect_ca_leaf.go @@ -102,7 +102,7 @@ func (c *ConnectCALeaf) Fetch(opts cache.FetchOptions, req cache.Request) (cache // needs a correct host ID, and we probably don't want to use TestCSR // and want a non-test-specific way to create a CSR. csr, pk := connect.TestCSR(&testing.RuntimeT{}, &connect.SpiffeIDService{ - Host: "1234.consul", + Host: "11111111-2222-3333-4444-555555555555.consul", Namespace: "default", Datacenter: reqReal.Datacenter, Service: reqReal.Service, From 498c63a6f117865f7f2be32e95eb440ea684391d Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 4 May 2018 13:33:05 -0700 Subject: [PATCH 230/539] agent/config: default connect enabled in dev mode This enables `consul agent -dev` to begin using Connect features with the built-in CA. I think this is expected behavior since you can imagine that new users would want to try. There is no real downside since we're just using the built-in CA. 
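A quick way to see the effect of this change, sketched below: with a dev agent running, hit the Connect CA roots endpoint and the built-in CA's active root should come back. This is only an illustrative sketch, not part of the patch; the `127.0.0.1:8500` address and the `/v1/connect/ca/roots` path are assumed defaults on this branch, so adjust them if your agent is configured differently.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Assumed defaults: a `consul agent -dev` instance listening on
	// 127.0.0.1:8500, with the Connect CA roots endpoint exposed at
	// /v1/connect/ca/roots. With Connect enabled by default in dev
	// mode, this should return the built-in CA's root certificate.
	resp, err := http.Get("http://127.0.0.1:8500/v1/connect/ca/roots")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
}
```

Start the agent with `consul agent -dev` first; before this change the same request would be expected to fail in dev mode, since Connect stayed disabled unless `connect { enabled = true }` was set explicitly.
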
--- agent/config/default.go | 4 ++++ agent/config/runtime_test.go | 1 + 2 files changed, 5 insertions(+) diff --git a/agent/config/default.go b/agent/config/default.go index b266a83b9..9d96f5a5c 100644 --- a/agent/config/default.go +++ b/agent/config/default.go @@ -108,6 +108,10 @@ func DevSource() Source { ui = true log_level = "DEBUG" server = true + + connect = { + enabled = true + } performance = { raft_multiplier = 1 } diff --git a/agent/config/runtime_test.go b/agent/config/runtime_test.go index 3268c6c70..82787482f 100644 --- a/agent/config/runtime_test.go +++ b/agent/config/runtime_test.go @@ -263,6 +263,7 @@ func TestConfigFlagsAndEdgecases(t *testing.T) { rt.AdvertiseAddrLAN = ipAddr("127.0.0.1") rt.AdvertiseAddrWAN = ipAddr("127.0.0.1") rt.BindAddr = ipAddr("127.0.0.1") + rt.ConnectEnabled = true rt.DevMode = true rt.DisableAnonymousSignature = true rt.DisableKeyringFile = true From 662f38c62521616a78e973441a574580f913f491 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 4 May 2018 14:10:03 -0700 Subject: [PATCH 231/539] agent/structs: validate service definitions, port required for proxy --- agent/config/builder.go | 7 +++ agent/structs/service_definition.go | 31 ++++++++++++ agent/structs/service_definition_test.go | 54 +++++++++++++++++++++ agent/structs/testing_catalog.go | 8 +++ agent/structs/testing_service_definition.go | 7 +++ 5 files changed, 107 insertions(+) diff --git a/agent/config/builder.go b/agent/config/builder.go index cd851aaf3..88f65ec76 100644 --- a/agent/config/builder.go +++ b/agent/config/builder.go @@ -912,6 +912,13 @@ func (b *Builder) Validate(rt RuntimeConfig) error { return b.err } + // Check for errors in the service definitions + for _, s := range rt.Services { + if err := s.Validate(); err != nil { + return fmt.Errorf("service %q: %s", s.Name, err) + } + } + // ---------------------------------------------------------------- // warnings // diff --git a/agent/structs/service_definition.go b/agent/structs/service_definition.go index be69a7b57..506015649 100644 --- a/agent/structs/service_definition.go +++ b/agent/structs/service_definition.go @@ -1,5 +1,11 @@ package structs +import ( + "fmt" + + "github.com/hashicorp/go-multierror" +) + // ServiceDefinition is used to JSON decode the Service definitions. For // documentation on specific fields see NodeService which is better documented. type ServiceDefinition struct { @@ -68,6 +74,31 @@ func (s *ServiceDefinition) ConnectManagedProxy() (*ConnectManagedProxy, error) return p, nil } +// Validate validates the service definition. This also calls the underlying +// Validate method on the NodeService. +// +// NOTE(mitchellh): This currently only validates fields related to Connect +// and is incomplete with regards to other fields. 
+func (s *ServiceDefinition) Validate() error { + var result error + + if s.Kind == ServiceKindTypical { + if s.Connect != nil && s.Connect.Proxy != nil { + if s.Port == 0 { + result = multierror.Append(result, fmt.Errorf( + "Services with a Connect managed proxy must have a port set")) + } + } + } + + // Validate the NodeService which covers a lot + if err := s.NodeService().Validate(); err != nil { + result = multierror.Append(result, err) + } + + return result +} + func (s *ServiceDefinition) CheckTypes() (checks CheckTypes, err error) { if !s.Check.Empty() { err := s.Check.Validate() diff --git a/agent/structs/service_definition_test.go b/agent/structs/service_definition_test.go index d3bab4a08..c73cff217 100644 --- a/agent/structs/service_definition_test.go +++ b/agent/structs/service_definition_test.go @@ -2,10 +2,12 @@ package structs import ( "fmt" + "strings" "testing" "time" "github.com/pascaldekloe/goe/verify" + "github.com/stretchr/testify/require" ) func TestAgentStructs_CheckTypes(t *testing.T) { @@ -54,3 +56,55 @@ func TestAgentStructs_CheckTypes(t *testing.T) { } } } + +func TestServiceDefinitionValidate(t *testing.T) { + cases := []struct { + Name string + Modify func(*ServiceDefinition) + Err string + }{ + { + "valid", + func(x *ServiceDefinition) {}, + "", + }, + + { + "managed proxy with a port set", + func(x *ServiceDefinition) { + x.Port = 8080 + x.Connect = &ServiceDefinitionConnect{ + Proxy: &ServiceDefinitionConnectProxy{}, + } + }, + "", + }, + + { + "managed proxy with no port set", + func(x *ServiceDefinition) { + x.Connect = &ServiceDefinitionConnect{ + Proxy: &ServiceDefinitionConnectProxy{}, + } + }, + "must have a port", + }, + } + + for _, tc := range cases { + t.Run(tc.Name, func(t *testing.T) { + require := require.New(t) + service := TestServiceDefinition(t) + tc.Modify(service) + + err := service.Validate() + t.Logf("error: %s", err) + require.Equal(err != nil, tc.Err != "") + if err == nil { + return + } + + require.Contains(strings.ToLower(err.Error()), strings.ToLower(tc.Err)) + }) + } +} diff --git a/agent/structs/testing_catalog.go b/agent/structs/testing_catalog.go index 1394b7081..a274ced77 100644 --- a/agent/structs/testing_catalog.go +++ b/agent/structs/testing_catalog.go @@ -29,6 +29,14 @@ func TestRegisterRequestProxy(t testing.T) *RegisterRequest { } } +// TestNodeService returns a *NodeService representing a valid regular service. +func TestNodeService(t testing.T) *NodeService { + return &NodeService{ + Kind: ServiceKindTypical, + Service: "web", + } +} + // TestNodeServiceProxy returns a *NodeService representing a valid // Connect proxy. func TestNodeServiceProxy(t testing.T) *NodeService { diff --git a/agent/structs/testing_service_definition.go b/agent/structs/testing_service_definition.go index b14e1e2ff..370458371 100644 --- a/agent/structs/testing_service_definition.go +++ b/agent/structs/testing_service_definition.go @@ -4,6 +4,13 @@ import ( "github.com/mitchellh/go-testing-interface" ) +// TestServiceDefinition returns a ServiceDefinition for a typical service. +func TestServiceDefinition(t testing.T) *ServiceDefinition { + return &ServiceDefinition{ + Name: "db", + } +} + // TestServiceDefinitionProxy returns a ServiceDefinition for a proxy. 
func TestServiceDefinitionProxy(t testing.T) *ServiceDefinition { return &ServiceDefinition{ From 80b6d0a6cf806e66a8b71f5e786b78cdc6f40e72 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 4 May 2018 16:13:40 -0700 Subject: [PATCH 232/539] Add missing vendor dep github.com/stretchr/objx --- vendor/github.com/stretchr/objx/Gopkg.lock | 30 + vendor/github.com/stretchr/objx/Gopkg.toml | 8 + vendor/github.com/stretchr/objx/LICENSE | 22 + vendor/github.com/stretchr/objx/README.md | 80 + vendor/github.com/stretchr/objx/Taskfile.yml | 31 + vendor/github.com/stretchr/objx/accessors.go | 119 + .../github.com/stretchr/objx/conversions.go | 109 + vendor/github.com/stretchr/objx/doc.go | 66 + vendor/github.com/stretchr/objx/map.go | 228 ++ vendor/github.com/stretchr/objx/mutations.go | 77 + vendor/github.com/stretchr/objx/security.go | 12 + vendor/github.com/stretchr/objx/tests.go | 17 + .../stretchr/objx/type_specific_codegen.go | 2516 +++++++++++++++++ vendor/github.com/stretchr/objx/value.go | 53 + vendor/vendor.json | 1 + 15 files changed, 3369 insertions(+) create mode 100644 vendor/github.com/stretchr/objx/Gopkg.lock create mode 100644 vendor/github.com/stretchr/objx/Gopkg.toml create mode 100644 vendor/github.com/stretchr/objx/LICENSE create mode 100644 vendor/github.com/stretchr/objx/README.md create mode 100644 vendor/github.com/stretchr/objx/Taskfile.yml create mode 100644 vendor/github.com/stretchr/objx/accessors.go create mode 100644 vendor/github.com/stretchr/objx/conversions.go create mode 100644 vendor/github.com/stretchr/objx/doc.go create mode 100644 vendor/github.com/stretchr/objx/map.go create mode 100644 vendor/github.com/stretchr/objx/mutations.go create mode 100644 vendor/github.com/stretchr/objx/security.go create mode 100644 vendor/github.com/stretchr/objx/tests.go create mode 100644 vendor/github.com/stretchr/objx/type_specific_codegen.go create mode 100644 vendor/github.com/stretchr/objx/value.go diff --git a/vendor/github.com/stretchr/objx/Gopkg.lock b/vendor/github.com/stretchr/objx/Gopkg.lock new file mode 100644 index 000000000..3e4e06df8 --- /dev/null +++ b/vendor/github.com/stretchr/objx/Gopkg.lock @@ -0,0 +1,30 @@ +# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'. 
+ + +[[projects]] + name = "github.com/davecgh/go-spew" + packages = ["spew"] + revision = "346938d642f2ec3594ed81d874461961cd0faa76" + version = "v1.1.0" + +[[projects]] + name = "github.com/pmezard/go-difflib" + packages = ["difflib"] + revision = "792786c7400a136282c1664665ae0a8db921c6c2" + version = "v1.0.0" + +[[projects]] + name = "github.com/stretchr/testify" + packages = [ + "assert", + "require" + ] + revision = "12b6f73e6084dad08a7c6e575284b177ecafbc71" + version = "v1.2.1" + +[solve-meta] + analyzer-name = "dep" + analyzer-version = 1 + inputs-digest = "2d160a7dea4ffd13c6c31dab40373822f9d78c73beba016d662bef8f7a998876" + solver-name = "gps-cdcl" + solver-version = 1 diff --git a/vendor/github.com/stretchr/objx/Gopkg.toml b/vendor/github.com/stretchr/objx/Gopkg.toml new file mode 100644 index 000000000..d70f1570b --- /dev/null +++ b/vendor/github.com/stretchr/objx/Gopkg.toml @@ -0,0 +1,8 @@ +[prune] + unused-packages = true + non-go = true + go-tests = true + +[[constraint]] + name = "github.com/stretchr/testify" + version = "~1.2.0" diff --git a/vendor/github.com/stretchr/objx/LICENSE b/vendor/github.com/stretchr/objx/LICENSE new file mode 100644 index 000000000..44d4d9d5a --- /dev/null +++ b/vendor/github.com/stretchr/objx/LICENSE @@ -0,0 +1,22 @@ +The MIT License + +Copyright (c) 2014 Stretchr, Inc. +Copyright (c) 2017-2018 objx contributors + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/vendor/github.com/stretchr/objx/README.md b/vendor/github.com/stretchr/objx/README.md new file mode 100644 index 000000000..8fc8fa277 --- /dev/null +++ b/vendor/github.com/stretchr/objx/README.md @@ -0,0 +1,80 @@ +# Objx +[![Build Status](https://travis-ci.org/stretchr/objx.svg?branch=master)](https://travis-ci.org/stretchr/objx) +[![Go Report Card](https://goreportcard.com/badge/github.com/stretchr/objx)](https://goreportcard.com/report/github.com/stretchr/objx) +[![Maintainability](https://api.codeclimate.com/v1/badges/1d64bc6c8474c2074f2b/maintainability)](https://codeclimate.com/github/stretchr/objx/maintainability) +[![Test Coverage](https://api.codeclimate.com/v1/badges/1d64bc6c8474c2074f2b/test_coverage)](https://codeclimate.com/github/stretchr/objx/test_coverage) +[![Sourcegraph](https://sourcegraph.com/github.com/stretchr/objx/-/badge.svg)](https://sourcegraph.com/github.com/stretchr/objx) +[![GoDoc](https://godoc.org/github.com/stretchr/objx?status.svg)](https://godoc.org/github.com/stretchr/objx) + +Objx - Go package for dealing with maps, slices, JSON and other data. 
+ +Get started: + +- Install Objx with [one line of code](#installation), or [update it with another](#staying-up-to-date) +- Check out the API Documentation http://godoc.org/github.com/stretchr/objx + +## Overview +Objx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes a powerful `Get` method (among others) that allows you to easily and quickly get access to data within the map, without having to worry too much about type assertions, missing data, default values etc. + +### Pattern +Objx uses a preditable pattern to make access data from within `map[string]interface{}` easy. Call one of the `objx.` functions to create your `objx.Map` to get going: + + m, err := objx.FromJSON(json) + +NOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong, the rest will be optimistic and try to figure things out without panicking. + +Use `Get` to access the value you're interested in. You can use dot and array +notation too: + + m.Get("places[0].latlng") + +Once you have sought the `Value` you're interested in, you can use the `Is*` methods to determine its type. + + if m.Get("code").IsStr() { // Your code... } + +Or you can just assume the type, and use one of the strong type methods to extract the real value: + + m.Get("code").Int() + +If there's no value there (or if it's the wrong type) then a default value will be returned, or you can be explicit about the default value. + + Get("code").Int(-1) + +If you're dealing with a slice of data as a value, Objx provides many useful methods for iterating, manipulating and selecting that data. You can find out more by exploring the index below. + +### Reading data +A simple example of how to use Objx: + + // Use MustFromJSON to make an objx.Map from some JSON + m := objx.MustFromJSON(`{"name": "Mat", "age": 30}`) + + // Get the details + name := m.Get("name").Str() + age := m.Get("age").Int() + + // Get their nickname (or use their name if they don't have one) + nickname := m.Get("nickname").Str(name) + +### Ranging +Since `objx.Map` is a `map[string]interface{}` you can treat it as such. For example, to `range` the data, do what you would expect: + + m := objx.MustFromJSON(json) + for key, value := range m { + // Your code... + } + +## Installation +To install Objx, use go get: + + go get github.com/stretchr/objx + +### Staying up to date +To update Objx to the latest version, run: + + go get -u github.com/stretchr/objx + +### Supported go versions +We support the lastest four major Go versions, which are 1.6, 1.7, 1.8 and 1.9 at the moment. + +## Contributing +Please feel free to submit issues, fork the repository and send pull requests! diff --git a/vendor/github.com/stretchr/objx/Taskfile.yml b/vendor/github.com/stretchr/objx/Taskfile.yml new file mode 100644 index 000000000..7d0199450 --- /dev/null +++ b/vendor/github.com/stretchr/objx/Taskfile.yml @@ -0,0 +1,31 @@ +default: + deps: [test] + +dl-deps: + desc: Downloads cli dependencies + cmds: + - go get -u github.com/golang/lint/golint + +update-deps: + desc: Updates dependencies + cmds: + - dep ensure + - dep ensure -update + +lint: + desc: Runs golint + cmds: + - go fmt $(go list ./... | grep -v /vendor/) + - go vet $(go list ./... | grep -v /vendor/) + - golint $(ls *.go | grep -v "doc.go") + silent: true + +test: + desc: Runs go tests + cmds: + - go test -race . + +test-coverage: + desc: Runs go tests and calucates test coverage + cmds: + - go test -coverprofile=c.out . 
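The objx README vendored above documents the `Get`/`Must*` access pattern with dot and array notation. A minimal, self-contained sketch of that pattern, assuming the standard `github.com/stretchr/objx` import path:

```go
package main

import (
	"fmt"

	"github.com/stretchr/objx"
)

func main() {
	// MustFromJSON panics on invalid JSON; FromJSON returns an error instead.
	m := objx.MustFromJSON(`{"name": "Mat", "age": 30, "tags": ["go", "json"]}`)

	// Strongly typed accessors, with an optional default for missing keys.
	name := m.Get("name").Str()
	nick := m.Get("nickname").Str(name) // falls back to name when absent

	// Dot and array notation are resolved by Get.
	firstTag := m.Get("tags[0]").Str()

	fmt.Println(name, nick, firstTag, m.Get("age").Int())
}
```

objx is a dependency of testify's `mock` package, which is the likely reason it needs to be vendored here.
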
diff --git a/vendor/github.com/stretchr/objx/accessors.go b/vendor/github.com/stretchr/objx/accessors.go new file mode 100644 index 000000000..676316281 --- /dev/null +++ b/vendor/github.com/stretchr/objx/accessors.go @@ -0,0 +1,119 @@ +package objx + +import ( + "regexp" + "strconv" + "strings" +) + +const ( + // PathSeparator is the character used to separate the elements + // of the keypath. + // + // For example, `location.address.city` + PathSeparator string = "." + + // arrayAccesRegexString is the regex used to extract the array number + // from the access path + arrayAccesRegexString = `^(.+)\[([0-9]+)\]$` +) + +// arrayAccesRegex is the compiled arrayAccesRegexString +var arrayAccesRegex = regexp.MustCompile(arrayAccesRegexString) + +// Get gets the value using the specified selector and +// returns it inside a new Obj object. +// +// If it cannot find the value, Get will return a nil +// value inside an instance of Obj. +// +// Get can only operate directly on map[string]interface{} and []interface. +// +// Example +// +// To access the title of the third chapter of the second book, do: +// +// o.Get("books[1].chapters[2].title") +func (m Map) Get(selector string) *Value { + rawObj := access(m, selector, nil, false) + return &Value{data: rawObj} +} + +// Set sets the value using the specified selector and +// returns the object on which Set was called. +// +// Set can only operate directly on map[string]interface{} and []interface +// +// Example +// +// To set the title of the third chapter of the second book, do: +// +// o.Set("books[1].chapters[2].title","Time to Go") +func (m Map) Set(selector string, value interface{}) Map { + access(m, selector, value, true) + return m +} + +// getIndex returns the index, which is hold in s by two braches. +// It also returns s withour the index part, e.g. name[1] will return (1, name). +// If no index is found, -1 is returned +func getIndex(s string) (int, string) { + arrayMatches := arrayAccesRegex.FindStringSubmatch(s) + if len(arrayMatches) > 0 { + // Get the key into the map + selector := arrayMatches[1] + // Get the index into the array at the key + // We know this cannt fail because arrayMatches[2] is an int for sure + index, _ := strconv.Atoi(arrayMatches[2]) + return index, selector + } + return -1, s +} + +// access accesses the object using the selector and performs the +// appropriate action. +func access(current interface{}, selector string, value interface{}, isSet bool) interface{} { + selSegs := strings.SplitN(selector, PathSeparator, 2) + thisSel := selSegs[0] + index := -1 + + if strings.Contains(thisSel, "[") { + index, thisSel = getIndex(thisSel) + } + + if curMap, ok := current.(Map); ok { + current = map[string]interface{}(curMap) + } + // get the object in question + switch current.(type) { + case map[string]interface{}: + curMSI := current.(map[string]interface{}) + if len(selSegs) <= 1 && isSet { + curMSI[thisSel] = value + return nil + } + + _, ok := curMSI[thisSel].(map[string]interface{}) + if (curMSI[thisSel] == nil || !ok) && index == -1 && isSet { + curMSI[thisSel] = map[string]interface{}{} + } + + current = curMSI[thisSel] + default: + current = nil + } + // do we need to access the item of an array? 
+ if index > -1 { + if array, ok := current.([]interface{}); ok { + if index < len(array) { + current = array[index] + } else { + current = nil + } + } + } + if len(selSegs) > 1 { + current = access(current, selSegs[1], value, isSet) + } + return current +} diff --git a/vendor/github.com/stretchr/objx/conversions.go b/vendor/github.com/stretchr/objx/conversions.go new file mode 100644 index 000000000..ca1c2dec6 --- /dev/null +++ b/vendor/github.com/stretchr/objx/conversions.go @@ -0,0 +1,109 @@ +package objx + +import ( + "bytes" + "encoding/base64" + "encoding/json" + "errors" + "fmt" + "net/url" +) + +// SignatureSeparator is the character that is used to +// separate the Base64 string from the security signature. +const SignatureSeparator = "_" + +// JSON converts the contained object to a JSON string +// representation +func (m Map) JSON() (string, error) { + result, err := json.Marshal(m) + if err != nil { + err = errors.New("objx: JSON encode failed with: " + err.Error()) + } + return string(result), err +} + +// MustJSON converts the contained object to a JSON string +// representation and panics if there is an error +func (m Map) MustJSON() string { + result, err := m.JSON() + if err != nil { + panic(err.Error()) + } + return result +} + +// Base64 converts the contained object to a Base64 string +// representation of the JSON string representation +func (m Map) Base64() (string, error) { + var buf bytes.Buffer + + jsonData, err := m.JSON() + if err != nil { + return "", err + } + + encoder := base64.NewEncoder(base64.StdEncoding, &buf) + _, _ = encoder.Write([]byte(jsonData)) + _ = encoder.Close() + + return buf.String(), nil +} + +// MustBase64 converts the contained object to a Base64 string +// representation of the JSON string representation and panics +// if there is an error +func (m Map) MustBase64() string { + result, err := m.Base64() + if err != nil { + panic(err.Error()) + } + return result +} + +// SignedBase64 converts the contained object to a Base64 string +// representation of the JSON string representation and signs it +// using the provided key. +func (m Map) SignedBase64(key string) (string, error) { + base64, err := m.Base64() + if err != nil { + return "", err + } + + sig := HashWithKey(base64, key) + return base64 + SignatureSeparator + sig, nil +} + +// MustSignedBase64 converts the contained object to a Base64 string +// representation of the JSON string representation and signs it +// using the provided key and panics if there is an error +func (m Map) MustSignedBase64(key string) string { + result, err := m.SignedBase64(key) + if err != nil { + panic(err.Error()) + } + return result +} + +/* + URL Query + ------------------------------------------------ +*/ + +// URLValues creates a url.Values object from an Obj. This +// function requires that the wrapped object be a map[string]interface{} +func (m Map) URLValues() url.Values { + vals := make(url.Values) + for k, v := range m { + //TODO: can this be done without sprintf? + vals.Set(k, fmt.Sprintf("%v", v)) + } + return vals +} + +// URLQuery gets an encoded URL query representing the given +// Obj. 
This function requires that the wrapped object be a +// map[string]interface{} +func (m Map) URLQuery() (string, error) { + return m.URLValues().Encode(), nil +} diff --git a/vendor/github.com/stretchr/objx/doc.go b/vendor/github.com/stretchr/objx/doc.go new file mode 100644 index 000000000..6d6af1a83 --- /dev/null +++ b/vendor/github.com/stretchr/objx/doc.go @@ -0,0 +1,66 @@ +/* +Objx - Go package for dealing with maps, slices, JSON and other data. + +Overview + +Objx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes +a powerful `Get` method (among others) that allows you to easily and quickly get +access to data within the map, without having to worry too much about type assertions, +missing data, default values etc. + +Pattern + +Objx uses a preditable pattern to make access data from within `map[string]interface{}` easy. +Call one of the `objx.` functions to create your `objx.Map` to get going: + + m, err := objx.FromJSON(json) + +NOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong, +the rest will be optimistic and try to figure things out without panicking. + +Use `Get` to access the value you're interested in. You can use dot and array +notation too: + + m.Get("places[0].latlng") + +Once you have sought the `Value` you're interested in, you can use the `Is*` methods to determine its type. + + if m.Get("code").IsStr() { // Your code... } + +Or you can just assume the type, and use one of the strong type methods to extract the real value: + + m.Get("code").Int() + +If there's no value there (or if it's the wrong type) then a default value will be returned, +or you can be explicit about the default value. + + Get("code").Int(-1) + +If you're dealing with a slice of data as a value, Objx provides many useful methods for iterating, +manipulating and selecting that data. You can find out more by exploring the index below. + +Reading data + +A simple example of how to use Objx: + + // Use MustFromJSON to make an objx.Map from some JSON + m := objx.MustFromJSON(`{"name": "Mat", "age": 30}`) + + // Get the details + name := m.Get("name").Str() + age := m.Get("age").Int() + + // Get their nickname (or use their name if they don't have one) + nickname := m.Get("nickname").Str(name) + +Ranging + +Since `objx.Map` is a `map[string]interface{}` you can treat it as such. +For example, to `range` the data, do what you would expect: + + m := objx.MustFromJSON(json) + for key, value := range m { + // Your code... + } +*/ +package objx diff --git a/vendor/github.com/stretchr/objx/map.go b/vendor/github.com/stretchr/objx/map.go new file mode 100644 index 000000000..95149c06a --- /dev/null +++ b/vendor/github.com/stretchr/objx/map.go @@ -0,0 +1,228 @@ +package objx + +import ( + "encoding/base64" + "encoding/json" + "errors" + "io/ioutil" + "net/url" + "strings" +) + +// MSIConvertable is an interface that defines methods for converting your +// custom types to a map[string]interface{} representation. +type MSIConvertable interface { + // MSI gets a map[string]interface{} (msi) representing the + // object. + MSI() map[string]interface{} +} + +// Map provides extended functionality for working with +// untyped data, in particular map[string]interface (msi). +type Map map[string]interface{} + +// Value returns the internal value instance +func (m Map) Value() *Value { + return &Value{data: m} +} + +// Nil represents a nil Map. +var Nil = New(nil) + +// New creates a new Map containing the map[string]interface{} in the data argument. 
+// If the data argument is not a map[string]interface, New attempts to call the +// MSI() method on the MSIConvertable interface to create one. +func New(data interface{}) Map { + if _, ok := data.(map[string]interface{}); !ok { + if converter, ok := data.(MSIConvertable); ok { + data = converter.MSI() + } else { + return nil + } + } + return Map(data.(map[string]interface{})) +} + +// MSI creates a map[string]interface{} and puts it inside a new Map. +// +// The arguments follow a key, value pattern. +// +// +// Returns nil if any key argument is non-string or if there are an odd number of arguments. +// +// Example +// +// To easily create Maps: +// +// m := objx.MSI("name", "Mat", "age", 29, "subobj", objx.MSI("active", true)) +// +// // creates an Map equivalent to +// m := objx.Map{"name": "Mat", "age": 29, "subobj": objx.Map{"active": true}} +func MSI(keyAndValuePairs ...interface{}) Map { + newMap := Map{} + keyAndValuePairsLen := len(keyAndValuePairs) + if keyAndValuePairsLen%2 != 0 { + return nil + } + for i := 0; i < keyAndValuePairsLen; i = i + 2 { + key := keyAndValuePairs[i] + value := keyAndValuePairs[i+1] + + // make sure the key is a string + keyString, keyStringOK := key.(string) + if !keyStringOK { + return nil + } + newMap[keyString] = value + } + return newMap +} + +// ****** Conversion Constructors + +// MustFromJSON creates a new Map containing the data specified in the +// jsonString. +// +// Panics if the JSON is invalid. +func MustFromJSON(jsonString string) Map { + o, err := FromJSON(jsonString) + if err != nil { + panic("objx: MustFromJSON failed with error: " + err.Error()) + } + return o +} + +// FromJSON creates a new Map containing the data specified in the +// jsonString. +// +// Returns an error if the JSON is invalid. +func FromJSON(jsonString string) (Map, error) { + var m Map + err := json.Unmarshal([]byte(jsonString), &m) + if err != nil { + return Nil, err + } + m.tryConvertFloat64() + return m, nil +} + +func (m Map) tryConvertFloat64() { + for k, v := range m { + switch v.(type) { + case float64: + f := v.(float64) + if float64(int(f)) == f { + m[k] = int(f) + } + case map[string]interface{}: + t := New(v) + t.tryConvertFloat64() + m[k] = t + case []interface{}: + m[k] = tryConvertFloat64InSlice(v.([]interface{})) + } + } +} + +func tryConvertFloat64InSlice(s []interface{}) []interface{} { + for k, v := range s { + switch v.(type) { + case float64: + f := v.(float64) + if float64(int(f)) == f { + s[k] = int(f) + } + case map[string]interface{}: + t := New(v) + t.tryConvertFloat64() + s[k] = t + case []interface{}: + s[k] = tryConvertFloat64InSlice(v.([]interface{})) + } + } + return s +} + +// FromBase64 creates a new Obj containing the data specified +// in the Base64 string. +// +// The string is an encoded JSON string returned by Base64 +func FromBase64(base64String string) (Map, error) { + decoder := base64.NewDecoder(base64.StdEncoding, strings.NewReader(base64String)) + decoded, err := ioutil.ReadAll(decoder) + if err != nil { + return nil, err + } + return FromJSON(string(decoded)) +} + +// MustFromBase64 creates a new Obj containing the data specified +// in the Base64 string and panics if there is an error. 
+// +// The string is an encoded JSON string returned by Base64 +func MustFromBase64(base64String string) Map { + result, err := FromBase64(base64String) + if err != nil { + panic("objx: MustFromBase64 failed with error: " + err.Error()) + } + return result +} + +// FromSignedBase64 creates a new Obj containing the data specified +// in the Base64 string. +// +// The string is an encoded JSON string returned by SignedBase64 +func FromSignedBase64(base64String, key string) (Map, error) { + parts := strings.Split(base64String, SignatureSeparator) + if len(parts) != 2 { + return nil, errors.New("objx: Signed base64 string is malformed") + } + + sig := HashWithKey(parts[0], key) + if parts[1] != sig { + return nil, errors.New("objx: Signature for base64 data does not match") + } + return FromBase64(parts[0]) +} + +// MustFromSignedBase64 creates a new Obj containing the data specified +// in the Base64 string and panics if there is an error. +// +// The string is an encoded JSON string returned by Base64 +func MustFromSignedBase64(base64String, key string) Map { + result, err := FromSignedBase64(base64String, key) + if err != nil { + panic("objx: MustFromSignedBase64 failed with error: " + err.Error()) + } + return result +} + +// FromURLQuery generates a new Obj by parsing the specified +// query. +// +// For queries with multiple values, the first value is selected. +func FromURLQuery(query string) (Map, error) { + vals, err := url.ParseQuery(query) + if err != nil { + return nil, err + } + m := Map{} + for k, vals := range vals { + m[k] = vals[0] + } + return m, nil +} + +// MustFromURLQuery generates a new Obj by parsing the specified +// query. +// +// For queries with multiple values, the first value is selected. +// +// Panics if it encounters an error +func MustFromURLQuery(query string) Map { + o, err := FromURLQuery(query) + if err != nil { + panic("objx: MustFromURLQuery failed with error: " + err.Error()) + } + return o +} diff --git a/vendor/github.com/stretchr/objx/mutations.go b/vendor/github.com/stretchr/objx/mutations.go new file mode 100644 index 000000000..c3400a3f7 --- /dev/null +++ b/vendor/github.com/stretchr/objx/mutations.go @@ -0,0 +1,77 @@ +package objx + +// Exclude returns a new Map with the keys in the specified []string +// excluded. +func (m Map) Exclude(exclude []string) Map { + excluded := make(Map) + for k, v := range m { + if !contains(exclude, k) { + excluded[k] = v + } + } + return excluded +} + +// Copy creates a shallow copy of the Obj. +func (m Map) Copy() Map { + copied := Map{} + for k, v := range m { + copied[k] = v + } + return copied +} + +// Merge blends the specified map with a copy of this map and returns the result. +// +// Keys that appear in both will be selected from the specified map. +// This method requires that the wrapped object be a map[string]interface{} +func (m Map) Merge(merge Map) Map { + return m.Copy().MergeHere(merge) +} + +// MergeHere blends the specified map with this map and returns the current map. +// +// Keys that appear in both will be selected from the specified map. The original map +// will be modified. This method requires that +// the wrapped object be a map[string]interface{} +func (m Map) MergeHere(merge Map) Map { + for k, v := range merge { + m[k] = v + } + return m +} + +// Transform builds a new Obj giving the transformer a chance +// to change the keys and values as it goes. 
This method requires that +// the wrapped object be a map[string]interface{} +func (m Map) Transform(transformer func(key string, value interface{}) (string, interface{})) Map { + newMap := Map{} + for k, v := range m { + modifiedKey, modifiedVal := transformer(k, v) + newMap[modifiedKey] = modifiedVal + } + return newMap +} + +// TransformKeys builds a new map using the specified key mapping. +// +// Unspecified keys will be unaltered. +// This method requires that the wrapped object be a map[string]interface{} +func (m Map) TransformKeys(mapping map[string]string) Map { + return m.Transform(func(key string, value interface{}) (string, interface{}) { + if newKey, ok := mapping[key]; ok { + return newKey, value + } + return key, value + }) +} + +// Checks if a string slice contains a string +func contains(s []string, e string) bool { + for _, a := range s { + if a == e { + return true + } + } + return false +} diff --git a/vendor/github.com/stretchr/objx/security.go b/vendor/github.com/stretchr/objx/security.go new file mode 100644 index 000000000..692be8e2a --- /dev/null +++ b/vendor/github.com/stretchr/objx/security.go @@ -0,0 +1,12 @@ +package objx + +import ( + "crypto/sha1" + "encoding/hex" +) + +// HashWithKey hashes the specified string using the security key +func HashWithKey(data, key string) string { + d := sha1.Sum([]byte(data + ":" + key)) + return hex.EncodeToString(d[:]) +} diff --git a/vendor/github.com/stretchr/objx/tests.go b/vendor/github.com/stretchr/objx/tests.go new file mode 100644 index 000000000..d9e0b479a --- /dev/null +++ b/vendor/github.com/stretchr/objx/tests.go @@ -0,0 +1,17 @@ +package objx + +// Has gets whether there is something at the specified selector +// or not. +// +// If m is nil, Has will always return false. +func (m Map) Has(selector string) bool { + if m == nil { + return false + } + return !m.Get(selector).IsNil() +} + +// IsNil gets whether the data is nil or not. +func (v *Value) IsNil() bool { + return v == nil || v.data == nil +} diff --git a/vendor/github.com/stretchr/objx/type_specific_codegen.go b/vendor/github.com/stretchr/objx/type_specific_codegen.go new file mode 100644 index 000000000..de4240955 --- /dev/null +++ b/vendor/github.com/stretchr/objx/type_specific_codegen.go @@ -0,0 +1,2516 @@ +package objx + +/* + Inter (interface{} and []interface{}) +*/ + +// Inter gets the value as a interface{}, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Inter(optionalDefault ...interface{}) interface{} { + if s, ok := v.data.(interface{}); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInter gets the value as a interface{}. +// +// Panics if the object is not a interface{}. +func (v *Value) MustInter() interface{} { + return v.data.(interface{}) +} + +// InterSlice gets the value as a []interface{}, returns the optionalDefault +// value or nil if the value is not a []interface{}. +func (v *Value) InterSlice(optionalDefault ...[]interface{}) []interface{} { + if s, ok := v.data.([]interface{}); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInterSlice gets the value as a []interface{}. +// +// Panics if the object is not a []interface{}. +func (v *Value) MustInterSlice() []interface{} { + return v.data.([]interface{}) +} + +// IsInter gets whether the object contained is a interface{} or not. 
+func (v *Value) IsInter() bool { + _, ok := v.data.(interface{}) + return ok +} + +// IsInterSlice gets whether the object contained is a []interface{} or not. +func (v *Value) IsInterSlice() bool { + _, ok := v.data.([]interface{}) + return ok +} + +// EachInter calls the specified callback for each object +// in the []interface{}. +// +// Panics if the object is the wrong type. +func (v *Value) EachInter(callback func(int, interface{}) bool) *Value { + for index, val := range v.MustInterSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInter uses the specified decider function to select items +// from the []interface{}. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInter(decider func(int, interface{}) bool) *Value { + var selected []interface{} + v.EachInter(func(index int, val interface{}) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInter uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]interface{}. +func (v *Value) GroupInter(grouper func(int, interface{}) string) *Value { + groups := make(map[string][]interface{}) + v.EachInter(func(index int, val interface{}) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]interface{}, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInter uses the specified function to replace each interface{}s +// by iterating each item. The data in the returned result will be a +// []interface{} containing the replaced items. +func (v *Value) ReplaceInter(replacer func(int, interface{}) interface{}) *Value { + arr := v.MustInterSlice() + replaced := make([]interface{}, len(arr)) + v.EachInter(func(index int, val interface{}) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInter uses the specified collector function to collect a value +// for each of the interface{}s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInter(collector func(int, interface{}) interface{}) *Value { + arr := v.MustInterSlice() + collected := make([]interface{}, len(arr)) + v.EachInter(func(index int, val interface{}) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + MSI (map[string]interface{} and []map[string]interface{}) +*/ + +// MSI gets the value as a map[string]interface{}, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) MSI(optionalDefault ...map[string]interface{}) map[string]interface{} { + if s, ok := v.data.(map[string]interface{}); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustMSI gets the value as a map[string]interface{}. +// +// Panics if the object is not a map[string]interface{}. +func (v *Value) MustMSI() map[string]interface{} { + return v.data.(map[string]interface{}) +} + +// MSISlice gets the value as a []map[string]interface{}, returns the optionalDefault +// value or nil if the value is not a []map[string]interface{}. 
+func (v *Value) MSISlice(optionalDefault ...[]map[string]interface{}) []map[string]interface{} { + if s, ok := v.data.([]map[string]interface{}); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustMSISlice gets the value as a []map[string]interface{}. +// +// Panics if the object is not a []map[string]interface{}. +func (v *Value) MustMSISlice() []map[string]interface{} { + return v.data.([]map[string]interface{}) +} + +// IsMSI gets whether the object contained is a map[string]interface{} or not. +func (v *Value) IsMSI() bool { + _, ok := v.data.(map[string]interface{}) + return ok +} + +// IsMSISlice gets whether the object contained is a []map[string]interface{} or not. +func (v *Value) IsMSISlice() bool { + _, ok := v.data.([]map[string]interface{}) + return ok +} + +// EachMSI calls the specified callback for each object +// in the []map[string]interface{}. +// +// Panics if the object is the wrong type. +func (v *Value) EachMSI(callback func(int, map[string]interface{}) bool) *Value { + for index, val := range v.MustMSISlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereMSI uses the specified decider function to select items +// from the []map[string]interface{}. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereMSI(decider func(int, map[string]interface{}) bool) *Value { + var selected []map[string]interface{} + v.EachMSI(func(index int, val map[string]interface{}) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupMSI uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]map[string]interface{}. +func (v *Value) GroupMSI(grouper func(int, map[string]interface{}) string) *Value { + groups := make(map[string][]map[string]interface{}) + v.EachMSI(func(index int, val map[string]interface{}) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]map[string]interface{}, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceMSI uses the specified function to replace each map[string]interface{}s +// by iterating each item. The data in the returned result will be a +// []map[string]interface{} containing the replaced items. +func (v *Value) ReplaceMSI(replacer func(int, map[string]interface{}) map[string]interface{}) *Value { + arr := v.MustMSISlice() + replaced := make([]map[string]interface{}, len(arr)) + v.EachMSI(func(index int, val map[string]interface{}) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectMSI uses the specified collector function to collect a value +// for each of the map[string]interface{}s in the slice. The data returned will be a +// []interface{}. 
+func (v *Value) CollectMSI(collector func(int, map[string]interface{}) interface{}) *Value { + arr := v.MustMSISlice() + collected := make([]interface{}, len(arr)) + v.EachMSI(func(index int, val map[string]interface{}) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + ObjxMap ((Map) and [](Map)) +*/ + +// ObjxMap gets the value as a (Map), returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) ObjxMap(optionalDefault ...(Map)) Map { + if s, ok := v.data.((Map)); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return New(nil) +} + +// MustObjxMap gets the value as a (Map). +// +// Panics if the object is not a (Map). +func (v *Value) MustObjxMap() Map { + return v.data.((Map)) +} + +// ObjxMapSlice gets the value as a [](Map), returns the optionalDefault +// value or nil if the value is not a [](Map). +func (v *Value) ObjxMapSlice(optionalDefault ...[](Map)) [](Map) { + if s, ok := v.data.([]Map); ok { + return s + } + s, ok := v.data.([]interface{}) + if !ok { + if len(optionalDefault) == 1 { + return optionalDefault[0] + } else { + return nil + } + } + + result := make([]Map, len(s)) + for i := range s { + switch s[i].(type) { + case Map: + result[i] = s[i].(Map) + default: + return nil + } + } + return result +} + +// MustObjxMapSlice gets the value as a [](Map). +// +// Panics if the object is not a [](Map). +func (v *Value) MustObjxMapSlice() [](Map) { + return v.data.([](Map)) +} + +// IsObjxMap gets whether the object contained is a (Map) or not. +func (v *Value) IsObjxMap() bool { + _, ok := v.data.((Map)) + return ok +} + +// IsObjxMapSlice gets whether the object contained is a [](Map) or not. +func (v *Value) IsObjxMapSlice() bool { + _, ok := v.data.([](Map)) + return ok +} + +// EachObjxMap calls the specified callback for each object +// in the [](Map). +// +// Panics if the object is the wrong type. +func (v *Value) EachObjxMap(callback func(int, Map) bool) *Value { + for index, val := range v.MustObjxMapSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereObjxMap uses the specified decider function to select items +// from the [](Map). The object contained in the result will contain +// only the selected items. +func (v *Value) WhereObjxMap(decider func(int, Map) bool) *Value { + var selected [](Map) + v.EachObjxMap(func(index int, val Map) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupObjxMap uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][](Map). +func (v *Value) GroupObjxMap(grouper func(int, Map) string) *Value { + groups := make(map[string][](Map)) + v.EachObjxMap(func(index int, val Map) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([](Map), 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceObjxMap uses the specified function to replace each (Map)s +// by iterating each item. The data in the returned result will be a +// [](Map) containing the replaced items. 
+func (v *Value) ReplaceObjxMap(replacer func(int, Map) Map) *Value { + arr := v.MustObjxMapSlice() + replaced := make([](Map), len(arr)) + v.EachObjxMap(func(index int, val Map) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectObjxMap uses the specified collector function to collect a value +// for each of the (Map)s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectObjxMap(collector func(int, Map) interface{}) *Value { + arr := v.MustObjxMapSlice() + collected := make([]interface{}, len(arr)) + v.EachObjxMap(func(index int, val Map) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Bool (bool and []bool) +*/ + +// Bool gets the value as a bool, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Bool(optionalDefault ...bool) bool { + if s, ok := v.data.(bool); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return false +} + +// MustBool gets the value as a bool. +// +// Panics if the object is not a bool. +func (v *Value) MustBool() bool { + return v.data.(bool) +} + +// BoolSlice gets the value as a []bool, returns the optionalDefault +// value or nil if the value is not a []bool. +func (v *Value) BoolSlice(optionalDefault ...[]bool) []bool { + if s, ok := v.data.([]bool); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustBoolSlice gets the value as a []bool. +// +// Panics if the object is not a []bool. +func (v *Value) MustBoolSlice() []bool { + return v.data.([]bool) +} + +// IsBool gets whether the object contained is a bool or not. +func (v *Value) IsBool() bool { + _, ok := v.data.(bool) + return ok +} + +// IsBoolSlice gets whether the object contained is a []bool or not. +func (v *Value) IsBoolSlice() bool { + _, ok := v.data.([]bool) + return ok +} + +// EachBool calls the specified callback for each object +// in the []bool. +// +// Panics if the object is the wrong type. +func (v *Value) EachBool(callback func(int, bool) bool) *Value { + for index, val := range v.MustBoolSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereBool uses the specified decider function to select items +// from the []bool. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereBool(decider func(int, bool) bool) *Value { + var selected []bool + v.EachBool(func(index int, val bool) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupBool uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]bool. +func (v *Value) GroupBool(grouper func(int, bool) string) *Value { + groups := make(map[string][]bool) + v.EachBool(func(index int, val bool) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]bool, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceBool uses the specified function to replace each bools +// by iterating each item. The data in the returned result will be a +// []bool containing the replaced items. 
+func (v *Value) ReplaceBool(replacer func(int, bool) bool) *Value { + arr := v.MustBoolSlice() + replaced := make([]bool, len(arr)) + v.EachBool(func(index int, val bool) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectBool uses the specified collector function to collect a value +// for each of the bools in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectBool(collector func(int, bool) interface{}) *Value { + arr := v.MustBoolSlice() + collected := make([]interface{}, len(arr)) + v.EachBool(func(index int, val bool) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Str (string and []string) +*/ + +// Str gets the value as a string, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Str(optionalDefault ...string) string { + if s, ok := v.data.(string); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return "" +} + +// MustStr gets the value as a string. +// +// Panics if the object is not a string. +func (v *Value) MustStr() string { + return v.data.(string) +} + +// StrSlice gets the value as a []string, returns the optionalDefault +// value or nil if the value is not a []string. +func (v *Value) StrSlice(optionalDefault ...[]string) []string { + if s, ok := v.data.([]string); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustStrSlice gets the value as a []string. +// +// Panics if the object is not a []string. +func (v *Value) MustStrSlice() []string { + return v.data.([]string) +} + +// IsStr gets whether the object contained is a string or not. +func (v *Value) IsStr() bool { + _, ok := v.data.(string) + return ok +} + +// IsStrSlice gets whether the object contained is a []string or not. +func (v *Value) IsStrSlice() bool { + _, ok := v.data.([]string) + return ok +} + +// EachStr calls the specified callback for each object +// in the []string. +// +// Panics if the object is the wrong type. +func (v *Value) EachStr(callback func(int, string) bool) *Value { + for index, val := range v.MustStrSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereStr uses the specified decider function to select items +// from the []string. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereStr(decider func(int, string) bool) *Value { + var selected []string + v.EachStr(func(index int, val string) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupStr uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]string. +func (v *Value) GroupStr(grouper func(int, string) string) *Value { + groups := make(map[string][]string) + v.EachStr(func(index int, val string) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]string, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceStr uses the specified function to replace each strings +// by iterating each item. The data in the returned result will be a +// []string containing the replaced items. 
+func (v *Value) ReplaceStr(replacer func(int, string) string) *Value { + arr := v.MustStrSlice() + replaced := make([]string, len(arr)) + v.EachStr(func(index int, val string) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectStr uses the specified collector function to collect a value +// for each of the strings in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectStr(collector func(int, string) interface{}) *Value { + arr := v.MustStrSlice() + collected := make([]interface{}, len(arr)) + v.EachStr(func(index int, val string) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Int (int and []int) +*/ + +// Int gets the value as a int, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Int(optionalDefault ...int) int { + if s, ok := v.data.(int); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustInt gets the value as a int. +// +// Panics if the object is not a int. +func (v *Value) MustInt() int { + return v.data.(int) +} + +// IntSlice gets the value as a []int, returns the optionalDefault +// value or nil if the value is not a []int. +func (v *Value) IntSlice(optionalDefault ...[]int) []int { + if s, ok := v.data.([]int); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustIntSlice gets the value as a []int. +// +// Panics if the object is not a []int. +func (v *Value) MustIntSlice() []int { + return v.data.([]int) +} + +// IsInt gets whether the object contained is a int or not. +func (v *Value) IsInt() bool { + _, ok := v.data.(int) + return ok +} + +// IsIntSlice gets whether the object contained is a []int or not. +func (v *Value) IsIntSlice() bool { + _, ok := v.data.([]int) + return ok +} + +// EachInt calls the specified callback for each object +// in the []int. +// +// Panics if the object is the wrong type. +func (v *Value) EachInt(callback func(int, int) bool) *Value { + for index, val := range v.MustIntSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInt uses the specified decider function to select items +// from the []int. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInt(decider func(int, int) bool) *Value { + var selected []int + v.EachInt(func(index int, val int) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInt uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]int. +func (v *Value) GroupInt(grouper func(int, int) string) *Value { + groups := make(map[string][]int) + v.EachInt(func(index int, val int) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]int, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInt uses the specified function to replace each ints +// by iterating each item. The data in the returned result will be a +// []int containing the replaced items. 
+func (v *Value) ReplaceInt(replacer func(int, int) int) *Value { + arr := v.MustIntSlice() + replaced := make([]int, len(arr)) + v.EachInt(func(index int, val int) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInt uses the specified collector function to collect a value +// for each of the ints in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInt(collector func(int, int) interface{}) *Value { + arr := v.MustIntSlice() + collected := make([]interface{}, len(arr)) + v.EachInt(func(index int, val int) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Int8 (int8 and []int8) +*/ + +// Int8 gets the value as a int8, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Int8(optionalDefault ...int8) int8 { + if s, ok := v.data.(int8); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustInt8 gets the value as a int8. +// +// Panics if the object is not a int8. +func (v *Value) MustInt8() int8 { + return v.data.(int8) +} + +// Int8Slice gets the value as a []int8, returns the optionalDefault +// value or nil if the value is not a []int8. +func (v *Value) Int8Slice(optionalDefault ...[]int8) []int8 { + if s, ok := v.data.([]int8); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInt8Slice gets the value as a []int8. +// +// Panics if the object is not a []int8. +func (v *Value) MustInt8Slice() []int8 { + return v.data.([]int8) +} + +// IsInt8 gets whether the object contained is a int8 or not. +func (v *Value) IsInt8() bool { + _, ok := v.data.(int8) + return ok +} + +// IsInt8Slice gets whether the object contained is a []int8 or not. +func (v *Value) IsInt8Slice() bool { + _, ok := v.data.([]int8) + return ok +} + +// EachInt8 calls the specified callback for each object +// in the []int8. +// +// Panics if the object is the wrong type. +func (v *Value) EachInt8(callback func(int, int8) bool) *Value { + for index, val := range v.MustInt8Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInt8 uses the specified decider function to select items +// from the []int8. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInt8(decider func(int, int8) bool) *Value { + var selected []int8 + v.EachInt8(func(index int, val int8) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInt8 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]int8. +func (v *Value) GroupInt8(grouper func(int, int8) string) *Value { + groups := make(map[string][]int8) + v.EachInt8(func(index int, val int8) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]int8, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInt8 uses the specified function to replace each int8s +// by iterating each item. The data in the returned result will be a +// []int8 containing the replaced items. 
+func (v *Value) ReplaceInt8(replacer func(int, int8) int8) *Value { + arr := v.MustInt8Slice() + replaced := make([]int8, len(arr)) + v.EachInt8(func(index int, val int8) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInt8 uses the specified collector function to collect a value +// for each of the int8s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInt8(collector func(int, int8) interface{}) *Value { + arr := v.MustInt8Slice() + collected := make([]interface{}, len(arr)) + v.EachInt8(func(index int, val int8) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Int16 (int16 and []int16) +*/ + +// Int16 gets the value as a int16, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Int16(optionalDefault ...int16) int16 { + if s, ok := v.data.(int16); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustInt16 gets the value as a int16. +// +// Panics if the object is not a int16. +func (v *Value) MustInt16() int16 { + return v.data.(int16) +} + +// Int16Slice gets the value as a []int16, returns the optionalDefault +// value or nil if the value is not a []int16. +func (v *Value) Int16Slice(optionalDefault ...[]int16) []int16 { + if s, ok := v.data.([]int16); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInt16Slice gets the value as a []int16. +// +// Panics if the object is not a []int16. +func (v *Value) MustInt16Slice() []int16 { + return v.data.([]int16) +} + +// IsInt16 gets whether the object contained is a int16 or not. +func (v *Value) IsInt16() bool { + _, ok := v.data.(int16) + return ok +} + +// IsInt16Slice gets whether the object contained is a []int16 or not. +func (v *Value) IsInt16Slice() bool { + _, ok := v.data.([]int16) + return ok +} + +// EachInt16 calls the specified callback for each object +// in the []int16. +// +// Panics if the object is the wrong type. +func (v *Value) EachInt16(callback func(int, int16) bool) *Value { + for index, val := range v.MustInt16Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInt16 uses the specified decider function to select items +// from the []int16. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInt16(decider func(int, int16) bool) *Value { + var selected []int16 + v.EachInt16(func(index int, val int16) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInt16 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]int16. +func (v *Value) GroupInt16(grouper func(int, int16) string) *Value { + groups := make(map[string][]int16) + v.EachInt16(func(index int, val int16) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]int16, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInt16 uses the specified function to replace each int16s +// by iterating each item. The data in the returned result will be a +// []int16 containing the replaced items. 
+func (v *Value) ReplaceInt16(replacer func(int, int16) int16) *Value { + arr := v.MustInt16Slice() + replaced := make([]int16, len(arr)) + v.EachInt16(func(index int, val int16) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInt16 uses the specified collector function to collect a value +// for each of the int16s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInt16(collector func(int, int16) interface{}) *Value { + arr := v.MustInt16Slice() + collected := make([]interface{}, len(arr)) + v.EachInt16(func(index int, val int16) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Int32 (int32 and []int32) +*/ + +// Int32 gets the value as a int32, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Int32(optionalDefault ...int32) int32 { + if s, ok := v.data.(int32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustInt32 gets the value as a int32. +// +// Panics if the object is not a int32. +func (v *Value) MustInt32() int32 { + return v.data.(int32) +} + +// Int32Slice gets the value as a []int32, returns the optionalDefault +// value or nil if the value is not a []int32. +func (v *Value) Int32Slice(optionalDefault ...[]int32) []int32 { + if s, ok := v.data.([]int32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInt32Slice gets the value as a []int32. +// +// Panics if the object is not a []int32. +func (v *Value) MustInt32Slice() []int32 { + return v.data.([]int32) +} + +// IsInt32 gets whether the object contained is a int32 or not. +func (v *Value) IsInt32() bool { + _, ok := v.data.(int32) + return ok +} + +// IsInt32Slice gets whether the object contained is a []int32 or not. +func (v *Value) IsInt32Slice() bool { + _, ok := v.data.([]int32) + return ok +} + +// EachInt32 calls the specified callback for each object +// in the []int32. +// +// Panics if the object is the wrong type. +func (v *Value) EachInt32(callback func(int, int32) bool) *Value { + for index, val := range v.MustInt32Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInt32 uses the specified decider function to select items +// from the []int32. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInt32(decider func(int, int32) bool) *Value { + var selected []int32 + v.EachInt32(func(index int, val int32) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInt32 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]int32. +func (v *Value) GroupInt32(grouper func(int, int32) string) *Value { + groups := make(map[string][]int32) + v.EachInt32(func(index int, val int32) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]int32, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInt32 uses the specified function to replace each int32s +// by iterating each item. The data in the returned result will be a +// []int32 containing the replaced items. 
+func (v *Value) ReplaceInt32(replacer func(int, int32) int32) *Value { + arr := v.MustInt32Slice() + replaced := make([]int32, len(arr)) + v.EachInt32(func(index int, val int32) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInt32 uses the specified collector function to collect a value +// for each of the int32s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInt32(collector func(int, int32) interface{}) *Value { + arr := v.MustInt32Slice() + collected := make([]interface{}, len(arr)) + v.EachInt32(func(index int, val int32) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Int64 (int64 and []int64) +*/ + +// Int64 gets the value as a int64, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Int64(optionalDefault ...int64) int64 { + if s, ok := v.data.(int64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustInt64 gets the value as a int64. +// +// Panics if the object is not a int64. +func (v *Value) MustInt64() int64 { + return v.data.(int64) +} + +// Int64Slice gets the value as a []int64, returns the optionalDefault +// value or nil if the value is not a []int64. +func (v *Value) Int64Slice(optionalDefault ...[]int64) []int64 { + if s, ok := v.data.([]int64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustInt64Slice gets the value as a []int64. +// +// Panics if the object is not a []int64. +func (v *Value) MustInt64Slice() []int64 { + return v.data.([]int64) +} + +// IsInt64 gets whether the object contained is a int64 or not. +func (v *Value) IsInt64() bool { + _, ok := v.data.(int64) + return ok +} + +// IsInt64Slice gets whether the object contained is a []int64 or not. +func (v *Value) IsInt64Slice() bool { + _, ok := v.data.([]int64) + return ok +} + +// EachInt64 calls the specified callback for each object +// in the []int64. +// +// Panics if the object is the wrong type. +func (v *Value) EachInt64(callback func(int, int64) bool) *Value { + for index, val := range v.MustInt64Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereInt64 uses the specified decider function to select items +// from the []int64. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereInt64(decider func(int, int64) bool) *Value { + var selected []int64 + v.EachInt64(func(index int, val int64) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupInt64 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]int64. +func (v *Value) GroupInt64(grouper func(int, int64) string) *Value { + groups := make(map[string][]int64) + v.EachInt64(func(index int, val int64) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]int64, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceInt64 uses the specified function to replace each int64s +// by iterating each item. The data in the returned result will be a +// []int64 containing the replaced items. 
+func (v *Value) ReplaceInt64(replacer func(int, int64) int64) *Value { + arr := v.MustInt64Slice() + replaced := make([]int64, len(arr)) + v.EachInt64(func(index int, val int64) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectInt64 uses the specified collector function to collect a value +// for each of the int64s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectInt64(collector func(int, int64) interface{}) *Value { + arr := v.MustInt64Slice() + collected := make([]interface{}, len(arr)) + v.EachInt64(func(index int, val int64) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uint (uint and []uint) +*/ + +// Uint gets the value as a uint, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uint(optionalDefault ...uint) uint { + if s, ok := v.data.(uint); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUint gets the value as a uint. +// +// Panics if the object is not a uint. +func (v *Value) MustUint() uint { + return v.data.(uint) +} + +// UintSlice gets the value as a []uint, returns the optionalDefault +// value or nil if the value is not a []uint. +func (v *Value) UintSlice(optionalDefault ...[]uint) []uint { + if s, ok := v.data.([]uint); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUintSlice gets the value as a []uint. +// +// Panics if the object is not a []uint. +func (v *Value) MustUintSlice() []uint { + return v.data.([]uint) +} + +// IsUint gets whether the object contained is a uint or not. +func (v *Value) IsUint() bool { + _, ok := v.data.(uint) + return ok +} + +// IsUintSlice gets whether the object contained is a []uint or not. +func (v *Value) IsUintSlice() bool { + _, ok := v.data.([]uint) + return ok +} + +// EachUint calls the specified callback for each object +// in the []uint. +// +// Panics if the object is the wrong type. +func (v *Value) EachUint(callback func(int, uint) bool) *Value { + for index, val := range v.MustUintSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUint uses the specified decider function to select items +// from the []uint. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereUint(decider func(int, uint) bool) *Value { + var selected []uint + v.EachUint(func(index int, val uint) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUint uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]uint. +func (v *Value) GroupUint(grouper func(int, uint) string) *Value { + groups := make(map[string][]uint) + v.EachUint(func(index int, val uint) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uint, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUint uses the specified function to replace each uints +// by iterating each item. The data in the returned result will be a +// []uint containing the replaced items. 
+func (v *Value) ReplaceUint(replacer func(int, uint) uint) *Value { + arr := v.MustUintSlice() + replaced := make([]uint, len(arr)) + v.EachUint(func(index int, val uint) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUint uses the specified collector function to collect a value +// for each of the uints in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUint(collector func(int, uint) interface{}) *Value { + arr := v.MustUintSlice() + collected := make([]interface{}, len(arr)) + v.EachUint(func(index int, val uint) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uint8 (uint8 and []uint8) +*/ + +// Uint8 gets the value as a uint8, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uint8(optionalDefault ...uint8) uint8 { + if s, ok := v.data.(uint8); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUint8 gets the value as a uint8. +// +// Panics if the object is not a uint8. +func (v *Value) MustUint8() uint8 { + return v.data.(uint8) +} + +// Uint8Slice gets the value as a []uint8, returns the optionalDefault +// value or nil if the value is not a []uint8. +func (v *Value) Uint8Slice(optionalDefault ...[]uint8) []uint8 { + if s, ok := v.data.([]uint8); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUint8Slice gets the value as a []uint8. +// +// Panics if the object is not a []uint8. +func (v *Value) MustUint8Slice() []uint8 { + return v.data.([]uint8) +} + +// IsUint8 gets whether the object contained is a uint8 or not. +func (v *Value) IsUint8() bool { + _, ok := v.data.(uint8) + return ok +} + +// IsUint8Slice gets whether the object contained is a []uint8 or not. +func (v *Value) IsUint8Slice() bool { + _, ok := v.data.([]uint8) + return ok +} + +// EachUint8 calls the specified callback for each object +// in the []uint8. +// +// Panics if the object is the wrong type. +func (v *Value) EachUint8(callback func(int, uint8) bool) *Value { + for index, val := range v.MustUint8Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUint8 uses the specified decider function to select items +// from the []uint8. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereUint8(decider func(int, uint8) bool) *Value { + var selected []uint8 + v.EachUint8(func(index int, val uint8) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUint8 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]uint8. +func (v *Value) GroupUint8(grouper func(int, uint8) string) *Value { + groups := make(map[string][]uint8) + v.EachUint8(func(index int, val uint8) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uint8, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUint8 uses the specified function to replace each uint8s +// by iterating each item. The data in the returned result will be a +// []uint8 containing the replaced items. 
+func (v *Value) ReplaceUint8(replacer func(int, uint8) uint8) *Value { + arr := v.MustUint8Slice() + replaced := make([]uint8, len(arr)) + v.EachUint8(func(index int, val uint8) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUint8 uses the specified collector function to collect a value +// for each of the uint8s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUint8(collector func(int, uint8) interface{}) *Value { + arr := v.MustUint8Slice() + collected := make([]interface{}, len(arr)) + v.EachUint8(func(index int, val uint8) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uint16 (uint16 and []uint16) +*/ + +// Uint16 gets the value as a uint16, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uint16(optionalDefault ...uint16) uint16 { + if s, ok := v.data.(uint16); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUint16 gets the value as a uint16. +// +// Panics if the object is not a uint16. +func (v *Value) MustUint16() uint16 { + return v.data.(uint16) +} + +// Uint16Slice gets the value as a []uint16, returns the optionalDefault +// value or nil if the value is not a []uint16. +func (v *Value) Uint16Slice(optionalDefault ...[]uint16) []uint16 { + if s, ok := v.data.([]uint16); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUint16Slice gets the value as a []uint16. +// +// Panics if the object is not a []uint16. +func (v *Value) MustUint16Slice() []uint16 { + return v.data.([]uint16) +} + +// IsUint16 gets whether the object contained is a uint16 or not. +func (v *Value) IsUint16() bool { + _, ok := v.data.(uint16) + return ok +} + +// IsUint16Slice gets whether the object contained is a []uint16 or not. +func (v *Value) IsUint16Slice() bool { + _, ok := v.data.([]uint16) + return ok +} + +// EachUint16 calls the specified callback for each object +// in the []uint16. +// +// Panics if the object is the wrong type. +func (v *Value) EachUint16(callback func(int, uint16) bool) *Value { + for index, val := range v.MustUint16Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUint16 uses the specified decider function to select items +// from the []uint16. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereUint16(decider func(int, uint16) bool) *Value { + var selected []uint16 + v.EachUint16(func(index int, val uint16) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUint16 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]uint16. +func (v *Value) GroupUint16(grouper func(int, uint16) string) *Value { + groups := make(map[string][]uint16) + v.EachUint16(func(index int, val uint16) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uint16, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUint16 uses the specified function to replace each uint16s +// by iterating each item. 
The data in the returned result will be a +// []uint16 containing the replaced items. +func (v *Value) ReplaceUint16(replacer func(int, uint16) uint16) *Value { + arr := v.MustUint16Slice() + replaced := make([]uint16, len(arr)) + v.EachUint16(func(index int, val uint16) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUint16 uses the specified collector function to collect a value +// for each of the uint16s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUint16(collector func(int, uint16) interface{}) *Value { + arr := v.MustUint16Slice() + collected := make([]interface{}, len(arr)) + v.EachUint16(func(index int, val uint16) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uint32 (uint32 and []uint32) +*/ + +// Uint32 gets the value as a uint32, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uint32(optionalDefault ...uint32) uint32 { + if s, ok := v.data.(uint32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUint32 gets the value as a uint32. +// +// Panics if the object is not a uint32. +func (v *Value) MustUint32() uint32 { + return v.data.(uint32) +} + +// Uint32Slice gets the value as a []uint32, returns the optionalDefault +// value or nil if the value is not a []uint32. +func (v *Value) Uint32Slice(optionalDefault ...[]uint32) []uint32 { + if s, ok := v.data.([]uint32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUint32Slice gets the value as a []uint32. +// +// Panics if the object is not a []uint32. +func (v *Value) MustUint32Slice() []uint32 { + return v.data.([]uint32) +} + +// IsUint32 gets whether the object contained is a uint32 or not. +func (v *Value) IsUint32() bool { + _, ok := v.data.(uint32) + return ok +} + +// IsUint32Slice gets whether the object contained is a []uint32 or not. +func (v *Value) IsUint32Slice() bool { + _, ok := v.data.([]uint32) + return ok +} + +// EachUint32 calls the specified callback for each object +// in the []uint32. +// +// Panics if the object is the wrong type. +func (v *Value) EachUint32(callback func(int, uint32) bool) *Value { + for index, val := range v.MustUint32Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUint32 uses the specified decider function to select items +// from the []uint32. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereUint32(decider func(int, uint32) bool) *Value { + var selected []uint32 + v.EachUint32(func(index int, val uint32) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUint32 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]uint32. 
+func (v *Value) GroupUint32(grouper func(int, uint32) string) *Value { + groups := make(map[string][]uint32) + v.EachUint32(func(index int, val uint32) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uint32, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUint32 uses the specified function to replace each uint32s +// by iterating each item. The data in the returned result will be a +// []uint32 containing the replaced items. +func (v *Value) ReplaceUint32(replacer func(int, uint32) uint32) *Value { + arr := v.MustUint32Slice() + replaced := make([]uint32, len(arr)) + v.EachUint32(func(index int, val uint32) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUint32 uses the specified collector function to collect a value +// for each of the uint32s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUint32(collector func(int, uint32) interface{}) *Value { + arr := v.MustUint32Slice() + collected := make([]interface{}, len(arr)) + v.EachUint32(func(index int, val uint32) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uint64 (uint64 and []uint64) +*/ + +// Uint64 gets the value as a uint64, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uint64(optionalDefault ...uint64) uint64 { + if s, ok := v.data.(uint64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUint64 gets the value as a uint64. +// +// Panics if the object is not a uint64. +func (v *Value) MustUint64() uint64 { + return v.data.(uint64) +} + +// Uint64Slice gets the value as a []uint64, returns the optionalDefault +// value or nil if the value is not a []uint64. +func (v *Value) Uint64Slice(optionalDefault ...[]uint64) []uint64 { + if s, ok := v.data.([]uint64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUint64Slice gets the value as a []uint64. +// +// Panics if the object is not a []uint64. +func (v *Value) MustUint64Slice() []uint64 { + return v.data.([]uint64) +} + +// IsUint64 gets whether the object contained is a uint64 or not. +func (v *Value) IsUint64() bool { + _, ok := v.data.(uint64) + return ok +} + +// IsUint64Slice gets whether the object contained is a []uint64 or not. +func (v *Value) IsUint64Slice() bool { + _, ok := v.data.([]uint64) + return ok +} + +// EachUint64 calls the specified callback for each object +// in the []uint64. +// +// Panics if the object is the wrong type. +func (v *Value) EachUint64(callback func(int, uint64) bool) *Value { + for index, val := range v.MustUint64Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUint64 uses the specified decider function to select items +// from the []uint64. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereUint64(decider func(int, uint64) bool) *Value { + var selected []uint64 + v.EachUint64(func(index int, val uint64) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUint64 uses the specified grouper function to group the items +// keyed by the return of the grouper. 
The object contained in the +// result will contain a map[string][]uint64. +func (v *Value) GroupUint64(grouper func(int, uint64) string) *Value { + groups := make(map[string][]uint64) + v.EachUint64(func(index int, val uint64) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uint64, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUint64 uses the specified function to replace each uint64s +// by iterating each item. The data in the returned result will be a +// []uint64 containing the replaced items. +func (v *Value) ReplaceUint64(replacer func(int, uint64) uint64) *Value { + arr := v.MustUint64Slice() + replaced := make([]uint64, len(arr)) + v.EachUint64(func(index int, val uint64) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUint64 uses the specified collector function to collect a value +// for each of the uint64s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUint64(collector func(int, uint64) interface{}) *Value { + arr := v.MustUint64Slice() + collected := make([]interface{}, len(arr)) + v.EachUint64(func(index int, val uint64) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Uintptr (uintptr and []uintptr) +*/ + +// Uintptr gets the value as a uintptr, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Uintptr(optionalDefault ...uintptr) uintptr { + if s, ok := v.data.(uintptr); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustUintptr gets the value as a uintptr. +// +// Panics if the object is not a uintptr. +func (v *Value) MustUintptr() uintptr { + return v.data.(uintptr) +} + +// UintptrSlice gets the value as a []uintptr, returns the optionalDefault +// value or nil if the value is not a []uintptr. +func (v *Value) UintptrSlice(optionalDefault ...[]uintptr) []uintptr { + if s, ok := v.data.([]uintptr); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustUintptrSlice gets the value as a []uintptr. +// +// Panics if the object is not a []uintptr. +func (v *Value) MustUintptrSlice() []uintptr { + return v.data.([]uintptr) +} + +// IsUintptr gets whether the object contained is a uintptr or not. +func (v *Value) IsUintptr() bool { + _, ok := v.data.(uintptr) + return ok +} + +// IsUintptrSlice gets whether the object contained is a []uintptr or not. +func (v *Value) IsUintptrSlice() bool { + _, ok := v.data.([]uintptr) + return ok +} + +// EachUintptr calls the specified callback for each object +// in the []uintptr. +// +// Panics if the object is the wrong type. +func (v *Value) EachUintptr(callback func(int, uintptr) bool) *Value { + for index, val := range v.MustUintptrSlice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereUintptr uses the specified decider function to select items +// from the []uintptr. The object contained in the result will contain +// only the selected items. 
+func (v *Value) WhereUintptr(decider func(int, uintptr) bool) *Value { + var selected []uintptr + v.EachUintptr(func(index int, val uintptr) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupUintptr uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]uintptr. +func (v *Value) GroupUintptr(grouper func(int, uintptr) string) *Value { + groups := make(map[string][]uintptr) + v.EachUintptr(func(index int, val uintptr) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]uintptr, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceUintptr uses the specified function to replace each uintptrs +// by iterating each item. The data in the returned result will be a +// []uintptr containing the replaced items. +func (v *Value) ReplaceUintptr(replacer func(int, uintptr) uintptr) *Value { + arr := v.MustUintptrSlice() + replaced := make([]uintptr, len(arr)) + v.EachUintptr(func(index int, val uintptr) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectUintptr uses the specified collector function to collect a value +// for each of the uintptrs in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectUintptr(collector func(int, uintptr) interface{}) *Value { + arr := v.MustUintptrSlice() + collected := make([]interface{}, len(arr)) + v.EachUintptr(func(index int, val uintptr) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Float32 (float32 and []float32) +*/ + +// Float32 gets the value as a float32, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Float32(optionalDefault ...float32) float32 { + if s, ok := v.data.(float32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustFloat32 gets the value as a float32. +// +// Panics if the object is not a float32. +func (v *Value) MustFloat32() float32 { + return v.data.(float32) +} + +// Float32Slice gets the value as a []float32, returns the optionalDefault +// value or nil if the value is not a []float32. +func (v *Value) Float32Slice(optionalDefault ...[]float32) []float32 { + if s, ok := v.data.([]float32); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustFloat32Slice gets the value as a []float32. +// +// Panics if the object is not a []float32. +func (v *Value) MustFloat32Slice() []float32 { + return v.data.([]float32) +} + +// IsFloat32 gets whether the object contained is a float32 or not. +func (v *Value) IsFloat32() bool { + _, ok := v.data.(float32) + return ok +} + +// IsFloat32Slice gets whether the object contained is a []float32 or not. +func (v *Value) IsFloat32Slice() bool { + _, ok := v.data.([]float32) + return ok +} + +// EachFloat32 calls the specified callback for each object +// in the []float32. +// +// Panics if the object is the wrong type. 
+func (v *Value) EachFloat32(callback func(int, float32) bool) *Value { + for index, val := range v.MustFloat32Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereFloat32 uses the specified decider function to select items +// from the []float32. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereFloat32(decider func(int, float32) bool) *Value { + var selected []float32 + v.EachFloat32(func(index int, val float32) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupFloat32 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]float32. +func (v *Value) GroupFloat32(grouper func(int, float32) string) *Value { + groups := make(map[string][]float32) + v.EachFloat32(func(index int, val float32) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]float32, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceFloat32 uses the specified function to replace each float32s +// by iterating each item. The data in the returned result will be a +// []float32 containing the replaced items. +func (v *Value) ReplaceFloat32(replacer func(int, float32) float32) *Value { + arr := v.MustFloat32Slice() + replaced := make([]float32, len(arr)) + v.EachFloat32(func(index int, val float32) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectFloat32 uses the specified collector function to collect a value +// for each of the float32s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectFloat32(collector func(int, float32) interface{}) *Value { + arr := v.MustFloat32Slice() + collected := make([]interface{}, len(arr)) + v.EachFloat32(func(index int, val float32) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Float64 (float64 and []float64) +*/ + +// Float64 gets the value as a float64, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Float64(optionalDefault ...float64) float64 { + if s, ok := v.data.(float64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustFloat64 gets the value as a float64. +// +// Panics if the object is not a float64. +func (v *Value) MustFloat64() float64 { + return v.data.(float64) +} + +// Float64Slice gets the value as a []float64, returns the optionalDefault +// value or nil if the value is not a []float64. +func (v *Value) Float64Slice(optionalDefault ...[]float64) []float64 { + if s, ok := v.data.([]float64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustFloat64Slice gets the value as a []float64. +// +// Panics if the object is not a []float64. +func (v *Value) MustFloat64Slice() []float64 { + return v.data.([]float64) +} + +// IsFloat64 gets whether the object contained is a float64 or not. +func (v *Value) IsFloat64() bool { + _, ok := v.data.(float64) + return ok +} + +// IsFloat64Slice gets whether the object contained is a []float64 or not. 
+func (v *Value) IsFloat64Slice() bool { + _, ok := v.data.([]float64) + return ok +} + +// EachFloat64 calls the specified callback for each object +// in the []float64. +// +// Panics if the object is the wrong type. +func (v *Value) EachFloat64(callback func(int, float64) bool) *Value { + for index, val := range v.MustFloat64Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereFloat64 uses the specified decider function to select items +// from the []float64. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereFloat64(decider func(int, float64) bool) *Value { + var selected []float64 + v.EachFloat64(func(index int, val float64) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupFloat64 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]float64. +func (v *Value) GroupFloat64(grouper func(int, float64) string) *Value { + groups := make(map[string][]float64) + v.EachFloat64(func(index int, val float64) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]float64, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceFloat64 uses the specified function to replace each float64s +// by iterating each item. The data in the returned result will be a +// []float64 containing the replaced items. +func (v *Value) ReplaceFloat64(replacer func(int, float64) float64) *Value { + arr := v.MustFloat64Slice() + replaced := make([]float64, len(arr)) + v.EachFloat64(func(index int, val float64) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectFloat64 uses the specified collector function to collect a value +// for each of the float64s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectFloat64(collector func(int, float64) interface{}) *Value { + arr := v.MustFloat64Slice() + collected := make([]interface{}, len(arr)) + v.EachFloat64(func(index int, val float64) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Complex64 (complex64 and []complex64) +*/ + +// Complex64 gets the value as a complex64, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Complex64(optionalDefault ...complex64) complex64 { + if s, ok := v.data.(complex64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustComplex64 gets the value as a complex64. +// +// Panics if the object is not a complex64. +func (v *Value) MustComplex64() complex64 { + return v.data.(complex64) +} + +// Complex64Slice gets the value as a []complex64, returns the optionalDefault +// value or nil if the value is not a []complex64. +func (v *Value) Complex64Slice(optionalDefault ...[]complex64) []complex64 { + if s, ok := v.data.([]complex64); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustComplex64Slice gets the value as a []complex64. +// +// Panics if the object is not a []complex64. 
+func (v *Value) MustComplex64Slice() []complex64 { + return v.data.([]complex64) +} + +// IsComplex64 gets whether the object contained is a complex64 or not. +func (v *Value) IsComplex64() bool { + _, ok := v.data.(complex64) + return ok +} + +// IsComplex64Slice gets whether the object contained is a []complex64 or not. +func (v *Value) IsComplex64Slice() bool { + _, ok := v.data.([]complex64) + return ok +} + +// EachComplex64 calls the specified callback for each object +// in the []complex64. +// +// Panics if the object is the wrong type. +func (v *Value) EachComplex64(callback func(int, complex64) bool) *Value { + for index, val := range v.MustComplex64Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereComplex64 uses the specified decider function to select items +// from the []complex64. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereComplex64(decider func(int, complex64) bool) *Value { + var selected []complex64 + v.EachComplex64(func(index int, val complex64) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupComplex64 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]complex64. +func (v *Value) GroupComplex64(grouper func(int, complex64) string) *Value { + groups := make(map[string][]complex64) + v.EachComplex64(func(index int, val complex64) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]complex64, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceComplex64 uses the specified function to replace each complex64s +// by iterating each item. The data in the returned result will be a +// []complex64 containing the replaced items. +func (v *Value) ReplaceComplex64(replacer func(int, complex64) complex64) *Value { + arr := v.MustComplex64Slice() + replaced := make([]complex64, len(arr)) + v.EachComplex64(func(index int, val complex64) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectComplex64 uses the specified collector function to collect a value +// for each of the complex64s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectComplex64(collector func(int, complex64) interface{}) *Value { + arr := v.MustComplex64Slice() + collected := make([]interface{}, len(arr)) + v.EachComplex64(func(index int, val complex64) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} + +/* + Complex128 (complex128 and []complex128) +*/ + +// Complex128 gets the value as a complex128, returns the optionalDefault +// value or a system default object if the value is the wrong type. +func (v *Value) Complex128(optionalDefault ...complex128) complex128 { + if s, ok := v.data.(complex128); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return 0 +} + +// MustComplex128 gets the value as a complex128. +// +// Panics if the object is not a complex128. +func (v *Value) MustComplex128() complex128 { + return v.data.(complex128) +} + +// Complex128Slice gets the value as a []complex128, returns the optionalDefault +// value or nil if the value is not a []complex128. 
+func (v *Value) Complex128Slice(optionalDefault ...[]complex128) []complex128 { + if s, ok := v.data.([]complex128); ok { + return s + } + if len(optionalDefault) == 1 { + return optionalDefault[0] + } + return nil +} + +// MustComplex128Slice gets the value as a []complex128. +// +// Panics if the object is not a []complex128. +func (v *Value) MustComplex128Slice() []complex128 { + return v.data.([]complex128) +} + +// IsComplex128 gets whether the object contained is a complex128 or not. +func (v *Value) IsComplex128() bool { + _, ok := v.data.(complex128) + return ok +} + +// IsComplex128Slice gets whether the object contained is a []complex128 or not. +func (v *Value) IsComplex128Slice() bool { + _, ok := v.data.([]complex128) + return ok +} + +// EachComplex128 calls the specified callback for each object +// in the []complex128. +// +// Panics if the object is the wrong type. +func (v *Value) EachComplex128(callback func(int, complex128) bool) *Value { + for index, val := range v.MustComplex128Slice() { + carryon := callback(index, val) + if !carryon { + break + } + } + return v +} + +// WhereComplex128 uses the specified decider function to select items +// from the []complex128. The object contained in the result will contain +// only the selected items. +func (v *Value) WhereComplex128(decider func(int, complex128) bool) *Value { + var selected []complex128 + v.EachComplex128(func(index int, val complex128) bool { + shouldSelect := decider(index, val) + if !shouldSelect { + selected = append(selected, val) + } + return true + }) + return &Value{data: selected} +} + +// GroupComplex128 uses the specified grouper function to group the items +// keyed by the return of the grouper. The object contained in the +// result will contain a map[string][]complex128. +func (v *Value) GroupComplex128(grouper func(int, complex128) string) *Value { + groups := make(map[string][]complex128) + v.EachComplex128(func(index int, val complex128) bool { + group := grouper(index, val) + if _, ok := groups[group]; !ok { + groups[group] = make([]complex128, 0) + } + groups[group] = append(groups[group], val) + return true + }) + return &Value{data: groups} +} + +// ReplaceComplex128 uses the specified function to replace each complex128s +// by iterating each item. The data in the returned result will be a +// []complex128 containing the replaced items. +func (v *Value) ReplaceComplex128(replacer func(int, complex128) complex128) *Value { + arr := v.MustComplex128Slice() + replaced := make([]complex128, len(arr)) + v.EachComplex128(func(index int, val complex128) bool { + replaced[index] = replacer(index, val) + return true + }) + return &Value{data: replaced} +} + +// CollectComplex128 uses the specified collector function to collect a value +// for each of the complex128s in the slice. The data returned will be a +// []interface{}. +func (v *Value) CollectComplex128(collector func(int, complex128) interface{}) *Value { + arr := v.MustComplex128Slice() + collected := make([]interface{}, len(arr)) + v.EachComplex128(func(index int, val complex128) bool { + collected[index] = collector(index, val) + return true + }) + return &Value{data: collected} +} diff --git a/vendor/github.com/stretchr/objx/value.go b/vendor/github.com/stretchr/objx/value.go new file mode 100644 index 000000000..e4b4a1433 --- /dev/null +++ b/vendor/github.com/stretchr/objx/value.go @@ -0,0 +1,53 @@ +package objx + +import ( + "fmt" + "strconv" +) + +// Value provides methods for extracting interface{} data in various +// types. 
+type Value struct { + // data contains the raw data being managed by this Value + data interface{} +} + +// Data returns the raw data contained by this Value +func (v *Value) Data() interface{} { + return v.data +} + +// String returns the value always as a string +func (v *Value) String() string { + switch { + case v.IsStr(): + return v.Str() + case v.IsBool(): + return strconv.FormatBool(v.Bool()) + case v.IsFloat32(): + return strconv.FormatFloat(float64(v.Float32()), 'f', -1, 32) + case v.IsFloat64(): + return strconv.FormatFloat(v.Float64(), 'f', -1, 64) + case v.IsInt(): + return strconv.FormatInt(int64(v.Int()), 10) + case v.IsInt8(): + return strconv.FormatInt(int64(v.Int8()), 10) + case v.IsInt16(): + return strconv.FormatInt(int64(v.Int16()), 10) + case v.IsInt32(): + return strconv.FormatInt(int64(v.Int32()), 10) + case v.IsInt64(): + return strconv.FormatInt(v.Int64(), 10) + case v.IsUint(): + return strconv.FormatUint(uint64(v.Uint()), 10) + case v.IsUint8(): + return strconv.FormatUint(uint64(v.Uint8()), 10) + case v.IsUint16(): + return strconv.FormatUint(uint64(v.Uint16()), 10) + case v.IsUint32(): + return strconv.FormatUint(uint64(v.Uint32()), 10) + case v.IsUint64(): + return strconv.FormatUint(v.Uint64(), 10) + } + return fmt.Sprintf("%#v", v.Data()) +} diff --git a/vendor/vendor.json b/vendor/vendor.json index 897d91b55..d1cf4206c 100644 --- a/vendor/vendor.json +++ b/vendor/vendor.json @@ -115,6 +115,7 @@ {"path":"github.com/shirou/gopsutil/net","checksumSHA1":"OSvOZs5uK5iolCOeS46nB2InVy8=","revision":"32b6636de04b303274daac3ca2b10d3b0e4afc35","revisionTime":"2017-02-04T05:36:48Z"}, {"path":"github.com/shirou/gopsutil/process","checksumSHA1":"JX0bRK/BdKVfbm4XOxMducVdY58=","revision":"32b6636de04b303274daac3ca2b10d3b0e4afc35","revisionTime":"2017-02-04T05:36:48Z"}, {"path":"github.com/shirou/w32","checksumSHA1":"Nve7SpDmjsv6+rhkXAkfg/UQx94=","revision":"bb4de0191aa41b5507caa14b0650cdbddcd9280b","revisionTime":"2016-09-30T03:27:40Z"}, + {"path":"github.com/stretchr/objx","checksumSHA1":"n+vQ7Bmp+ODWGmCp8cI5MFsaZVA=","revision":"a5cfa15c000af5f09784e5355969ba7eb66ef0de","revisionTime":"2018-04-26T10:50:06Z"}, {"path":"github.com/stretchr/testify/assert","checksumSHA1":"6LwXZI7kXm1C0h4Ui0Y52p9uQhk=","revision":"c679ae2cc0cb27ec3293fea7e254e47386f05d69","revisionTime":"2018-03-14T08:05:35Z"}, {"path":"github.com/stretchr/testify/mock","checksumSHA1":"Qloi2PTvZv+D9FDHXM/banCoaFY=","revision":"c679ae2cc0cb27ec3293fea7e254e47386f05d69","revisionTime":"2018-03-14T08:05:35Z"}, {"path":"github.com/stretchr/testify/require","checksumSHA1":"KqYmXUcuGwsvBL6XVsQnXsFb3LI=","revision":"c679ae2cc0cb27ec3293fea7e254e47386f05d69","revisionTime":"2018-03-14T08:05:35Z"}, From f69c8b85efb01261edebee01e033befd14ff37d9 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 5 May 2018 11:10:24 -0700 Subject: [PATCH 233/539] agent/config: add managed proxy upstreams config to skip agent/config will turn [{}] into {} (single element maps into a single map) to work around HCL issues. These are resolved in HCL2 which I'm sure Consul will switch to eventually. This breaks the connect proxy configuration in service definition FILES since we call this patch function. For now, let's just special-case skip this. In the future we maybe Consul will adopt HCL2 and fix it, or we can do something else if we want. This works and is tested. 
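For illustration only (not part of the patch itself): a minimal, self-contained sketch of the
kind of single-element flattening described above, with a skip list so `upstreams` blocks keep
their slice shape. The helper name and the flat, single-level key matching are invented for the
example; judging by the skip-list entries added in this diff, Consul's real helper walks nested
config and matches full dotted key paths such as `service.connect.proxy.config.upstreams`.

```go
package main

import "fmt"

// flattenSingleElemSlices mimics the HCL workaround: a one-element
// []map[string]interface{} is collapsed into the map itself, unless the
// key is on the skip list.
func flattenSingleElemSlices(m map[string]interface{}, skip map[string]bool) {
	for k, v := range m {
		if skip[k] {
			continue
		}
		if s, ok := v.([]map[string]interface{}); ok && len(s) == 1 {
			m[k] = s[0]
		}
	}
}

func main() {
	cfg := map[string]interface{}{
		"some_block": []map[string]interface{}{{"foo": "bar"}},
		"upstreams":  []map[string]interface{}{{"local_bind_port": 1234}},
	}
	flattenSingleElemSlices(cfg, map[string]bool{"upstreams": true})
	fmt.Println(cfg["some_block"]) // map[foo:bar] -- collapsed into a single map
	fmt.Println(cfg["upstreams"])  // [map[local_bind_port:1234]] -- kept as a slice
}
```

With the skip in place, the HCL `upstreams { local_bind_port = 1234 }` block keeps decoding as a
one-element list, matching the JSON `"upstreams": [{ "local_bind_port": 1234 }]` form; the two
runtime test cases added below assert exactly that, modulo the int-vs-float64 difference between
the HCL and JSON decoders.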
--- agent/config/config.go | 1 + agent/config/runtime_test.go | 95 +++++++++++++++++++++++++++++++++++- 2 files changed, 95 insertions(+), 1 deletion(-) diff --git a/agent/config/config.go b/agent/config/config.go index 6dc652aed..161bbb9bb 100644 --- a/agent/config/config.go +++ b/agent/config/config.go @@ -84,6 +84,7 @@ func Parse(data string, format string) (c Config, err error) { "services", "services.checks", "watches", + "service.connect.proxy.config.upstreams", }) // There is a difference of representation of some fields depending on diff --git a/agent/config/runtime_test.go b/agent/config/runtime_test.go index 82787482f..2e72b4639 100644 --- a/agent/config/runtime_test.go +++ b/agent/config/runtime_test.go @@ -31,6 +31,7 @@ type configTest struct { pre, post func() json, jsontail []string hcl, hcltail []string + skipformat bool privatev4 func() ([]*net.IPAddr, error) publicv6 func() ([]*net.IPAddr, error) patch func(rt *RuntimeConfig) @@ -2069,6 +2070,92 @@ func TestConfigFlagsAndEdgecases(t *testing.T) { rt.DataDir = dataDir }, }, + { + desc: "HCL service managed proxy 'upstreams'", + args: []string{ + `-data-dir=` + dataDir, + }, + hcl: []string{ + `service { + name = "web" + port = 8080 + connect { + proxy { + config { + upstreams { + local_bind_port = 1234 + } + } + } + } + }`, + }, + skipformat: true, // skipping JSON cause we get slightly diff types (okay) + patch: func(rt *RuntimeConfig) { + rt.DataDir = dataDir + rt.Services = []*structs.ServiceDefinition{ + &structs.ServiceDefinition{ + Name: "web", + Port: 8080, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{ + Config: map[string]interface{}{ + "upstreams": []map[string]interface{}{ + map[string]interface{}{ + "local_bind_port": 1234, + }, + }, + }, + }, + }, + }, + } + }, + }, + { + desc: "JSON service managed proxy 'upstreams'", + args: []string{ + `-data-dir=` + dataDir, + }, + json: []string{ + `{ + "service": { + "name": "web", + "port": 8080, + "connect": { + "proxy": { + "config": { + "upstreams": [{ + "local_bind_port": 1234 + }] + } + } + } + } + }`, + }, + skipformat: true, // skipping HCL cause we get slightly diff types (okay) + patch: func(rt *RuntimeConfig) { + rt.DataDir = dataDir + rt.Services = []*structs.ServiceDefinition{ + &structs.ServiceDefinition{ + Name: "web", + Port: 8080, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{ + Config: map[string]interface{}{ + "upstreams": []interface{}{ + map[string]interface{}{ + "local_bind_port": float64(1234), + }, + }, + }, + }, + }, + }, + } + }, + }, } testConfig(t, tests, dataDir) @@ -2090,7 +2177,7 @@ func testConfig(t *testing.T, tests []configTest, dataDir string) { // json and hcl sources need to be in sync // to make sure we're generating the same config - if len(tt.json) != len(tt.hcl) { + if len(tt.json) != len(tt.hcl) && !tt.skipformat { t.Fatal(tt.desc, ": JSON and HCL test case out of sync") } @@ -2100,6 +2187,12 @@ func testConfig(t *testing.T, tests []configTest, dataDir string) { srcs, tails = tt.hcl, tt.hcltail } + // If we're skipping a format and the current format is empty, + // then skip it! 
+ if tt.skipformat && len(srcs) == 0 { + continue + } + // build the description var desc []string if !flagsOnly { From 3a7aaa63bca3588cbcc1c3f3e8c26a23dc6e0118 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 6 May 2018 08:41:25 -0700 Subject: [PATCH 234/539] agent/proxy: pass proxy ID as an env var --- agent/proxy/daemon.go | 11 ++++++++++- agent/proxy/daemon_test.go | 4 +++- agent/proxy/manager.go | 1 + agent/proxy/proxy.go | 17 ++++++++++++++--- agent/proxy/proxy_test.go | 5 ++++- 5 files changed, 32 insertions(+), 6 deletions(-) diff --git a/agent/proxy/daemon.go b/agent/proxy/daemon.go index c0ae7fdee..013fbdc28 100644 --- a/agent/proxy/daemon.go +++ b/agent/proxy/daemon.go @@ -33,6 +33,10 @@ type Daemon struct { // be a Cmd that isn't yet started. Command *exec.Cmd + // ProxyId is the ID of the proxy service. This is required for API + // requests (along with the token) and is passed via env var. + ProxyId string + // ProxyToken is the special local-only ACL token that allows a proxy // to communicate to the Connect-specific endpoints. ProxyToken string @@ -204,7 +208,9 @@ func (p *Daemon) start() (*os.Process, error) { // reference. We allocate an exactly sized slice. cmd.Env = make([]string, len(p.Command.Env), len(p.Command.Env)+1) copy(cmd.Env, p.Command.Env) - cmd.Env = append(cmd.Env, fmt.Sprintf("%s=%s", EnvProxyToken, p.ProxyToken)) + cmd.Env = append(cmd.Env, + fmt.Sprintf("%s=%s", EnvProxyId, p.ProxyId), + fmt.Sprintf("%s=%s", EnvProxyToken, p.ProxyToken)) // Args must always contain a 0 entry which is usually the executed binary. // To be safe and a bit more robust we default this, but only to prevent @@ -387,6 +393,9 @@ func (p *Daemon) UnmarshalSnapshot(m map[string]interface{}) error { } // daemonSnapshot is the structure of the marshalled data for snapshotting. +// +// Note we don't have to store the ProxyId because this is stored directly +// within the manager snapshot and is restored automatically. type daemonSnapshot struct { // Pid of the process. This is the only value actually required to // regain mangement control. The remainder values are for Equal. diff --git a/agent/proxy/daemon_test.go b/agent/proxy/daemon_test.go index 17e821a44..f08716276 100644 --- a/agent/proxy/daemon_test.go +++ b/agent/proxy/daemon_test.go @@ -30,10 +30,12 @@ func TestDaemonStartStop(t *testing.T) { d := &Daemon{ Command: helperProcess("start-stop", path), + ProxyId: "tubes", ProxyToken: uuid, Logger: testLogger, } require.NoError(d.Start()) + defer d.Stop() // Wait for the file to exist retry.Run(t, func(r *retry.R) { @@ -49,7 +51,7 @@ func TestDaemonStartStop(t *testing.T) { // that we properly passed the token as an env var. 
data, err := ioutil.ReadFile(path) require.NoError(err) - require.Equal(uuid, string(data)) + require.Equal("tubes:"+uuid, string(data)) // Stop the process require.NoError(d.Stop()) diff --git a/agent/proxy/manager.go b/agent/proxy/manager.go index 5bb871c65..09eb1f601 100644 --- a/agent/proxy/manager.go +++ b/agent/proxy/manager.go @@ -411,6 +411,7 @@ func (m *Manager) newProxy(mp *local.ManagedProxy) (Proxy, error) { // Build the daemon structure proxy.Command = &cmd + proxy.ProxyId = id proxy.ProxyToken = mp.ProxyToken return proxy, nil diff --git a/agent/proxy/proxy.go b/agent/proxy/proxy.go index 1bb88da8e..90ae158f4 100644 --- a/agent/proxy/proxy.go +++ b/agent/proxy/proxy.go @@ -11,9 +11,16 @@ import ( "github.com/hashicorp/consul/agent/structs" ) -// EnvProxyToken is the name of the environment variable that is passed -// to managed proxies containing the proxy token. -const EnvProxyToken = "CONNECT_PROXY_TOKEN" +const ( + // EnvProxyId is the name of the environment variable that is set for + // managed proxies containing the proxy service ID. This is required along + // with the token to make API requests related to the proxy. + EnvProxyId = "CONNECT_PROXY_ID" + + // EnvProxyToken is the name of the environment variable that is passed + // to managed proxies containing the proxy token. + EnvProxyToken = "CONNECT_PROXY_TOKEN" +) // Proxy is the interface implemented by all types of managed proxies. // @@ -51,6 +58,10 @@ type Proxy interface { // so that Consul can recover the proxy process after a restart. The // result should only contain primitive values and containers (lists/maps). // + // MarshalSnapshot does NOT need to store the following fields, since they + // are part of the manager snapshot and will be automatically restored + // for any proxies: proxy ID. + // // UnmarshalSnapshot is called to restore the receiving Proxy from its // marshalled state. If UnmarshalSnapshot returns an error, the snapshot // is ignored and the marshalled snapshot will be lost. The manager will diff --git a/agent/proxy/proxy_test.go b/agent/proxy/proxy_test.go index d0812fc07..b46b5d677 100644 --- a/agent/proxy/proxy_test.go +++ b/agent/proxy/proxy_test.go @@ -78,7 +78,10 @@ func TestHelperProcess(t *testing.T) { defer signal.Stop(ch) path := args[0] - data := []byte(os.Getenv(EnvProxyToken)) + var data []byte + data = append(data, []byte(os.Getenv(EnvProxyId))...) + data = append(data, ':') + data = append(data, []byte(os.Getenv(EnvProxyToken))...) 
if err := ioutil.WriteFile(path, data, 0644); err != nil { t.Fatalf("err: %s", err) From 9435d8088c9473153310cbf30d63ebade782055f Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 6 May 2018 08:54:36 -0700 Subject: [PATCH 235/539] command/connect/proxy: set proxy ID from env var if set --- command/connect/proxy/proxy.go | 6 ++++++ connect/proxy/proxy.go | 1 - 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index 362e70459..a4558d21f 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -7,7 +7,9 @@ import ( "log" "net/http" _ "net/http/pprof" // Expose pprof if configured + "os" + proxyAgent "github.com/hashicorp/consul/agent/proxy" "github.com/hashicorp/consul/command/flags" proxyImpl "github.com/hashicorp/consul/connect/proxy" @@ -74,6 +76,10 @@ func (c *cmd) Run(args []string) int { return 1 } + // Load the proxy ID and token from env vars if they're set + if c.proxyID == "" { + c.proxyID = os.Getenv(proxyAgent.EnvProxyId) + } // Setup the log outputs logConfig := &logger.Config{ LogLevel: c.logLevel, diff --git a/connect/proxy/proxy.go b/connect/proxy/proxy.go index 717d45ae6..e3db982fe 100644 --- a/connect/proxy/proxy.go +++ b/connect/proxy/proxy.go @@ -68,7 +68,6 @@ func New(client *api.Client, proxyID string, logger *log.Logger) (*Proxy, error) // Serve the proxy instance until a fatal error occurs or proxy is closed. func (p *Proxy) Serve() error { - var cfg *Config // Watch for config changes (initial setup happens on first "change") From 4100c9567ff4ab7f7bc4a11b039569b03b6543da Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 6 May 2018 20:27:31 -0700 Subject: [PATCH 236/539] command/connect/proxy: set ACL token based on proxy token flag --- command/connect/proxy/proxy.go | 4 ++++ command/flags/http.go | 4 ++++ command/flags/http_test.go | 15 +++++++++++++++ 3 files changed, 23 insertions(+) create mode 100644 command/flags/http_test.go diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index a4558d21f..b797177c5 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -80,6 +80,10 @@ func (c *cmd) Run(args []string) int { if c.proxyID == "" { c.proxyID = os.Getenv(proxyAgent.EnvProxyId) } + if c.http.Token() == "" { + c.http.SetToken(os.Getenv(proxyAgent.EnvProxyToken)) + } + // Setup the log outputs logConfig := &logger.Config{ LogLevel: c.logLevel, diff --git a/command/flags/http.go b/command/flags/http.go index 591567a4f..7d02f6ab3 100644 --- a/command/flags/http.go +++ b/command/flags/http.go @@ -84,6 +84,10 @@ func (f *HTTPFlags) Token() string { return f.token.String() } +func (f *HTTPFlags) SetToken(v string) error { + return f.token.Set(v) +} + func (f *HTTPFlags) APIClient() (*api.Client, error) { c := api.DefaultConfig() diff --git a/command/flags/http_test.go b/command/flags/http_test.go new file mode 100644 index 000000000..867ce2a35 --- /dev/null +++ b/command/flags/http_test.go @@ -0,0 +1,15 @@ +package flags + +import ( + "testing" + + "github.com/stretchr/testify/require" +) + +func TestHTTPFlagsSetToken(t *testing.T) { + var f HTTPFlags + require := require.New(t) + require.Empty(f.Token()) + require.NoError(f.SetToken("foo")) + require.Equal("foo", f.Token()) +} From 8f7b5f93cdf7b83b2579a4a74ccdc84c8604b595 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 6 May 2018 21:02:44 -0700 Subject: [PATCH 237/539] agent: verify proxy token for ProxyConfig endpoint + tests --- 
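Note: patches 234-236 above hand a managed proxy its service ID and token via the
`CONNECT_PROXY_ID` and `CONNECT_PROXY_TOKEN` env vars, and this patch makes the agent accept
that token (or an ACL token with `service:write` on the target service) on the proxy config
endpoint. Below is a rough, standalone sketch of how a proxy process could use those env vars
against the endpoint exercised in the tests that follow; the local agent address is an
assumption and error handling is trimmed. The real `consul connect proxy` command goes through
the Go API client instead (see patches 235-236).

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
	"os"
)

func main() {
	proxyID := os.Getenv("CONNECT_PROXY_ID")       // EnvProxyId, set by the agent
	proxyToken := os.Getenv("CONNECT_PROXY_TOKEN") // EnvProxyToken, set by the agent

	// Same endpoint and ?token= form as the aclProxyToken tests below.
	u := fmt.Sprintf("http://127.0.0.1:8500/v1/agent/connect/proxy/%s?token=%s",
		url.PathEscape(proxyID), url.QueryEscape(proxyToken))

	resp, err := http.Get(u)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("%s: %s\n", resp.Status, body)
}
```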
agent/agent.go | 33 ++++++ agent/agent_endpoint.go | 9 ++ agent/agent_endpoint_test.go | 211 +++++++++++++++++++++++++++++++++++ 3 files changed, 253 insertions(+) diff --git a/agent/agent.go b/agent/agent.go index 9c74b3760..ff840d162 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2115,6 +2115,39 @@ func (a *Agent) RemoveProxy(proxyID string, persist bool) error { return nil } +// verifyProxyToken takes a proxy service ID and a token and verifies +// that the token is allowed to access proxy-related information (leaf +// cert, config, etc.). +// +// The given token may be a local-only proxy token or it may be an ACL +// token. We will attempt to verify the local proxy token first. +func (a *Agent) verifyProxyToken(proxyId, token string) error { + proxy := a.State.Proxy(proxyId) + if proxy == nil { + return fmt.Errorf("unknown proxy service ID: %q", proxyId) + } + + // Easy case is if the token just matches our local proxy token. + // If this happens we can return without any requests. + if token == proxy.ProxyToken { + return nil + } + + // Doesn't match, we have to do a full token resolution. The required + // permission for any proxy-related endpont is service:write, since + // to register a proxy you require that permission and sensitive data + // is usually present in the configuration. + rule, err := a.resolveToken(token) + if err != nil { + return err + } + if rule != nil && !rule.ServiceWrite(proxy.Proxy.TargetServiceID, nil) { + return acl.ErrPermissionDenied + } + + return nil +} + func (a *Agent) cancelCheckMonitors(checkID types.CheckID) { // Stop any monitors delete(a.checkReapAfter, checkID) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 8f080ea7a..de1c0d48e 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -965,6 +965,10 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http return nil, nil } + // Parse the token + var token string + s.parseToken(req, &token) + // Parse hash specially since it's only this endpoint that uses it currently. // Eventually this should happen in parseWait and end up in QueryOptions but I // didn't want to make very general changes right away. 
@@ -980,6 +984,11 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http return "", nil, nil } + // Validate the ACL token + if err := s.agent.verifyProxyToken(id, token); err != nil { + return "", nil, err + } + // Lookup the target service as a convenience target := s.agent.State.Service(proxy.Proxy.TargetServiceID) if target == nil { diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index fa01eab89..66ebc59ef 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2517,6 +2517,217 @@ func TestAgentConnectProxyConfig_Blocking(t *testing.T) { } } +func TestAgentConnectProxyConfig_aclDefaultDeny(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), TestACLConfig()+` + connect { + enabled = true + } + `) + defer a.Shutdown() + + // Register a service with a managed proxy + { + reg := &structs.ServiceDefinition{ + ID: "test-id", + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{}, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=root", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + req, _ := http.NewRequest("GET", "/v1/agent/connect/proxy/test-id-proxy", nil) + resp := httptest.NewRecorder() + _, err := a.srv.AgentConnectProxyConfig(resp, req) + require.True(acl.IsErrPermissionDenied(err)) + +} + +func TestAgentConnectProxyConfig_aclProxyToken(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), TestACLConfig()+` + connect { + enabled = true + } + `) + defer a.Shutdown() + + // Register a service with a managed proxy + { + reg := &structs.ServiceDefinition{ + ID: "test-id", + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{}, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=root", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + // Get the proxy token from the agent directly, since there is no API + // to expose this. 
+ proxy := a.State.Proxy("test-id-proxy") + require.NotNil(proxy) + token := proxy.ProxyToken + require.NotEmpty(token) + + req, _ := http.NewRequest( + "GET", "/v1/agent/connect/proxy/test-id-proxy?token="+token, nil) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentConnectProxyConfig(resp, req) + require.NoError(err) + proxyCfg := obj.(*api.ConnectProxyConfig) + require.Equal("test-id-proxy", proxyCfg.ProxyServiceID) + require.Equal("test-id", proxyCfg.TargetServiceID) + require.Equal("test", proxyCfg.TargetServiceName) +} + +func TestAgentConnectProxyConfig_aclServiceWrite(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), TestACLConfig()+` + connect { + enabled = true + } + `) + defer a.Shutdown() + + // Register a service with a managed proxy + { + reg := &structs.ServiceDefinition{ + ID: "test-id", + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{}, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=root", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + // Create an ACL with service:write for our service + var token string + { + args := map[string]interface{}{ + "Name": "User Token", + "Type": "client", + "Rules": `service "test" { policy = "write" }`, + } + req, _ := http.NewRequest("PUT", "/v1/acl/create?token=root", jsonReader(args)) + resp := httptest.NewRecorder() + obj, err := a.srv.ACLCreate(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + aclResp := obj.(aclCreateResponse) + token = aclResp.ID + } + + req, _ := http.NewRequest( + "GET", "/v1/agent/connect/proxy/test-id-proxy?token="+token, nil) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentConnectProxyConfig(resp, req) + require.NoError(err) + proxyCfg := obj.(*api.ConnectProxyConfig) + require.Equal("test-id-proxy", proxyCfg.ProxyServiceID) + require.Equal("test-id", proxyCfg.TargetServiceID) + require.Equal("test", proxyCfg.TargetServiceName) +} + +func TestAgentConnectProxyConfig_aclServiceReadDeny(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), TestACLConfig()+` + connect { + enabled = true + } + `) + defer a.Shutdown() + + // Register a service with a managed proxy + { + reg := &structs.ServiceDefinition{ + ID: "test-id", + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{}, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=root", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + // Create an ACL with service:read for our service + var token string + { + args := map[string]interface{}{ + "Name": "User Token", + "Type": "client", + "Rules": `service "test" { policy = "read" }`, + } + req, _ := http.NewRequest("PUT", "/v1/acl/create?token=root", jsonReader(args)) + resp := httptest.NewRecorder() + obj, err := a.srv.ACLCreate(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + aclResp := obj.(aclCreateResponse) + token = aclResp.ID + } + + req, _ := 
http.NewRequest( + "GET", "/v1/agent/connect/proxy/test-id-proxy?token="+token, nil) + resp := httptest.NewRecorder() + _, err := a.srv.AgentConnectProxyConfig(resp, req) + require.True(acl.IsErrPermissionDenied(err)) +} + func TestAgentConnectProxyConfig_ConfigHandling(t *testing.T) { t.Parallel() From b4f990bc6ce116047165cfa5c50b6d174e06392e Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 6 May 2018 21:46:22 -0700 Subject: [PATCH 238/539] agent: verify local proxy tokens for CA leaf + tests --- agent/acl.go | 13 ++ agent/agent.go | 49 +++++-- agent/agent_endpoint.go | 12 +- agent/agent_endpoint_test.go | 277 ++++++++++++++++++++++++++++++++++- 4 files changed, 333 insertions(+), 18 deletions(-) diff --git a/agent/acl.go b/agent/acl.go index 0bf4180eb..e266feafa 100644 --- a/agent/acl.go +++ b/agent/acl.go @@ -8,6 +8,7 @@ import ( "github.com/armon/go-metrics" "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/config" + "github.com/hashicorp/consul/agent/local" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/types" "github.com/hashicorp/golang-lru" @@ -239,6 +240,18 @@ func (a *Agent) resolveToken(id string) (acl.ACL, error) { return a.acls.lookupACL(a, id) } +// resolveProxyToken attempts to resolve an ACL ID to a local proxy token. +// If a local proxy isn't found with that token, nil is returned. +func (a *Agent) resolveProxyToken(id string) *local.ManagedProxy { + for _, p := range a.State.Proxies() { + if p.ProxyToken == id { + return p + } + } + + return nil +} + // vetServiceRegister makes sure the service registration action is allowed by // the given token. func (a *Agent) vetServiceRegister(token string, service *structs.NodeService) error { diff --git a/agent/agent.go b/agent/agent.go index ff840d162..21ada77cd 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2115,24 +2115,53 @@ func (a *Agent) RemoveProxy(proxyID string, persist bool) error { return nil } -// verifyProxyToken takes a proxy service ID and a token and verifies -// that the token is allowed to access proxy-related information (leaf +// verifyProxyToken takes a token and attempts to verify it against the +// targetService name. If targetProxy is specified, then the local proxy +// token must exactly match the given proxy ID. // cert, config, etc.). // // The given token may be a local-only proxy token or it may be an ACL // token. We will attempt to verify the local proxy token first. -func (a *Agent) verifyProxyToken(proxyId, token string) error { - proxy := a.State.Proxy(proxyId) - if proxy == nil { - return fmt.Errorf("unknown proxy service ID: %q", proxyId) +func (a *Agent) verifyProxyToken(token, targetService, targetProxy string) error { + // If we specify a target proxy, we look up that proxy directly. Otherwise, + // we resolve with any proxy we can find. + var proxy *local.ManagedProxy + if targetProxy != "" { + proxy = a.State.Proxy(targetProxy) + if proxy == nil { + return fmt.Errorf("unknown proxy service ID: %q", targetProxy) + } + + // If the token DOESN'T match, then we reset the proxy which will + // cause the logic below to fall back to normal ACLs. Otherwise, + // we keep the proxy set because we also have to verify that the + // target service matches on the proxy. + if token != proxy.ProxyToken { + proxy = nil + } + } else { + proxy = a.resolveProxyToken(token) } - // Easy case is if the token just matches our local proxy token. - // If this happens we can return without any requests. 
- if token == proxy.ProxyToken { + // The existence of a token isn't enough, we also need to verify + // that the service name of the matching proxy matches our target + // service. + if proxy != nil { + if proxy.Proxy.TargetServiceID != targetService { + return acl.ErrPermissionDenied + } + return nil } + // Retrieve the service specified. This should always exist because + // we only call this function for proxies and leaf certs and both can + // only be called for local services. + service := a.State.Service(targetService) + if service == nil { + return fmt.Errorf("unknown service ID: %s", targetService) + } + // Doesn't match, we have to do a full token resolution. The required // permission for any proxy-related endpont is service:write, since // to register a proxy you require that permission and sensitive data @@ -2141,7 +2170,7 @@ func (a *Agent) verifyProxyToken(proxyId, token string) error { if err != nil { return err } - if rule != nil && !rule.ServiceWrite(proxy.Proxy.TargetServiceID, nil) { + if rule != nil && !rule.ServiceWrite(service.Service, nil) { return acl.ErrPermissionDenied } diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index de1c0d48e..32b326867 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -925,15 +925,12 @@ func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http. } args.MinQueryIndex = qOpts.MinQueryIndex - // Validate token - // TODO(banks): support correct proxy token checking too - rule, err := s.agent.resolveToken(qOpts.Token) + // Verify the proxy token. This will check both the local proxy token + // as well as the ACL if the token isn't local. + err := s.agent.verifyProxyToken(qOpts.Token, id, "") if err != nil { return nil, err } - if rule != nil && !rule.ServiceWrite(service.Service, nil) { - return nil, acl.ErrPermissionDenied - } raw, err := s.agent.cache.Get(cachetype.ConnectCALeafName, &args) if err != nil { @@ -985,7 +982,8 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http } // Validate the ACL token - if err := s.agent.verifyProxyToken(id, token); err != nil { + err := s.agent.verifyProxyToken(token, proxy.Proxy.TargetServiceID, id) + if err != nil { return "", nil, err } diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 66ebc59ef..d4b55a50f 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2195,6 +2195,282 @@ func TestAgentConnectCARoots_list(t *testing.T) { } } +func TestAgentConnectCALeafCert_aclDefaultDeny(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), TestACLConfig()+` + connect { + enabled = true + } + `) + defer a.Shutdown() + + // Register a service with a managed proxy + { + reg := &structs.ServiceDefinition{ + ID: "test-id", + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{}, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=root", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test-id", nil) + resp := httptest.NewRecorder() + _, err := a.srv.AgentConnectCALeafCert(resp, req) + require.Error(err) + require.True(acl.IsErrPermissionDenied(err)) +} + +func 
TestAgentConnectCALeafCert_aclProxyToken(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), TestACLConfig()+` + connect { + enabled = true + } + `) + defer a.Shutdown() + + // Register a service with a managed proxy + { + reg := &structs.ServiceDefinition{ + ID: "test-id", + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{}, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=root", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + // Get the proxy token from the agent directly, since there is no API. + proxy := a.State.Proxy("test-id-proxy") + require.NotNil(proxy) + token := proxy.ProxyToken + require.NotEmpty(token) + + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test-id?token="+token, nil) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentConnectCALeafCert(resp, req) + require.NoError(err) + + // Get the issued cert + _, ok := obj.(*structs.IssuedCert) + require.True(ok) +} + +func TestAgentConnectCALeafCert_aclProxyTokenOther(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), TestACLConfig()+` + connect { + enabled = true + } + `) + defer a.Shutdown() + + // Register a service with a managed proxy + { + reg := &structs.ServiceDefinition{ + ID: "test-id", + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{}, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=root", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + // Register another service + { + reg := &structs.ServiceDefinition{ + ID: "wrong-id", + Name: "wrong", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{}, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=root", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + // Get the proxy token from the agent directly, since there is no API. 
+ proxy := a.State.Proxy("wrong-id-proxy") + require.NotNil(proxy) + token := proxy.ProxyToken + require.NotEmpty(token) + + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test-id?token="+token, nil) + resp := httptest.NewRecorder() + _, err := a.srv.AgentConnectCALeafCert(resp, req) + require.Error(err) + require.True(acl.IsErrPermissionDenied(err)) +} + +func TestAgentConnectCALeafCert_aclServiceWrite(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), TestACLConfig()+` + connect { + enabled = true + } + `) + defer a.Shutdown() + + // Register a service with a managed proxy + { + reg := &structs.ServiceDefinition{ + ID: "test-id", + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{}, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=root", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + // Create an ACL with service:write for our service + var token string + { + args := map[string]interface{}{ + "Name": "User Token", + "Type": "client", + "Rules": `service "test" { policy = "write" }`, + } + req, _ := http.NewRequest("PUT", "/v1/acl/create?token=root", jsonReader(args)) + resp := httptest.NewRecorder() + obj, err := a.srv.ACLCreate(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + aclResp := obj.(aclCreateResponse) + token = aclResp.ID + } + + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test-id?token="+token, nil) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentConnectCALeafCert(resp, req) + require.NoError(err) + + // Get the issued cert + _, ok := obj.(*structs.IssuedCert) + require.True(ok) +} + +func TestAgentConnectCALeafCert_aclServiceReadDeny(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), TestACLConfig()+` + connect { + enabled = true + } + `) + defer a.Shutdown() + + // Register a service with a managed proxy + { + reg := &structs.ServiceDefinition{ + ID: "test-id", + Name: "test", + Address: "127.0.0.1", + Port: 8000, + Check: structs.CheckType{ + TTL: 15 * time.Second, + }, + Connect: &structs.ServiceDefinitionConnect{ + Proxy: &structs.ServiceDefinitionConnectProxy{}, + }, + } + + req, _ := http.NewRequest("PUT", "/v1/agent/service/register?token=root", jsonReader(reg)) + resp := httptest.NewRecorder() + _, err := a.srv.AgentRegisterService(resp, req) + require.NoError(err) + require.Equal(200, resp.Code, "body: %s", resp.Body.String()) + } + + // Create an ACL with service:read for our service + var token string + { + args := map[string]interface{}{ + "Name": "User Token", + "Type": "client", + "Rules": `service "test" { policy = "read" }`, + } + req, _ := http.NewRequest("PUT", "/v1/acl/create?token=root", jsonReader(args)) + resp := httptest.NewRecorder() + obj, err := a.srv.ACLCreate(resp, req) + if err != nil { + t.Fatalf("err: %v", err) + } + aclResp := obj.(aclCreateResponse) + token = aclResp.ID + } + + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test-id?token="+token, nil) + resp := httptest.NewRecorder() + _, err := a.srv.AgentConnectCALeafCert(resp, req) + require.Error(err) + require.True(acl.IsErrPermissionDenied(err)) +} + func TestAgentConnectCALeafCert_good(t *testing.T) { t.Parallel() @@ 
-2554,7 +2830,6 @@ func TestAgentConnectProxyConfig_aclDefaultDeny(t *testing.T) { resp := httptest.NewRecorder() _, err := a.srv.AgentConnectProxyConfig(resp, req) require.True(acl.IsErrPermissionDenied(err)) - } func TestAgentConnectProxyConfig_aclProxyToken(t *testing.T) { From c57405b323399095eb4fb3d6d73e5a82dfdd1d5d Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 8 May 2018 21:27:23 -0700 Subject: [PATCH 239/539] agent/consul: retry reading provider a few times --- agent/consul/leader.go | 18 ++++++++++++++++-- 1 file changed, 16 insertions(+), 2 deletions(-) diff --git a/agent/consul/leader.go b/agent/consul/leader.go index 2f01b8833..d71c3ead3 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -488,9 +488,23 @@ func (s *Server) createCAProvider(conf *structs.CAConfiguration) (connect.CAProv } func (s *Server) getCAProvider() connect.CAProvider { + retries := 0 + +RETRY_PROVIDER: s.caProviderLock.RLock() - defer s.caProviderLock.RUnlock() - return s.caProvider + result := s.caProvider + s.caProviderLock.RUnlock() + + // In cases where an agent is started with managed proxies, we may ask + // for the provider before establishLeadership completes. If we're the + // leader, then wait and get the provider again + if result == nil && s.IsLeader() && retries < 10 { + retries++ + time.Sleep(50 * time.Millisecond) + goto RETRY_PROVIDER + } + + return result } func (s *Server) setCAProvider(newProvider connect.CAProvider) { From 749f81373f69e9651582d496a9278564caed7f1f Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 8 May 2018 21:30:18 -0700 Subject: [PATCH 240/539] agent/consul: check nil on getCAProvider result --- agent/consul/connect_ca_endpoint.go | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 136cbcb49..619418bab 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -157,6 +157,9 @@ func (s *ConnectCA) ConfigurationSet( // Have the old provider cross-sign the new intermediate oldProvider := s.srv.getCAProvider() + if oldProvider == nil { + return fmt.Errorf("internal error: CA provider is nil") + } xcCert, err := oldProvider.CrossSignCA(intermediateCA) if err != nil { return err @@ -283,6 +286,9 @@ func (s *ConnectCA) Sign( } provider := s.srv.getCAProvider() + if provider == nil { + return fmt.Errorf("internal error: CA provider is nil") + } // todo(kyhavlov): more validation on the CSR before signing pem, err := provider.Sign(csr) From 54a1662da8172b4d0b4b043445bfe2692f5f34ee Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 8 May 2018 21:32:47 -0700 Subject: [PATCH 241/539] agent/consul: change provider wait from goto to a loop --- agent/consul/leader.go | 25 ++++++++++++++----------- 1 file changed, 14 insertions(+), 11 deletions(-) diff --git a/agent/consul/leader.go b/agent/consul/leader.go index d71c3ead3..4d61ed0c4 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -489,19 +489,22 @@ func (s *Server) createCAProvider(conf *structs.CAConfiguration) (connect.CAProv func (s *Server) getCAProvider() connect.CAProvider { retries := 0 + var result connect.CAProvider + for result == nil { + s.caProviderLock.RLock() + result = s.caProvider + s.caProviderLock.RUnlock() -RETRY_PROVIDER: - s.caProviderLock.RLock() - result := s.caProvider - s.caProviderLock.RUnlock() + // In cases where an agent is started with managed proxies, we may ask + // for the provider before establishLeadership 
completes. If we're the + // leader, then wait and get the provider again + if result == nil && s.IsLeader() && retries < 10 { + retries++ + time.Sleep(50 * time.Millisecond) + continue + } - // In cases where an agent is started with managed proxies, we may ask - // for the provider before establishLeadership completes. If we're the - // leader, then wait and get the provider again - if result == nil && s.IsLeader() && retries < 10 { - retries++ - time.Sleep(50 * time.Millisecond) - goto RETRY_PROVIDER + break } return result From c42510e1ecb19e2e18307882e8ca64e253ed9c8b Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 9 May 2018 11:04:52 -0700 Subject: [PATCH 242/539] agent/cache: implement refresh backoff --- agent/agent.go | 6 ++--- agent/cache/cache.go | 53 ++++++++++++++++++++++++++++----------- agent/cache/cache_test.go | 42 +++++++++++++++++++++++++++++++ agent/cache/entry.go | 7 +++--- 4 files changed, 88 insertions(+), 20 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 21ada77cd..77045c69e 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2785,7 +2785,7 @@ func (a *Agent) registerCache() { }, &cache.RegisterOptions{ // Maintain a blocking query, retry dropped connections quickly Refresh: true, - RefreshTimer: 3 * time.Second, + RefreshTimer: 0 * time.Second, RefreshTimeout: 10 * time.Minute, }) @@ -2795,7 +2795,7 @@ func (a *Agent) registerCache() { }, &cache.RegisterOptions{ // Maintain a blocking query, retry dropped connections quickly Refresh: true, - RefreshTimer: 3 * time.Second, + RefreshTimer: 0 * time.Second, RefreshTimeout: 10 * time.Minute, }) @@ -2804,7 +2804,7 @@ func (a *Agent) registerCache() { }, &cache.RegisterOptions{ // Maintain a blocking query, retry dropped connections quickly Refresh: true, - RefreshTimer: 3 * time.Second, + RefreshTimer: 0 * time.Second, RefreshTimeout: 10 * time.Minute, }) } diff --git a/agent/cache/cache.go b/agent/cache/cache.go index cdcaffc58..754d635df 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -26,6 +26,13 @@ import ( //go:generate mockery -all -inpkg +// Constants related to refresh backoff. We probably don't ever need to +// make these configurable knobs since they primarily exist to lower load. +const ( + CacheRefreshBackoffMin = 3 // 3 attempts before backing off + CacheRefreshMaxWait = 1 * time.Minute // maximum backoff wait time +) + // Cache is a agent-local cache of Consul data. Create a Cache using the // New function. A zero-value Cache is not ready for usage and will result // in a panic. @@ -330,14 +337,6 @@ func (c *Cache) fetch(t, key string, r Request, allowNew bool) (<-chan struct{}, Timeout: tEntry.Opts.RefreshTimeout, }, r) - if err == nil { - metrics.IncrCounter([]string{"consul", "cache", "fetch_success"}, 1) - metrics.IncrCounter([]string{"consul", "cache", t, "fetch_success"}, 1) - } else { - metrics.IncrCounter([]string{"consul", "cache", "fetch_error"}, 1) - metrics.IncrCounter([]string{"consul", "cache", t, "fetch_error"}, 1) - } - // Copy the existing entry to start. newEntry := entry newEntry.Fetching = false @@ -351,10 +350,26 @@ func (c *Cache) fetch(t, key string, r Request, allowNew bool) (<-chan struct{}, newEntry.Valid = true } - // If we have an error and the prior entry wasn't valid, then we - // set the error at least. 
- if err != nil && !newEntry.Valid { - newEntry.Error = err + // Error handling + if err == nil { + metrics.IncrCounter([]string{"consul", "cache", "fetch_success"}, 1) + metrics.IncrCounter([]string{"consul", "cache", t, "fetch_success"}, 1) + + // Reset the attepts counter so we don't have any backoff + newEntry.ErrAttempts = 0 + } else { + metrics.IncrCounter([]string{"consul", "cache", "fetch_error"}, 1) + metrics.IncrCounter([]string{"consul", "cache", t, "fetch_error"}, 1) + + // Always increment the attempts to control backoff + newEntry.ErrAttempts++ + + // If the entry wasn't valid, we set an error. If it was valid, + // we don't set an error so that the prior value can continue + // being used. This will be evicted if the TTL comes up. + if !newEntry.Valid { + newEntry.Error = err + } } // Create a new waiter that will be used for the next fetch. @@ -384,7 +399,7 @@ func (c *Cache) fetch(t, key string, r Request, allowNew bool) (<-chan struct{}, // If refresh is enabled, run the refresh in due time. The refresh // below might block, but saves us from spawning another goroutine. if tEntry.Opts.Refresh { - c.refresh(tEntry.Opts, t, key, r) + c.refresh(tEntry.Opts, newEntry.ErrAttempts, t, key, r) } }() @@ -417,12 +432,22 @@ func (c *Cache) fetchDirect(t string, r Request) (interface{}, error) { // refresh triggers a fetch for a specific Request according to the // registration options. -func (c *Cache) refresh(opts *RegisterOptions, t string, key string, r Request) { +func (c *Cache) refresh(opts *RegisterOptions, attempt uint8, t string, key string, r Request) { // Sanity-check, we should not schedule anything that has refresh disabled if !opts.Refresh { return } + // If we're over the attempt minimum, start an exponential backoff. + if attempt > CacheRefreshBackoffMin { + waitTime := (1 << (attempt - CacheRefreshBackoffMin)) * time.Second + if waitTime > CacheRefreshMaxWait { + waitTime = CacheRefreshMaxWait + } + + time.Sleep(waitTime) + } + // If we have a timer, wait for it if opts.RefreshTimer > 0 { time.Sleep(opts.RefreshTimer) diff --git a/agent/cache/cache_test.go b/agent/cache/cache_test.go index cf179b2ab..07490be13 100644 --- a/agent/cache/cache_test.go +++ b/agent/cache/cache_test.go @@ -4,6 +4,7 @@ import ( "fmt" "sort" "sync" + "sync/atomic" "testing" "time" @@ -336,6 +337,47 @@ func TestCacheGet_periodicRefresh(t *testing.T) { TestCacheGetChResult(t, resultCh, 12) } +// Test that a refresh performs a backoff. +func TestCacheGet_periodicRefreshErrorBackoff(t *testing.T) { + t.Parallel() + + typ := TestType(t) + defer typ.AssertExpectations(t) + c := TestCache(t) + c.RegisterType("t", typ, &RegisterOptions{ + Refresh: true, + RefreshTimer: 0, + RefreshTimeout: 5 * time.Minute, + }) + + // Configure the type + var retries uint32 + fetchErr := fmt.Errorf("test fetch error") + typ.Static(FetchResult{Value: 1, Index: 4}, nil).Once() + typ.Static(FetchResult{Value: nil, Index: 5}, fetchErr).Run(func(args mock.Arguments) { + atomic.AddUint32(&retries, 1) + }) + + // Fetch + resultCh := TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{Key: "hello"})) + TestCacheGetChResult(t, resultCh, 1) + + // Sleep a bit. The refresh will quietly fail in the background. What we + // want to verify is that it doesn't retry too much. "Too much" is hard + // to measure since its CPU dependent if this test is failing. But due + // to the short sleep below, we can calculate about what we'd expect if + // backoff IS working. 
+ time.Sleep(500 * time.Millisecond) + + // Fetch should work, we should get a 1 still. Errors are ignored. + resultCh = TestCacheGetCh(t, c, "t", TestRequest(t, RequestInfo{Key: "hello"})) + TestCacheGetChResult(t, resultCh, 1) + + // Check the number + actual := atomic.LoadUint32(&retries) + require.True(t, actual < 10, fmt.Sprintf("actual: %d", actual)) +} + // Test that the backend fetch sets the proper timeout. func TestCacheGet_fetchTimeout(t *testing.T) { t.Parallel() diff --git a/agent/cache/entry.go b/agent/cache/entry.go index 50c575ff7..6bab621a3 100644 --- a/agent/cache/entry.go +++ b/agent/cache/entry.go @@ -16,9 +16,10 @@ type cacheEntry struct { Index uint64 // Metadata that is used for internal accounting - Valid bool // True if the Value is set - Fetching bool // True if a fetch is already active - Waiter chan struct{} // Closed when this entry is invalidated + Valid bool // True if the Value is set + Fetching bool // True if a fetch is already active + Waiter chan struct{} // Closed when this entry is invalidated + ErrAttempts uint8 // Number of fetch errors since last success (Error may be nil) // Expiry contains information about the expiration of this // entry. This is a pointer as its shared as a value in the From 6cf2e1ef1ae0fc5702b1e493b0d88690a3d1383c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 9 May 2018 11:10:17 -0700 Subject: [PATCH 243/539] agent/cache: string through attempt rather than storing on the entry --- agent/cache/cache.go | 16 ++++++++-------- agent/cache/entry.go | 7 +++---- 2 files changed, 11 insertions(+), 12 deletions(-) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index 754d635df..d0d5665b1 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -257,7 +257,7 @@ RETRY_GET: // At this point, we know we either don't have a value at all or the // value we have is too old. We need to wait for new data. - waiterCh, err := c.fetch(t, key, r, true) + waiterCh, err := c.fetch(t, key, r, true, 0) if err != nil { return nil, err } @@ -287,7 +287,7 @@ func (c *Cache) entryKey(r *RequestInfo) string { // If allowNew is true then the fetch should create the cache entry // if it doesn't exist. If this is false, then fetch will do nothing // if the entry doesn't exist. This latter case is to support refreshing. -func (c *Cache) fetch(t, key string, r Request, allowNew bool) (<-chan struct{}, error) { +func (c *Cache) fetch(t, key string, r Request, allowNew bool, attempt uint8) (<-chan struct{}, error) { // Get the type that we're fetching c.typesLock.RLock() tEntry, ok := c.types[t] @@ -355,14 +355,14 @@ func (c *Cache) fetch(t, key string, r Request, allowNew bool) (<-chan struct{}, metrics.IncrCounter([]string{"consul", "cache", "fetch_success"}, 1) metrics.IncrCounter([]string{"consul", "cache", t, "fetch_success"}, 1) - // Reset the attepts counter so we don't have any backoff - newEntry.ErrAttempts = 0 + // Reset the attempts counter so we don't have any backoff + attempt = 0 } else { metrics.IncrCounter([]string{"consul", "cache", "fetch_error"}, 1) metrics.IncrCounter([]string{"consul", "cache", t, "fetch_error"}, 1) - // Always increment the attempts to control backoff - newEntry.ErrAttempts++ + // Increment attempt counter + attempt++ // If the entry wasn't valid, we set an error. If it was valid, // we don't set an error so that the prior value can continue @@ -399,7 +399,7 @@ func (c *Cache) fetch(t, key string, r Request, allowNew bool) (<-chan struct{}, // If refresh is enabled, run the refresh in due time. 
The refresh // below might block, but saves us from spawning another goroutine. if tEntry.Opts.Refresh { - c.refresh(tEntry.Opts, newEntry.ErrAttempts, t, key, r) + c.refresh(tEntry.Opts, attempt, t, key, r) } }() @@ -456,7 +456,7 @@ func (c *Cache) refresh(opts *RegisterOptions, attempt uint8, t string, key stri // Trigger. The "allowNew" field is false because in the time we were // waiting to refresh we may have expired and got evicted. If that // happened, we don't want to create a new entry. - c.fetch(t, key, r, false) + c.fetch(t, key, r, false, attempt) } // runExpiryLoop is a blocking function that watches the expiration diff --git a/agent/cache/entry.go b/agent/cache/entry.go index 6bab621a3..50c575ff7 100644 --- a/agent/cache/entry.go +++ b/agent/cache/entry.go @@ -16,10 +16,9 @@ type cacheEntry struct { Index uint64 // Metadata that is used for internal accounting - Valid bool // True if the Value is set - Fetching bool // True if a fetch is already active - Waiter chan struct{} // Closed when this entry is invalidated - ErrAttempts uint8 // Number of fetch errors since last success (Error may be nil) + Valid bool // True if the Value is set + Fetching bool // True if a fetch is already active + Waiter chan struct{} // Closed when this entry is invalidated // Expiry contains information about the expiration of this // entry. This is a pointer as its shared as a value in the From 4bb745a2d458e7d100b0318b7ea4cb9b11a9be63 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 9 May 2018 11:54:15 -0700 Subject: [PATCH 244/539] agent/cache: change uint8 to uint --- agent/cache/cache.go | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/agent/cache/cache.go b/agent/cache/cache.go index d0d5665b1..1b4653cb4 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -287,7 +287,7 @@ func (c *Cache) entryKey(r *RequestInfo) string { // If allowNew is true then the fetch should create the cache entry // if it doesn't exist. If this is false, then fetch will do nothing // if the entry doesn't exist. This latter case is to support refreshing. -func (c *Cache) fetch(t, key string, r Request, allowNew bool, attempt uint8) (<-chan struct{}, error) { +func (c *Cache) fetch(t, key string, r Request, allowNew bool, attempt uint) (<-chan struct{}, error) { // Get the type that we're fetching c.typesLock.RLock() tEntry, ok := c.types[t] @@ -432,7 +432,7 @@ func (c *Cache) fetchDirect(t string, r Request) (interface{}, error) { // refresh triggers a fetch for a specific Request according to the // registration options. 
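
The backoff introduced here is driven entirely by the `attempt` counter and the two constants added earlier in this series (`CacheRefreshBackoffMin = 3`, `CacheRefreshMaxWait = 1 * time.Minute`). A minimal standalone sketch, not part of the patch, showing how the wait grows per failed attempt under that shift-based formula:

```go
// Sketch: mirrors the backoff math used in (*Cache).refresh to show how the
// wait grows once the error count exceeds CacheRefreshBackoffMin, capping at
// CacheRefreshMaxWait. Constant names are copied from the patch above.
package main

import (
	"fmt"
	"time"
)

const (
	cacheRefreshBackoffMin = 3               // attempts before backing off
	cacheRefreshMaxWait    = 1 * time.Minute // maximum backoff wait time
)

func backoffWait(attempt uint) time.Duration {
	if attempt <= cacheRefreshBackoffMin {
		return 0 // the first few failures retry immediately
	}
	wait := (1 << (attempt - cacheRefreshBackoffMin)) * time.Second
	if wait > cacheRefreshMaxWait {
		wait = cacheRefreshMaxWait
	}
	return wait
}

func main() {
	for attempt := uint(1); attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d -> wait %s\n", attempt, backoffWait(attempt))
	}
	// attempts 1-3 wait 0s, then 2s, 4s, 8s, 16s, 32s, 1m0s, 1m0s
}
```

So the first three failures retry immediately, and from the ninth consecutive failure onward every retry waits the full minute.
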
-func (c *Cache) refresh(opts *RegisterOptions, attempt uint8, t string, key string, r Request) { +func (c *Cache) refresh(opts *RegisterOptions, attempt uint, t string, key string, r Request) { // Sanity-check, we should not schedule anything that has refresh disabled if !opts.Refresh { return From c90b353eea57fc28a506f0f29cf30d623bbf44b6 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Thu, 3 May 2018 12:50:45 -0700 Subject: [PATCH 245/539] Move connect CA provider to separate package --- agent/connect/{ => ca}/ca_provider.go | 4 +- .../ca/ca_provider_consul.go} | 63 ++++---- .../ca/ca_provider_consul_test.go} | 145 +++++++++++------- agent/connect/{ca.go => parsing.go} | 0 agent/consul/connect_ca_endpoint_test.go | 15 +- agent/consul/consul_ca_delegate.go | 28 ++++ agent/consul/leader.go | 7 +- agent/consul/server.go | 4 +- 8 files changed, 158 insertions(+), 108 deletions(-) rename agent/connect/{ => ca}/ca_provider.go (94%) rename agent/{consul/connect_ca_provider.go => connect/ca/ca_provider_consul.go} (90%) rename agent/{consul/connect_ca_provider_test.go => connect/ca/ca_provider_consul_test.go} (57%) rename agent/connect/{ca.go => parsing.go} (100%) create mode 100644 agent/consul/consul_ca_delegate.go diff --git a/agent/connect/ca_provider.go b/agent/connect/ca/ca_provider.go similarity index 94% rename from agent/connect/ca_provider.go rename to agent/connect/ca/ca_provider.go index bec028851..d557d289c 100644 --- a/agent/connect/ca_provider.go +++ b/agent/connect/ca/ca_provider.go @@ -4,10 +4,10 @@ import ( "crypto/x509" ) -// CAProvider is the interface for Consul to interact with +// Provider is the interface for Consul to interact with // an external CA that provides leaf certificate signing for // given SpiffeIDServices. -type CAProvider interface { +type Provider interface { // Active root returns the currently active root CA for this // provider. 
This should be a parent of the certificate returned by // ActiveIntermediate() diff --git a/agent/consul/connect_ca_provider.go b/agent/connect/ca/ca_provider_consul.go similarity index 90% rename from agent/consul/connect_ca_provider.go rename to agent/connect/ca/ca_provider_consul.go index 0d7d851b0..7d925a40f 100644 --- a/agent/consul/connect_ca_provider.go +++ b/agent/connect/ca/ca_provider_consul.go @@ -1,4 +1,4 @@ -package consul +package connect import ( "bytes" @@ -15,34 +15,39 @@ import ( "time" "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/agent/consul/state" "github.com/hashicorp/consul/agent/structs" "github.com/mitchellh/mapstructure" ) type ConsulCAProvider struct { - config *structs.ConsulCAProviderConfig - - id string - srv *Server + config *structs.ConsulCAProviderConfig + id string + delegate ConsulCAStateDelegate sync.RWMutex } +type ConsulCAStateDelegate interface { + State() *state.Store + ApplyCARequest(*structs.CARequest) error +} + // NewConsulCAProvider returns a new instance of the Consul CA provider, // bootstrapping its state in the state store necessary -func NewConsulCAProvider(rawConfig map[string]interface{}, srv *Server) (*ConsulCAProvider, error) { +func NewConsulCAProvider(rawConfig map[string]interface{}, delegate ConsulCAStateDelegate) (*ConsulCAProvider, error) { conf, err := ParseConsulCAConfig(rawConfig) if err != nil { return nil, err } provider := &ConsulCAProvider{ - config: conf, - srv: srv, - id: fmt.Sprintf("%s,%s", conf.PrivateKey, conf.RootCert), + config: conf, + delegate: delegate, + id: fmt.Sprintf("%s,%s", conf.PrivateKey, conf.RootCert), } // Check if this configuration of the provider has already been // initialized in the state store. - state := srv.fsm.State() + state := delegate.State() _, providerState, err := state.CAProviderState(provider.id) if err != nil { return nil, err @@ -64,13 +69,9 @@ func NewConsulCAProvider(rawConfig map[string]interface{}, srv *Server) (*Consul Op: structs.CAOpSetProviderState, ProviderState: &newState, } - resp, err := srv.raftApply(structs.ConnectCARequestType, args) - if err != nil { + if err := delegate.ApplyCARequest(args); err != nil { return nil, err } - if respErr, ok := resp.(error); ok { - return nil, respErr - } } idx, _, err := state.CAProviderState(provider.id) @@ -80,7 +81,7 @@ func NewConsulCAProvider(rawConfig map[string]interface{}, srv *Server) (*Consul // Generate a private key if needed if conf.PrivateKey == "" { - pk, err := generatePrivateKey() + pk, err := GeneratePrivateKey() if err != nil { return nil, err } @@ -105,13 +106,9 @@ func NewConsulCAProvider(rawConfig map[string]interface{}, srv *Server) (*Consul Op: structs.CAOpSetProviderState, ProviderState: &newState, } - resp, err := srv.raftApply(structs.ConnectCARequestType, args) - if err != nil { + if err := delegate.ApplyCARequest(args); err != nil { return nil, err } - if respErr, ok := resp.(error); ok { - return nil, respErr - } return provider, nil } @@ -131,7 +128,7 @@ func ParseConsulCAConfig(raw map[string]interface{}) (*structs.ConsulCAProviderC // Return the active root CA and generate a new one if needed func (c *ConsulCAProvider) ActiveRoot() (string, error) { - state := c.srv.fsm.State() + state := c.delegate.State() _, providerState, err := state.CAProviderState(c.id) if err != nil { return "", err @@ -165,13 +162,9 @@ func (c *ConsulCAProvider) Cleanup() error { Op: structs.CAOpDeleteProviderState, ProviderState: &structs.CAConsulProviderState{ID: c.id}, } - resp, err := 
c.srv.raftApply(structs.ConnectCARequestType, args) - if err != nil { + if err := c.delegate.ApplyCARequest(args); err != nil { return err } - if respErr, ok := resp.(error); ok { - return respErr - } return nil } @@ -185,7 +178,7 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { defer c.Unlock() // Get the provider state - state := c.srv.fsm.State() + state := c.delegate.State() _, providerState, err := state.CAProviderState(c.id) if err != nil { return "", err @@ -274,7 +267,7 @@ func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { defer c.Unlock() // Get the provider state - state := c.srv.fsm.State() + state := c.delegate.State() _, providerState, err := state.CAProviderState(c.id) if err != nil { return "", err @@ -333,19 +326,15 @@ func (c *ConsulCAProvider) incrementSerialIndex(providerState *structs.CAConsulP Op: structs.CAOpSetProviderState, ProviderState: &newState, } - resp, err := c.srv.raftApply(structs.ConnectCARequestType, args) - if err != nil { + if err := c.delegate.ApplyCARequest(args); err != nil { return err } - if respErr, ok := resp.(error); ok { - return respErr - } return nil } -// generatePrivateKey returns a new private key -func generatePrivateKey() (string, error) { +// GeneratePrivateKey returns a new private key +func GeneratePrivateKey() (string, error) { var pk *ecdsa.PrivateKey pk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) @@ -369,7 +358,7 @@ func generatePrivateKey() (string, error) { // generateCA makes a new root CA using the current private key func (c *ConsulCAProvider) generateCA(privateKey string, sn uint64) (string, error) { - state := c.srv.fsm.State() + state := c.delegate.State() _, config, err := state.CAConfig() if err != nil { return "", err diff --git a/agent/consul/connect_ca_provider_test.go b/agent/connect/ca/ca_provider_consul_test.go similarity index 57% rename from agent/consul/connect_ca_provider_test.go rename to agent/connect/ca/ca_provider_consul_test.go index ead41309f..fd1c8ec29 100644 --- a/agent/consul/connect_ca_provider_test.go +++ b/agent/connect/ca/ca_provider_consul_test.go @@ -1,28 +1,81 @@ -package consul +package connect import ( - "os" + "fmt" "testing" "time" "github.com/hashicorp/consul/agent/connect" - "github.com/hashicorp/consul/testrpc" + "github.com/hashicorp/consul/agent/consul/state" + "github.com/hashicorp/consul/agent/structs" "github.com/stretchr/testify/assert" ) +type consulCAMockDelegate struct { + state *state.Store +} + +func (c *consulCAMockDelegate) State() *state.Store { + return c.state +} + +func (c *consulCAMockDelegate) ApplyCARequest(req *structs.CARequest) error { + idx, _, err := c.state.CAConfig() + if err != nil { + return err + } + + switch req.Op { + case structs.CAOpSetProviderState: + _, err := c.state.CASetProviderState(idx+1, req.ProviderState) + if err != nil { + return err + } + + return nil + case structs.CAOpDeleteProviderState: + if err := c.state.CADeleteProviderState(req.ProviderState.ID); err != nil { + return err + } + + return nil + default: + return fmt.Errorf("Invalid CA operation '%s'", req.Op) + } +} + +func newMockDelegate(t *testing.T, conf *structs.CAConfiguration) *consulCAMockDelegate { + s, err := state.NewStateStore(nil) + if err != nil { + t.Fatalf("err: %s", err) + } + if s == nil { + t.Fatalf("missing state store") + } + if err := s.CASetConfig(0, conf); err != nil { + t.Fatalf("err: %s", err) + } + + return &consulCAMockDelegate{s} +} + +func testConsulCAConfig() *structs.CAConfiguration 
{ + return &structs.CAConfiguration{ + ClusterID: "asdf", + Provider: "consul", + Config: map[string]interface{}{}, + } +} + func TestCAProvider_Bootstrap(t *testing.T) { t.Parallel() assert := assert.New(t) - dir1, s1 := testServer(t) - defer os.RemoveAll(dir1) - defer s1.Shutdown() - codec := rpcClient(t, s1) - defer codec.Close() + conf := testConsulCAConfig() + delegate := newMockDelegate(t, conf) - testrpc.WaitForLeader(t, s1.RPC, "dc1") - - provider := s1.getCAProvider() + provider, err := NewConsulCAProvider(conf.Config, delegate) + assert.NoError(err) root, err := provider.ActiveRoot() assert.NoError(err) @@ -30,14 +83,12 @@ func TestCAProvider_Bootstrap(t *testing.T) { // Intermediate should be the same cert. inter, err := provider.ActiveIntermediate() assert.NoError(err) + assert.Equal(root, inter) - // Make sure we initialize without errors and that the - // root cert gets set to the active cert. - state := s1.fsm.State() - _, activeRoot, err := state.CARootActive(nil) + // Should be a valid cert + parsed, err := connect.ParseCert(root) assert.NoError(err) - assert.Equal(root, activeRoot.RootCert) - assert.Equal(inter, activeRoot.RootCert) + assert.Equal(parsed.URIs[0].String(), fmt.Sprintf("spiffe://%s.consul", conf.ClusterID)) } func TestCAProvider_Bootstrap_WithCert(t *testing.T) { @@ -46,49 +97,35 @@ func TestCAProvider_Bootstrap_WithCert(t *testing.T) { // Make sure setting a custom private key/root cert works. assert := assert.New(t) rootCA := connect.TestCA(t, nil) - dir1, s1 := testServerWithConfig(t, func(c *Config) { - c.CAConfig.Config["PrivateKey"] = rootCA.SigningKey - c.CAConfig.Config["RootCert"] = rootCA.RootCert - }) - defer os.RemoveAll(dir1) - defer s1.Shutdown() - codec := rpcClient(t, s1) - defer codec.Close() + conf := testConsulCAConfig() + conf.Config = map[string]interface{}{ + "PrivateKey": rootCA.SigningKey, + "RootCert": rootCA.RootCert, + } + delegate := newMockDelegate(t, conf) - testrpc.WaitForLeader(t, s1.RPC, "dc1") - - provider := s1.getCAProvider() + provider, err := NewConsulCAProvider(conf.Config, delegate) + assert.NoError(err) root, err := provider.ActiveRoot() assert.NoError(err) - - // Make sure we initialize without errors and that the - // root cert we provided gets set to the active cert. - state := s1.fsm.State() - _, activeRoot, err := state.CARootActive(nil) - assert.NoError(err) - assert.Equal(root, activeRoot.RootCert) - assert.Equal(rootCA.RootCert, activeRoot.RootCert) + assert.Equal(root, rootCA.RootCert) } func TestCAProvider_SignLeaf(t *testing.T) { t.Parallel() assert := assert.New(t) - dir1, s1 := testServer(t) - defer os.RemoveAll(dir1) - defer s1.Shutdown() - codec := rpcClient(t, s1) - defer codec.Close() + conf := testConsulCAConfig() + delegate := newMockDelegate(t, conf) - testrpc.WaitForLeader(t, s1.RPC, "dc1") - - provider := s1.getCAProvider() + provider, err := NewConsulCAProvider(conf.Config, delegate) + assert.NoError(err) spiffeService := &connect.SpiffeIDService{ - Host: s1.config.NodeName, + Host: "node1", Namespace: "default", - Datacenter: s1.config.Datacenter, + Datacenter: "dc1", Service: "foo", } @@ -141,18 +178,12 @@ func TestCAProvider_CrossSignCA(t *testing.T) { t.Parallel() assert := assert.New(t) + conf := testConsulCAConfig() + delegate := newMockDelegate(t, conf) + provider, err := NewConsulCAProvider(conf.Config, delegate) + assert.NoError(err) - // Make sure setting a custom private key/root cert works. 
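
For a sense of how the opaque `Config` map used in these tests is consumed, here is a rough sketch of handing an existing key pair to `ParseConsulCAConfig`. The import alias follows the package layout introduced in this series; the PEM strings are placeholders, not real material.

```go
// Illustrative only: build the provider config map the same way the tests do
// and run it through ParseConsulCAConfig. Supplying RootCert without
// PrivateKey is rejected by the validation shown in the provider above.
package main

import (
	"fmt"
	"log"

	ca "github.com/hashicorp/consul/agent/connect/ca"
)

func main() {
	raw := map[string]interface{}{
		// Placeholder PEM values; in practice these come from an existing CA.
		"PrivateKey": "-----BEGIN EC PRIVATE KEY-----\n...\n-----END EC PRIVATE KEY-----\n",
		"RootCert":   "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n",
	}

	conf, err := ca.ParseConsulCAConfig(raw)
	if err != nil {
		log.Fatal(err) // e.g. "must provide a private key when providing a root cert"
	}
	fmt.Println("parsed OK, rotation period:", conf.RotationPeriod)
}
```
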
- dir1, s1 := testServer(t) - defer os.RemoveAll(dir1) - defer s1.Shutdown() - codec := rpcClient(t, s1) - defer codec.Close() - - testrpc.WaitForLeader(t, s1.RPC, "dc1") - - provider := s1.getCAProvider() - + // Make a new CA cert to get cross-signed. rootCA := connect.TestCA(t, nil) rootPEM, err := provider.ActiveRoot() assert.NoError(err) diff --git a/agent/connect/ca.go b/agent/connect/parsing.go similarity index 100% rename from agent/connect/ca.go rename to agent/connect/parsing.go diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go index 321bcfcb4..49baddfb9 100644 --- a/agent/consul/connect_ca_endpoint_test.go +++ b/agent/consul/connect_ca_endpoint_test.go @@ -7,6 +7,7 @@ import ( "time" "github.com/hashicorp/consul/agent/connect" + connect_ca "github.com/hashicorp/consul/agent/connect/ca" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/testrpc" "github.com/hashicorp/net-rpc-msgpackrpc" @@ -82,9 +83,9 @@ func TestConnectCAConfig_GetSet(t *testing.T) { var reply structs.CAConfiguration assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationGet", args, &reply)) - actual, err := ParseConsulCAConfig(reply.Config) + actual, err := connect_ca.ParseConsulCAConfig(reply.Config) assert.NoError(err) - expected, err := ParseConsulCAConfig(s1.config.CAConfig.Config) + expected, err := connect_ca.ParseConsulCAConfig(s1.config.CAConfig.Config) assert.NoError(err) assert.Equal(reply.Provider, s1.config.CAConfig.Provider) assert.Equal(actual, expected) @@ -117,9 +118,9 @@ func TestConnectCAConfig_GetSet(t *testing.T) { var reply structs.CAConfiguration assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationGet", args, &reply)) - actual, err := ParseConsulCAConfig(reply.Config) + actual, err := connect_ca.ParseConsulCAConfig(reply.Config) assert.NoError(err) - expected, err := ParseConsulCAConfig(newConfig.Config) + expected, err := connect_ca.ParseConsulCAConfig(newConfig.Config) assert.NoError(err) assert.Equal(reply.Provider, newConfig.Provider) assert.Equal(actual, expected) @@ -149,7 +150,7 @@ func TestConnectCAConfig_TriggerRotation(t *testing.T) { // Update the provider config to use a new private key, which should // cause a rotation. - newKey, err := generatePrivateKey() + newKey, err := connect_ca.GeneratePrivateKey() assert.NoError(err) newConfig := &structs.CAConfiguration{ Provider: "consul", @@ -219,9 +220,9 @@ func TestConnectCAConfig_TriggerRotation(t *testing.T) { var reply structs.CAConfiguration assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationGet", args, &reply)) - actual, err := ParseConsulCAConfig(reply.Config) + actual, err := connect_ca.ParseConsulCAConfig(reply.Config) assert.NoError(err) - expected, err := ParseConsulCAConfig(newConfig.Config) + expected, err := connect_ca.ParseConsulCAConfig(newConfig.Config) assert.NoError(err) assert.Equal(reply.Provider, newConfig.Provider) assert.Equal(actual, expected) diff --git a/agent/consul/consul_ca_delegate.go b/agent/consul/consul_ca_delegate.go new file mode 100644 index 000000000..5f32a9396 --- /dev/null +++ b/agent/consul/consul_ca_delegate.go @@ -0,0 +1,28 @@ +package consul + +import ( + "github.com/hashicorp/consul/agent/consul/state" + "github.com/hashicorp/consul/agent/structs" +) + +// consulCADelegate providers callbacks for the Consul CA provider +// to use the state store for its operations. 
+type consulCADelegate struct { + srv *Server +} + +func (c *consulCADelegate) State() *state.Store { + return c.srv.fsm.State() +} + +func (c *consulCADelegate) ApplyCARequest(req *structs.CARequest) error { + resp, err := c.srv.raftApply(structs.ConnectCARequestType, req) + if err != nil { + return err + } + if respErr, ok := resp.(error); ok { + return respErr + } + + return nil +} diff --git a/agent/consul/leader.go b/agent/consul/leader.go index 4d61ed0c4..b07a3622d 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -10,6 +10,7 @@ import ( "github.com/armon/go-metrics" "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/connect" + connect_ca "github.com/hashicorp/consul/agent/connect/ca" "github.com/hashicorp/consul/agent/consul/autopilot" "github.com/hashicorp/consul/agent/metadata" "github.com/hashicorp/consul/agent/structs" @@ -478,10 +479,10 @@ func (s *Server) initializeCA() error { } // createProvider returns a connect CA provider from the given config. -func (s *Server) createCAProvider(conf *structs.CAConfiguration) (connect.CAProvider, error) { +func (s *Server) createCAProvider(conf *structs.CAConfiguration) (connect_ca.Provider, error) { switch conf.Provider { case structs.ConsulCAProvider: - return NewConsulCAProvider(conf.Config, s) + return connect_ca.NewConsulCAProvider(conf.Config, &consulCADelegate{s}) default: return nil, fmt.Errorf("unknown CA provider %q", conf.Provider) } @@ -510,7 +511,7 @@ func (s *Server) getCAProvider() connect.CAProvider { return result } -func (s *Server) setCAProvider(newProvider connect.CAProvider) { +func (s *Server) setCAProvider(newProvider connect_ca.Provider) { s.caProviderLock.Lock() defer s.caProviderLock.Unlock() s.caProvider = newProvider diff --git a/agent/consul/server.go b/agent/consul/server.go index e15d5f71c..871115c35 100644 --- a/agent/consul/server.go +++ b/agent/consul/server.go @@ -18,7 +18,7 @@ import ( "time" "github.com/hashicorp/consul/acl" - "github.com/hashicorp/consul/agent/connect" + connect_ca "github.com/hashicorp/consul/agent/connect/ca" "github.com/hashicorp/consul/agent/consul/autopilot" "github.com/hashicorp/consul/agent/consul/fsm" "github.com/hashicorp/consul/agent/consul/state" @@ -99,7 +99,7 @@ type Server struct { // caProvider is the current CA provider in use for Connect. This is // only non-nil when we are the leader. 
- caProvider connect.CAProvider + caProvider connect_ca.Provider caProviderLock sync.RWMutex // Consul configuration From 5998623c445ca8633453ab035be4090c5176526c Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 4 May 2018 15:28:11 -0700 Subject: [PATCH 246/539] Add test for ca config http endpoint --- agent/connect/ca/ca_provider_consul.go | 4 +- agent/connect_ca_endpoint_test.go | 62 +++++++++++++++++++++++++- 2 files changed, 63 insertions(+), 3 deletions(-) diff --git a/agent/connect/ca/ca_provider_consul.go b/agent/connect/ca/ca_provider_consul.go index 7d925a40f..2b119c0a3 100644 --- a/agent/connect/ca/ca_provider_consul.go +++ b/agent/connect/ca/ca_provider_consul.go @@ -114,7 +114,7 @@ func NewConsulCAProvider(rawConfig map[string]interface{}, delegate ConsulCAStat } func ParseConsulCAConfig(raw map[string]interface{}) (*structs.ConsulCAProviderConfig, error) { - var config *structs.ConsulCAProviderConfig + var config structs.ConsulCAProviderConfig if err := mapstructure.WeakDecode(raw, &config); err != nil { return nil, fmt.Errorf("error decoding config: %s", err) } @@ -123,7 +123,7 @@ func ParseConsulCAConfig(raw map[string]interface{}) (*structs.ConsulCAProviderC return nil, fmt.Errorf("must provide a private key when providing a root cert") } - return config, nil + return &config, nil } // Return the active root CA and generate a new one if needed diff --git a/agent/connect_ca_endpoint_test.go b/agent/connect_ca_endpoint_test.go index a9b355e0d..04abcfa9a 100644 --- a/agent/connect_ca_endpoint_test.go +++ b/agent/connect_ca_endpoint_test.go @@ -1,11 +1,14 @@ package agent import ( + "bytes" "net/http" "net/http/httptest" "testing" + "time" "github.com/hashicorp/consul/agent/connect" + connect_ca "github.com/hashicorp/consul/agent/connect/ca" "github.com/hashicorp/consul/agent/structs" "github.com/stretchr/testify/assert" ) @@ -42,7 +45,7 @@ func TestConnectCARoots_list(t *testing.T) { req, _ := http.NewRequest("GET", "/v1/connect/ca/roots", nil) resp := httptest.NewRecorder() obj, err := a.srv.ConnectCARoots(resp, req) - assert.Nil(err) + assert.NoError(err) value := obj.(structs.IndexedCARoots) assert.Equal(value.ActiveRootID, ca2.ID) @@ -54,3 +57,60 @@ func TestConnectCARoots_list(t *testing.T) { assert.Equal("", r.SigningKey) } } + +func TestConnectCAConfig(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + expected := &structs.ConsulCAProviderConfig{ + RotationPeriod: 90 * 24 * time.Hour, + } + + // Get the initial config. + { + req, _ := http.NewRequest("GET", "/v1/connect/ca/configuration", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.ConnectCAConfiguration(resp, req) + assert.NoError(err) + + value := obj.(structs.CAConfiguration) + parsed, err := connect_ca.ParseConsulCAConfig(value.Config) + assert.NoError(err) + assert.Equal("consul", value.Provider) + assert.Equal(expected, parsed) + } + + // Set the config. + { + body := bytes.NewBuffer([]byte(` + { + "Provider": "consul", + "Config": { + "RotationPeriod": 3600000000000 + } + }`)) + req, _ := http.NewRequest("PUT", "/v1/connect/ca/configuration", body) + resp := httptest.NewRecorder() + _, err := a.srv.ConnectCAConfiguration(resp, req) + assert.NoError(err) + } + + // The config should be updated now. 
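
Outside the test harness, the same round trip can be driven against a running agent's HTTP API. A rough sketch with plain `net/http`; the agent address is an assumption and the rotation period is the same nanosecond value used in the test body above:

```go
// Sketch: PUT a new CA configuration to a local agent, then read it back.
// Endpoint path and JSON shape match the handler exercised by the test.
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	body := bytes.NewBufferString(`{
		"Provider": "consul",
		"Config": {"RotationPeriod": 3600000000000}
	}`)

	req, err := http.NewRequest("PUT", "http://127.0.0.1:8500/v1/connect/ca/configuration", body)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Read the stored configuration back.
	get, err := http.Get("http://127.0.0.1:8500/v1/connect/ca/configuration")
	if err != nil {
		log.Fatal(err)
	}
	defer get.Body.Close()
	out, _ := ioutil.ReadAll(get.Body)
	fmt.Println(string(out))
}
```
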
+ { + expected.RotationPeriod = time.Hour + req, _ := http.NewRequest("GET", "/v1/connect/ca/configuration", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.ConnectCAConfiguration(resp, req) + assert.NoError(err) + + value := obj.(structs.CAConfiguration) + //t.Fatalf("%#v", value) + parsed, err := connect_ca.ParseConsulCAConfig(value.Config) + assert.NoError(err) + assert.Equal("consul", value.Provider) + assert.Equal(expected, parsed) + } +} From baf4db1c72c64a3dfdd3dee98cd7f88564134474 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 4 May 2018 16:01:38 -0700 Subject: [PATCH 247/539] Use provider state table for a global serial index --- agent/connect/ca/ca_provider_consul.go | 19 +++++++------- agent/consul/fsm/commands_oss_test.go | 7 +++--- agent/consul/state/connect_ca_test.go | 35 +++++++++++--------------- agent/structs/connect_ca.go | 7 +++--- 4 files changed, 30 insertions(+), 38 deletions(-) diff --git a/agent/connect/ca/ca_provider_consul.go b/agent/connect/ca/ca_provider_consul.go index 2b119c0a3..922472eb7 100644 --- a/agent/connect/ca/ca_provider_consul.go +++ b/agent/connect/ca/ca_provider_consul.go @@ -179,7 +179,7 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { // Get the provider state state := c.delegate.State() - _, providerState, err := state.CAProviderState(c.id) + idx, providerState, err := state.CAProviderState(c.id) if err != nil { return "", err } @@ -215,7 +215,7 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { // Cert template for generation sn := &big.Int{} - sn.SetUint64(providerState.SerialIndex + 1) + sn.SetUint64(idx + 1) template := x509.Certificate{ SerialNumber: sn, Subject: pkix.Name{CommonName: serviceId.Service}, @@ -252,7 +252,7 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { return "", fmt.Errorf("error encoding private key: %s", err) } - err = c.incrementSerialIndex(providerState) + err = c.incrementProviderIndex(providerState) if err != nil { return "", err } @@ -268,7 +268,7 @@ func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { // Get the provider state state := c.delegate.State() - _, providerState, err := state.CAProviderState(c.id) + idx, providerState, err := state.CAProviderState(c.id) if err != nil { return "", err } @@ -290,7 +290,7 @@ func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { // Create the cross-signing template from the existing root CA serialNum := &big.Int{} - serialNum.SetUint64(providerState.SerialIndex + 1) + serialNum.SetUint64(idx + 1) template := *cert template.SerialNumber = serialNum template.SignatureAlgorithm = rootCA.SignatureAlgorithm @@ -309,7 +309,7 @@ func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { return "", fmt.Errorf("error encoding private key: %s", err) } - err = c.incrementSerialIndex(providerState) + err = c.incrementProviderIndex(providerState) if err != nil { return "", err } @@ -317,11 +317,10 @@ func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { return buf.String(), nil } -// incrementSerialIndex increments the cert serial number index in the provider -// state. -func (c *ConsulCAProvider) incrementSerialIndex(providerState *structs.CAConsulProviderState) error { +// incrementProviderIndex does a write to increment the provider state store table index +// used for serial numbers when generating certificates. 
+func (c *ConsulCAProvider) incrementProviderIndex(providerState *structs.CAConsulProviderState) error { newState := *providerState - newState.SerialIndex++ args := &structs.CARequest{ Op: structs.CAOpSetProviderState, ProviderState: &newState, diff --git a/agent/consul/fsm/commands_oss_test.go b/agent/consul/fsm/commands_oss_test.go index 280bf5b38..85b20b442 100644 --- a/agent/consul/fsm/commands_oss_test.go +++ b/agent/consul/fsm/commands_oss_test.go @@ -1328,10 +1328,9 @@ func TestFSM_CABuiltinProvider(t *testing.T) { // Provider state. expected := &structs.CAConsulProviderState{ - ID: "foo", - PrivateKey: "a", - RootCert: "b", - SerialIndex: 2, + ID: "foo", + PrivateKey: "a", + RootCert: "b", RaftIndex: structs.RaftIndex{ CreateIndex: 1, ModifyIndex: 1, diff --git a/agent/consul/state/connect_ca_test.go b/agent/consul/state/connect_ca_test.go index 4639c7f5a..de914ee16 100644 --- a/agent/consul/state/connect_ca_test.go +++ b/agent/consul/state/connect_ca_test.go @@ -356,10 +356,9 @@ func TestStore_CABuiltinProvider(t *testing.T) { { expected := &structs.CAConsulProviderState{ - ID: "foo", - PrivateKey: "a", - RootCert: "b", - SerialIndex: 1, + ID: "foo", + PrivateKey: "a", + RootCert: "b", } ok, err := s.CASetProviderState(0, expected) @@ -374,10 +373,9 @@ func TestStore_CABuiltinProvider(t *testing.T) { { expected := &structs.CAConsulProviderState{ - ID: "bar", - PrivateKey: "c", - RootCert: "d", - SerialIndex: 2, + ID: "bar", + PrivateKey: "c", + RootCert: "d", } ok, err := s.CASetProviderState(1, expected) @@ -398,16 +396,14 @@ func TestStore_CABuiltinProvider_Snapshot_Restore(t *testing.T) { // Create multiple state entries. before := []*structs.CAConsulProviderState{ { - ID: "bar", - PrivateKey: "y", - RootCert: "z", - SerialIndex: 2, + ID: "bar", + PrivateKey: "y", + RootCert: "z", }, { - ID: "foo", - PrivateKey: "a", - RootCert: "b", - SerialIndex: 1, + ID: "foo", + PrivateKey: "a", + RootCert: "b", }, } @@ -423,10 +419,9 @@ func TestStore_CABuiltinProvider_Snapshot_Restore(t *testing.T) { // Modify the state store. after := &structs.CAConsulProviderState{ - ID: "foo", - PrivateKey: "c", - RootCert: "d", - SerialIndex: 1, + ID: "foo", + PrivateKey: "c", + RootCert: "d", } ok, err := s.CASetProviderState(100, after) assert.NoError(err) diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 0570057b6..ca60a677f 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -168,10 +168,9 @@ type ConsulCAProviderConfig struct { // CAConsulProviderState is used to track the built-in Consul CA provider's state. 
type CAConsulProviderState struct { - ID string - PrivateKey string - RootCert string - SerialIndex uint64 + ID string + PrivateKey string + RootCert string RaftIndex } From 1660f9ebab278296759571cc847d08598c4f4f93 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 4 May 2018 16:01:54 -0700 Subject: [PATCH 248/539] Add more metadata to structs.CARoot --- agent/consul/connect_ca_endpoint.go | 14 ++++++------ agent/consul/leader.go | 33 ++++++++++++++++++++++------- agent/structs/connect_ca.go | 11 ++++++++++ 3 files changed, 42 insertions(+), 16 deletions(-) diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 619418bab..8acaa4ed0 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -98,15 +98,9 @@ func (s *ConnectCA) ConfigurationSet( return err } - id, err := connect.CalculateCertFingerprint(newRootPEM) + newActiveRoot, err := parseCARoot(newRootPEM, args.Config.Provider) if err != nil { - return fmt.Errorf("error parsing root fingerprint: %v", err) - } - newActiveRoot := &structs.CARoot{ - ID: id, - Name: fmt.Sprintf("%s CA Root Cert", config.Provider), - RootCert: newRootPEM, - Active: true, + return err } // Compare the new provider's root CA ID to the current one. If they @@ -240,6 +234,10 @@ func (s *ConnectCA) Roots( reply.Roots[i] = &structs.CARoot{ ID: r.ID, Name: r.Name, + SerialNumber: r.SerialNumber, + SigningKeyID: r.SigningKeyID, + NotBefore: r.NotBefore, + NotAfter: r.NotAfter, RootCert: r.RootCert, IntermediateCerts: r.IntermediateCerts, RaftIndex: r.RaftIndex, diff --git a/agent/consul/leader.go b/agent/consul/leader.go index b07a3622d..a06871888 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -4,6 +4,7 @@ import ( "fmt" "net" "strconv" + "strings" "sync" "time" @@ -427,15 +428,9 @@ func (s *Server) initializeCA() error { return fmt.Errorf("error getting root cert: %v", err) } - id, err := connect.CalculateCertFingerprint(rootPEM) + rootCA, err := parseCARoot(rootPEM, conf.Provider) if err != nil { - return fmt.Errorf("error parsing root fingerprint: %v", err) - } - rootCA := &structs.CARoot{ - ID: id, - Name: fmt.Sprintf("%s CA Root Cert", conf.Provider), - RootCert: rootPEM, - Active: true, + return err } // Check if the CA root is already initialized and exit if it is. @@ -478,6 +473,28 @@ func (s *Server) initializeCA() error { return nil } +// parseCARoot returns a filled-in structs.CARoot from a raw PEM value. +func parseCARoot(pemValue, provider string) (*structs.CARoot, error) { + id, err := connect.CalculateCertFingerprint(pemValue) + if err != nil { + return nil, fmt.Errorf("error parsing root fingerprint: %v", err) + } + rootCert, err := connect.ParseCert(pemValue) + if err != nil { + return nil, fmt.Errorf("error parsing root cert: %v", err) + } + return &structs.CARoot{ + ID: id, + Name: fmt.Sprintf("%s CA Root Cert", strings.Title(provider)), + SerialNumber: rootCert.SerialNumber.Uint64(), + SigningKeyID: connect.HexString(rootCert.AuthorityKeyId), + NotBefore: rootCert.NotBefore, + NotAfter: rootCert.NotAfter, + RootCert: pemValue, + Active: true, + }, nil +} + // createProvider returns a connect CA provider from the given config. 
func (s *Server) createCAProvider(conf *structs.CAConfiguration) (connect_ca.Provider, error) { switch conf.Provider { diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index ca60a677f..3a4ca8131 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -28,6 +28,17 @@ type CARoot struct { // opaque to Consul and is not used for anything internally. Name string + // SerialNumber is the x509 serial number of the certificate. + SerialNumber uint64 + + // SigningKeyID is the ID of the public key that corresponds to the + // private key used to sign the certificate. + SigningKeyID string + + // Time validity bounds. + NotBefore time.Time + NotAfter time.Time + // RootCert is the PEM-encoded public certificate. RootCert string From d1265bc38b7a99bad1221baa175997583209959c Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Wed, 9 May 2018 15:12:31 -0700 Subject: [PATCH 249/539] Rename some of the CA structs/files --- .../ca/{ca_provider.go => provider.go} | 0 ..._provider_consul.go => provider_consul.go} | 55 +++++-------------- ...consul_test.go => provider_consul_test.go} | 10 ++-- agent/connect/generate.go | 34 ++++++++++++ agent/connect_ca_endpoint_test.go | 7 +-- agent/consul/connect_ca_endpoint_test.go | 16 +++--- agent/consul/leader.go | 12 ++-- agent/consul/server.go | 4 +- 8 files changed, 73 insertions(+), 65 deletions(-) rename agent/connect/ca/{ca_provider.go => provider.go} (100%) rename agent/connect/ca/{ca_provider_consul.go => provider_consul.go} (85%) rename agent/connect/ca/{ca_provider_consul_test.go => provider_consul_test.go} (94%) create mode 100644 agent/connect/generate.go diff --git a/agent/connect/ca/ca_provider.go b/agent/connect/ca/provider.go similarity index 100% rename from agent/connect/ca/ca_provider.go rename to agent/connect/ca/provider.go diff --git a/agent/connect/ca/ca_provider_consul.go b/agent/connect/ca/provider_consul.go similarity index 85% rename from agent/connect/ca/ca_provider_consul.go rename to agent/connect/ca/provider_consul.go index 922472eb7..8fa1fb3d3 100644 --- a/agent/connect/ca/ca_provider_consul.go +++ b/agent/connect/ca/provider_consul.go @@ -2,8 +2,6 @@ package connect import ( "bytes" - "crypto/ecdsa" - "crypto/elliptic" "crypto/rand" "crypto/x509" "crypto/x509/pkix" @@ -20,26 +18,26 @@ import ( "github.com/mitchellh/mapstructure" ) -type ConsulCAProvider struct { +type ConsulProvider struct { config *structs.ConsulCAProviderConfig id string - delegate ConsulCAStateDelegate + delegate ConsulProviderStateDelegate sync.RWMutex } -type ConsulCAStateDelegate interface { +type ConsulProviderStateDelegate interface { State() *state.Store ApplyCARequest(*structs.CARequest) error } -// NewConsulCAProvider returns a new instance of the Consul CA provider, +// NewConsulProvider returns a new instance of the Consul CA provider, // bootstrapping its state in the state store necessary -func NewConsulCAProvider(rawConfig map[string]interface{}, delegate ConsulCAStateDelegate) (*ConsulCAProvider, error) { +func NewConsulProvider(rawConfig map[string]interface{}, delegate ConsulProviderStateDelegate) (*ConsulProvider, error) { conf, err := ParseConsulCAConfig(rawConfig) if err != nil { return nil, err } - provider := &ConsulCAProvider{ + provider := &ConsulProvider{ config: conf, delegate: delegate, id: fmt.Sprintf("%s,%s", conf.PrivateKey, conf.RootCert), @@ -81,7 +79,7 @@ func NewConsulCAProvider(rawConfig map[string]interface{}, delegate ConsulCAStat // Generate a private key if needed if conf.PrivateKey == 
"" { - pk, err := GeneratePrivateKey() + pk, err := connect.GeneratePrivateKey() if err != nil { return nil, err } @@ -127,7 +125,7 @@ func ParseConsulCAConfig(raw map[string]interface{}) (*structs.ConsulCAProviderC } // Return the active root CA and generate a new one if needed -func (c *ConsulCAProvider) ActiveRoot() (string, error) { +func (c *ConsulProvider) ActiveRoot() (string, error) { state := c.delegate.State() _, providerState, err := state.CAProviderState(c.id) if err != nil { @@ -139,13 +137,13 @@ func (c *ConsulCAProvider) ActiveRoot() (string, error) { // We aren't maintaining separate root/intermediate CAs for the builtin // provider, so just return the root. -func (c *ConsulCAProvider) ActiveIntermediate() (string, error) { +func (c *ConsulProvider) ActiveIntermediate() (string, error) { return c.ActiveRoot() } // We aren't maintaining separate root/intermediate CAs for the builtin // provider, so just generate a CSR for the active root. -func (c *ConsulCAProvider) GenerateIntermediate() (string, error) { +func (c *ConsulProvider) GenerateIntermediate() (string, error) { ca, err := c.ActiveIntermediate() if err != nil { return "", err @@ -157,7 +155,7 @@ func (c *ConsulCAProvider) GenerateIntermediate() (string, error) { } // Remove the state store entry for this provider instance. -func (c *ConsulCAProvider) Cleanup() error { +func (c *ConsulProvider) Cleanup() error { args := &structs.CARequest{ Op: structs.CAOpDeleteProviderState, ProviderState: &structs.CAConsulProviderState{ID: c.id}, @@ -171,7 +169,7 @@ func (c *ConsulCAProvider) Cleanup() error { // Sign returns a new certificate valid for the given SpiffeIDService // using the current CA. -func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { +func (c *ConsulProvider) Sign(csr *x509.CertificateRequest) (string, error) { // Lock during the signing so we don't use the same index twice // for different cert serial numbers. c.Lock() @@ -262,7 +260,7 @@ func (c *ConsulCAProvider) Sign(csr *x509.CertificateRequest) (string, error) { } // CrossSignCA returns the given intermediate CA cert signed by the current active root. -func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { +func (c *ConsulProvider) CrossSignCA(cert *x509.Certificate) (string, error) { c.Lock() defer c.Unlock() @@ -319,7 +317,7 @@ func (c *ConsulCAProvider) CrossSignCA(cert *x509.Certificate) (string, error) { // incrementProviderIndex does a write to increment the provider state store table index // used for serial numbers when generating certificates. 
-func (c *ConsulCAProvider) incrementProviderIndex(providerState *structs.CAConsulProviderState) error { +func (c *ConsulProvider) incrementProviderIndex(providerState *structs.CAConsulProviderState) error { newState := *providerState args := &structs.CARequest{ Op: structs.CAOpSetProviderState, @@ -332,31 +330,8 @@ func (c *ConsulCAProvider) incrementProviderIndex(providerState *structs.CAConsu return nil } -// GeneratePrivateKey returns a new private key -func GeneratePrivateKey() (string, error) { - var pk *ecdsa.PrivateKey - - pk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) - if err != nil { - return "", fmt.Errorf("error generating private key: %s", err) - } - - bs, err := x509.MarshalECPrivateKey(pk) - if err != nil { - return "", fmt.Errorf("error generating private key: %s", err) - } - - var buf bytes.Buffer - err = pem.Encode(&buf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: bs}) - if err != nil { - return "", fmt.Errorf("error encoding private key: %s", err) - } - - return buf.String(), nil -} - // generateCA makes a new root CA using the current private key -func (c *ConsulCAProvider) generateCA(privateKey string, sn uint64) (string, error) { +func (c *ConsulProvider) generateCA(privateKey string, sn uint64) (string, error) { state := c.delegate.State() _, config, err := state.CAConfig() if err != nil { diff --git a/agent/connect/ca/ca_provider_consul_test.go b/agent/connect/ca/provider_consul_test.go similarity index 94% rename from agent/connect/ca/ca_provider_consul_test.go rename to agent/connect/ca/provider_consul_test.go index fd1c8ec29..9f8cc04b4 100644 --- a/agent/connect/ca/ca_provider_consul_test.go +++ b/agent/connect/ca/provider_consul_test.go @@ -74,7 +74,7 @@ func TestCAProvider_Bootstrap(t *testing.T) { conf := testConsulCAConfig() delegate := newMockDelegate(t, conf) - provider, err := NewConsulCAProvider(conf.Config, delegate) + provider, err := NewConsulProvider(conf.Config, delegate) assert.NoError(err) root, err := provider.ActiveRoot() @@ -104,7 +104,7 @@ func TestCAProvider_Bootstrap_WithCert(t *testing.T) { } delegate := newMockDelegate(t, conf) - provider, err := NewConsulCAProvider(conf.Config, delegate) + provider, err := NewConsulProvider(conf.Config, delegate) assert.NoError(err) root, err := provider.ActiveRoot() @@ -119,7 +119,7 @@ func TestCAProvider_SignLeaf(t *testing.T) { conf := testConsulCAConfig() delegate := newMockDelegate(t, conf) - provider, err := NewConsulCAProvider(conf.Config, delegate) + provider, err := NewConsulProvider(conf.Config, delegate) assert.NoError(err) spiffeService := &connect.SpiffeIDService{ @@ -143,7 +143,7 @@ func TestCAProvider_SignLeaf(t *testing.T) { assert.NoError(err) assert.Equal(parsed.URIs[0], spiffeService.URI()) assert.Equal(parsed.Subject.CommonName, "foo") - assert.Equal(parsed.SerialNumber.Uint64(), uint64(1)) + assert.Equal(uint64(2), parsed.SerialNumber.Uint64()) // Ensure the cert is valid now and expires within the correct limit. assert.True(parsed.NotAfter.Sub(time.Now()) < 3*24*time.Hour) @@ -180,7 +180,7 @@ func TestCAProvider_CrossSignCA(t *testing.T) { assert := assert.New(t) conf := testConsulCAConfig() delegate := newMockDelegate(t, conf) - provider, err := NewConsulCAProvider(conf.Config, delegate) + provider, err := NewConsulProvider(conf.Config, delegate) assert.NoError(err) // Make a new CA cert to get cross-signed. 
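
Cross-signing is what lets a newly configured root be trusted during rotation: the PEM returned by `CrossSignCA` carries the new CA's identity but is signed by the currently active root's key. A sketch, not part of the patch, of checking that property with the standard library; `parseCert` and the file arguments are local helpers, not Consul APIs:

```go
// Sketch: verify that a cross-signed CA cert chains to the old root.
// Usage: crosscheck <old-root.pem> <cross-signed.pem>
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"io/ioutil"
	"log"
	"os"
)

func parseCert(pemValue []byte) *x509.Certificate {
	block, _ := pem.Decode(pemValue)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	return cert
}

func main() {
	oldPEM, err := ioutil.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	xcPEM, err := ioutil.ReadFile(os.Args[2])
	if err != nil {
		log.Fatal(err)
	}

	oldRoot := parseCert(oldPEM)
	xc := parseCert(xcPEM)

	// The cross-signed cert should validate against the old root's key.
	if err := xc.CheckSignatureFrom(oldRoot); err != nil {
		log.Fatal("not signed by old root: ", err)
	}
	fmt.Println("cross-signed cert chains to", oldRoot.Subject.CommonName)
}
```
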
diff --git a/agent/connect/generate.go b/agent/connect/generate.go new file mode 100644 index 000000000..1226323f0 --- /dev/null +++ b/agent/connect/generate.go @@ -0,0 +1,34 @@ +package connect + +import ( + "bytes" + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/x509" + "encoding/pem" + "fmt" +) + +// GeneratePrivateKey returns a new private key +func GeneratePrivateKey() (string, error) { + var pk *ecdsa.PrivateKey + + pk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + if err != nil { + return "", fmt.Errorf("error generating private key: %s", err) + } + + bs, err := x509.MarshalECPrivateKey(pk) + if err != nil { + return "", fmt.Errorf("error generating private key: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: bs}) + if err != nil { + return "", fmt.Errorf("error encoding private key: %s", err) + } + + return buf.String(), nil +} diff --git a/agent/connect_ca_endpoint_test.go b/agent/connect_ca_endpoint_test.go index 04abcfa9a..afaa5f049 100644 --- a/agent/connect_ca_endpoint_test.go +++ b/agent/connect_ca_endpoint_test.go @@ -8,7 +8,7 @@ import ( "time" "github.com/hashicorp/consul/agent/connect" - connect_ca "github.com/hashicorp/consul/agent/connect/ca" + ca "github.com/hashicorp/consul/agent/connect/ca" "github.com/hashicorp/consul/agent/structs" "github.com/stretchr/testify/assert" ) @@ -77,7 +77,7 @@ func TestConnectCAConfig(t *testing.T) { assert.NoError(err) value := obj.(structs.CAConfiguration) - parsed, err := connect_ca.ParseConsulCAConfig(value.Config) + parsed, err := ca.ParseConsulCAConfig(value.Config) assert.NoError(err) assert.Equal("consul", value.Provider) assert.Equal(expected, parsed) @@ -107,8 +107,7 @@ func TestConnectCAConfig(t *testing.T) { assert.NoError(err) value := obj.(structs.CAConfiguration) - //t.Fatalf("%#v", value) - parsed, err := connect_ca.ParseConsulCAConfig(value.Config) + parsed, err := ca.ParseConsulCAConfig(value.Config) assert.NoError(err) assert.Equal("consul", value.Provider) assert.Equal(expected, parsed) diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go index 49baddfb9..4609934ad 100644 --- a/agent/consul/connect_ca_endpoint_test.go +++ b/agent/consul/connect_ca_endpoint_test.go @@ -7,7 +7,7 @@ import ( "time" "github.com/hashicorp/consul/agent/connect" - connect_ca "github.com/hashicorp/consul/agent/connect/ca" + ca "github.com/hashicorp/consul/agent/connect/ca" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/testrpc" "github.com/hashicorp/net-rpc-msgpackrpc" @@ -83,9 +83,9 @@ func TestConnectCAConfig_GetSet(t *testing.T) { var reply structs.CAConfiguration assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationGet", args, &reply)) - actual, err := connect_ca.ParseConsulCAConfig(reply.Config) + actual, err := ca.ParseConsulCAConfig(reply.Config) assert.NoError(err) - expected, err := connect_ca.ParseConsulCAConfig(s1.config.CAConfig.Config) + expected, err := ca.ParseConsulCAConfig(s1.config.CAConfig.Config) assert.NoError(err) assert.Equal(reply.Provider, s1.config.CAConfig.Provider) assert.Equal(actual, expected) @@ -118,9 +118,9 @@ func TestConnectCAConfig_GetSet(t *testing.T) { var reply structs.CAConfiguration assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationGet", args, &reply)) - actual, err := connect_ca.ParseConsulCAConfig(reply.Config) + actual, err := ca.ParseConsulCAConfig(reply.Config) assert.NoError(err) - expected, err := 
connect_ca.ParseConsulCAConfig(newConfig.Config) + expected, err := ca.ParseConsulCAConfig(newConfig.Config) assert.NoError(err) assert.Equal(reply.Provider, newConfig.Provider) assert.Equal(actual, expected) @@ -150,7 +150,7 @@ func TestConnectCAConfig_TriggerRotation(t *testing.T) { // Update the provider config to use a new private key, which should // cause a rotation. - newKey, err := connect_ca.GeneratePrivateKey() + newKey, err := connect.GeneratePrivateKey() assert.NoError(err) newConfig := &structs.CAConfiguration{ Provider: "consul", @@ -220,9 +220,9 @@ func TestConnectCAConfig_TriggerRotation(t *testing.T) { var reply structs.CAConfiguration assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.ConfigurationGet", args, &reply)) - actual, err := connect_ca.ParseConsulCAConfig(reply.Config) + actual, err := ca.ParseConsulCAConfig(reply.Config) assert.NoError(err) - expected, err := connect_ca.ParseConsulCAConfig(newConfig.Config) + expected, err := ca.ParseConsulCAConfig(newConfig.Config) assert.NoError(err) assert.Equal(reply.Provider, newConfig.Provider) assert.Equal(actual, expected) diff --git a/agent/consul/leader.go b/agent/consul/leader.go index a06871888..579ff4b1d 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -11,7 +11,7 @@ import ( "github.com/armon/go-metrics" "github.com/hashicorp/consul/acl" "github.com/hashicorp/consul/agent/connect" - connect_ca "github.com/hashicorp/consul/agent/connect/ca" + ca "github.com/hashicorp/consul/agent/connect/ca" "github.com/hashicorp/consul/agent/consul/autopilot" "github.com/hashicorp/consul/agent/metadata" "github.com/hashicorp/consul/agent/structs" @@ -496,18 +496,18 @@ func parseCARoot(pemValue, provider string) (*structs.CARoot, error) { } // createProvider returns a connect CA provider from the given config. -func (s *Server) createCAProvider(conf *structs.CAConfiguration) (connect_ca.Provider, error) { +func (s *Server) createCAProvider(conf *structs.CAConfiguration) (ca.Provider, error) { switch conf.Provider { case structs.ConsulCAProvider: - return connect_ca.NewConsulCAProvider(conf.Config, &consulCADelegate{s}) + return ca.NewConsulProvider(conf.Config, &consulCADelegate{s}) default: return nil, fmt.Errorf("unknown CA provider %q", conf.Provider) } } -func (s *Server) getCAProvider() connect.CAProvider { +func (s *Server) getCAProvider() ca.Provider { retries := 0 - var result connect.CAProvider + var result ca.Provider for result == nil { s.caProviderLock.RLock() result = s.caProvider @@ -528,7 +528,7 @@ func (s *Server) getCAProvider() connect.CAProvider { return result } -func (s *Server) setCAProvider(newProvider connect_ca.Provider) { +func (s *Server) setCAProvider(newProvider ca.Provider) { s.caProviderLock.Lock() defer s.caProviderLock.Unlock() s.caProvider = newProvider diff --git a/agent/consul/server.go b/agent/consul/server.go index 871115c35..7b589d753 100644 --- a/agent/consul/server.go +++ b/agent/consul/server.go @@ -18,7 +18,7 @@ import ( "time" "github.com/hashicorp/consul/acl" - connect_ca "github.com/hashicorp/consul/agent/connect/ca" + ca "github.com/hashicorp/consul/agent/connect/ca" "github.com/hashicorp/consul/agent/consul/autopilot" "github.com/hashicorp/consul/agent/consul/fsm" "github.com/hashicorp/consul/agent/consul/state" @@ -99,7 +99,7 @@ type Server struct { // caProvider is the current CA provider in use for Connect. This is // only non-nil when we are the leader. 
- caProvider connect_ca.Provider + caProvider ca.Provider caProviderLock sync.RWMutex // Consul configuration From c808833a78275a3dd92784efd772e9e929df02bb Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Tue, 8 May 2018 14:23:44 +0100 Subject: [PATCH 250/539] Return TrustDomain from CARoots RPC --- agent/connect/ca/provider_consul.go | 2 +- agent/connect/uri_signing.go | 12 ++++++++++++ agent/connect/uri_signing_test.go | 11 +++++++++++ agent/consul/connect_ca_endpoint.go | 23 +++++++++++++++++++++++ agent/consul/connect_ca_endpoint_test.go | 13 ++++++++++--- agent/structs/connect_ca.go | 15 +++++++++++++++ 6 files changed, 72 insertions(+), 4 deletions(-) diff --git a/agent/connect/ca/provider_consul.go b/agent/connect/ca/provider_consul.go index 8fa1fb3d3..d88a58bfc 100644 --- a/agent/connect/ca/provider_consul.go +++ b/agent/connect/ca/provider_consul.go @@ -346,7 +346,7 @@ func (c *ConsulProvider) generateCA(privateKey string, sn uint64) (string, error name := fmt.Sprintf("Consul CA %d", sn) // The URI (SPIFFE compatible) for the cert - id := &connect.SpiffeIDSigning{ClusterID: config.ClusterID, Domain: "consul"} + id := connect.SpiffeIDSigningForCluster(config) keyId, err := connect.KeyId(privKey.Public()) if err != nil { return "", err diff --git a/agent/connect/uri_signing.go b/agent/connect/uri_signing.go index 213f744d1..b43971ed7 100644 --- a/agent/connect/uri_signing.go +++ b/agent/connect/uri_signing.go @@ -27,3 +27,15 @@ func (id *SpiffeIDSigning) Authorize(ixn *structs.Intention) (bool, bool) { // Never authorize as a client. return false, true } + +// SpiffeIDSigningForCluster returns the SPIFFE signing identifier (trust +// domain) representation of the given CA config. +// +// NOTE(banks): we intentionally fix the tld `.consul` for now rather than tie +// this to the `domain` config used for DNS because changing DNS domain can't +// break all certificate validation. That does mean that DNS prefix might not +// match the identity URIs and so the trust domain might not actually resolve +// which we would like but don't actually need. +func SpiffeIDSigningForCluster(config *structs.CAConfiguration) *SpiffeIDSigning { + return &SpiffeIDSigning{ClusterID: config.ClusterID, Domain: "consul"} +} diff --git a/agent/connect/uri_signing_test.go b/agent/connect/uri_signing_test.go index a9be3c5e2..98babbc2d 100644 --- a/agent/connect/uri_signing_test.go +++ b/agent/connect/uri_signing_test.go @@ -3,6 +3,8 @@ package connect import ( "testing" + "github.com/hashicorp/consul/agent/structs" + "github.com/stretchr/testify/assert" ) @@ -13,3 +15,12 @@ func TestSpiffeIDSigningAuthorize(t *testing.T) { assert.False(t, auth) assert.True(t, ok) } + +func TestSpiffeIDSigningForCluster(t *testing.T) { + // For now it should just append .consul to the ID. + config := &structs.CAConfiguration{ + ClusterID: testClusterID, + } + id := SpiffeIDSigningForCluster(config) + assert.Equal(t, id.URI().String(), "spiffe://"+testClusterID+".consul") +} diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index 8acaa4ed0..f70c0d3a3 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -211,6 +211,29 @@ func (s *ConnectCA) Roots( return err } + // Load the ClusterID to generate TrustDomain. We do this outside the loop + // since by definition this value should be immutable once set for lifetime of + // the cluster so we don't need to look it up more than once. 
We also don't + // have to worry about non-atomicity between the config fetch transaction and + // the CARoots transaction below since this field must remain immutable. Do + // not re-use this state/config for other logic that might care about changes + // of config during the blocking query below. + { + state := s.srv.fsm.State() + _, config, err := state.CAConfig() + if err != nil { + return err + } + // Build TrustDomain based on the ClusterID stored. + spiffeID := connect.SpiffeIDSigningForCluster(config) + uri := spiffeID.URI() + if uri == nil { + // Impossible(tm) but let's not panic + return errors.New("no trust domain found") + } + reply.TrustDomain = uri.Host + } + return s.srv.blockingQuery( &args.QueryOptions, &reply.QueryMeta, func(ws memdb.WatchSet, state *state.Store) error { diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go index 4609934ad..655b1d7f4 100644 --- a/agent/consul/connect_ca_endpoint_test.go +++ b/agent/consul/connect_ca_endpoint_test.go @@ -2,10 +2,13 @@ package consul import ( "crypto/x509" + "fmt" "os" "testing" "time" + "github.com/stretchr/testify/require" + "github.com/hashicorp/consul/agent/connect" ca "github.com/hashicorp/consul/agent/connect/ca" "github.com/hashicorp/consul/agent/structs" @@ -27,6 +30,7 @@ func TestConnectCARoots(t *testing.T) { t.Parallel() assert := assert.New(t) + require := require.New(t) dir1, s1 := testServer(t) defer os.RemoveAll(dir1) defer s1.Shutdown() @@ -41,17 +45,19 @@ func TestConnectCARoots(t *testing.T) { ca2 := connect.TestCA(t, nil) ca2.Active = false idx, _, err := state.CARoots(nil) - assert.NoError(err) + require.NoError(err) ok, err := state.CARootSetCAS(idx, idx, []*structs.CARoot{ca1, ca2}) assert.True(ok) - assert.NoError(err) + require.NoError(err) + _, caCfg, err := state.CAConfig() + require.NoError(err) // Request args := &structs.DCSpecificRequest{ Datacenter: "dc1", } var reply structs.IndexedCARoots - assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", args, &reply)) + require.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.Roots", args, &reply)) // Verify assert.Equal(ca1.ID, reply.ActiveRootID) @@ -61,6 +67,7 @@ func TestConnectCARoots(t *testing.T) { assert.Equal("", r.SigningCert) assert.Equal("", r.SigningKey) } + assert.Equal(fmt.Sprintf("%s.consul", caCfg.ClusterID), reply.TrustDomain) } func TestConnectCAConfig_GetSet(t *testing.T) { diff --git a/agent/structs/connect_ca.go b/agent/structs/connect_ca.go index 3a4ca8131..fa273a3e4 100644 --- a/agent/structs/connect_ca.go +++ b/agent/structs/connect_ca.go @@ -11,6 +11,21 @@ type IndexedCARoots struct { // the process of being rotated out. ActiveRootID string + // TrustDomain is the identification root for this Consul cluster. All + // certificates signed by the cluster's CA must have their identifying URI in + // this domain. + // + // This does not include the protocol (currently spiffe://) since we may + // implement other protocols in future with equivalent semantics. It should be + // compared against the "authority" section of a URI (i.e. host:port). + // + // NOTE(banks): Later we may support explicitly trusting external domains + // which may be encoded into the CARoot struct or a separate list but this + // domain identifier should be immutable and cluster-wide so deserves to be at + // the root of this response rather than duplicated through all CARoots that + // are not externally trusted entities. + TrustDomain string + // Roots is a list of root CA certs to trust. 
Roots []*CARoot From 5a1408f18608252f25ef2b17bd6d96a5253d3e1f Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Wed, 9 May 2018 14:25:48 +0100 Subject: [PATCH 251/539] Add CSR signing verification of service ACL, trust domain and datacenter. --- agent/connect/uri_signing.go | 35 ++++++- agent/connect/uri_signing_test.go | 97 ++++++++++++++++++++ agent/consul/connect_ca_endpoint.go | 52 ++++++++--- agent/consul/connect_ca_endpoint_test.go | 112 ++++++++++++++++++++++- 4 files changed, 277 insertions(+), 19 deletions(-) diff --git a/agent/connect/uri_signing.go b/agent/connect/uri_signing.go index b43971ed7..843f95596 100644 --- a/agent/connect/uri_signing.go +++ b/agent/connect/uri_signing.go @@ -3,6 +3,7 @@ package connect import ( "fmt" "net/url" + "strings" "github.com/hashicorp/consul/agent/structs" ) @@ -18,16 +19,48 @@ type SpiffeIDSigning struct { func (id *SpiffeIDSigning) URI() *url.URL { var result url.URL result.Scheme = "spiffe" - result.Host = fmt.Sprintf("%s.%s", id.ClusterID, id.Domain) + result.Host = id.Host() return &result } +// Host is the canonical representation as a DNS-compatible hostname. +func (id *SpiffeIDSigning) Host() string { + return strings.ToLower(fmt.Sprintf("%s.%s", id.ClusterID, id.Domain)) +} + // CertURI impl. func (id *SpiffeIDSigning) Authorize(ixn *structs.Intention) (bool, bool) { // Never authorize as a client. return false, true } +// CanSign takes any CertURI and returns whether or not this signing entity is +// allowed to sign CSRs for that entity (i.e. represents the trust domain for +// that entity). +// +// I choose to make this a fixed centralised method here for now rather than a +// method on CertURI interface since we don't intend this to be extensible +// outside and it's easier to reason about the security properties when they are +// all in one place with "whitelist" semantics. +func (id *SpiffeIDSigning) CanSign(cu CertURI) bool { + switch other := cu.(type) { + case *SpiffeIDSigning: + // We can only sign other CA certificates for the same trust domain. Note + // that we could open this up later for example to support external + // federation of roots and cross-signing external roots that have different + // URI structure but it's simpler to start off restrictive. + return id == other + case *SpiffeIDService: + // The host component of the service must be an exact match for now under + // ascii case folding (since hostnames are case-insensitive). Later we might + // worry about Unicode domains if we start allowing customisation beyond the + // built-in cluster ids. + return strings.ToLower(other.Host) == id.Host() + default: + return false + } +} + // SpiffeIDSigningForCluster returns the SPIFFE signing identifier (trust // domain) representation of the given CA config. 
// diff --git a/agent/connect/uri_signing_test.go b/agent/connect/uri_signing_test.go index 98babbc2d..2d8975858 100644 --- a/agent/connect/uri_signing_test.go +++ b/agent/connect/uri_signing_test.go @@ -1,6 +1,8 @@ package connect import ( + "net/url" + "strings" "testing" "github.com/hashicorp/consul/agent/structs" @@ -24,3 +26,98 @@ func TestSpiffeIDSigningForCluster(t *testing.T) { id := SpiffeIDSigningForCluster(config) assert.Equal(t, id.URI().String(), "spiffe://"+testClusterID+".consul") } + +// fakeCertURI is a CertURI implementation that our implementation doesn't know +// about +type fakeCertURI string + +func (f fakeCertURI) Authorize(*structs.Intention) (auth bool, match bool) { + return false, false +} + +func (f fakeCertURI) URI() *url.URL { + u, _ := url.Parse(string(f)) + return u +} +func TestSpiffeIDSigning_CanSign(t *testing.T) { + + testSigning := &SpiffeIDSigning{ + ClusterID: testClusterID, + Domain: "consul", + } + + tests := []struct { + name string + id *SpiffeIDSigning + input CertURI + want bool + }{ + { + name: "same signing ID", + id: testSigning, + input: testSigning, + want: true, + }, + { + name: "other signing ID", + id: testSigning, + input: &SpiffeIDSigning{ + ClusterID: "fakedomain", + Domain: "consul", + }, + want: false, + }, + { + name: "different TLD signing ID", + id: testSigning, + input: &SpiffeIDSigning{ + ClusterID: testClusterID, + Domain: "evil", + }, + want: false, + }, + { + name: "nil", + id: testSigning, + input: nil, + want: false, + }, + { + name: "unrecognised CertURI implementation", + id: testSigning, + input: fakeCertURI("spiffe://foo.bar/baz"), + want: false, + }, + { + name: "service - good", + id: testSigning, + input: &SpiffeIDService{testClusterID + ".consul", "default", "dc1", "web"}, + want: true, + }, + { + name: "service - good midex case", + id: testSigning, + input: &SpiffeIDService{strings.ToUpper(testClusterID) + ".CONsuL", "defAUlt", "dc1", "WEB"}, + want: true, + }, + { + name: "service - different cluster", + id: testSigning, + input: &SpiffeIDService{"55555555-4444-3333-2222-111111111111.consul", "default", "dc1", "web"}, + want: false, + }, + { + name: "service - different TLD", + id: testSigning, + input: &SpiffeIDService{testClusterID + ".fake", "default", "dc1", "web"}, + want: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got := tt.id.CanSign(tt.input) + assert.Equal(t, tt.want, got) + }) + } +} diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index f70c0d3a3..a72a4998b 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -225,13 +225,8 @@ func (s *ConnectCA) Roots( return err } // Build TrustDomain based on the ClusterID stored. 
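// [Editor's note] The following is an illustrative, self-contained sketch and is
// not part of the patch series; it only restates, with example values, the
// trust-domain check that the CanSign implementation and tests above perform for
// service IDs: the host of the service's SPIFFE URI must equal
// "<cluster-id>.<domain>" under ASCII case folding. The cluster ID below is a
// made-up example.
package main

import (
	"fmt"
	"net/url"
	"strings"
)

func main() {
	clusterID := "11111111-2222-3333-4444-555555555555" // example cluster ID only
	trustDomain := strings.ToLower(clusterID + ".consul")

	// A service identity URI in the shape produced by SpiffeIDService.URI(),
	// deliberately mixed-case to show the case-insensitive comparison.
	svcURI, err := url.Parse("spiffe://" + clusterID + ".CONSUL/ns/default/dc/dc1/svc/web")
	if err != nil {
		panic(err)
	}

	// Case folding makes the host comparison tolerant of mixed-case hostnames.
	canSign := strings.ToLower(svcURI.Host) == trustDomain
	fmt.Println("can sign:", canSign) // can sign: true
}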
- spiffeID := connect.SpiffeIDSigningForCluster(config) - uri := spiffeID.URI() - if uri == nil { - // Impossible(tm) but let's not panic - return errors.New("no trust domain found") - } - reply.TrustDomain = uri.Host + signingID := connect.SpiffeIDSigningForCluster(config) + reply.TrustDomain = signingID.Host() } return s.srv.blockingQuery( @@ -297,11 +292,11 @@ func (s *ConnectCA) Sign( } // Parse the SPIFFE ID - spiffeId, err := connect.ParseCertURI(csr.URIs[0]) + spiffeID, err := connect.ParseCertURI(csr.URIs[0]) if err != nil { return err } - serviceId, ok := spiffeId.(*connect.SpiffeIDService) + serviceID, ok := spiffeID.(*connect.SpiffeIDService) if !ok { return fmt.Errorf("SPIFFE ID in CSR must be a service ID") } @@ -311,7 +306,35 @@ func (s *ConnectCA) Sign( return fmt.Errorf("internal error: CA provider is nil") } - // todo(kyhavlov): more validation on the CSR before signing + // Verify that the CSR entity is in the cluster's trust domain + state := s.srv.fsm.State() + _, config, err := state.CAConfig() + if err != nil { + return err + } + signingID := connect.SpiffeIDSigningForCluster(config) + if !signingID.CanSign(serviceID) { + return fmt.Errorf("SPIFFE ID in CSR from a different trust domain: %s, "+ + "we are %s", serviceID.Host, signingID.Host()) + } + + // Verify that the ACL token provided has permission to act as this service + rule, err := s.srv.resolveToken(args.Token) + if err != nil { + return err + } + if rule != nil && !rule.ServiceWrite(serviceID.Service, nil) { + return acl.ErrPermissionDenied + } + + // Verify that the DC in the service URI matches us. We might relax this + // requirement later but being restrictive for now is safer. + if serviceID.Datacenter != s.srv.config.Datacenter { + return fmt.Errorf("SPIFFE ID in CSR from a different datacenter: %s, "+ + "we are %s", serviceID.Datacenter, s.srv.config.Datacenter) + } + + // All seems to be in order, actually sign it. pem, err := provider.Sign(csr) if err != nil { return err @@ -322,9 +345,10 @@ func (s *ConnectCA) Sign( // the built-in provider being supported and the implementation detail that we // have to write a SerialIndex update to the provider config table for every // cert issued so in all cases this index will be higher than any previous - // sign response. This has to happen after the provider.Sign call to observe - // the index update. - modIdx, _, err := s.srv.fsm.State().CAConfig() + // sign response. This has to be reloaded after the provider.Sign call to + // observe the index update. 
+ state = s.srv.fsm.State() + modIdx, _, err := state.CAConfig() if err != nil { return err } @@ -338,7 +362,7 @@ func (s *ConnectCA) Sign( *reply = structs.IssuedCert{ SerialNumber: connect.HexString(cert.SerialNumber.Bytes()), CertPEM: pem, - Service: serviceId.Service, + Service: serviceID.Service, ServiceURI: cert.URIs[0].String(), ValidAfter: cert.NotBefore, ValidBefore: cert.NotAfter, diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go index 655b1d7f4..f20935877 100644 --- a/agent/consul/connect_ca_endpoint_test.go +++ b/agent/consul/connect_ca_endpoint_test.go @@ -241,6 +241,7 @@ func TestConnectCASign(t *testing.T) { t.Parallel() assert := assert.New(t) + require := require.New(t) dir1, s1 := testServer(t) defer os.RemoveAll(dir1) defer s1.Shutdown() @@ -251,30 +252,133 @@ func TestConnectCASign(t *testing.T) { // Generate a CSR and request signing spiffeId := connect.TestSpiffeIDService(t, "web") + spiffeId.Host = testGetClusterTrustDomain(t, s1) csr, _ := connect.TestCSR(t, spiffeId) args := &structs.CASignRequest{ Datacenter: "dc1", CSR: csr, } var reply structs.IssuedCert - assert.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.Sign", args, &reply)) + require.NoError(msgpackrpc.CallWithCodec(codec, "ConnectCA.Sign", args, &reply)) // Get the current CA state := s1.fsm.State() _, ca, err := state.CARootActive(nil) - assert.NoError(err) + require.NoError(err) // Verify that the cert is signed by the CA roots := x509.NewCertPool() assert.True(roots.AppendCertsFromPEM([]byte(ca.RootCert))) leaf, err := connect.ParseCert(reply.CertPEM) - assert.NoError(err) + require.NoError(err) _, err = leaf.Verify(x509.VerifyOptions{ Roots: roots, }) - assert.NoError(err) + require.NoError(err) // Verify other fields assert.Equal("web", reply.Service) assert.Equal(spiffeId.URI().String(), reply.ServiceURI) } + +func testGetClusterTrustDomain(t *testing.T, s *Server) string { + t.Helper() + state := s.fsm.State() + _, config, err := state.CAConfig() + require.NoError(t, err) + // Build TrustDomain based on the ClusterID stored. 
+ signingID := connect.SpiffeIDSigningForCluster(config) + return signingID.Host() +} + +func TestConnectCASignValidation(t *testing.T) { + t.Parallel() + + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create an ACL token with service:write for web* + var webToken string + { + arg := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: ` + service "web" { + policy = "write" + }`, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + require.NoError(t, msgpackrpc.CallWithCodec(codec, "ACL.Apply", &arg, &webToken)) + } + + trustDomain := testGetClusterTrustDomain(t, s1) + + tests := []struct { + name string + id connect.CertURI + wantErr string + }{ + { + name: "different cluster", + id: &connect.SpiffeIDService{ + "55555555-4444-3333-2222-111111111111.consul", + "default", "dc1", "web"}, + wantErr: "different trust domain", + }, + { + name: "same cluster should validate", + id: &connect.SpiffeIDService{ + trustDomain, + "default", "dc1", "web"}, + wantErr: "", + }, + { + name: "same cluster, CSR for a different DC should NOT validate", + id: &connect.SpiffeIDService{ + trustDomain, + "default", "dc2", "web"}, + wantErr: "different datacenter", + }, + { + name: "same cluster and DC, different service should not have perms", + id: &connect.SpiffeIDService{ + trustDomain, + "default", "dc1", "db"}, + wantErr: "Permission denied", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + csr, _ := connect.TestCSR(t, tt.id) + args := &structs.CASignRequest{ + Datacenter: "dc1", + CSR: csr, + WriteRequest: structs.WriteRequest{Token: webToken}, + } + var reply structs.IssuedCert + err := msgpackrpc.CallWithCodec(codec, "ConnectCA.Sign", args, &reply) + if tt.wantErr == "" { + require.NoError(t, err) + // No other validation that is handled in different tests + } else { + require.Error(t, err) + require.Contains(t, err.Error(), tt.wantErr) + } + }) + } +} From 30d90b3be4dcdb0155b7a9185966ffd339bc138f Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Wed, 9 May 2018 17:15:29 +0100 Subject: [PATCH 252/539] Generate CSR using real trust-domain --- agent/agent_endpoint_test.go | 26 ++++++---- agent/cache-types/connect_ca_leaf.go | 46 +++++++++++++----- agent/cache-types/connect_ca_leaf_test.go | 5 +- agent/connect/ca/provider_consul.go | 4 +- agent/connect/csr.go | 59 +++++++++++++++++++++++ agent/connect/testing_ca.go | 5 +- agent/connect/uri_signing.go | 3 +- agent/consul/connect_ca_endpoint.go | 14 ++++-- agent/consul/connect_ca_endpoint_test.go | 2 +- agent/testagent.go | 3 +- 10 files changed, 136 insertions(+), 31 deletions(-) create mode 100644 agent/connect/csr.go diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index d4b55a50f..4749148e1 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2106,13 +2106,14 @@ func TestAgentConnectCARoots_empty(t *testing.T) { t.Parallel() assert := assert.New(t) + require := require.New(t) a := NewTestAgent(t.Name(), "connect { enabled = false }") defer a.Shutdown() req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/roots", nil) resp := httptest.NewRecorder() obj, err := a.srv.AgentConnectCARoots(resp, req) - assert.Nil(err) + 
require.NoError(err) value := obj.(structs.IndexedCARoots) assert.Equal(value.ActiveRootID, "") @@ -2122,6 +2123,7 @@ func TestAgentConnectCARoots_empty(t *testing.T) { func TestAgentConnectCARoots_list(t *testing.T) { t.Parallel() + assert := assert.New(t) require := require.New(t) a := NewTestAgent(t.Name(), "") defer a.Shutdown() @@ -2137,30 +2139,34 @@ func TestAgentConnectCARoots_list(t *testing.T) { req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/roots", nil) resp := httptest.NewRecorder() obj, err := a.srv.AgentConnectCARoots(resp, req) - require.Nil(err) + require.NoError(err) value := obj.(structs.IndexedCARoots) - require.Equal(value.ActiveRootID, ca2.ID) - require.Len(value.Roots, 2) + assert.Equal(value.ActiveRootID, ca2.ID) + // Would like to assert that it's the same as the TestAgent domain but the + // only way to access that state via this package is by RPC to the server + // implementation running in TestAgent which is more or less a tautology. + assert.NotEmpty(value.TrustDomain) + assert.Len(value.Roots, 2) // We should never have the secret information for _, r := range value.Roots { - require.Equal("", r.SigningCert) - require.Equal("", r.SigningKey) + assert.Equal("", r.SigningCert) + assert.Equal("", r.SigningKey) } // That should've been a cache miss, so no hit change - require.Equal(cacheHits, a.cache.Hits()) + assert.Equal(cacheHits, a.cache.Hits()) // Test caching { // List it again obj2, err := a.srv.AgentConnectCARoots(httptest.NewRecorder(), req) - require.Nil(err) - require.Equal(obj, obj2) + require.NoError(err) + assert.Equal(obj, obj2) // Should cache hit this time and not make request - require.Equal(cacheHits+1, a.cache.Hits()) + assert.Equal(cacheHits+1, a.cache.Hits()) cacheHits++ } diff --git a/agent/cache-types/connect_ca_leaf.go b/agent/cache-types/connect_ca_leaf.go index 1058ec26a..4070298df 100644 --- a/agent/cache-types/connect_ca_leaf.go +++ b/agent/cache-types/connect_ca_leaf.go @@ -1,6 +1,7 @@ package cachetype import ( + "errors" "fmt" "sync" "sync/atomic" @@ -9,9 +10,7 @@ import ( "github.com/hashicorp/consul/agent/cache" "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/structs" - // NOTE(mitcehllh): This is temporary while certs are stubbed out. - "github.com/mitchellh/go-testing-interface" ) // Recommended name for registration. @@ -97,16 +96,41 @@ func (c *ConnectCALeaf) Fetch(opts cache.FetchOptions, req cache.Request) (cache // by the above channel). } - // Create a CSR. - // TODO(mitchellh): This is obviously not production ready! The host - // needs a correct host ID, and we probably don't want to use TestCSR - // and want a non-test-specific way to create a CSR. - csr, pk := connect.TestCSR(&testing.RuntimeT{}, &connect.SpiffeIDService{ - Host: "11111111-2222-3333-4444-555555555555.consul", - Namespace: "default", + // Need to lookup RootCAs response to discover trust domain. First just lookup + // with no blocking info - this should be a cache hit most of the time. 
+ rawRoots, err := c.Cache.Get(ConnectCARootName, &structs.DCSpecificRequest{ Datacenter: reqReal.Datacenter, - Service: reqReal.Service, }) + if err != nil { + return result, err + } + roots, ok := rawRoots.(*structs.IndexedCARoots) + if !ok { + return result, errors.New("invalid RootCA response type") + } + if roots.TrustDomain == "" { + return result, errors.New("cluster has no CA bootstrapped") + } + + // Build the service ID + serviceID := &connect.SpiffeIDService{ + Host: roots.TrustDomain, + Datacenter: reqReal.Datacenter, + Namespace: "default", + Service: reqReal.Service, + } + + // Create a new private key + pk, pkPEM, err := connect.GeneratePrivateKey() + if err != nil { + return result, err + } + + // Create a CSR. + csr, err := connect.CreateCSR(serviceID, pk) + if err != nil { + return result, err + } // Request signing var reply structs.IssuedCert @@ -117,7 +141,7 @@ func (c *ConnectCALeaf) Fetch(opts cache.FetchOptions, req cache.Request) (cache if err := c.RPC.RPC("ConnectCA.Sign", &args, &reply); err != nil { return result, err } - reply.PrivateKeyPEM = pk + reply.PrivateKeyPEM = pkPEM // Lock the issued certs map so we can insert it. We only insert if // we didn't happen to get a newer one. This should never happen since diff --git a/agent/cache-types/connect_ca_leaf_test.go b/agent/cache-types/connect_ca_leaf_test.go index 0612aed21..d55caf408 100644 --- a/agent/cache-types/connect_ca_leaf_test.go +++ b/agent/cache-types/connect_ca_leaf_test.go @@ -25,10 +25,11 @@ func TestConnectCALeaf_changingRoots(t *testing.T) { defer close(rootsCh) rootsCh <- structs.IndexedCARoots{ ActiveRootID: "1", + TrustDomain: "fake-trust-domain.consul", QueryMeta: structs.QueryMeta{Index: 1}, } - // Instrument ConnectCA.Sign to + // Instrument ConnectCA.Sign to return signed cert var resp *structs.IssuedCert var idx uint64 rpc.On("RPC", "ConnectCA.Sign", mock.Anything, mock.Anything).Return(nil). 
@@ -67,6 +68,7 @@ func TestConnectCALeaf_changingRoots(t *testing.T) { // Let's send in new roots, which should trigger the sign req rootsCh <- structs.IndexedCARoots{ ActiveRootID: "2", + TrustDomain: "fake-trust-domain.consul", QueryMeta: structs.QueryMeta{Index: 2}, } select { @@ -101,6 +103,7 @@ func TestConnectCALeaf_expiringLeaf(t *testing.T) { defer close(rootsCh) rootsCh <- structs.IndexedCARoots{ ActiveRootID: "1", + TrustDomain: "fake-trust-domain.consul", QueryMeta: structs.QueryMeta{Index: 1}, } diff --git a/agent/connect/ca/provider_consul.go b/agent/connect/ca/provider_consul.go index d88a58bfc..20641a16c 100644 --- a/agent/connect/ca/provider_consul.go +++ b/agent/connect/ca/provider_consul.go @@ -79,7 +79,7 @@ func NewConsulProvider(rawConfig map[string]interface{}, delegate ConsulProvider // Generate a private key if needed if conf.PrivateKey == "" { - pk, err := connect.GeneratePrivateKey() + _, pk, err := connect.GeneratePrivateKey() if err != nil { return nil, err } @@ -247,7 +247,7 @@ func (c *ConsulProvider) Sign(csr *x509.CertificateRequest) (string, error) { } err = pem.Encode(&buf, &pem.Block{Type: "CERTIFICATE", Bytes: bs}) if err != nil { - return "", fmt.Errorf("error encoding private key: %s", err) + return "", fmt.Errorf("error encoding certificate: %s", err) } err = c.incrementProviderIndex(providerState) diff --git a/agent/connect/csr.go b/agent/connect/csr.go new file mode 100644 index 000000000..4b975d06c --- /dev/null +++ b/agent/connect/csr.go @@ -0,0 +1,59 @@ +package connect + +import ( + "bytes" + "crypto" + "crypto/ecdsa" + "crypto/elliptic" + "crypto/rand" + "crypto/x509" + "encoding/pem" + "fmt" + "net/url" +) + +// CreateCSR returns a CSR to sign the given service along with the PEM-encoded +// private key for this certificate. 
+func CreateCSR(uri CertURI, privateKey crypto.Signer) (string, error) { + template := &x509.CertificateRequest{ + URIs: []*url.URL{uri.URI()}, + SignatureAlgorithm: x509.ECDSAWithSHA256, + } + + // Create the CSR itself + var csrBuf bytes.Buffer + bs, err := x509.CreateCertificateRequest(rand.Reader, template, privateKey) + if err != nil { + return "", err + } + + err = pem.Encode(&csrBuf, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: bs}) + if err != nil { + return "", err + } + + return csrBuf.String(), nil +} + +// GeneratePrivateKey generates a new Private key +func GeneratePrivateKey() (crypto.Signer, string, error) { + var pk *ecdsa.PrivateKey + + pk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) + if err != nil { + return nil, "", fmt.Errorf("error generating private key: %s", err) + } + + bs, err := x509.MarshalECPrivateKey(pk) + if err != nil { + return nil, "", fmt.Errorf("error generating private key: %s", err) + } + + var buf bytes.Buffer + err = pem.Encode(&buf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: bs}) + if err != nil { + return nil, "", fmt.Errorf("error encoding private key: %s", err) + } + + return pk, buf.String(), nil +} diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index 552c57535..ba2d29203 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -161,7 +161,10 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) (string, string } // Generate fresh private key - pkSigner, pkPEM := testPrivateKey(t) + pkSigner, pkPEM, err := GeneratePrivateKey() + if err != nil { + t.Fatalf("failed to generate private key: %s", err) + } // Cert template for generation template := x509.Certificate{ diff --git a/agent/connect/uri_signing.go b/agent/connect/uri_signing.go index 843f95596..d934360eb 100644 --- a/agent/connect/uri_signing.go +++ b/agent/connect/uri_signing.go @@ -62,7 +62,8 @@ func (id *SpiffeIDSigning) CanSign(cu CertURI) bool { } // SpiffeIDSigningForCluster returns the SPIFFE signing identifier (trust -// domain) representation of the given CA config. +// domain) representation of the given CA config. If config is nil this function +// will panic. // // NOTE(banks): we intentionally fix the tld `.consul` for now rather than tie // this to the `domain` config used for DNS because changing DNS domain can't diff --git a/agent/consul/connect_ca_endpoint.go b/agent/consul/connect_ca_endpoint.go index a72a4998b..1e24bac7b 100644 --- a/agent/consul/connect_ca_endpoint.go +++ b/agent/consul/connect_ca_endpoint.go @@ -224,9 +224,17 @@ func (s *ConnectCA) Roots( if err != nil { return err } - // Build TrustDomain based on the ClusterID stored. - signingID := connect.SpiffeIDSigningForCluster(config) - reply.TrustDomain = signingID.Host() + // Check CA is actually bootstrapped... + if config != nil { + // Build TrustDomain based on the ClusterID stored. + signingID := connect.SpiffeIDSigningForCluster(config) + if signingID == nil { + // If CA is bootstrapped at all then this should never happen but be + // defensive. 
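// [Editor's note] Illustrative sketch, not part of the patch series. Assuming the
// CreateCSR and GeneratePrivateKey helpers added in agent/connect/csr.go above,
// this shows one way they might be used together, with the resulting CSR parsed
// back to confirm the SPIFFE URI round-trips. The trust domain value is an
// example only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"

	"github.com/hashicorp/consul/agent/connect"
)

func main() {
	// Generate an ECDSA P-256 key; the PEM form is what the leaf cache type
	// stores as IssuedCert.PrivateKeyPEM.
	signer, _, err := connect.GeneratePrivateKey()
	if err != nil {
		panic(err)
	}

	// Service identity inside an example trust domain.
	id := &connect.SpiffeIDService{
		Host:       "11111111-2222-3333-4444-555555555555.consul", // example trust domain
		Namespace:  "default",
		Datacenter: "dc1",
		Service:    "web",
	}

	csrPEM, err := connect.CreateCSR(id, signer)
	if err != nil {
		panic(err)
	}

	// Parse the PEM back and check that the URI SAN carries the SPIFFE ID.
	block, _ := pem.Decode([]byte(csrPEM))
	if block == nil {
		panic("failed to decode CSR PEM")
	}
	csr, err := x509.ParseCertificateRequest(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println(csr.URIs[0].String()) // spiffe://.../ns/default/dc/dc1/svc/web
}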
+ return errors.New("no cluster trust domain setup") + } + reply.TrustDomain = signingID.Host() + } } return s.srv.blockingQuery( diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go index f20935877..ac64ceb30 100644 --- a/agent/consul/connect_ca_endpoint_test.go +++ b/agent/consul/connect_ca_endpoint_test.go @@ -157,7 +157,7 @@ func TestConnectCAConfig_TriggerRotation(t *testing.T) { // Update the provider config to use a new private key, which should // cause a rotation. - newKey, err := connect.GeneratePrivateKey() + _, newKey, err := connect.GeneratePrivateKey() assert.NoError(err) newConfig := &structs.CAConfiguration{ Provider: "consul", diff --git a/agent/testagent.go b/agent/testagent.go index c2e4ddf01..724b0c80e 100644 --- a/agent/testagent.go +++ b/agent/testagent.go @@ -16,6 +16,8 @@ import ( "time" metrics "github.com/armon/go-metrics" + uuid "github.com/hashicorp/go-uuid" + "github.com/hashicorp/consul/agent/config" "github.com/hashicorp/consul/agent/consul" "github.com/hashicorp/consul/agent/structs" @@ -23,7 +25,6 @@ import ( "github.com/hashicorp/consul/lib/freeport" "github.com/hashicorp/consul/logger" "github.com/hashicorp/consul/testutil/retry" - uuid "github.com/hashicorp/go-uuid" ) func init() { From 5abf47472d0422c2eed57185e76730299e177f87 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Wed, 9 May 2018 20:30:43 +0100 Subject: [PATCH 253/539] Verify trust domain on /authorize calls --- agent/agent_endpoint.go | 28 ++++++++- agent/agent_endpoint_test.go | 106 +++++++++++++++++++++++++++----- agent/cache/cache.go | 2 +- agent/connect/testing_spiffe.go | 8 ++- 4 files changed, 123 insertions(+), 21 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 32b326867..c9afa55db 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -1159,8 +1159,30 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R return nil, acl.ErrPermissionDenied } - // TODO(mitchellh): we need to verify more things here, such as the - // trust domain, blacklist lookup of the serial, etc. + // Validate the trust domain matches ours. Later we will support explicit + // external federation but not built yet. + rootArgs := &structs.DCSpecificRequest{Datacenter: s.agent.config.Datacenter} + raw, err := s.agent.cache.Get(cachetype.ConnectCARootName, rootArgs) + if err != nil { + return nil, err + } + + roots, ok := raw.(*structs.IndexedCARoots) + if !ok { + return nil, fmt.Errorf("internal error: roots response type not correct") + } + if roots.TrustDomain == "" { + return nil, fmt.Errorf("connect CA not bootstrapped yet") + } + if roots.TrustDomain != strings.ToLower(uriService.Host) { + return &connectAuthorizeResp{ + Authorized: false, + Reason: fmt.Sprintf("Identity from an external trust domain: %s", + uriService.Host), + }, nil + } + + // TODO(banks): Implement revocation list checking here. // Get the intentions for this target service. 
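// [Editor's note] Illustrative sketch, not part of the patch series. A rough
// example of how a proxy might call the authorize endpoint whose trust-domain
// check is added above. The endpoint path matches the one used in the tests in
// this patch; the JSON field names are assumed to follow the exported Go field
// names of the request/response structs, and the agent address and SPIFFE URI
// are placeholder values. A local agent with Connect enabled is assumed.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	reqBody, _ := json.Marshal(map[string]string{
		"Target":        "db",
		"ClientCertURI": "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc1/svc/web",
	})

	// POST to the local agent's authorize endpoint.
	resp, err := http.Post("http://127.0.0.1:8500/v1/agent/connect/authorize",
		"application/json", bytes.NewReader(reqBody))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Authorized bool
		Reason     string
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Authorized, out.Reason)
}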
args := &structs.IntentionQueryRequest{ @@ -1177,7 +1199,7 @@ func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.R } args.Token = token - raw, err := s.agent.cache.Get(cachetype.IntentionMatchName, args) + raw, err = s.agent.cache.Get(cachetype.IntentionMatchName, args) if err != nil { return nil, err } diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 4749148e1..c1fd16eb9 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -3286,6 +3286,17 @@ func TestAgentConnectAuthorize_idNotService(t *testing.T) { assert.Contains(obj.Reason, "must be a valid") } +func testFetchTrustDomain(t *testing.T, a *TestAgent) string { + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/roots", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentConnectCARoots(resp, req) + require.NoError(t, err) + + value := obj.(structs.IndexedCARoots) + require.NotEmpty(t, value.TrustDomain) + return value.TrustDomain +} + // Test when there is an intention allowing the connection func TestAgentConnectAuthorize_allow(t *testing.T) { t.Parallel() @@ -3296,6 +3307,8 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { target := "db" + trustDomain := testFetchTrustDomain(t, a) + // Create some intentions var ixnId string { @@ -3317,8 +3330,9 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { cacheHits := a.cache.Hits() args := &structs.ConnectAuthorizeRequest{ - Target: target, - ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), + Target: target, + ClientCertURI: connect.TestSpiffeIDServiceWithHost(t, "web", trustDomain). + URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() @@ -3330,8 +3344,13 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { require.True(obj.Authorized) require.Contains(obj.Reason, "Matched") - // That should've been a cache miss, so not hit change - require.Equal(cacheHits, a.cache.Hits()) + // That should've been a cache miss, so no hit change, however since + // testFetchTrustDomain already called Roots and caused it to be in cache, the + // authorize call above will also call it and see a cache hit for the Roots + // RPC. In other words, there are 2 cached calls in /authorize and we always + // expect one of them to be a hit. So asserting only 1 happened is as close as + // we can get to verifying that the intention match RPC was a hit. + require.Equal(cacheHits+1, a.cache.Hits()) // Make the request again { @@ -3346,9 +3365,10 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { require.Contains(obj.Reason, "Matched") } - // That should've been a cache hit - require.Equal(cacheHits+1, a.cache.Hits()) - cacheHits++ + // That should've been a cache hit. We add the one hit from Roots from first + // call as well as the 2 from this call (Roots + Intentions). + require.Equal(cacheHits+1+2, a.cache.Hits()) + cacheHits = a.cache.Hits() // Change the intention { @@ -3384,9 +3404,9 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { } // That should've been a cache hit, too, since it updated in the - // background. - require.Equal(cacheHits+1, a.cache.Hits()) - cacheHits++ + // background. 
(again 2 hits for Roots + Intentions) + require.Equal(cacheHits+2, a.cache.Hits()) + cacheHits += 2 } // Test when there is an intention denying the connection @@ -3399,6 +3419,8 @@ func TestAgentConnectAuthorize_deny(t *testing.T) { target := "db" + trustDomain := testFetchTrustDomain(t, a) + // Create some intentions { req := structs.IntentionRequest{ @@ -3417,8 +3439,9 @@ func TestAgentConnectAuthorize_deny(t *testing.T) { } args := &structs.ConnectAuthorizeRequest{ - Target: target, - ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), + Target: target, + ClientCertURI: connect.TestSpiffeIDServiceWithHost(t, "web", trustDomain). + URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() @@ -3431,6 +3454,53 @@ func TestAgentConnectAuthorize_deny(t *testing.T) { assert.Contains(obj.Reason, "Matched") } +// Test when there is an intention allowing service but for a different trust +// domain. +func TestAgentConnectAuthorize_denyTrustDomain(t *testing.T) { + t.Parallel() + + assert := assert.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + target := "db" + + // Create some intentions + { + req := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + req.Intention.SourceNS = structs.IntentionDefaultNamespace + req.Intention.SourceName = "web" + req.Intention.DestinationNS = structs.IntentionDefaultNamespace + req.Intention.DestinationName = target + req.Intention.Action = structs.IntentionActionAllow + + var reply string + assert.Nil(a.RPC("Intention.Apply", &req, &reply)) + } + + { + args := &structs.ConnectAuthorizeRequest{ + Target: target, + // Rely on the test trust domain this will choose to not match the random + // one picked on agent startup. + ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), + } + req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) + resp := httptest.NewRecorder() + respRaw, err := a.srv.AgentConnectAuthorize(resp, req) + assert.Nil(err) + assert.Equal(200, resp.Code) + + obj := respRaw.(*connectAuthorizeResp) + assert.False(obj.Authorized) + assert.Contains(obj.Reason, "Identity from an external trust domain") + } +} + func TestAgentConnectAuthorize_denyWildcard(t *testing.T) { t.Parallel() @@ -3440,6 +3510,8 @@ func TestAgentConnectAuthorize_denyWildcard(t *testing.T) { target := "db" + trustDomain := testFetchTrustDomain(t, a) + // Create some intentions { // Deny wildcard to DB @@ -3477,8 +3549,9 @@ func TestAgentConnectAuthorize_denyWildcard(t *testing.T) { // Web should be allowed { args := &structs.ConnectAuthorizeRequest{ - Target: target, - ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), + Target: target, + ClientCertURI: connect.TestSpiffeIDServiceWithHost(t, "web", trustDomain). + URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() @@ -3494,8 +3567,9 @@ func TestAgentConnectAuthorize_denyWildcard(t *testing.T) { // API should be denied { args := &structs.ConnectAuthorizeRequest{ - Target: target, - ClientCertURI: connect.TestSpiffeIDService(t, "api").URI().String(), + Target: target, + ClientCertURI: connect.TestSpiffeIDServiceWithHost(t, "api", trustDomain). 
+ URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() diff --git a/agent/cache/cache.go b/agent/cache/cache.go index 1b4653cb4..e2eee03d8 100644 --- a/agent/cache/cache.go +++ b/agent/cache/cache.go @@ -72,7 +72,7 @@ type Cache struct { // of "//" in order to properly partition // requests to different datacenters and ACL tokens. This format has some // big drawbacks: we can't evict by datacenter, ACL token, etc. For an - // initial implementaiton this works and the tests are agnostic to the + // initial implementation this works and the tests are agnostic to the // internal storage format so changing this should be possible safely. entriesLock sync.RWMutex entries map[string]cacheEntry diff --git a/agent/connect/testing_spiffe.go b/agent/connect/testing_spiffe.go index d6a70cb81..42db76495 100644 --- a/agent/connect/testing_spiffe.go +++ b/agent/connect/testing_spiffe.go @@ -6,8 +6,14 @@ import ( // TestSpiffeIDService returns a SPIFFE ID representing a service. func TestSpiffeIDService(t testing.T, service string) *SpiffeIDService { + return TestSpiffeIDServiceWithHost(t, service, testClusterID+".consul") +} + +// TestSpiffeIDServiceWithHost returns a SPIFFE ID representing a service with +// the specified trust domain. +func TestSpiffeIDServiceWithHost(t testing.T, service, host string) *SpiffeIDService { return &SpiffeIDService{ - Host: testClusterID + ".consul", + Host: host, Namespace: "default", Datacenter: "dc1", Service: service, From bdd30b191b397861273a971a3a1abd64024b38bb Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Wed, 9 May 2018 20:34:14 +0100 Subject: [PATCH 254/539] Comment cleanup --- agent/cache-types/connect_ca_leaf.go | 1 - 1 file changed, 1 deletion(-) diff --git a/agent/cache-types/connect_ca_leaf.go b/agent/cache-types/connect_ca_leaf.go index 4070298df..ef354c9ce 100644 --- a/agent/cache-types/connect_ca_leaf.go +++ b/agent/cache-types/connect_ca_leaf.go @@ -10,7 +10,6 @@ import ( "github.com/hashicorp/consul/agent/cache" "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/structs" - // NOTE(mitcehllh): This is temporary while certs are stubbed out. ) // Recommended name for registration. From 834ed1d25f086a9beb40d392a4b5e18f5ab42b74 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 10 May 2018 17:04:33 +0100 Subject: [PATCH 255/539] Fixed many tests after rebase. Some still failing and seem unrelated to any connect changes. --- agent/agent.go | 53 ++++++++++++++++----- agent/agent_endpoint.go | 5 +- agent/agent_endpoint_test.go | 60 +++++++----------------- agent/agent_test.go | 40 ++++++++++++++++ agent/cache-types/connect_ca_leaf.go | 7 ++- agent/connect/testing_ca.go | 12 ++--- agent/connect/testing_spiffe.go | 2 +- agent/connect/uri_signing_test.go | 14 +++--- agent/consul/connect_ca_endpoint_test.go | 40 +++++++--------- agent/consul/leader.go | 12 +++-- agent/consul/server_test.go | 6 +++ agent/testagent.go | 4 ++ api/agent_test.go | 6 +-- api/connect_ca.go | 1 + api/connect_ca_test.go | 41 +++++++++++++--- connect/tls_test.go | 6 +-- testutil/server.go | 7 +++ 17 files changed, 199 insertions(+), 117 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 77045c69e..eb9e203dc 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -932,6 +932,25 @@ func (a *Agent) consulConfig() (*consul.Config, error) { if a.config.ConnectEnabled { base.ConnectEnabled = true + // Allow config to specify cluster_id provided it's a valid UUID. 
This is + // meant only for tests where a deterministic ID makes fixtures much simpler + // to work with but since it's only read on initial cluster bootstrap it's not + // that much of a liability in production. The worst a user could do is + // configure logically separate clusters with same ID by mistake but we can + // avoid documenting this is even an option. + if clusterID, ok := a.config.ConnectCAConfig["cluster_id"]; ok { + if cIDStr, ok := clusterID.(string); ok { + if _, err := uuid.ParseUUID(cIDStr); err == nil { + // Valid UUID configured, use that + base.CAConfig.ClusterID = cIDStr + } + } + if base.CAConfig.ClusterID == "" { + a.logger.Println("[WARN] connect CA config cluster_id specified but ", + "is not a valid UUID, ignoring") + } + } + if a.config.ConnectCAProvider != "" { base.CAConfig.Provider = a.config.ConnectCAProvider @@ -2116,20 +2135,25 @@ func (a *Agent) RemoveProxy(proxyID string, persist bool) error { } // verifyProxyToken takes a token and attempts to verify it against the -// targetService name. If targetProxy is specified, then the local proxy -// token must exactly match the given proxy ID. -// cert, config, etc.). +// targetService name. If targetProxy is specified, then the local proxy token +// must exactly match the given proxy ID. cert, config, etc.). // -// The given token may be a local-only proxy token or it may be an ACL -// token. We will attempt to verify the local proxy token first. -func (a *Agent) verifyProxyToken(token, targetService, targetProxy string) error { +// The given token may be a local-only proxy token or it may be an ACL token. We +// will attempt to verify the local proxy token first. +// +// The effective ACL token is returned along with any error. In the case the +// token matches a proxy token, then the ACL token used to register that proxy's +// target service is returned for use in any RPC calls the proxy needs to make +// on behalf of that service. If the token was an ACL token already then it is +// always returned. Provided error is nil, a valid ACL token is always returned. +func (a *Agent) verifyProxyToken(token, targetService, targetProxy string) (string, error) { // If we specify a target proxy, we look up that proxy directly. Otherwise, // we resolve with any proxy we can find. var proxy *local.ManagedProxy if targetProxy != "" { proxy = a.State.Proxy(targetProxy) if proxy == nil { - return fmt.Errorf("unknown proxy service ID: %q", targetProxy) + return "", fmt.Errorf("unknown proxy service ID: %q", targetProxy) } // If the token DOESN'T match, then we reset the proxy which will @@ -2148,10 +2172,13 @@ func (a *Agent) verifyProxyToken(token, targetService, targetProxy string) error // service. if proxy != nil { if proxy.Proxy.TargetServiceID != targetService { - return acl.ErrPermissionDenied + return "", acl.ErrPermissionDenied } - return nil + // Resolve the actual ACL token used to register the proxy/service and + // return that for use in RPC calls. + aclToken := a.State.ServiceToken(targetService) + return aclToken, nil } // Retrieve the service specified. This should always exist because @@ -2159,7 +2186,7 @@ func (a *Agent) verifyProxyToken(token, targetService, targetProxy string) error // only be called for local services. service := a.State.Service(targetService) if service == nil { - return fmt.Errorf("unknown service ID: %s", targetService) + return "", fmt.Errorf("unknown service ID: %s", targetService) } // Doesn't match, we have to do a full token resolution. 
The required @@ -2168,13 +2195,13 @@ func (a *Agent) verifyProxyToken(token, targetService, targetProxy string) error // is usually present in the configuration. rule, err := a.resolveToken(token) if err != nil { - return err + return "", err } if rule != nil && !rule.ServiceWrite(service.Service, nil) { - return acl.ErrPermissionDenied + return "", acl.ErrPermissionDenied } - return nil + return token, nil } func (a *Agent) cancelCheckMonitors(checkID types.CheckID) { diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index c9afa55db..6a5126fa2 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -927,10 +927,11 @@ func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http. // Verify the proxy token. This will check both the local proxy token // as well as the ACL if the token isn't local. - err := s.agent.verifyProxyToken(qOpts.Token, id, "") + effectiveToken, err := s.agent.verifyProxyToken(qOpts.Token, id, "") if err != nil { return nil, err } + args.Token = effectiveToken raw, err := s.agent.cache.Get(cachetype.ConnectCALeafName, &args) if err != nil { @@ -982,7 +983,7 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http } // Validate the ACL token - err := s.agent.verifyProxyToken(token, proxy.Proxy.TargetServiceID, id) + _, err := s.agent.verifyProxyToken(token, proxy.Proxy.TargetServiceID, id) if err != nil { return "", nil, err } diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index c1fd16eb9..ac2d28d00 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -3286,17 +3286,6 @@ func TestAgentConnectAuthorize_idNotService(t *testing.T) { assert.Contains(obj.Reason, "must be a valid") } -func testFetchTrustDomain(t *testing.T, a *TestAgent) string { - req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/roots", nil) - resp := httptest.NewRecorder() - obj, err := a.srv.AgentConnectCARoots(resp, req) - require.NoError(t, err) - - value := obj.(structs.IndexedCARoots) - require.NotEmpty(t, value.TrustDomain) - return value.TrustDomain -} - // Test when there is an intention allowing the connection func TestAgentConnectAuthorize_allow(t *testing.T) { t.Parallel() @@ -3307,8 +3296,6 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { target := "db" - trustDomain := testFetchTrustDomain(t, a) - // Create some intentions var ixnId string { @@ -3330,9 +3317,8 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { cacheHits := a.cache.Hits() args := &structs.ConnectAuthorizeRequest{ - Target: target, - ClientCertURI: connect.TestSpiffeIDServiceWithHost(t, "web", trustDomain). - URI().String(), + Target: target, + ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() @@ -3344,13 +3330,9 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { require.True(obj.Authorized) require.Contains(obj.Reason, "Matched") - // That should've been a cache miss, so no hit change, however since - // testFetchTrustDomain already called Roots and caused it to be in cache, the - // authorize call above will also call it and see a cache hit for the Roots - // RPC. In other words, there are 2 cached calls in /authorize and we always - // expect one of them to be a hit. So asserting only 1 happened is as close as - // we can get to verifying that the intention match RPC was a hit. 
- require.Equal(cacheHits+1, a.cache.Hits()) + // That should've been a cache miss (for both Intentions and Roots, so no hit + // change). + require.Equal(cacheHits, a.cache.Hits()) // Make the request again { @@ -3365,10 +3347,9 @@ func TestAgentConnectAuthorize_allow(t *testing.T) { require.Contains(obj.Reason, "Matched") } - // That should've been a cache hit. We add the one hit from Roots from first - // call as well as the 2 from this call (Roots + Intentions). - require.Equal(cacheHits+1+2, a.cache.Hits()) - cacheHits = a.cache.Hits() + // That should've been a cache hit. We add 2 (Roots + Intentions). + require.Equal(cacheHits+2, a.cache.Hits()) + cacheHits += 2 // Change the intention { @@ -3419,8 +3400,6 @@ func TestAgentConnectAuthorize_deny(t *testing.T) { target := "db" - trustDomain := testFetchTrustDomain(t, a) - // Create some intentions { req := structs.IntentionRequest{ @@ -3439,9 +3418,8 @@ func TestAgentConnectAuthorize_deny(t *testing.T) { } args := &structs.ConnectAuthorizeRequest{ - Target: target, - ClientCertURI: connect.TestSpiffeIDServiceWithHost(t, "web", trustDomain). - URI().String(), + Target: target, + ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() @@ -3484,10 +3462,8 @@ func TestAgentConnectAuthorize_denyTrustDomain(t *testing.T) { { args := &structs.ConnectAuthorizeRequest{ - Target: target, - // Rely on the test trust domain this will choose to not match the random - // one picked on agent startup. - ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), + Target: target, + ClientCertURI: "spiffe://fake-domain.consul/ns/default/dc/dc1/svc/web", } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() @@ -3510,8 +3486,6 @@ func TestAgentConnectAuthorize_denyWildcard(t *testing.T) { target := "db" - trustDomain := testFetchTrustDomain(t, a) - // Create some intentions { // Deny wildcard to DB @@ -3549,9 +3523,8 @@ func TestAgentConnectAuthorize_denyWildcard(t *testing.T) { // Web should be allowed { args := &structs.ConnectAuthorizeRequest{ - Target: target, - ClientCertURI: connect.TestSpiffeIDServiceWithHost(t, "web", trustDomain). - URI().String(), + Target: target, + ClientCertURI: connect.TestSpiffeIDService(t, "web").URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() @@ -3567,9 +3540,8 @@ func TestAgentConnectAuthorize_denyWildcard(t *testing.T) { // API should be denied { args := &structs.ConnectAuthorizeRequest{ - Target: target, - ClientCertURI: connect.TestSpiffeIDServiceWithHost(t, "api", trustDomain). 
- URI().String(), + Target: target, + ClientCertURI: connect.TestSpiffeIDService(t, "api").URI().String(), } req, _ := http.NewRequest("POST", "/v1/agent/connect/authorize", jsonReader(args)) resp := httptest.NewRecorder() diff --git a/agent/agent_test.go b/agent/agent_test.go index 6219b70da..911ed63a0 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -15,7 +15,10 @@ import ( "testing" "time" + "github.com/stretchr/testify/assert" + "github.com/hashicorp/consul/agent/checks" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/testutil" @@ -52,6 +55,43 @@ func TestAgent_MultiStartStop(t *testing.T) { } } +func TestAgent_ConnectClusterIDConfig(t *testing.T) { + tests := []struct { + name string + hcl string + wantClusterID string + }{ + { + name: "default TestAgent has fixed cluster id", + hcl: "", + wantClusterID: connect.TestClusterID, + }, + { + name: "no cluster ID specified remains null", + hcl: "connect { enabled = true }", + wantClusterID: "", + }, + { + name: "non-UUID cluster_id is ignored", + hcl: `connect { + enabled = true + ca_config { + cluster_id = "fake-id" + } + }`, + wantClusterID: "", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + a := NewTestAgent("test", tt.hcl) + cfg := a.consulConfig() + assert.Equal(t, tt.wantClusterID, cfg.CAConfig.ClusterID) + }) + } +} + func TestAgent_StartStop(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), "") diff --git a/agent/cache-types/connect_ca_leaf.go b/agent/cache-types/connect_ca_leaf.go index ef354c9ce..2316acab1 100644 --- a/agent/cache-types/connect_ca_leaf.go +++ b/agent/cache-types/connect_ca_leaf.go @@ -134,8 +134,9 @@ func (c *ConnectCALeaf) Fetch(opts cache.FetchOptions, req cache.Request) (cache // Request signing var reply structs.IssuedCert args := structs.CASignRequest{ - Datacenter: reqReal.Datacenter, - CSR: csr, + WriteRequest: structs.WriteRequest{Token: reqReal.Token}, + Datacenter: reqReal.Datacenter, + CSR: csr, } if err := c.RPC.RPC("ConnectCA.Sign", &args, &reply); err != nil { return result, err @@ -217,6 +218,7 @@ func (c *ConnectCALeaf) waitNewRootCA(datacenter string, ch chan<- error, // since this is only used for cache-related requests and not forwarded // directly to any Consul servers. type ConnectCALeafRequest struct { + Token string Datacenter string Service string // Service name, not ID MinQueryIndex uint64 @@ -224,6 +226,7 @@ type ConnectCALeafRequest struct { func (r *ConnectCALeafRequest) CacheInfo() cache.RequestInfo { return cache.RequestInfo{ + Token: r.Token, Key: r.Service, Datacenter: r.Datacenter, MinIndex: r.MinQueryIndex, diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index ba2d29203..cc015af81 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -20,12 +20,8 @@ import ( "github.com/mitchellh/go-testing-interface" ) -// testClusterID is the Consul cluster ID for testing. -// -// NOTE(mitchellh): This might have to change some other constant for -// real testing once we integrate the Cluster ID into the core. For now it -// is unchecked. -const testClusterID = "11111111-2222-3333-4444-555555555555" +// TestClusterID is the Consul cluster ID for testing. +const TestClusterID = "11111111-2222-3333-4444-555555555555" // testCACounter is just an atomically incremented counter for creating // unique names for the CA certs. 
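// [Editor's note] Illustrative sketch, not part of the patch series. The
// cluster_id handling tested above only accepts values that parse as UUIDs;
// anything else is ignored with a warning and a random ID is generated at
// bootstrap. This mirrors that validation using the same go-uuid helper the
// patch uses, and shows how an accepted ID maps onto the fixed ".consul" trust
// domain. The candidate values are examples.
package main

import (
	"fmt"

	uuid "github.com/hashicorp/go-uuid"
)

func main() {
	for _, candidate := range []string{
		"11111111-2222-3333-4444-555555555555", // valid UUID -> usable as ClusterID
		"fake-id",                              // not a UUID -> would be ignored
	} {
		if _, err := uuid.ParseUUID(candidate); err != nil {
			fmt.Printf("%q rejected: %v\n", candidate, err)
			continue
		}
		fmt.Printf("%q accepted, trust domain %s.consul\n", candidate, candidate)
	}
}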
@@ -53,7 +49,7 @@ func TestCA(t testing.T, xc *structs.CARoot) *structs.CARoot { } // The URI (SPIFFE compatible) for the cert - id := &SpiffeIDSigning{ClusterID: testClusterID, Domain: "consul"} + id := &SpiffeIDSigning{ClusterID: TestClusterID, Domain: "consul"} // Create the CA cert template := x509.Certificate{ @@ -148,7 +144,7 @@ func TestLeaf(t testing.T, service string, root *structs.CARoot) (string, string // Build the SPIFFE ID spiffeId := &SpiffeIDService{ - Host: fmt.Sprintf("%s.consul", testClusterID), + Host: fmt.Sprintf("%s.consul", TestClusterID), Namespace: "default", Datacenter: "dc1", Service: service, diff --git a/agent/connect/testing_spiffe.go b/agent/connect/testing_spiffe.go index 42db76495..c7fa6f753 100644 --- a/agent/connect/testing_spiffe.go +++ b/agent/connect/testing_spiffe.go @@ -6,7 +6,7 @@ import ( // TestSpiffeIDService returns a SPIFFE ID representing a service. func TestSpiffeIDService(t testing.T, service string) *SpiffeIDService { - return TestSpiffeIDServiceWithHost(t, service, testClusterID+".consul") + return TestSpiffeIDServiceWithHost(t, service, TestClusterID+".consul") } // TestSpiffeIDServiceWithHost returns a SPIFFE ID representing a service with diff --git a/agent/connect/uri_signing_test.go b/agent/connect/uri_signing_test.go index 2d8975858..6d04a5fab 100644 --- a/agent/connect/uri_signing_test.go +++ b/agent/connect/uri_signing_test.go @@ -21,10 +21,10 @@ func TestSpiffeIDSigningAuthorize(t *testing.T) { func TestSpiffeIDSigningForCluster(t *testing.T) { // For now it should just append .consul to the ID. config := &structs.CAConfiguration{ - ClusterID: testClusterID, + ClusterID: TestClusterID, } id := SpiffeIDSigningForCluster(config) - assert.Equal(t, id.URI().String(), "spiffe://"+testClusterID+".consul") + assert.Equal(t, id.URI().String(), "spiffe://"+TestClusterID+".consul") } // fakeCertURI is a CertURI implementation that our implementation doesn't know @@ -42,7 +42,7 @@ func (f fakeCertURI) URI() *url.URL { func TestSpiffeIDSigning_CanSign(t *testing.T) { testSigning := &SpiffeIDSigning{ - ClusterID: testClusterID, + ClusterID: TestClusterID, Domain: "consul", } @@ -71,7 +71,7 @@ func TestSpiffeIDSigning_CanSign(t *testing.T) { name: "different TLD signing ID", id: testSigning, input: &SpiffeIDSigning{ - ClusterID: testClusterID, + ClusterID: TestClusterID, Domain: "evil", }, want: false, @@ -91,13 +91,13 @@ func TestSpiffeIDSigning_CanSign(t *testing.T) { { name: "service - good", id: testSigning, - input: &SpiffeIDService{testClusterID + ".consul", "default", "dc1", "web"}, + input: &SpiffeIDService{TestClusterID + ".consul", "default", "dc1", "web"}, want: true, }, { name: "service - good midex case", id: testSigning, - input: &SpiffeIDService{strings.ToUpper(testClusterID) + ".CONsuL", "defAUlt", "dc1", "WEB"}, + input: &SpiffeIDService{strings.ToUpper(TestClusterID) + ".CONsuL", "defAUlt", "dc1", "WEB"}, want: true, }, { @@ -109,7 +109,7 @@ func TestSpiffeIDSigning_CanSign(t *testing.T) { { name: "service - different TLD", id: testSigning, - input: &SpiffeIDService{testClusterID + ".fake", "default", "dc1", "web"}, + input: &SpiffeIDService{TestClusterID + ".fake", "default", "dc1", "web"}, want: false, }, } diff --git a/agent/consul/connect_ca_endpoint_test.go b/agent/consul/connect_ca_endpoint_test.go index ac64ceb30..eb7176b67 100644 --- a/agent/consul/connect_ca_endpoint_test.go +++ b/agent/consul/connect_ca_endpoint_test.go @@ -252,7 +252,6 @@ func TestConnectCASign(t *testing.T) { // Generate a CSR and request signing 
spiffeId := connect.TestSpiffeIDService(t, "web") - spiffeId.Host = testGetClusterTrustDomain(t, s1) csr, _ := connect.TestCSR(t, spiffeId) args := &structs.CASignRequest{ Datacenter: "dc1", @@ -281,16 +280,6 @@ func TestConnectCASign(t *testing.T) { assert.Equal(spiffeId.URI().String(), reply.ServiceURI) } -func testGetClusterTrustDomain(t *testing.T, s *Server) string { - t.Helper() - state := s.fsm.State() - _, config, err := state.CAConfig() - require.NoError(t, err) - // Build TrustDomain based on the ClusterID stored. - signingID := connect.SpiffeIDSigningForCluster(config) - return signingID.Host() -} - func TestConnectCASignValidation(t *testing.T) { t.Parallel() @@ -325,7 +314,7 @@ func TestConnectCASignValidation(t *testing.T) { require.NoError(t, msgpackrpc.CallWithCodec(codec, "ACL.Apply", &arg, &webToken)) } - trustDomain := testGetClusterTrustDomain(t, s1) + testWebID := connect.TestSpiffeIDService(t, "web") tests := []struct { name string @@ -335,29 +324,36 @@ func TestConnectCASignValidation(t *testing.T) { { name: "different cluster", id: &connect.SpiffeIDService{ - "55555555-4444-3333-2222-111111111111.consul", - "default", "dc1", "web"}, + Host: "55555555-4444-3333-2222-111111111111.consul", + Namespace: testWebID.Namespace, + Datacenter: testWebID.Datacenter, + Service: testWebID.Service, + }, wantErr: "different trust domain", }, { - name: "same cluster should validate", - id: &connect.SpiffeIDService{ - trustDomain, - "default", "dc1", "web"}, + name: "same cluster should validate", + id: testWebID, wantErr: "", }, { name: "same cluster, CSR for a different DC should NOT validate", id: &connect.SpiffeIDService{ - trustDomain, - "default", "dc2", "web"}, + Host: testWebID.Host, + Namespace: testWebID.Namespace, + Datacenter: "dc2", + Service: testWebID.Service, + }, wantErr: "different datacenter", }, { name: "same cluster and DC, different service should not have perms", id: &connect.SpiffeIDService{ - trustDomain, - "default", "dc1", "db"}, + Host: testWebID.Host, + Namespace: testWebID.Namespace, + Datacenter: testWebID.Datacenter, + Service: "db", + }, wantErr: "Permission denied", }, } diff --git a/agent/consul/leader.go b/agent/consul/leader.go index 579ff4b1d..f47dde83f 100644 --- a/agent/consul/leader.go +++ b/agent/consul/leader.go @@ -383,13 +383,15 @@ func (s *Server) initializeCAConfig() (*structs.CAConfiguration, error) { return config, nil } - id, err := uuid.GenerateUUID() - if err != nil { - return nil, err + config = s.config.CAConfig + if config.ClusterID == "" { + id, err := uuid.GenerateUUID() + if err != nil { + return nil, err + } + config.ClusterID = id } - config = s.config.CAConfig - config.ClusterID = id req := structs.CARequest{ Op: structs.CAOpSetConfig, Config: config, diff --git a/agent/consul/server_test.go b/agent/consul/server_test.go index 84ec6743a..0359c847f 100644 --- a/agent/consul/server_test.go +++ b/agent/consul/server_test.go @@ -10,7 +10,9 @@ import ( "testing" "time" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/metadata" + "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/agent/token" "github.com/hashicorp/consul/lib/freeport" "github.com/hashicorp/consul/testrpc" @@ -92,6 +94,10 @@ func testServerConfig(t *testing.T) (string, *Config) { config.RPCHoldTimeout = 5 * time.Second config.ConnectEnabled = true + config.CAConfig = &structs.CAConfiguration{ + ClusterID: connect.TestClusterID, + Provider: structs.ConsulCAProvider, + } return dir, config } diff --git 
a/agent/testagent.go b/agent/testagent.go index 724b0c80e..26c81a81d 100644 --- a/agent/testagent.go +++ b/agent/testagent.go @@ -19,6 +19,7 @@ import ( uuid "github.com/hashicorp/go-uuid" "github.com/hashicorp/consul/agent/config" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/consul" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/consul/api" @@ -337,6 +338,9 @@ func TestConfig(sources ...config.Source) *config.RuntimeConfig { node_name = "Node ` + nodeID + `" connect { enabled = true + ca_config { + cluster_id = "` + connect.TestClusterID + `" + } } performance { raft_multiplier = 1 diff --git a/api/agent_test.go b/api/agent_test.go index 1f816c23a..ed73672de 100644 --- a/api/agent_test.go +++ b/api/agent_test.go @@ -203,7 +203,7 @@ func TestAPI_AgentServices_ManagedConnectProxy(t *testing.T) { Connect: &AgentServiceConnect{ Proxy: &AgentServiceConnectProxy{ ExecMode: ProxyExecModeScript, - Command: "foo.rb", + Command: []string{"foo.rb"}, Config: map[string]interface{}{ "foo": "bar", }, @@ -1123,7 +1123,7 @@ func TestAPI_AgentConnectAuthorize(t *testing.T) { Target: "foo", ClientCertSerial: "fake", // Importing connect.TestSpiffeIDService creates an import cycle - ClientCertURI: "spiffe://123.consul/ns/default/dc/ny1/svc/web", + ClientCertURI: "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/ny1/svc/web", } auth, err := agent.ConnectAuthorize(params) require.Nil(err) @@ -1169,7 +1169,7 @@ func TestAPI_AgentConnectProxyConfig(t *testing.T) { TargetServiceName: "foo", ContentHash: "e662ea8600d84cf0", ExecMode: "daemon", - Command: "consul connect proxy", + Command: []string{"consul connect proxy"}, Config: map[string]interface{}{ "bind_address": "127.0.0.1", "bind_port": float64(20000), diff --git a/api/connect_ca.go b/api/connect_ca.go index 00951c75d..ed0ac5e8f 100644 --- a/api/connect_ca.go +++ b/api/connect_ca.go @@ -7,6 +7,7 @@ import ( // CARootList is the structure for the results of listing roots. type CARootList struct { ActiveRootID string + TrustDomain string Roots []*CARoot } diff --git a/api/connect_ca_test.go b/api/connect_ca_test.go index 3ad7cb078..36fb12b56 100644 --- a/api/connect_ca_test.go +++ b/api/connect_ca_test.go @@ -3,24 +3,51 @@ package api import ( "testing" + "github.com/hashicorp/consul/testutil" + "github.com/hashicorp/consul/testutil/retry" "github.com/stretchr/testify/require" ) -// NOTE(mitchellh): we don't have a way to test CA roots yet since there -// is no API public way to configure the root certs. This wll be resolved -// in the future and we can write tests then. This is tested in agent and -// agent/consul which do have internal access to manually create roots. 
- func TestAPI_ConnectCARoots_empty(t *testing.T) { t.Parallel() require := require.New(t) - c, s := makeClient(t) + c, s := makeClientWithConfig(t, nil, func(c *testutil.TestServerConfig) { + // Don't bootstrap CA + c.Connect = nil + }) defer s.Stop() connect := c.Connect() list, meta, err := connect.CARoots(nil) - require.Nil(err) + require.NoError(err) require.Equal(uint64(0), meta.LastIndex) require.Len(list.Roots, 0) + require.Empty(list.TrustDomain) +} + +func TestAPI_ConnectCARoots_list(t *testing.T) { + t.Parallel() + + c, s := makeClient(t) + defer s.Stop() + + // This fails occasionally if server doesn't have time to bootstrap CA so + // retry + retry.Run(t, func(r *retry.R) { + connect := c.Connect() + list, meta, err := connect.CARoots(nil) + r.Check(err) + if meta.LastIndex <= 0 { + r.Fatalf("expected roots raft index to be > 0") + } + if v := len(list.Roots); v != 1 { + r.Fatalf("expected 1 root, got %d", v) + } + // connect.TestClusterID causes import cycle so hard code it + if list.TrustDomain != "11111111-2222-3333-4444-555555555555.consul" { + r.Fatalf("expected fixed trust domain got '%s'", list.TrustDomain) + } + }) + } diff --git a/connect/tls_test.go b/connect/tls_test.go index a9fd6fe8c..5df491866 100644 --- a/connect/tls_test.go +++ b/connect/tls_test.go @@ -147,7 +147,7 @@ func TestServerSideVerifier(t *testing.T) { cfg := api.DefaultConfig() cfg.Address = agent.HTTPAddr() client, err := api.NewClient(cfg) - require.Nil(t, err) + require.NoError(t, err) // Setup intentions to validate against. We actually default to allow so first // setup a blanket deny rule for db, then only allow web. @@ -162,7 +162,7 @@ func TestServerSideVerifier(t *testing.T) { Meta: map[string]string{}, } id, _, err := connect.IntentionCreate(ixn, nil) - require.Nil(t, err) + require.NoError(t, err) require.NotEmpty(t, id) ixn = &api.Intention{ @@ -175,7 +175,7 @@ func TestServerSideVerifier(t *testing.T) { Meta: map[string]string{}, } id, _, err = connect.IntentionCreate(ixn, nil) - require.Nil(t, err) + require.NoError(t, err) require.NotEmpty(t, id) tests := []struct { diff --git a/testutil/server.go b/testutil/server.go index f188079d7..e80b0e7fd 100644 --- a/testutil/server.go +++ b/testutil/server.go @@ -135,6 +135,13 @@ func defaultServerConfig() *TestServerConfig { Server: ports[5], }, ReadyTimeout: 10 * time.Second, + Connect: map[string]interface{}{ + "enabled": true, + "ca_config": map[string]interface{}{ + // const TestClusterID causes import cycle so hard code it here. + "cluster_id": "11111111-2222-3333-4444-555555555555", + }, + }, } } From cac32ba071891cafa81172e475d6e75c59becfc1 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 10 May 2018 17:14:16 +0100 Subject: [PATCH 256/539] More test cleanup --- api/agent.go | 6 ------ api/agent_test.go | 26 ++++++++------------------ 2 files changed, 8 insertions(+), 24 deletions(-) diff --git a/api/agent.go b/api/agent.go index 16241c6f9..7830f80f2 100644 --- a/api/agent.go +++ b/api/agent.go @@ -565,9 +565,6 @@ func (a *Agent) ForceLeave(node string) error { // ConnectAuthorize is used to authorize an incoming connection // to a natively integrated Connect service. -// -// TODO(mitchellh): we need to test this better once we have a way to -// configure CAs from the API package (when the CA work is done). 
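For API consumers, a minimal usage sketch of this client method. The agent address comes from the default client config, and the target service, serial, and SPIFFE URI below are placeholders rather than values taken from the patch:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the local agent whether a connection to "web" from the presented
	// client certificate identity should be allowed.
	auth, err := client.Agent().ConnectAuthorize(&api.AgentAuthorizeParams{
		Target:           "web",
		ClientCertSerial: "fake",
		ClientCertURI:    "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc1/svc/db",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", auth)
}
```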
func (a *Agent) ConnectAuthorize(auth *AgentAuthorizeParams) (*AgentAuthorize, error) { r := a.c.newRequest("POST", "/v1/agent/connect/authorize") r.obj = auth @@ -585,9 +582,6 @@ func (a *Agent) ConnectAuthorize(auth *AgentAuthorizeParams) (*AgentAuthorize, e } // ConnectCARoots returns the list of roots. -// -// TODO(mitchellh): we need to test this better once we have a way to -// configure CAs from the API package (when the CA work is done). func (a *Agent) ConnectCARoots(q *QueryOptions) (*CARootList, *QueryMeta, error) { r := a.c.newRequest("GET", "/v1/agent/connect/ca/roots") r.setQueryOptions(q) diff --git a/api/agent_test.go b/api/agent_test.go index ed73672de..ad236ba3a 100644 --- a/api/agent_test.go +++ b/api/agent_test.go @@ -1044,7 +1044,9 @@ func TestAPI_AgentConnectCARoots_empty(t *testing.T) { t.Parallel() require := require.New(t) - c, s := makeClient(t) + c, s := makeClientWithConfig(t, nil, func(c *testutil.TestServerConfig) { + c.Connect = nil // disable connect to prevent CA beening bootstrapped + }) defer s.Stop() agent := c.Agent() @@ -1058,12 +1060,7 @@ func TestAPI_AgentConnectCARoots_list(t *testing.T) { t.Parallel() require := require.New(t) - c, s := makeClientWithConfig(t, nil, func(c *testutil.TestServerConfig) { - // Force auto port range to 1 port so we have deterministic response. - c.Connect = map[string]interface{}{ - "enabled": true, - } - }) + c, s := makeClient(t) defer s.Stop() agent := c.Agent() @@ -1077,12 +1074,7 @@ func TestAPI_AgentConnectCALeaf(t *testing.T) { t.Parallel() require := require.New(t) - c, s := makeClientWithConfig(t, nil, func(c *testutil.TestServerConfig) { - // Force auto port range to 1 port so we have deterministic response. - c.Connect = map[string]interface{}{ - "enabled": true, - } - }) + c, s := makeClient(t) defer s.Stop() agent := c.Agent() @@ -1109,9 +1101,6 @@ func TestAPI_AgentConnectCALeaf(t *testing.T) { require.True(leaf.ValidBefore.After(time.Now())) } -// TODO(banks): once we have CA stuff setup properly we can probably make this -// much more complete. This is just a sanity check that the agent code basically -// works. 
func TestAPI_AgentConnectAuthorize(t *testing.T) { t.Parallel() require := require.New(t) @@ -1151,6 +1140,7 @@ func TestAPI_AgentConnectProxyConfig(t *testing.T) { Port: 8000, Connect: &AgentServiceConnect{ Proxy: &AgentServiceConnectProxy{ + Command: []string{"consul connect proxy"}, Config: map[string]interface{}{ "foo": "bar", }, @@ -1167,7 +1157,7 @@ func TestAPI_AgentConnectProxyConfig(t *testing.T) { ProxyServiceID: "foo-proxy", TargetServiceID: "foo", TargetServiceName: "foo", - ContentHash: "e662ea8600d84cf0", + ContentHash: "93baee1d838888ae", ExecMode: "daemon", Command: []string{"consul connect proxy"}, Config: map[string]interface{}{ @@ -1178,5 +1168,5 @@ func TestAPI_AgentConnectProxyConfig(t *testing.T) { }, } require.Equal(t, expectConfig, config) - require.Equal(t, "e662ea8600d84cf0", qm.LastContentHash) + require.Equal(t, expectConfig.ContentHash, qm.LastContentHash) } From dbcf286d4c54068059d859b598f418a8e9fe9504 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Thu, 10 May 2018 17:27:42 +0100 Subject: [PATCH 257/539] Ooops remove the CA stuff from actual server defaults and make it test server only --- agent/consul/config.go | 9 --------- agent/consul/server_test.go | 5 +++++ 2 files changed, 5 insertions(+), 9 deletions(-) diff --git a/agent/consul/config.go b/agent/consul/config.go index 94c8bc06a..6f9410c4b 100644 --- a/agent/consul/config.go +++ b/agent/consul/config.go @@ -435,15 +435,6 @@ func DefaultConfig() *Config { ServerHealthInterval: 2 * time.Second, AutopilotInterval: 10 * time.Second, - - CAConfig: &structs.CAConfiguration{ - Provider: "consul", - Config: map[string]interface{}{ - "PrivateKey": "", - "RootCert": "", - "RotationPeriod": 90 * 24 * time.Hour, - }, - }, } // Increase our reap interval to 3 days instead of 24h. 
diff --git a/agent/consul/server_test.go b/agent/consul/server_test.go index 0359c847f..43dcd13ff 100644 --- a/agent/consul/server_test.go +++ b/agent/consul/server_test.go @@ -97,6 +97,11 @@ func testServerConfig(t *testing.T) (string, *Config) { config.CAConfig = &structs.CAConfiguration{ ClusterID: connect.TestClusterID, Provider: structs.ConsulCAProvider, + Config: map[string]interface{}{ + "PrivateKey": "", + "RootCert": "", + "RotationPeriod": 90 * 24 * time.Hour, + }, } return dir, config From bd5eb8b749484e643b52ece95e89c779ed87391c Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Sat, 12 May 2018 09:48:16 +0100 Subject: [PATCH 258/539] Add default CA config back - I didn't add it and causes nil panics --- agent/consul/config.go | 9 +++++++++ agent/proxy/proxy_test.go | 3 --- 2 files changed, 9 insertions(+), 3 deletions(-) diff --git a/agent/consul/config.go b/agent/consul/config.go index 6f9410c4b..461a7dcf7 100644 --- a/agent/consul/config.go +++ b/agent/consul/config.go @@ -433,6 +433,15 @@ func DefaultConfig() *Config { ServerStabilizationTime: 10 * time.Second, }, + CAConfig: &structs.CAConfiguration{ + Provider: "consul", + Config: map[string]interface{}{ + "PrivateKey": "", + "RootCert": "", + "RotationPeriod": 90 * 24 * time.Hour, + }, + }, + ServerHealthInterval: 2 * time.Second, AutopilotInterval: 10 * time.Second, } diff --git a/agent/proxy/proxy_test.go b/agent/proxy/proxy_test.go index b46b5d677..9b123787c 100644 --- a/agent/proxy/proxy_test.go +++ b/agent/proxy/proxy_test.go @@ -138,9 +138,6 @@ func TestHelperProcess(t *testing.T) { time.Sleep(25 * time.Millisecond) } - // Run forever - <-make(chan struct{}) - case "output": fmt.Fprintf(os.Stdout, "hello stdout\n") fmt.Fprintf(os.Stderr, "hello stderr\n") From 73f2a49ef18b0955479692133b71d88d1761268c Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Sat, 12 May 2018 11:27:44 +0100 Subject: [PATCH 259/539] Fix broken api test for service Meta (logical conflict rom OSS). Add test that would make this much easier to catch in future. 
--- agent/agent_endpoint.go | 1 + agent/agent_endpoint_test.go | 6 +++++- 2 files changed, 6 insertions(+), 1 deletion(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 6a5126fa2..0342d1fd4 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -175,6 +175,7 @@ func (s *HTTPServer) AgentServices(resp http.ResponseWriter, req *http.Request) ID: s.ID, Service: s.Service, Tags: s.Tags, + Meta: s.Meta, Port: s.Port, Address: s.Address, EnableTagOverride: s.EnableTagOverride, diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index ac2d28d00..9c10a61ff 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -56,7 +56,10 @@ func TestAgent_Services(t *testing.T) { ID: "mysql", Service: "mysql", Tags: []string{"master"}, - Port: 5000, + Meta: map[string]string{ + "foo": "bar", + }, + Port: 5000, } require.NoError(t, a.State.AddService(srv1, "")) @@ -81,6 +84,7 @@ func TestAgent_Services(t *testing.T) { val := obj.(map[string]*api.AgentService) assert.Lenf(t, val, 1, "bad services: %v", obj) assert.Equal(t, 5000, val["mysql"].Port) + assert.Equal(t, srv1.Meta, val["mysql"].Meta) assert.NotNil(t, val["mysql"].Connect) assert.NotNil(t, val["mysql"].Connect.Proxy) assert.Equal(t, prxy1.ExecMode.String(), string(val["mysql"].Connect.Proxy.ExecMode)) From 919fd3e148d2a8b53eccb6f61101bca4a5c4f666 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Sat, 12 May 2018 11:58:14 +0100 Subject: [PATCH 260/539] Fix logical conflicts with CA refactor --- agent/connect/csr.go | 26 -------------------------- agent/connect/generate.go | 13 +++++++------ 2 files changed, 7 insertions(+), 32 deletions(-) diff --git a/agent/connect/csr.go b/agent/connect/csr.go index 4b975d06c..16a46af3f 100644 --- a/agent/connect/csr.go +++ b/agent/connect/csr.go @@ -3,12 +3,9 @@ package connect import ( "bytes" "crypto" - "crypto/ecdsa" - "crypto/elliptic" "crypto/rand" "crypto/x509" "encoding/pem" - "fmt" "net/url" ) @@ -34,26 +31,3 @@ func CreateCSR(uri CertURI, privateKey crypto.Signer) (string, error) { return csrBuf.String(), nil } - -// GeneratePrivateKey generates a new Private key -func GeneratePrivateKey() (crypto.Signer, string, error) { - var pk *ecdsa.PrivateKey - - pk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) - if err != nil { - return nil, "", fmt.Errorf("error generating private key: %s", err) - } - - bs, err := x509.MarshalECPrivateKey(pk) - if err != nil { - return nil, "", fmt.Errorf("error generating private key: %s", err) - } - - var buf bytes.Buffer - err = pem.Encode(&buf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: bs}) - if err != nil { - return nil, "", fmt.Errorf("error encoding private key: %s", err) - } - - return pk, buf.String(), nil -} diff --git a/agent/connect/generate.go b/agent/connect/generate.go index 1226323f0..47ea5f43e 100644 --- a/agent/connect/generate.go +++ b/agent/connect/generate.go @@ -2,6 +2,7 @@ package connect import ( "bytes" + "crypto" "crypto/ecdsa" "crypto/elliptic" "crypto/rand" @@ -10,25 +11,25 @@ import ( "fmt" ) -// GeneratePrivateKey returns a new private key -func GeneratePrivateKey() (string, error) { +// GeneratePrivateKey generates a new Private key +func GeneratePrivateKey() (crypto.Signer, string, error) { var pk *ecdsa.PrivateKey pk, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader) if err != nil { - return "", fmt.Errorf("error generating private key: %s", err) + return nil, "", fmt.Errorf("error generating private key: %s", err) } bs, err := x509.MarshalECPrivateKey(pk) if 
err != nil { - return "", fmt.Errorf("error generating private key: %s", err) + return nil, "", fmt.Errorf("error generating private key: %s", err) } var buf bytes.Buffer err = pem.Encode(&buf, &pem.Block{Type: "EC PRIVATE KEY", Bytes: bs}) if err != nil { - return "", fmt.Errorf("error encoding private key: %s", err) + return nil, "", fmt.Errorf("error encoding private key: %s", err) } - return buf.String(), nil + return pk, buf.String(), nil } From 69b668c95140c6d4e19d2835f85f0628f36340e0 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Sat, 12 May 2018 20:16:39 +0100 Subject: [PATCH 261/539] Make connect client resolver resolve trust domain properly --- connect/resolver.go | 40 ++++++++++++++++++++++++++++++++++------ 1 file changed, 34 insertions(+), 6 deletions(-) diff --git a/connect/resolver.go b/connect/resolver.go index 98d8c88d3..b7e89bd62 100644 --- a/connect/resolver.go +++ b/connect/resolver.go @@ -4,6 +4,7 @@ import ( "context" "fmt" "math/rand" + "sync" "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/api" @@ -73,11 +74,22 @@ type ConsulResolver struct { // Datacenter to resolve in, empty indicates agent's local DC. Datacenter string + + // trustDomain stores the cluster's trust domain it's populated once on first + // Resolve call and blocks all resolutions. + trustDomain string + trustDomainMu sync.Mutex } // Resolve performs service discovery against the local Consul agent and returns // the address and expected identity of a suitable service instance. func (cr *ConsulResolver) Resolve(ctx context.Context) (string, connect.CertURI, error) { + // Fetch trust domain if we've not done that yet + err := cr.ensureTrustDomain() + if err != nil { + return "", nil, err + } + switch cr.Type { case ConsulResolverTypeService: return cr.resolveService(ctx) @@ -91,6 +103,27 @@ func (cr *ConsulResolver) Resolve(ctx context.Context) (string, connect.CertURI, } } +func (cr *ConsulResolver) ensureTrustDomain() error { + cr.trustDomainMu.Lock() + defer cr.trustDomainMu.Unlock() + + if cr.trustDomain != "" { + return nil + } + + roots, _, err := cr.Client.Agent().ConnectCARoots(nil) + if err != nil { + return fmt.Errorf("failed fetching cluster trust domain: %s", err) + } + + if roots.TrustDomain == "" { + return fmt.Errorf("cluster trust domain empty, connect not bootstrapped") + } + + cr.trustDomain = roots.TrustDomain + return nil +} + func (cr *ConsulResolver) resolveService(ctx context.Context) (string, connect.CertURI, error) { health := cr.Client.Health() @@ -116,13 +149,8 @@ func (cr *ConsulResolver) resolveService(ctx context.Context) (string, connect.C port := svcs[idx].Service.Port // Generate the expected CertURI - - // TODO(banks): when we've figured out the CA story around generating and - // propagating these trust domains we need to actually fetch the trust domain - // somehow. We also need to implement namespaces. Use of test function here is - // temporary pending the work on trust domains. 
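With this change the resolver derives the expected SPIFFE host from the cluster's trust domain instead of a hard-coded test value. A minimal sketch of fetching that trust domain through the Go API client, which is the same call `ensureTrustDomain` makes; it assumes a local agent with Connect enabled and an already-bootstrapped CA:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// ConnectCARoots now reports the cluster's trust domain alongside the roots.
	roots, _, err := client.Agent().ConnectCARoots(nil)
	if err != nil {
		log.Fatal(err)
	}
	if roots.TrustDomain == "" {
		log.Fatal("connect CA is not bootstrapped yet")
	}

	// Typically "<cluster UUID>.consul".
	fmt.Println("trust domain:", roots.TrustDomain)
}
```

The resolver caches the value after the first successful lookup, so this call happens at most once per resolver instance.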
certURI := &connect.SpiffeIDService{ - Host: "11111111-2222-3333-4444-555555555555.consul", + Host: cr.trustDomain, Namespace: "default", Datacenter: svcs[idx].Node.Datacenter, Service: svcs[idx].Service.ProxyDestination, From 957aaf69abf5277ae7c3f35888e20f617f1ccae1 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Sat, 12 May 2018 23:37:44 +0100 Subject: [PATCH 262/539] Make Service logger log to right place again --- connect/proxy/proxy.go | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/connect/proxy/proxy.go b/connect/proxy/proxy.go index e3db982fe..64e098825 100644 --- a/connect/proxy/proxy.go +++ b/connect/proxy/proxy.go @@ -80,7 +80,8 @@ func (p *Proxy) Serve() error { // Initial setup // Setup Service instance now we know target ID etc - service, err := connect.NewService(newCfg.ProxiedServiceID, p.client) + service, err := connect.NewServiceWithLogger(newCfg.ProxiedServiceID, + p.client, p.logger) if err != nil { return err } From bd5e569dc7f703ebe4a45e61753033dbc1a572b6 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Tue, 22 May 2018 15:11:13 +0100 Subject: [PATCH 263/539] Make invalid clusterID be fatal --- agent/agent.go | 17 +++++++++++------ agent/agent_test.go | 37 ++++++++++++++++++++++++++----------- agent/testagent.go | 11 +++++++++++ 3 files changed, 48 insertions(+), 17 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index eb9e203dc..622a105e8 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -946,8 +946,12 @@ func (a *Agent) consulConfig() (*consul.Config, error) { } } if base.CAConfig.ClusterID == "" { - a.logger.Println("[WARN] connect CA config cluster_id specified but ", - "is not a valid UUID, ignoring") + // If the tried to specify an ID but typoed it don't ignore as they will + // then bootstrap with a new ID and have to throw away the whole cluster + // and start again. + a.logger.Println("[ERR] connect CA config cluster_id specified but " + + "is not a valid UUID, aborting startup") + return nil, fmt.Errorf("cluster_id was supplied but was not a valid UUID") } } @@ -1315,8 +1319,10 @@ func (a *Agent) ShutdownAgent() error { // NOTE(mitchellh): we use Kill for now to kill the processes since // the local state isn't snapshotting meaning the proxy tokens are // regenerated each time forcing the processes to restart anyways. - if err := a.proxyManager.Kill(); err != nil { - a.logger.Printf("[WARN] agent: error shutting down proxy manager: %s", err) + if a.proxyManager != nil { + if err := a.proxyManager.Kill(); err != nil { + a.logger.Printf("[WARN] agent: error shutting down proxy manager: %s", err) + } } var err error @@ -2177,8 +2183,7 @@ func (a *Agent) verifyProxyToken(token, targetService, targetProxy string) (stri // Resolve the actual ACL token used to register the proxy/service and // return that for use in RPC calls. - aclToken := a.State.ServiceToken(targetService) - return aclToken, nil + return a.State.ServiceToken(targetService), nil } // Retrieve the service specified. 
This should always exist because diff --git a/agent/agent_test.go b/agent/agent_test.go index 911ed63a0..993bf3b25 100644 --- a/agent/agent_test.go +++ b/agent/agent_test.go @@ -60,6 +60,7 @@ func TestAgent_ConnectClusterIDConfig(t *testing.T) { name string hcl string wantClusterID string + wantPanic bool }{ { name: "default TestAgent has fixed cluster id", @@ -72,22 +73,36 @@ func TestAgent_ConnectClusterIDConfig(t *testing.T) { wantClusterID: "", }, { - name: "non-UUID cluster_id is ignored", - hcl: `connect { - enabled = true - ca_config { - cluster_id = "fake-id" - } - }`, + name: "non-UUID cluster_id is fatal", + hcl: `connect { + enabled = true + ca_config { + cluster_id = "fake-id" + } + }`, wantClusterID: "", + wantPanic: true, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { - a := NewTestAgent("test", tt.hcl) - cfg := a.consulConfig() - assert.Equal(t, tt.wantClusterID, cfg.CAConfig.ClusterID) + // Indirection to support panic recovery cleanly + testFn := func() { + a := &TestAgent{Name: "test", HCL: tt.hcl} + a.ExpectConfigError = tt.wantPanic + a.Start() + defer a.Shutdown() + + cfg := a.consulConfig() + assert.Equal(t, tt.wantClusterID, cfg.CAConfig.ClusterID) + } + + if tt.wantPanic { + require.Panics(t, testFn) + } else { + testFn() + } }) } } @@ -95,7 +110,7 @@ func TestAgent_ConnectClusterIDConfig(t *testing.T) { func TestAgent_StartStop(t *testing.T) { t.Parallel() a := NewTestAgent(t.Name(), "") - // defer a.Shutdown() + defer a.Shutdown() if err := a.Leave(); err != nil { t.Fatalf("err: %v", err) diff --git a/agent/testagent.go b/agent/testagent.go index 26c81a81d..007d01322 100644 --- a/agent/testagent.go +++ b/agent/testagent.go @@ -45,6 +45,12 @@ type TestAgent struct { HCL string + // ExpectConfigError can be set to prevent the agent retrying Start on errors + // and eventually blowing up with runtime.Goexit. This enables tests to assert + // that some specific bit of config actually does prevent startup entirely in + // a reasonable way without reproducing a lot of the boilerplate here. + ExpectConfigError bool + // Config is the agent configuration. If Config is nil then // TestConfig() is used. If Config.DataDir is set then it is // the callers responsibility to clean up the data directory. @@ -159,6 +165,11 @@ func (a *TestAgent) Start() *TestAgent { } else if i == 0 { fmt.Println(id, a.Name, "Error starting agent:", err) runtime.Goexit() + } else if a.ExpectConfigError { + // Panic the error since this can be caught if needed. Pretty gross way to + // detect errors but enough for now and this is a tiny edge case that I'd + // otherwise not have a way to test at all... 
+ panic(err) } else { agent.ShutdownAgent() agent.ShutdownEndpoints() From 526cfc34bd56e4b2bd67c955f1d72c7775daa861 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 10 May 2018 22:35:47 -0700 Subject: [PATCH 264/539] agent/consul: implement Intention.Test endpoint --- agent/consul/intention_endpoint.go | 90 +++++++++ agent/consul/intention_endpoint_test.go | 254 ++++++++++++++++++++++++ agent/structs/intention.go | 28 +++ 3 files changed, 372 insertions(+) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 2458a8ee9..7662ea852 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -7,6 +7,7 @@ import ( "github.com/armon/go-metrics" "github.com/hashicorp/consul/acl" + "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/consul/state" "github.com/hashicorp/consul/agent/structs" "github.com/hashicorp/go-memdb" @@ -252,3 +253,92 @@ func (s *Intention) Match( }, ) } + +// Test tests a source/destination and returns whether it would be allowed +// or denied based on the current ACL configuration. +func (s *Intention) Test( + args *structs.IntentionQueryRequest, + reply *structs.IntentionQueryTestResponse) error { + // Get the test args, and defensively guard against nil + query := args.Test + if query == nil { + return errors.New("Test must be specified on args") + } + + // Build the URI + var uri connect.CertURI + switch query.SourceType { + case structs.IntentionSourceConsul: + uri = &connect.SpiffeIDService{ + Namespace: query.SourceNS, + Service: query.SourceName, + } + + default: + return fmt.Errorf("unsupported SourceType: %q", query.SourceType) + } + + // Get the ACL token for the request for the checks below. + rule, err := s.srv.resolveToken(args.Token) + if err != nil { + return err + } + + // Perform the ACL check + if prefix, ok := query.GetACLPrefix(); ok { + if rule != nil && !rule.ServiceRead(prefix) { + s.srv.logger.Printf("[WARN] consul.intention: test on intention '%s' denied due to ACLs", prefix) + return acl.ErrPermissionDenied + } + } + + // Get the matches for this destination + state := s.srv.fsm.State() + _, matches, err := state.IntentionMatch(nil, &structs.IntentionQueryMatch{ + Type: structs.IntentionMatchDestination, + Entries: []structs.IntentionMatchEntry{ + structs.IntentionMatchEntry{ + Namespace: query.DestinationNS, + Name: query.DestinationName, + }, + }, + }) + if err != nil { + return err + } + if len(matches) != 1 { + // This should never happen since the documented behavior of the + // Match call is that it'll always return exactly the number of results + // as entries passed in. But we guard against misbehavior. + return errors.New("internal error loading matches") + } + + // Test the authorization for each match + for _, ixn := range matches[0] { + if auth, ok := uri.Authorize(ixn); ok { + reply.Allowed = auth + return nil + } + } + + // No match, we need to determine the default behavior. We do this by + // specifying the anonymous token token, which will get that behavior. + // The default behavior if ACLs are disabled is to allow connections + // to mimic the behavior of Consul itself: everything is allowed if + // ACLs are disabled. + // + // NOTE(mitchellh): This is the same behavior as the agent authorize + // endpoint. If this behavior is incorrect, we should also change it there + // which is much more important. 
+ rule, err = s.srv.resolveToken("") + if err != nil { + return err + } + + reply.Allowed = true + if rule != nil { + reply.Allowed = rule.IntentionDefaultAllow() + } + + return nil +} diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index dfac4fc45..b1f51a714 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -10,6 +10,7 @@ import ( "github.com/hashicorp/consul/testrpc" "github.com/hashicorp/net-rpc-msgpackrpc" "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" ) // Test basic creation @@ -1007,3 +1008,256 @@ service "bar" { assert.Equal(expected, actual) } } + +// Test the Test method defaults to allow with no ACL set. +func TestIntentionTest_defaultNoACL(t *testing.T) { + t.Parallel() + + require := require.New(t) + dir1, s1 := testServer(t) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Test + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + Test: &structs.IntentionQueryTest{ + SourceNS: "foo", + SourceName: "bar", + DestinationNS: "foo", + DestinationName: "qux", + SourceType: structs.IntentionSourceConsul, + }, + } + var resp structs.IntentionQueryTestResponse + require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp)) + require.True(resp.Allowed) +} + +// Test the Test method defaults to deny with whitelist ACLs. +func TestIntentionTest_defaultACLDeny(t *testing.T) { + t.Parallel() + + require := require.New(t) + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Test + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + Test: &structs.IntentionQueryTest{ + SourceNS: "foo", + SourceName: "bar", + DestinationNS: "foo", + DestinationName: "qux", + SourceType: structs.IntentionSourceConsul, + }, + } + req.Token = "root" + var resp structs.IntentionQueryTestResponse + require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp)) + require.False(resp.Allowed) +} + +// Test the Test method defaults to deny with blacklist ACLs. +func TestIntentionTest_defaultACLAllow(t *testing.T) { + t.Parallel() + + require := require.New(t) + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "allow" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Test + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + Test: &structs.IntentionQueryTest{ + SourceNS: "foo", + SourceName: "bar", + DestinationNS: "foo", + DestinationName: "qux", + SourceType: structs.IntentionSourceConsul, + }, + } + req.Token = "root" + var resp structs.IntentionQueryTestResponse + require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp)) + require.True(resp.Allowed) +} + +// Test the Test method requires service:read permission. 
+func TestIntentionTest_aclDeny(t *testing.T) { + t.Parallel() + + require := require.New(t) + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create an ACL with service read permissions. This will grant permission. + var token string + { + var rules = ` +service "bar" { + policy = "read" +}` + + req := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: rules, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + require.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token)) + } + + // Test + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + Test: &structs.IntentionQueryTest{ + SourceNS: "foo", + SourceName: "qux", + DestinationNS: "foo", + DestinationName: "baz", + SourceType: structs.IntentionSourceConsul, + }, + } + req.Token = token + var resp structs.IntentionQueryTestResponse + err := msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp) + require.True(acl.IsErrPermissionDenied(err)) +} + +// Test the Test method returns allow/deny properly. +func TestIntentionTest_match(t *testing.T) { + t.Parallel() + + require := require.New(t) + dir1, s1 := testServerWithConfig(t, func(c *Config) { + c.ACLDatacenter = "dc1" + c.ACLMasterToken = "root" + c.ACLDefaultPolicy = "deny" + }) + defer os.RemoveAll(dir1) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + testrpc.WaitForLeader(t, s1.RPC, "dc1") + + // Create an ACL with service read permissions. This will grant permission. 
+ var token string + { + var rules = ` +service "bar" { + policy = "read" +}` + + req := structs.ACLRequest{ + Datacenter: "dc1", + Op: structs.ACLSet, + ACL: structs.ACL{ + Name: "User token", + Type: structs.ACLTypeClient, + Rules: rules, + }, + WriteRequest: structs.WriteRequest{Token: "root"}, + } + require.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token)) + } + + // Create some intentions + { + insert := [][]string{ + {"foo", "*", "foo", "*"}, + {"foo", "*", "foo", "bar"}, + {"bar", "*", "foo", "bar"}, // duplicate destination different source + } + + for _, v := range insert { + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: &structs.Intention{ + SourceNS: v[0], + SourceName: v[1], + DestinationNS: v[2], + DestinationName: v[3], + Action: structs.IntentionActionAllow, + }, + } + ixn.WriteRequest.Token = "root" + + // Create + var reply string + require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Apply", &ixn, &reply)) + } + } + + // Test + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + Test: &structs.IntentionQueryTest{ + SourceNS: "foo", + SourceName: "qux", + DestinationNS: "foo", + DestinationName: "bar", + SourceType: structs.IntentionSourceConsul, + }, + } + req.Token = token + var resp structs.IntentionQueryTestResponse + require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp)) + require.True(resp.Allowed) + + // Test no match for sanity + { + req := &structs.IntentionQueryRequest{ + Datacenter: "dc1", + Test: &structs.IntentionQueryTest{ + SourceNS: "baz", + SourceName: "qux", + DestinationNS: "foo", + DestinationName: "bar", + SourceType: structs.IntentionSourceConsul, + }, + } + req.Token = token + var resp structs.IntentionQueryTestResponse + require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp)) + require.False(resp.Allowed) + } +} diff --git a/agent/structs/intention.go b/agent/structs/intention.go index 5c6b1e991..34d15d997 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -261,6 +261,10 @@ type IntentionQueryRequest struct { // resolving wildcards. Match *IntentionQueryMatch + // Test is non-nil if we're performing a test query. A test will + // return allowed/deny based on an exact match. + Test *IntentionQueryTest + // Options for queries QueryOptions } @@ -313,6 +317,30 @@ type IntentionMatchEntry struct { Name string } +// IntentionQueryTest are the parameters for performing a test request. +type IntentionQueryTest struct { + // SourceNS, SourceName, DestinationNS, and DestinationName are the + // source and namespace, respectively, for the test. These must be + // exact values. + SourceNS, SourceName string + DestinationNS, DestinationName string + + // SourceType is the type of the value for the source. + SourceType IntentionSourceType +} + +// GetACLPrefix returns the prefix to look up the ACL policy for this +// request, and a boolean noting whether the prefix is valid to check +// or not. You must check the ok value before using the prefix. +func (q *IntentionQueryTest) GetACLPrefix() (string, bool) { + return q.DestinationName, q.DestinationName != "" +} + +// IntentionQueryTestResponse is the response for a test request. +type IntentionQueryTestResponse struct { + Allowed bool +} + // IntentionPrecedenceSorter takes a list of intentions and sorts them // based on the match precedence rules for intentions. The intentions // closer to the head of the list have higher precedence. i.e. 
index 0 has From b02502be736261d09f8e376fa5cf50e207d1b7da Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 10 May 2018 22:37:02 -0700 Subject: [PATCH 265/539] agent: comments to point to differing logic --- agent/agent_endpoint.go | 3 +++ agent/consul/intention_endpoint.go | 4 ++++ 2 files changed, 7 insertions(+) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 0342d1fd4..b52abc732 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -1106,6 +1106,9 @@ func (s *HTTPServer) agentLocalBlockingQuery(resp http.ResponseWriter, hash stri // AgentConnectAuthorize // // POST /v1/agent/connect/authorize +// +// Note: when this logic changes, consider if the Intention.Test RPC method +// also needs to be updated. func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.Request) (interface{}, error) { // Fetch the token var token string diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 7662ea852..2bae56f5e 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -256,6 +256,10 @@ func (s *Intention) Match( // Test tests a source/destination and returns whether it would be allowed // or denied based on the current ACL configuration. +// +// Note: Whenever the logic for this method is changed, you should take +// a look at the agent authorize endpoint (agent/agent_endpoint.go) since +// the logic there is similar. func (s *Intention) Test( args *structs.IntentionQueryRequest, reply *structs.IntentionQueryTestResponse) error { From a48ff5431854062fdc9491722ff07fb835bb0974 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 10 May 2018 22:38:13 -0700 Subject: [PATCH 266/539] agent/consul: forward request if necessary --- agent/consul/intention_endpoint.go | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 2bae56f5e..378565241 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -263,6 +263,11 @@ func (s *Intention) Match( func (s *Intention) Test( args *structs.IntentionQueryRequest, reply *structs.IntentionQueryTestResponse) error { + // Forward maybe + if done, err := s.srv.forward("Intention.Test", args, args, reply); done { + return err + } + // Get the test args, and defensively guard against nil query := args.Test if query == nil { From b961bab08cde31abe68a55179a9c65ae11db1827 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Thu, 10 May 2018 22:52:57 -0700 Subject: [PATCH 267/539] agent: implement HTTP endpoint --- agent/http_oss.go | 1 + agent/intentions_endpoint.go | 53 ++++++++++++++++++ agent/intentions_endpoint_test.go | 91 +++++++++++++++++++++++++++++++ 3 files changed, 145 insertions(+) diff --git a/agent/http_oss.go b/agent/http_oss.go index 9b9857e40..92c7bf4c0 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -48,6 +48,7 @@ func init() { registerEndpoint("/v1/connect/ca/roots", []string{"GET"}, (*HTTPServer).ConnectCARoots) registerEndpoint("/v1/connect/intentions", []string{"GET", "POST"}, (*HTTPServer).IntentionEndpoint) registerEndpoint("/v1/connect/intentions/match", []string{"GET"}, (*HTTPServer).IntentionMatch) + registerEndpoint("/v1/connect/intentions/test", []string{"GET"}, (*HTTPServer).IntentionTest) registerEndpoint("/v1/connect/intentions/", []string{"GET", "PUT", "DELETE"}, (*HTTPServer).IntentionSpecific) registerEndpoint("/v1/coordinate/datacenters", []string{"GET"}, 
(*HTTPServer).CoordinateDatacenters) registerEndpoint("/v1/coordinate/nodes", []string{"GET"}, (*HTTPServer).CoordinateNodes) diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go index 5a2e0e809..cb846bc19 100644 --- a/agent/intentions_endpoint.go +++ b/agent/intentions_endpoint.go @@ -122,6 +122,59 @@ func (s *HTTPServer) IntentionMatch(resp http.ResponseWriter, req *http.Request) return response, nil } +// GET /v1/connect/intentions/test +func (s *HTTPServer) IntentionTest(resp http.ResponseWriter, req *http.Request) (interface{}, error) { + // Prepare args + args := &structs.IntentionQueryRequest{Test: &structs.IntentionQueryTest{}} + if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { + return nil, nil + } + + q := req.URL.Query() + + // Set the source type if set + args.Test.SourceType = structs.IntentionSourceConsul + if sourceType, ok := q["source-type"]; ok && len(sourceType) > 0 { + args.Test.SourceType = structs.IntentionSourceType(sourceType[0]) + } + + // Extract the source/destination + source, ok := q["source"] + if !ok || len(source) != 1 { + return nil, fmt.Errorf("required query parameter 'source' not set") + } + destination, ok := q["destination"] + if !ok || len(destination) != 1 { + return nil, fmt.Errorf("required query parameter 'destination' not set") + } + + // We parse them the same way as matches to extract namespace/name + args.Test.SourceName = source[0] + if args.Test.SourceType == structs.IntentionSourceConsul { + entry, err := parseIntentionMatchEntry(source[0]) + if err != nil { + return nil, fmt.Errorf("source %q is invalid: %s", source[0], err) + } + args.Test.SourceNS = entry.Namespace + args.Test.SourceName = entry.Name + } + + // The destination is always in the Consul format + entry, err := parseIntentionMatchEntry(destination[0]) + if err != nil { + return nil, fmt.Errorf("destination %q is invalid: %s", destination[0], err) + } + args.Test.DestinationNS = entry.Namespace + args.Test.DestinationName = entry.Name + + var reply structs.IntentionQueryTestResponse + if err := s.agent.RPC("Intention.Test", args, &reply); err != nil { + return nil, err + } + + return &reply, nil +} + // IntentionSpecific handles the endpoint for /v1/connection/intentions/:id func (s *HTTPServer) IntentionSpecific(resp http.ResponseWriter, req *http.Request) (interface{}, error) { id := strings.TrimPrefix(req.URL.Path, "/v1/connect/intentions/") diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go index d4d68f26c..e669bcf5f 100644 --- a/agent/intentions_endpoint_test.go +++ b/agent/intentions_endpoint_test.go @@ -9,6 +9,7 @@ import ( "github.com/hashicorp/consul/agent/structs" "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" ) func TestIntentionsList_empty(t *testing.T) { @@ -180,6 +181,96 @@ func TestIntentionsMatch_noName(t *testing.T) { assert.Nil(obj) } +func TestIntentionsTest_basic(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Create some intentions + { + insert := [][]string{ + {"foo", "*", "foo", "*"}, + {"foo", "*", "foo", "bar"}, + {"bar", "*", "foo", "bar"}, + } + + for _, v := range insert { + ixn := structs.IntentionRequest{ + Datacenter: "dc1", + Op: structs.IntentionOpCreate, + Intention: structs.TestIntention(t), + } + ixn.Intention.SourceNS = v[0] + ixn.Intention.SourceName = v[1] + ixn.Intention.DestinationNS = v[2] + ixn.Intention.DestinationName = v[3] + ixn.Intention.Action = 
structs.IntentionActionDeny + + // Create + var reply string + require.Nil(a.RPC("Intention.Apply", &ixn, &reply)) + } + } + + // Request matching intention + { + req, _ := http.NewRequest("GET", + "/v1/connect/intentions/test?source=foo/bar&destination=foo/baz", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionTest(resp, req) + require.Nil(err) + value := obj.(*structs.IntentionQueryTestResponse) + require.False(value.Allowed) + } + + // Request non-matching intention + { + req, _ := http.NewRequest("GET", + "/v1/connect/intentions/test?source=foo/bar&destination=bar/qux", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionTest(resp, req) + require.Nil(err) + value := obj.(*structs.IntentionQueryTestResponse) + require.True(value.Allowed) + } +} + +func TestIntentionsTest_noSource(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Request + req, _ := http.NewRequest("GET", + "/v1/connect/intentions/test?destination=B", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionTest(resp, req) + require.NotNil(err) + require.Contains(err.Error(), "'source' not set") + require.Nil(obj) +} + +func TestIntentionsTest_noDestination(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := NewTestAgent(t.Name(), "") + defer a.Shutdown() + + // Request + req, _ := http.NewRequest("GET", + "/v1/connect/intentions/test?source=B", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.IntentionTest(resp, req) + require.NotNil(err) + require.Contains(err.Error(), "'destination' not set") + require.Nil(obj) +} + func TestIntentionsCreate_good(t *testing.T) { t.Parallel() From b5b29cd6afb594cff724d5d22e8211ce61ab5ef2 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 11 May 2018 09:19:22 -0700 Subject: [PATCH 268/539] agent: rename test to check --- agent/consul/intention_endpoint.go | 14 +++--- agent/consul/intention_endpoint_test.go | 64 ++++++++++++------------- agent/http_oss.go | 2 +- agent/intentions_endpoint.go | 24 +++++----- agent/intentions_endpoint_test.go | 18 +++---- agent/structs/intention.go | 14 +++--- 6 files changed, 68 insertions(+), 68 deletions(-) diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index 378565241..a0d88352f 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -254,24 +254,24 @@ func (s *Intention) Match( ) } -// Test tests a source/destination and returns whether it would be allowed +// Check tests a source/destination and returns whether it would be allowed // or denied based on the current ACL configuration. // // Note: Whenever the logic for this method is changed, you should take // a look at the agent authorize endpoint (agent/agent_endpoint.go) since // the logic there is similar. 
-func (s *Intention) Test( +func (s *Intention) Check( args *structs.IntentionQueryRequest, - reply *structs.IntentionQueryTestResponse) error { + reply *structs.IntentionQueryCheckResponse) error { // Forward maybe - if done, err := s.srv.forward("Intention.Test", args, args, reply); done { + if done, err := s.srv.forward("Intention.Check", args, args, reply); done { return err } // Get the test args, and defensively guard against nil - query := args.Test + query := args.Check if query == nil { - return errors.New("Test must be specified on args") + return errors.New("Check must be specified on args") } // Build the URI @@ -322,7 +322,7 @@ func (s *Intention) Test( return errors.New("internal error loading matches") } - // Test the authorization for each match + // Check the authorization for each match for _, ixn := range matches[0] { if auth, ok := uri.Authorize(ixn); ok { reply.Allowed = auth diff --git a/agent/consul/intention_endpoint_test.go b/agent/consul/intention_endpoint_test.go index b1f51a714..29db41f44 100644 --- a/agent/consul/intention_endpoint_test.go +++ b/agent/consul/intention_endpoint_test.go @@ -1009,8 +1009,8 @@ service "bar" { } } -// Test the Test method defaults to allow with no ACL set. -func TestIntentionTest_defaultNoACL(t *testing.T) { +// Test the Check method defaults to allow with no ACL set. +func TestIntentionCheck_defaultNoACL(t *testing.T) { t.Parallel() require := require.New(t) @@ -1025,7 +1025,7 @@ func TestIntentionTest_defaultNoACL(t *testing.T) { // Test req := &structs.IntentionQueryRequest{ Datacenter: "dc1", - Test: &structs.IntentionQueryTest{ + Check: &structs.IntentionQueryCheck{ SourceNS: "foo", SourceName: "bar", DestinationNS: "foo", @@ -1033,13 +1033,13 @@ func TestIntentionTest_defaultNoACL(t *testing.T) { SourceType: structs.IntentionSourceConsul, }, } - var resp structs.IntentionQueryTestResponse - require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp)) + var resp structs.IntentionQueryCheckResponse + require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Check", req, &resp)) require.True(resp.Allowed) } -// Test the Test method defaults to deny with whitelist ACLs. -func TestIntentionTest_defaultACLDeny(t *testing.T) { +// Test the Check method defaults to deny with whitelist ACLs. +func TestIntentionCheck_defaultACLDeny(t *testing.T) { t.Parallel() require := require.New(t) @@ -1055,10 +1055,10 @@ func TestIntentionTest_defaultACLDeny(t *testing.T) { testrpc.WaitForLeader(t, s1.RPC, "dc1") - // Test + // Check req := &structs.IntentionQueryRequest{ Datacenter: "dc1", - Test: &structs.IntentionQueryTest{ + Check: &structs.IntentionQueryCheck{ SourceNS: "foo", SourceName: "bar", DestinationNS: "foo", @@ -1067,13 +1067,13 @@ func TestIntentionTest_defaultACLDeny(t *testing.T) { }, } req.Token = "root" - var resp structs.IntentionQueryTestResponse - require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp)) + var resp structs.IntentionQueryCheckResponse + require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Check", req, &resp)) require.False(resp.Allowed) } -// Test the Test method defaults to deny with blacklist ACLs. -func TestIntentionTest_defaultACLAllow(t *testing.T) { +// Test the Check method defaults to deny with blacklist ACLs. 
+func TestIntentionCheck_defaultACLAllow(t *testing.T) { t.Parallel() require := require.New(t) @@ -1089,10 +1089,10 @@ func TestIntentionTest_defaultACLAllow(t *testing.T) { testrpc.WaitForLeader(t, s1.RPC, "dc1") - // Test + // Check req := &structs.IntentionQueryRequest{ Datacenter: "dc1", - Test: &structs.IntentionQueryTest{ + Check: &structs.IntentionQueryCheck{ SourceNS: "foo", SourceName: "bar", DestinationNS: "foo", @@ -1101,13 +1101,13 @@ func TestIntentionTest_defaultACLAllow(t *testing.T) { }, } req.Token = "root" - var resp structs.IntentionQueryTestResponse - require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp)) + var resp structs.IntentionQueryCheckResponse + require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Check", req, &resp)) require.True(resp.Allowed) } -// Test the Test method requires service:read permission. -func TestIntentionTest_aclDeny(t *testing.T) { +// Test the Check method requires service:read permission. +func TestIntentionCheck_aclDeny(t *testing.T) { t.Parallel() require := require.New(t) @@ -1144,10 +1144,10 @@ service "bar" { require.Nil(msgpackrpc.CallWithCodec(codec, "ACL.Apply", &req, &token)) } - // Test + // Check req := &structs.IntentionQueryRequest{ Datacenter: "dc1", - Test: &structs.IntentionQueryTest{ + Check: &structs.IntentionQueryCheck{ SourceNS: "foo", SourceName: "qux", DestinationNS: "foo", @@ -1156,13 +1156,13 @@ service "bar" { }, } req.Token = token - var resp structs.IntentionQueryTestResponse - err := msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp) + var resp structs.IntentionQueryCheckResponse + err := msgpackrpc.CallWithCodec(codec, "Intention.Check", req, &resp) require.True(acl.IsErrPermissionDenied(err)) } -// Test the Test method returns allow/deny properly. -func TestIntentionTest_match(t *testing.T) { +// Test the Check method returns allow/deny properly. 
+func TestIntentionCheck_match(t *testing.T) { t.Parallel() require := require.New(t) @@ -1227,10 +1227,10 @@ service "bar" { } } - // Test + // Check req := &structs.IntentionQueryRequest{ Datacenter: "dc1", - Test: &structs.IntentionQueryTest{ + Check: &structs.IntentionQueryCheck{ SourceNS: "foo", SourceName: "qux", DestinationNS: "foo", @@ -1239,15 +1239,15 @@ service "bar" { }, } req.Token = token - var resp structs.IntentionQueryTestResponse - require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp)) + var resp structs.IntentionQueryCheckResponse + require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Check", req, &resp)) require.True(resp.Allowed) // Test no match for sanity { req := &structs.IntentionQueryRequest{ Datacenter: "dc1", - Test: &structs.IntentionQueryTest{ + Check: &structs.IntentionQueryCheck{ SourceNS: "baz", SourceName: "qux", DestinationNS: "foo", @@ -1256,8 +1256,8 @@ service "bar" { }, } req.Token = token - var resp structs.IntentionQueryTestResponse - require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Test", req, &resp)) + var resp structs.IntentionQueryCheckResponse + require.Nil(msgpackrpc.CallWithCodec(codec, "Intention.Check", req, &resp)) require.False(resp.Allowed) } } diff --git a/agent/http_oss.go b/agent/http_oss.go index 92c7bf4c0..ac5eff335 100644 --- a/agent/http_oss.go +++ b/agent/http_oss.go @@ -48,7 +48,7 @@ func init() { registerEndpoint("/v1/connect/ca/roots", []string{"GET"}, (*HTTPServer).ConnectCARoots) registerEndpoint("/v1/connect/intentions", []string{"GET", "POST"}, (*HTTPServer).IntentionEndpoint) registerEndpoint("/v1/connect/intentions/match", []string{"GET"}, (*HTTPServer).IntentionMatch) - registerEndpoint("/v1/connect/intentions/test", []string{"GET"}, (*HTTPServer).IntentionTest) + registerEndpoint("/v1/connect/intentions/check", []string{"GET"}, (*HTTPServer).IntentionCheck) registerEndpoint("/v1/connect/intentions/", []string{"GET", "PUT", "DELETE"}, (*HTTPServer).IntentionSpecific) registerEndpoint("/v1/coordinate/datacenters", []string{"GET"}, (*HTTPServer).CoordinateDatacenters) registerEndpoint("/v1/coordinate/nodes", []string{"GET"}, (*HTTPServer).CoordinateNodes) diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go index cb846bc19..80ddedf24 100644 --- a/agent/intentions_endpoint.go +++ b/agent/intentions_endpoint.go @@ -123,9 +123,9 @@ func (s *HTTPServer) IntentionMatch(resp http.ResponseWriter, req *http.Request) } // GET /v1/connect/intentions/test -func (s *HTTPServer) IntentionTest(resp http.ResponseWriter, req *http.Request) (interface{}, error) { +func (s *HTTPServer) IntentionCheck(resp http.ResponseWriter, req *http.Request) (interface{}, error) { // Prepare args - args := &structs.IntentionQueryRequest{Test: &structs.IntentionQueryTest{}} + args := &structs.IntentionQueryRequest{Check: &structs.IntentionQueryCheck{}} if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { return nil, nil } @@ -133,9 +133,9 @@ func (s *HTTPServer) IntentionTest(resp http.ResponseWriter, req *http.Request) q := req.URL.Query() // Set the source type if set - args.Test.SourceType = structs.IntentionSourceConsul + args.Check.SourceType = structs.IntentionSourceConsul if sourceType, ok := q["source-type"]; ok && len(sourceType) > 0 { - args.Test.SourceType = structs.IntentionSourceType(sourceType[0]) + args.Check.SourceType = structs.IntentionSourceType(sourceType[0]) } // Extract the source/destination @@ -149,14 +149,14 @@ func (s *HTTPServer) IntentionTest(resp 
http.ResponseWriter, req *http.Request) } // We parse them the same way as matches to extract namespace/name - args.Test.SourceName = source[0] - if args.Test.SourceType == structs.IntentionSourceConsul { + args.Check.SourceName = source[0] + if args.Check.SourceType == structs.IntentionSourceConsul { entry, err := parseIntentionMatchEntry(source[0]) if err != nil { return nil, fmt.Errorf("source %q is invalid: %s", source[0], err) } - args.Test.SourceNS = entry.Namespace - args.Test.SourceName = entry.Name + args.Check.SourceNS = entry.Namespace + args.Check.SourceName = entry.Name } // The destination is always in the Consul format @@ -164,11 +164,11 @@ func (s *HTTPServer) IntentionTest(resp http.ResponseWriter, req *http.Request) if err != nil { return nil, fmt.Errorf("destination %q is invalid: %s", destination[0], err) } - args.Test.DestinationNS = entry.Namespace - args.Test.DestinationName = entry.Name + args.Check.DestinationNS = entry.Namespace + args.Check.DestinationName = entry.Name - var reply structs.IntentionQueryTestResponse - if err := s.agent.RPC("Intention.Test", args, &reply); err != nil { + var reply structs.IntentionQueryCheckResponse + if err := s.agent.RPC("Intention.Check", args, &reply); err != nil { return nil, err } diff --git a/agent/intentions_endpoint_test.go b/agent/intentions_endpoint_test.go index e669bcf5f..991ab9017 100644 --- a/agent/intentions_endpoint_test.go +++ b/agent/intentions_endpoint_test.go @@ -181,7 +181,7 @@ func TestIntentionsMatch_noName(t *testing.T) { assert.Nil(obj) } -func TestIntentionsTest_basic(t *testing.T) { +func TestIntentionsCheck_basic(t *testing.T) { t.Parallel() require := require.New(t) @@ -219,9 +219,9 @@ func TestIntentionsTest_basic(t *testing.T) { req, _ := http.NewRequest("GET", "/v1/connect/intentions/test?source=foo/bar&destination=foo/baz", nil) resp := httptest.NewRecorder() - obj, err := a.srv.IntentionTest(resp, req) + obj, err := a.srv.IntentionCheck(resp, req) require.Nil(err) - value := obj.(*structs.IntentionQueryTestResponse) + value := obj.(*structs.IntentionQueryCheckResponse) require.False(value.Allowed) } @@ -230,14 +230,14 @@ func TestIntentionsTest_basic(t *testing.T) { req, _ := http.NewRequest("GET", "/v1/connect/intentions/test?source=foo/bar&destination=bar/qux", nil) resp := httptest.NewRecorder() - obj, err := a.srv.IntentionTest(resp, req) + obj, err := a.srv.IntentionCheck(resp, req) require.Nil(err) - value := obj.(*structs.IntentionQueryTestResponse) + value := obj.(*structs.IntentionQueryCheckResponse) require.True(value.Allowed) } } -func TestIntentionsTest_noSource(t *testing.T) { +func TestIntentionsCheck_noSource(t *testing.T) { t.Parallel() require := require.New(t) @@ -248,13 +248,13 @@ func TestIntentionsTest_noSource(t *testing.T) { req, _ := http.NewRequest("GET", "/v1/connect/intentions/test?destination=B", nil) resp := httptest.NewRecorder() - obj, err := a.srv.IntentionTest(resp, req) + obj, err := a.srv.IntentionCheck(resp, req) require.NotNil(err) require.Contains(err.Error(), "'source' not set") require.Nil(obj) } -func TestIntentionsTest_noDestination(t *testing.T) { +func TestIntentionsCheck_noDestination(t *testing.T) { t.Parallel() require := require.New(t) @@ -265,7 +265,7 @@ func TestIntentionsTest_noDestination(t *testing.T) { req, _ := http.NewRequest("GET", "/v1/connect/intentions/test?source=B", nil) resp := httptest.NewRecorder() - obj, err := a.srv.IntentionTest(resp, req) + obj, err := a.srv.IntentionCheck(resp, req) require.NotNil(err) 
require.Contains(err.Error(), "'destination' not set") require.Nil(obj) diff --git a/agent/structs/intention.go b/agent/structs/intention.go index 34d15d997..19a6402ab 100644 --- a/agent/structs/intention.go +++ b/agent/structs/intention.go @@ -261,9 +261,9 @@ type IntentionQueryRequest struct { // resolving wildcards. Match *IntentionQueryMatch - // Test is non-nil if we're performing a test query. A test will + // Check is non-nil if we're performing a test query. A test will // return allowed/deny based on an exact match. - Test *IntentionQueryTest + Check *IntentionQueryCheck // Options for queries QueryOptions @@ -317,8 +317,8 @@ type IntentionMatchEntry struct { Name string } -// IntentionQueryTest are the parameters for performing a test request. -type IntentionQueryTest struct { +// IntentionQueryCheck are the parameters for performing a test request. +type IntentionQueryCheck struct { // SourceNS, SourceName, DestinationNS, and DestinationName are the // source and namespace, respectively, for the test. These must be // exact values. @@ -332,12 +332,12 @@ type IntentionQueryTest struct { // GetACLPrefix returns the prefix to look up the ACL policy for this // request, and a boolean noting whether the prefix is valid to check // or not. You must check the ok value before using the prefix. -func (q *IntentionQueryTest) GetACLPrefix() (string, bool) { +func (q *IntentionQueryCheck) GetACLPrefix() (string, bool) { return q.DestinationName, q.DestinationName != "" } -// IntentionQueryTestResponse is the response for a test request. -type IntentionQueryTestResponse struct { +// IntentionQueryCheckResponse is the response for a test request. +type IntentionQueryCheckResponse struct { Allowed bool } From bf99a7f54ac8d0d50e4a5786faa43db853ddd3ae Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 11 May 2018 17:19:54 -0700 Subject: [PATCH 269/539] api: IntentionCheck --- api/agent_test.go | 2 +- api/connect_intention.go | 39 ++++++++++++++++++++++++++++ api/connect_intention_test.go | 49 +++++++++++++++++++++++++++++++++++ 3 files changed, 89 insertions(+), 1 deletion(-) diff --git a/api/agent_test.go b/api/agent_test.go index ad236ba3a..1066a0b42 100644 --- a/api/agent_test.go +++ b/api/agent_test.go @@ -1159,7 +1159,7 @@ func TestAPI_AgentConnectProxyConfig(t *testing.T) { TargetServiceName: "foo", ContentHash: "93baee1d838888ae", ExecMode: "daemon", - Command: []string{"consul connect proxy"}, + Command: []string{"consul", "connect", "proxy"}, Config: map[string]interface{}{ "bind_address": "127.0.0.1", "bind_port": float64(20000), diff --git a/api/connect_intention.go b/api/connect_intention.go index aa2f82d3d..c28c55de1 100644 --- a/api/connect_intention.go +++ b/api/connect_intention.go @@ -83,6 +83,18 @@ const ( IntentionMatchDestination IntentionMatchType = "destination" ) +// IntentionCheck are the arguments for the intention check API. For +// more documentation see the IntentionCheck function. +type IntentionCheck struct { + // Source and Destination are the source and destination values to + // check. The destination is always a Consul service, but the source + // may be other values as defined by the SourceType. + Source, Destination string + + // SourceType is the type of the value for the source. + SourceType IntentionSourceType +} + // Intentions returns the list of intentions. 
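The IntentionCheck argument struct above pairs with the IntentionCheck client method added a little further below. A minimal, hypothetical sketch of calling it from application code — the default client config and the "web"/"db" service names are assumptions, not part of the patch:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Assumes a local agent reachable via the default client configuration.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask whether "web" would currently be allowed to connect to "db",
	// given the intentions known to the agent.
	allowed, _, err := client.Connect().IntentionCheck(&api.IntentionCheck{
		Source:      "web",
		Destination: "db",
		SourceType:  api.IntentionSourceConsul,
	}, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("allowed:", allowed)
}
```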
func (h *Connect) Intentions(q *QueryOptions) ([]*Intention, *QueryMeta, error) { r := h.c.newRequest("GET", "/v1/connect/intentions") @@ -156,6 +168,33 @@ func (h *Connect) IntentionMatch(args *IntentionMatch, q *QueryOptions) (map[str return out, qm, nil } +// IntentionCheck returns whether a given source/destination would be allowed +// or not given the current set of intentions and the configuration of Consul. +func (h *Connect) IntentionCheck(args *IntentionCheck, q *QueryOptions) (bool, *QueryMeta, error) { + r := h.c.newRequest("GET", "/v1/connect/intentions/check") + r.setQueryOptions(q) + r.params.Set("source", args.Source) + r.params.Set("destination", args.Destination) + if args.SourceType != "" { + r.params.Set("source-type", string(args.SourceType)) + } + rtt, resp, err := requireOK(h.c.doRequest(r)) + if err != nil { + return false, nil, err + } + defer resp.Body.Close() + + qm := &QueryMeta{} + parseQueryMeta(resp, qm) + qm.RequestTime = rtt + + var out struct{ Allowed bool } + if err := decodeBody(resp, &out); err != nil { + return false, nil, err + } + return out.Allowed, qm, nil +} + // IntentionCreate will create a new intention. The ID in the given // structure must be empty and a generate ID will be returned on // success. diff --git a/api/connect_intention_test.go b/api/connect_intention_test.go index 0edcf4c49..e6e76071e 100644 --- a/api/connect_intention_test.go +++ b/api/connect_intention_test.go @@ -87,6 +87,55 @@ func TestAPI_ConnectIntentionMatch(t *testing.T) { require.Equal(expected, actual) } +func TestAPI_ConnectIntentionCheck(t *testing.T) { + t.Parallel() + + require := require.New(t) + c, s := makeClient(t) + defer s.Stop() + + connect := c.Connect() + + // Create + { + insert := [][]string{ + {"foo", "*", "foo", "bar"}, + } + + for _, v := range insert { + ixn := testIntention() + ixn.SourceNS = v[0] + ixn.SourceName = v[1] + ixn.DestinationNS = v[2] + ixn.DestinationName = v[3] + ixn.Action = IntentionActionDeny + id, _, err := connect.IntentionCreate(ixn, nil) + require.Nil(err) + require.NotEmpty(id) + } + } + + // Match it + { + result, _, err := connect.IntentionCheck(&IntentionCheck{ + Source: "foo/qux", + Destination: "foo/bar", + }, nil) + require.Nil(err) + require.False(result) + } + + // Match it (non-matching) + { + result, _, err := connect.IntentionCheck(&IntentionCheck{ + Source: "bar/qux", + Destination: "foo/bar", + }, nil) + require.Nil(err) + require.True(result) + } +} + func testIntention() *Intention { return &Intention{ SourceNS: "eng", From a1a7eaa8767e7f201802f9bb900e380a016c4688 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 11 May 2018 19:47:26 -0700 Subject: [PATCH 270/539] command/intention/create --- api/connect_intention.go | 16 ++ command/commands_oss.go | 4 + command/intention/create/create.go | 193 ++++++++++++++++++++++++ command/intention/create/create_test.go | 188 +++++++++++++++++++++++ command/intention/intention.go | 48 ++++++ command/intention/intention_test.go | 13 ++ 6 files changed, 462 insertions(+) create mode 100644 command/intention/create/create.go create mode 100644 command/intention/create/create_test.go create mode 100644 command/intention/intention.go create mode 100644 command/intention/intention_test.go diff --git a/api/connect_intention.go b/api/connect_intention.go index c28c55de1..e43506a8a 100644 --- a/api/connect_intention.go +++ b/api/connect_intention.go @@ -1,6 +1,7 @@ package api import ( + "fmt" "time" ) @@ -50,6 +51,21 @@ type Intention struct { ModifyIndex uint64 } +// 
String returns human-friendly output describing ths intention. +func (i *Intention) String() string { + source := i.SourceName + if i.SourceNS != "" { + source = i.SourceNS + "/" + source + } + + dest := i.DestinationName + if i.DestinationNS != "" { + dest = i.DestinationNS + "/" + dest + } + + return fmt.Sprintf("%s => %s (%s)", source, dest, i.Action) +} + // IntentionAction is the action that the intention represents. This // can be "allow" or "deny" to whitelist or blacklist intentions. type IntentionAction string diff --git a/command/commands_oss.go b/command/commands_oss.go index c1e3e794a..79636c598 100644 --- a/command/commands_oss.go +++ b/command/commands_oss.go @@ -12,6 +12,8 @@ import ( "github.com/hashicorp/consul/command/exec" "github.com/hashicorp/consul/command/forceleave" "github.com/hashicorp/consul/command/info" + "github.com/hashicorp/consul/command/intention" + ixncreate "github.com/hashicorp/consul/command/intention/create" "github.com/hashicorp/consul/command/join" "github.com/hashicorp/consul/command/keygen" "github.com/hashicorp/consul/command/keyring" @@ -66,6 +68,8 @@ func init() { Register("exec", func(ui cli.Ui) (cli.Command, error) { return exec.New(ui, MakeShutdownCh()), nil }) Register("force-leave", func(ui cli.Ui) (cli.Command, error) { return forceleave.New(ui), nil }) Register("info", func(ui cli.Ui) (cli.Command, error) { return info.New(ui), nil }) + Register("intention", func(ui cli.Ui) (cli.Command, error) { return intention.New(), nil }) + Register("intention create", func(ui cli.Ui) (cli.Command, error) { return ixncreate.New(), nil }) Register("join", func(ui cli.Ui) (cli.Command, error) { return join.New(ui), nil }) Register("keygen", func(ui cli.Ui) (cli.Command, error) { return keygen.New(ui), nil }) Register("keyring", func(ui cli.Ui) (cli.Command, error) { return keyring.New(ui), nil }) diff --git a/command/intention/create/create.go b/command/intention/create/create.go new file mode 100644 index 000000000..b847117a1 --- /dev/null +++ b/command/intention/create/create.go @@ -0,0 +1,193 @@ +package create + +import ( + "encoding/json" + "flag" + "fmt" + "io" + "os" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/command/flags" + "github.com/mitchellh/cli" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + // flags + flagAllow bool + flagDeny bool + flagFile bool + flagReplace bool + flagMeta map[string]string + + // testStdin is the input for testing. + testStdin io.Reader +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + c.flags.BoolVar(&c.flagAllow, "allow", false, + "Create an intention that allows when matched.") + c.flags.BoolVar(&c.flagDeny, "deny", false, + "Create an intention that denies when matched.") + c.flags.BoolVar(&c.flagFile, "file", false, + "Read intention data from one or more files.") + c.flags.BoolVar(&c.flagReplace, "replace", false, + "Replace matching intentions.") + c.flags.Var((*flags.FlagMapValue)(&c.flagMeta), "meta", + "Metadata to set on the intention, formatted as key=value. 
This flag "+ + "may be specified multiple times to set multiple meta fields.") + + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.ServerFlags()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + return 1 + } + + // Default to allow + if !c.flagAllow && !c.flagDeny { + c.flagAllow = true + } + + // If both are specified it is an error + if c.flagAllow && c.flagDeny { + c.UI.Error("Only one of -allow or -deny may be specified.") + return 1 + } + + // Check for arg validation + args = c.flags.Args() + ixns, err := c.ixnsFromArgs(args) + if err != nil { + c.UI.Error(fmt.Sprintf("Error: %s", err)) + return 1 + } + + // Create and test the HTTP client + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err)) + return 1 + } + + // Go through and create each intention + for _, ixn := range ixns { + _, _, err := client.Connect().IntentionCreate(ixn, nil) + if err != nil { + c.UI.Error(fmt.Sprintf("Error creating intention %q: %s", ixn, err)) + return 1 + } + + c.UI.Output(fmt.Sprintf("Created: %s", ixn)) + } + + return 0 +} + +// ixnsFromArgs returns the set of intentions to create based on the arguments +// given and the flags set. This will call ixnsFromFiles if the -file flag +// was set. +func (c *cmd) ixnsFromArgs(args []string) ([]*api.Intention, error) { + // If we're in file mode, load from files + if c.flagFile { + return c.ixnsFromFiles(args) + } + + // From args we require exactly two + if len(args) != 2 { + return nil, fmt.Errorf("Must specify two arguments: source and destination") + } + + return []*api.Intention{&api.Intention{ + SourceName: args[0], + DestinationName: args[1], + SourceType: api.IntentionSourceConsul, + Action: c.ixnAction(), + Meta: c.flagMeta, + }}, nil +} + +func (c *cmd) ixnsFromFiles(args []string) ([]*api.Intention, error) { + var result []*api.Intention + for _, path := range args { + f, err := os.Open(path) + if err != nil { + return nil, err + } + + var ixn api.Intention + err = json.NewDecoder(f).Decode(&ixn) + f.Close() + if err != nil { + return nil, err + } + + result = append(result, &ixn) + } + + return result, nil +} + +// ixnAction returns the api.IntentionAction based on the flag set. +func (c *cmd) ixnAction() api.IntentionAction { + if c.flagAllow { + return api.IntentionActionAllow + } + + return api.IntentionActionDeny +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return c.help +} + +const synopsis = "Create intentions for service connections." +const help = ` +Usage: consul intention create [options] SRC DST +Usage: consul intention create [options] -file FILE... + + Create one or more intentions. The data can be specified as a single + source and destination pair or via a set of files when the "-file" flag + is specified. + + $ consul intention create web db + + To consume data from a set of files: + + $ consul intention create -file one.json two.json + + When specifying the "-file" flag, "-" may be used once to read from stdin: + + $ echo "{ ... }" | consul intention create -file - + + An "allow" intention is created by default (whitelist). To create a + "deny" intention, the "-deny" flag should be specified. + + If a conflicting intention is found, creation will fail. To replace any + conflicting intentions, specify the "-replace" flag. 
This will replace any + conflicting intentions with the intention specified in this command. + Metadata and any other fields of the previous intention will not be + preserved. + + Additional flags and more advanced use cases are detailed below. +` diff --git a/command/intention/create/create_test.go b/command/intention/create/create_test.go new file mode 100644 index 000000000..963a3edc6 --- /dev/null +++ b/command/intention/create/create_test.go @@ -0,0 +1,188 @@ +package create + +import ( + "os" + "strings" + "testing" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/testutil" + "github.com/mitchellh/cli" + "github.com/stretchr/testify/require" +) + +func TestCommand_noTabs(t *testing.T) { + t.Parallel() + if strings.ContainsRune(New(nil).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestCommand_Validation(t *testing.T) { + t.Parallel() + + ui := cli.NewMockUi() + c := New(ui) + + cases := map[string]struct { + args []string + output string + }{ + "-allow and -deny": { + []string{"-allow", "-deny", "foo", "bar"}, + "one of -allow", + }, + } + + for name, tc := range cases { + t.Run(name, func(t *testing.T) { + require := require.New(t) + + c.init() + + // Ensure our buffer is always clear + if ui.ErrorWriter != nil { + ui.ErrorWriter.Reset() + } + if ui.OutputWriter != nil { + ui.OutputWriter.Reset() + } + + require.Equal(1, c.Run(tc.args)) + output := ui.ErrorWriter.String() + require.Contains(output, tc.output) + }) + } +} + +func TestCommand(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "foo", "bar", + } + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + + ixns, _, err := client.Connect().Intentions(nil) + require.NoError(err) + require.Len(ixns, 1) + require.Equal("foo", ixns[0].SourceName) + require.Equal("bar", ixns[0].DestinationName) + require.Equal(api.IntentionActionAllow, ixns[0].Action) +} + +func TestCommand_deny(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-deny", + "foo", "bar", + } + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + + ixns, _, err := client.Connect().Intentions(nil) + require.NoError(err) + require.Len(ixns, 1) + require.Equal("foo", ixns[0].SourceName) + require.Equal("bar", ixns[0].DestinationName) + require.Equal(api.IntentionActionDeny, ixns[0].Action) +} + +func TestCommand_meta(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-meta", "hello=world", + "foo", "bar", + } + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + + ixns, _, err := client.Connect().Intentions(nil) + require.NoError(err) + require.Len(ixns, 1) + require.Equal("foo", ixns[0].SourceName) + require.Equal("bar", ixns[0].DestinationName) + require.Equal(map[string]string{"hello": "world"}, ixns[0].Meta) +} + +func TestCommand_File(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + ui := cli.NewMockUi() + c := New(ui) + 
+ contents := `{ "SourceName": "foo", "DestinationName": "bar", "Action": "allow" }` + f := testutil.TempFile(t, "intention-create-command-file") + defer os.Remove(f.Name()) + if _, err := f.WriteString(contents); err != nil { + t.Fatalf("err: %#v", err) + } + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-file", + f.Name(), + } + + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + + ixns, _, err := client.Connect().Intentions(nil) + require.NoError(err) + require.Len(ixns, 1) + require.Equal("foo", ixns[0].SourceName) + require.Equal("bar", ixns[0].DestinationName) + require.Equal(api.IntentionActionAllow, ixns[0].Action) +} + +func TestCommand_FileNoExist(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-file", + "shouldnotexist.txt", + } + + require.Equal(1, c.Run(args), ui.ErrorWriter.String()) + require.Contains(ui.ErrorWriter.String(), "no such file") +} diff --git a/command/intention/intention.go b/command/intention/intention.go new file mode 100644 index 000000000..767e3ff1b --- /dev/null +++ b/command/intention/intention.go @@ -0,0 +1,48 @@ +package intention + +import ( + "github.com/hashicorp/consul/command/flags" + "github.com/mitchellh/cli" +) + +func New() *cmd { + return &cmd{} +} + +type cmd struct{} + +func (c *cmd) Run(args []string) int { + return cli.RunResultHelp +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return flags.Usage(help, nil) +} + +const synopsis = "Interact with Connect service intentions" +const help = ` +Usage: consul intention [options] [args] + + This command has subcommands for interacting with intentions. Intentions + are the permissions for what services are allowed to communicate via + Connect. Here are some simple examples, and more detailed examples are + available in the subcommands or the documentation. + + Create an intention to allow "web" to talk to "db": + + $ consul intention create web db + + Test whether a "web" is allowed to connect to "db": + + $ consul intention check web db + + Find all intentions for communicating to the "db" service: + + $ consul intention match db + + For more examples, ask for subcommand help or view the documentation. 
+` diff --git a/command/intention/intention_test.go b/command/intention/intention_test.go new file mode 100644 index 000000000..e697f537f --- /dev/null +++ b/command/intention/intention_test.go @@ -0,0 +1,13 @@ +package intention + +import ( + "strings" + "testing" +) + +func TestCommand_noTabs(t *testing.T) { + t.Parallel() + if strings.ContainsRune(New().Help(), '\t') { + t.Fatal("help has tabs") + } +} From 77d0360de15e87ee456cf86e506fe211e2c7d913 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 11 May 2018 21:42:46 -0700 Subject: [PATCH 271/539] command/intention/finder: package for finding based on src/dst --- api/connect_intention.go | 31 +++++++++++----- command/intention/finder/finder.go | 46 +++++++++++++++++++++++ command/intention/finder/finder_test.go | 49 +++++++++++++++++++++++++ 3 files changed, 117 insertions(+), 9 deletions(-) create mode 100644 command/intention/finder/finder.go create mode 100644 command/intention/finder/finder_test.go diff --git a/api/connect_intention.go b/api/connect_intention.go index e43506a8a..b7af3163c 100644 --- a/api/connect_intention.go +++ b/api/connect_intention.go @@ -53,17 +53,30 @@ type Intention struct { // String returns human-friendly output describing ths intention. func (i *Intention) String() string { - source := i.SourceName - if i.SourceNS != "" { - source = i.SourceNS + "/" + source + return fmt.Sprintf("%s => %s (%s)", + i.SourceString(), + i.DestinationString(), + i.Action) +} + +// SourceString returns the namespace/name format for the source, or +// just "name" if the namespace is the default namespace. +func (i *Intention) SourceString() string { + return i.partString(i.SourceNS, i.SourceName) +} + +// DestinationString returns the namespace/name format for the source, or +// just "name" if the namespace is the default namespace. +func (i *Intention) DestinationString() string { + return i.partString(i.DestinationNS, i.DestinationName) +} + +func (i *Intention) partString(ns, n string) string { + if ns != "" { + n = ns + "/" + n } - dest := i.DestinationName - if i.DestinationNS != "" { - dest = i.DestinationNS + "/" + dest - } - - return fmt.Sprintf("%s => %s (%s)", source, dest, i.Action) + return n } // IntentionAction is the action that the intention represents. This diff --git a/command/intention/finder/finder.go b/command/intention/finder/finder.go new file mode 100644 index 000000000..c8db7ba5c --- /dev/null +++ b/command/intention/finder/finder.go @@ -0,0 +1,46 @@ +package finder + +import ( + "sync" + + "github.com/hashicorp/consul/api" +) + +// Finder finds intentions by a src/dst exact match. There is currently +// no direct API to do this so this struct downloads all intentions and +// caches them once, and searches in-memory for this. For now this works since +// even with a very large number of intentions, the size of the data gzipped +// over HTTP will be relatively small. +type Finder struct { + // Client is the API client to use for any requests. + Client *api.Client + + lock sync.Mutex + ixns []*api.Intention // cached list of intentions +} + +// Find finds the intention that matches the given src and dst. This will +// return nil when the result is not found. 
+func (f *Finder) Find(src, dst string) (*api.Intention, error) { + f.lock.Lock() + defer f.lock.Unlock() + + // If the list of ixns is nil, then we haven't fetched yet, so fetch + if f.ixns == nil { + ixns, _, err := f.Client.Connect().Intentions(nil) + if err != nil { + return nil, err + } + + f.ixns = ixns + } + + // Go through the intentions and find an exact match + for _, ixn := range f.ixns { + if ixn.SourceString() == src && ixn.DestinationString() == dst { + return ixn, nil + } + } + + return nil, nil +} diff --git a/command/intention/finder/finder_test.go b/command/intention/finder/finder_test.go new file mode 100644 index 000000000..eb8c5b99e --- /dev/null +++ b/command/intention/finder/finder_test.go @@ -0,0 +1,49 @@ +package finder + +import ( + "testing" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/stretchr/testify/require" +) + +func TestFinder(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + // Create a set of intentions + var ids []string + { + insert := [][]string{ + []string{"a", "b", "c", "d"}, + } + + for _, v := range insert { + ixn := &api.Intention{ + SourceNS: v[0], + SourceName: v[1], + DestinationNS: v[2], + DestinationName: v[3], + Action: api.IntentionActionAllow, + } + + id, _, err := client.Connect().IntentionCreate(ixn, nil) + require.NoError(err) + ids = append(ids, id) + } + } + + finder := &Finder{Client: client} + ixn, err := finder.Find("a/b", "c/d") + require.NoError(err) + require.Equal(ids[0], ixn.ID) + + ixn, err = finder.Find("a/c", "c/d") + require.NoError(err) + require.Nil(ixn) +} From aead9cd422c5cdff241063b983fb867fd28deb04 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 11 May 2018 22:07:58 -0700 Subject: [PATCH 272/539] command/intention/get: the get command without tests --- api/connect_intention.go | 7 +- command/commands_oss.go | 4 +- command/intention/finder/finder.go | 16 ++++ command/intention/get/get.go | 133 +++++++++++++++++++++++++++++ 4 files changed, 158 insertions(+), 2 deletions(-) create mode 100644 command/intention/get/get.go diff --git a/api/connect_intention.go b/api/connect_intention.go index b7af3163c..95ed335a8 100644 --- a/api/connect_intention.go +++ b/api/connect_intention.go @@ -72,13 +72,18 @@ func (i *Intention) DestinationString() string { } func (i *Intention) partString(ns, n string) string { - if ns != "" { + // For now we omit the default namespace from the output. In the future + // we might want to look at this and show this in a multi-namespace world. + if ns != "" && ns != IntentionDefaultNamespace { n = ns + "/" + n } return n } +// IntentionDefaultNamespace is the default namespace value. +const IntentionDefaultNamespace = "default" + // IntentionAction is the action that the intention represents. This // can be "allow" or "deny" to whitelist or blacklist intentions. 
type IntentionAction string diff --git a/command/commands_oss.go b/command/commands_oss.go index 79636c598..9eba97b09 100644 --- a/command/commands_oss.go +++ b/command/commands_oss.go @@ -14,6 +14,7 @@ import ( "github.com/hashicorp/consul/command/info" "github.com/hashicorp/consul/command/intention" ixncreate "github.com/hashicorp/consul/command/intention/create" + ixnget "github.com/hashicorp/consul/command/intention/get" "github.com/hashicorp/consul/command/join" "github.com/hashicorp/consul/command/keygen" "github.com/hashicorp/consul/command/keyring" @@ -69,7 +70,8 @@ func init() { Register("force-leave", func(ui cli.Ui) (cli.Command, error) { return forceleave.New(ui), nil }) Register("info", func(ui cli.Ui) (cli.Command, error) { return info.New(ui), nil }) Register("intention", func(ui cli.Ui) (cli.Command, error) { return intention.New(), nil }) - Register("intention create", func(ui cli.Ui) (cli.Command, error) { return ixncreate.New(), nil }) + Register("intention create", func(ui cli.Ui) (cli.Command, error) { return ixncreate.New(ui), nil }) + Register("intention get", func(ui cli.Ui) (cli.Command, error) { return ixnget.New(ui), nil }) Register("join", func(ui cli.Ui) (cli.Command, error) { return join.New(ui), nil }) Register("keygen", func(ui cli.Ui) (cli.Command, error) { return keygen.New(ui), nil }) Register("keyring", func(ui cli.Ui) (cli.Command, error) { return keyring.New(ui), nil }) diff --git a/command/intention/finder/finder.go b/command/intention/finder/finder.go index c8db7ba5c..f4c6109a0 100644 --- a/command/intention/finder/finder.go +++ b/command/intention/finder/finder.go @@ -1,6 +1,7 @@ package finder import ( + "strings" "sync" "github.com/hashicorp/consul/api" @@ -22,6 +23,9 @@ type Finder struct { // Find finds the intention that matches the given src and dst. This will // return nil when the result is not found. func (f *Finder) Find(src, dst string) (*api.Intention, error) { + src = StripDefaultNS(src) + dst = StripDefaultNS(dst) + f.lock.Lock() defer f.lock.Unlock() @@ -44,3 +48,15 @@ func (f *Finder) Find(src, dst string) (*api.Intention, error) { return nil, nil } + +// StripDefaultNS strips the default namespace from an argument. For now, +// the API and lookups strip this value from string output so we strip it. +func StripDefaultNS(v string) string { + if idx := strings.IndexByte(v, '/'); idx > 0 { + if v[:idx] == api.IntentionDefaultNamespace { + return v[:idx+1] + } + } + + return v +} diff --git a/command/intention/get/get.go b/command/intention/get/get.go new file mode 100644 index 000000000..0c0eba77e --- /dev/null +++ b/command/intention/get/get.go @@ -0,0 +1,133 @@ +package create + +import ( + "flag" + "fmt" + "io" + "sort" + "time" + + "github.com/hashicorp/consul/command/flags" + "github.com/hashicorp/consul/command/intention/finder" + "github.com/mitchellh/cli" + "github.com/ryanuber/columnize" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + // testStdin is the input for testing. 
+ testStdin io.Reader +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.ServerFlags()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + return 1 + } + + // Create and test the HTTP client + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err)) + return 1 + } + + // Get the intention ID to load + var id string + args = c.flags.Args() + switch len(args) { + case 1: + id = args[0] + + case 2: + f := &finder.Finder{Client: client} + ixn, err := f.Find(args[0], args[1]) + if err != nil { + c.UI.Error(fmt.Sprintf("Error looking up intention: %s", err)) + return 1 + } + if ixn == nil { + c.UI.Error(fmt.Sprintf( + "Intention with source %q and destination %q not found.", + args[0], args[1])) + return 1 + } + + id = ixn.ID + + default: + c.UI.Error(fmt.Sprintf("Error: get requires exactly 1 or 2 arguments")) + return 1 + } + + // Read the intention + ixn, _, err := client.Connect().IntentionGet(id, nil) + if err != nil { + c.UI.Error(fmt.Sprintf("Error reading the intention: %s", err)) + return 1 + } + + // Format the tabular data + data := []string{ + fmt.Sprintf("Source:|%s", ixn.SourceString()), + fmt.Sprintf("Destination:|%s", ixn.DestinationString()), + fmt.Sprintf("Action:|%s", ixn.Action), + fmt.Sprintf("ID:|%s", ixn.ID), + } + if v := ixn.Description; v != "" { + data = append(data, fmt.Sprintf("Description:|%s", v)) + } + if len(ixn.Meta) > 0 { + var keys []string + for k := range ixn.Meta { + keys = append(keys, k) + } + sort.Strings(keys) + for _, k := range keys { + data = append(data, fmt.Sprintf("Meta[%s]:|%s", k, ixn.Meta[k])) + } + } + data = append(data, + fmt.Sprintf("Created At:|%s", ixn.CreatedAt.Local().Format(time.RFC850)), + ) + + c.UI.Output(columnize.SimpleFormat(data)) + return 0 +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return c.help +} + +const synopsis = "Show information about an intention." +const help = ` +Usage: consul intention get [options] SRC DST +Usage: consul intention get [options] ID + + Read and show the details about an intention. The intention can be looked + up via an exact source/destination match or via the unique intention ID. 
+ + $ consul intention get web db + +` From efa82278e205b113bec7bd5dce6a9c29c7e15e47 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 11 May 2018 22:19:21 -0700 Subject: [PATCH 273/539] api: IntentionDelete + tests --- api/connect_intention.go | 24 +++++++++++++++++++++++- api/connect_intention_test.go | 11 ++++++++++- 2 files changed, 33 insertions(+), 2 deletions(-) diff --git a/api/connect_intention.go b/api/connect_intention.go index 95ed335a8..3d10b219c 100644 --- a/api/connect_intention.go +++ b/api/connect_intention.go @@ -154,7 +154,7 @@ func (h *Connect) Intentions(q *QueryOptions) ([]*Intention, *QueryMeta, error) func (h *Connect) IntentionGet(id string, q *QueryOptions) (*Intention, *QueryMeta, error) { r := h.c.newRequest("GET", "/v1/connect/intentions/"+id) r.setQueryOptions(q) - rtt, resp, err := requireOK(h.c.doRequest(r)) + rtt, resp, err := h.c.doRequest(r) if err != nil { return nil, nil, err } @@ -164,6 +164,12 @@ func (h *Connect) IntentionGet(id string, q *QueryOptions) (*Intention, *QueryMe parseQueryMeta(resp, qm) qm.RequestTime = rtt + if resp.StatusCode == 404 { + return nil, qm, nil + } else if resp.StatusCode != 200 { + return nil, nil, fmt.Errorf("Unexpected response code: %d", resp.StatusCode) + } + var out Intention if err := decodeBody(resp, &out); err != nil { return nil, nil, err @@ -171,6 +177,22 @@ func (h *Connect) IntentionGet(id string, q *QueryOptions) (*Intention, *QueryMe return &out, qm, nil } +// IntentionDelete deletes a single intention. +func (h *Connect) IntentionDelete(id string, q *WriteOptions) (*WriteMeta, error) { + r := h.c.newRequest("DELETE", "/v1/connect/intentions/"+id) + r.setWriteOptions(q) + rtt, resp, err := requireOK(h.c.doRequest(r)) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + qm := &WriteMeta{} + qm.RequestTime = rtt + + return qm, nil +} + // IntentionMatch returns the list of intentions that match a given source // or destination. 
The returned intentions are ordered by precedence where // result[0] is the highest precedence (if that matches, then that rule overrides diff --git a/api/connect_intention_test.go b/api/connect_intention_test.go index e6e76071e..436d0de0c 100644 --- a/api/connect_intention_test.go +++ b/api/connect_intention_test.go @@ -6,7 +6,7 @@ import ( "github.com/stretchr/testify/require" ) -func TestAPI_ConnectIntentionCreateListGet(t *testing.T) { +func TestAPI_ConnectIntentionCreateListGetDelete(t *testing.T) { t.Parallel() require := require.New(t) @@ -38,6 +38,15 @@ func TestAPI_ConnectIntentionCreateListGet(t *testing.T) { actual, _, err = connect.IntentionGet(id, nil) require.Nil(err) require.Equal(ixn, actual) + + // Delete it + _, err = connect.IntentionDelete(id, nil) + require.Nil(err) + + // Get it (should be gone) + actual, _, err = connect.IntentionGet(id, nil) + require.Nil(err) + require.Nil(actual) } func TestAPI_ConnectIntentionMatch(t *testing.T) { From 4caeaaaa218d17fca4c5c721894b0ab51dbc3b2b Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 11 May 2018 22:19:39 -0700 Subject: [PATCH 274/539] command/intentions/delete --- command/commands_oss.go | 2 + command/intention/delete/delete.go | 86 ++++++++++++++++++++++++++++++ command/intention/finder/finder.go | 26 +++++++++ command/intention/get/get.go | 30 ++--------- 4 files changed, 119 insertions(+), 25 deletions(-) create mode 100644 command/intention/delete/delete.go diff --git a/command/commands_oss.go b/command/commands_oss.go index 9eba97b09..81d44fd1a 100644 --- a/command/commands_oss.go +++ b/command/commands_oss.go @@ -14,6 +14,7 @@ import ( "github.com/hashicorp/consul/command/info" "github.com/hashicorp/consul/command/intention" ixncreate "github.com/hashicorp/consul/command/intention/create" + ixndelete "github.com/hashicorp/consul/command/intention/delete" ixnget "github.com/hashicorp/consul/command/intention/get" "github.com/hashicorp/consul/command/join" "github.com/hashicorp/consul/command/keygen" @@ -71,6 +72,7 @@ func init() { Register("info", func(ui cli.Ui) (cli.Command, error) { return info.New(ui), nil }) Register("intention", func(ui cli.Ui) (cli.Command, error) { return intention.New(), nil }) Register("intention create", func(ui cli.Ui) (cli.Command, error) { return ixncreate.New(ui), nil }) + Register("intention delete", func(ui cli.Ui) (cli.Command, error) { return ixndelete.New(ui), nil }) Register("intention get", func(ui cli.Ui) (cli.Command, error) { return ixnget.New(ui), nil }) Register("join", func(ui cli.Ui) (cli.Command, error) { return join.New(ui), nil }) Register("keygen", func(ui cli.Ui) (cli.Command, error) { return keygen.New(ui), nil }) diff --git a/command/intention/delete/delete.go b/command/intention/delete/delete.go new file mode 100644 index 000000000..d7a928d9c --- /dev/null +++ b/command/intention/delete/delete.go @@ -0,0 +1,86 @@ +package delete + +import ( + "flag" + "fmt" + "io" + + "github.com/hashicorp/consul/command/flags" + "github.com/hashicorp/consul/command/intention/finder" + "github.com/mitchellh/cli" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + // testStdin is the input for testing. 
+ testStdin io.Reader +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.ServerFlags()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + return 1 + } + + // Create and test the HTTP client + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err)) + return 1 + } + + // Get the intention ID to load + f := &finder.Finder{Client: client} + id, err := f.IDFromArgs(c.flags.Args()) + if err != nil { + c.UI.Error(fmt.Sprintf("Error: %s", err)) + return 1 + } + + // Read the intention + _, err = client.Connect().IntentionDelete(id, nil) + if err != nil { + c.UI.Error(fmt.Sprintf("Error reading the intention: %s", err)) + return 1 + } + + c.UI.Output(fmt.Sprintf("Intention deleted.")) + return 0 +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return c.help +} + +const synopsis = "Delete an intention." +const help = ` +Usage: consul intention delete [options] SRC DST +Usage: consul intention delete [options] ID + + Delete an intention. This cannot be reversed. The intention can be looked + up via an exact source/destination match or via the unique intention ID. + + $ consul intention delete web db + +` diff --git a/command/intention/finder/finder.go b/command/intention/finder/finder.go index f4c6109a0..e16bc02dc 100644 --- a/command/intention/finder/finder.go +++ b/command/intention/finder/finder.go @@ -1,6 +1,7 @@ package finder import ( + "fmt" "strings" "sync" @@ -20,6 +21,31 @@ type Finder struct { ixns []*api.Intention // cached list of intentions } +// ID returns the intention ID for the given CLI args. An error is returned +// if args is not 1 or 2 elements. +func (f *Finder) IDFromArgs(args []string) (string, error) { + switch len(args) { + case 1: + return args[0], nil + + case 2: + ixn, err := f.Find(args[0], args[1]) + if err != nil { + return "", err + } + if ixn == nil { + return "", fmt.Errorf( + "Intention with source %q and destination %q not found.", + args[0], args[1]) + } + + return ixn.ID, nil + + default: + return "", fmt.Errorf("command requires exactly 1 or 2 arguments") + } +} + // Find finds the intention that matches the given src and dst. This will // return nil when the result is not found. 
func (f *Finder) Find(src, dst string) (*api.Intention, error) { diff --git a/command/intention/get/get.go b/command/intention/get/get.go index 0c0eba77e..bfe4c754c 100644 --- a/command/intention/get/get.go +++ b/command/intention/get/get.go @@ -1,4 +1,4 @@ -package create +package get import ( "flag" @@ -50,30 +50,10 @@ func (c *cmd) Run(args []string) int { } // Get the intention ID to load - var id string - args = c.flags.Args() - switch len(args) { - case 1: - id = args[0] - - case 2: - f := &finder.Finder{Client: client} - ixn, err := f.Find(args[0], args[1]) - if err != nil { - c.UI.Error(fmt.Sprintf("Error looking up intention: %s", err)) - return 1 - } - if ixn == nil { - c.UI.Error(fmt.Sprintf( - "Intention with source %q and destination %q not found.", - args[0], args[1])) - return 1 - } - - id = ixn.ID - - default: - c.UI.Error(fmt.Sprintf("Error: get requires exactly 1 or 2 arguments")) + f := &finder.Finder{Client: client} + id, err := f.IDFromArgs(c.flags.Args()) + if err != nil { + c.UI.Error(fmt.Sprintf("Error: %s", err)) return 1 } From e055f40612e27de7eb2b6fc663ae659013beda04 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 11 May 2018 22:28:59 -0700 Subject: [PATCH 275/539] command/intention/create: -replace flag, jank, we should change to PUT --- command/intention/create/create.go | 32 ++++++++++++- command/intention/create/create_test.go | 63 +++++++++++++++++++++++++ 2 files changed, 94 insertions(+), 1 deletion(-) diff --git a/command/intention/create/create.go b/command/intention/create/create.go index b847117a1..dd5f61565 100644 --- a/command/intention/create/create.go +++ b/command/intention/create/create.go @@ -9,6 +9,7 @@ import ( "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/command/flags" + "github.com/hashicorp/consul/command/intention/finder" "github.com/mitchellh/cli" ) @@ -44,7 +45,8 @@ func (c *cmd) init() { c.flags.BoolVar(&c.flagFile, "file", false, "Read intention data from one or more files.") c.flags.BoolVar(&c.flagReplace, "replace", false, - "Replace matching intentions.") + "Replace matching intentions. This is not an atomic operation. "+ + "If the insert fails, then the previous intention will still be deleted.") c.flags.Var((*flags.FlagMapValue)(&c.flagMeta), "meta", "Metadata to set on the intention, formatted as key=value. This flag "+ "may be specified multiple times to set multiple meta fields.") @@ -86,8 +88,36 @@ func (c *cmd) Run(args []string) int { return 1 } + // Create the finder in case we need it + find := &finder.Finder{Client: client} + // Go through and create each intention for _, ixn := range ixns { + // If replace is set to true, then find this intention and delete it. 
+ if c.flagReplace { + ixn, err := find.Find(ixn.SourceString(), ixn.DestinationString()) + if err != nil { + c.UI.Error(fmt.Sprintf( + "Error looking up intention for replacement with source %q "+ + "and destination %q: %s", + ixn.SourceString(), + ixn.DestinationString(), + err)) + return 1 + } + if ixn != nil { + if _, err := client.Connect().IntentionDelete(ixn.ID, nil); err != nil { + c.UI.Error(fmt.Sprintf( + "Error deleting intention for replacement with source %q "+ + "and destination %q: %s", + ixn.SourceString(), + ixn.DestinationString(), + err)) + return 1 + } + } + } + _, _, err := client.Connect().IntentionCreate(ixn, nil) if err != nil { c.UI.Error(fmt.Sprintf("Error creating intention %q: %s", ixn, err)) diff --git a/command/intention/create/create_test.go b/command/intention/create/create_test.go index 963a3edc6..067d0d6a9 100644 --- a/command/intention/create/create_test.go +++ b/command/intention/create/create_test.go @@ -186,3 +186,66 @@ func TestCommand_FileNoExist(t *testing.T) { require.Equal(1, c.Run(args), ui.ErrorWriter.String()) require.Contains(ui.ErrorWriter.String(), "no such file") } + +func TestCommand_replace(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + // Create the first + { + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "foo", "bar", + } + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + + ixns, _, err := client.Connect().Intentions(nil) + require.NoError(err) + require.Len(ixns, 1) + require.Equal("foo", ixns[0].SourceName) + require.Equal("bar", ixns[0].DestinationName) + require.Equal(api.IntentionActionAllow, ixns[0].Action) + } + + // Don't replace, should be an error + { + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-deny", + "foo", "bar", + } + require.Equal(1, c.Run(args), ui.ErrorWriter.String()) + require.Contains(ui.ErrorWriter.String(), "duplicate") + } + + // Replace it + { + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-replace", + "-deny", + "foo", "bar", + } + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + + ixns, _, err := client.Connect().Intentions(nil) + require.NoError(err) + require.Len(ixns, 1) + require.Equal("foo", ixns[0].SourceName) + require.Equal("bar", ixns[0].DestinationName) + require.Equal(api.IntentionActionDeny, ixns[0].Action) + } +} From 5ed57b393c1c8f10e11760a6037b34607a8092bb Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 15 May 2018 06:58:56 -0700 Subject: [PATCH 276/539] command/intentions/check --- command/commands_oss.go | 2 + command/intention/check/check.go | 94 ++++++++++++++++++++++++++++++++ 2 files changed, 96 insertions(+) create mode 100644 command/intention/check/check.go diff --git a/command/commands_oss.go b/command/commands_oss.go index 81d44fd1a..6452a4f6f 100644 --- a/command/commands_oss.go +++ b/command/commands_oss.go @@ -13,6 +13,7 @@ import ( "github.com/hashicorp/consul/command/forceleave" "github.com/hashicorp/consul/command/info" "github.com/hashicorp/consul/command/intention" + ixncheck "github.com/hashicorp/consul/command/intention/check" ixncreate "github.com/hashicorp/consul/command/intention/create" ixndelete "github.com/hashicorp/consul/command/intention/delete" ixnget "github.com/hashicorp/consul/command/intention/get" @@ -71,6 +72,7 @@ func init() { Register("force-leave", func(ui cli.Ui) (cli.Command, 
error) { return forceleave.New(ui), nil }) Register("info", func(ui cli.Ui) (cli.Command, error) { return info.New(ui), nil }) Register("intention", func(ui cli.Ui) (cli.Command, error) { return intention.New(), nil }) + Register("intention check", func(ui cli.Ui) (cli.Command, error) { return ixncheck.New(ui), nil }) Register("intention create", func(ui cli.Ui) (cli.Command, error) { return ixncreate.New(ui), nil }) Register("intention delete", func(ui cli.Ui) (cli.Command, error) { return ixndelete.New(ui), nil }) Register("intention get", func(ui cli.Ui) (cli.Command, error) { return ixnget.New(ui), nil }) diff --git a/command/intention/check/check.go b/command/intention/check/check.go new file mode 100644 index 000000000..af56a8973 --- /dev/null +++ b/command/intention/check/check.go @@ -0,0 +1,94 @@ +package check + +import ( + "flag" + "fmt" + "io" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/command/flags" + "github.com/mitchellh/cli" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + // testStdin is the input for testing. + testStdin io.Reader +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.ServerFlags()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + return 2 + } + + args = c.flags.Args() + if len(args) != 2 { + c.UI.Error(fmt.Sprintf("Error: command requires exactly two arguments: src and dst")) + return 2 + } + + // Create and test the HTTP client + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err)) + return 2 + } + + // Check the intention + allowed, _, err := client.Connect().IntentionCheck(&api.IntentionCheck{ + Source: args[0], + Destination: args[1], + SourceType: api.IntentionSourceConsul, + }, nil) + if err != nil { + c.UI.Error(fmt.Sprintf("Error checking the connection: %s", err)) + return 2 + } + + if allowed { + c.UI.Output("Allowed") + return 0 + } else { + c.UI.Output("Denied") + return 1 + } + + return 0 +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return c.help +} + +const synopsis = "Check whether a connection between two services is allowed." +const help = ` +Usage: consul intention check [options] SRC DST + + Check whether a connection between SRC and DST would be allowed by + Connect given the current Consul configuration. 
+ + $ consul intention check web db + +` From 50e179c3af98f0ecd8219406cefeb1643d6af963 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 15 May 2018 08:44:58 -0700 Subject: [PATCH 277/539] command/intention/match --- command/commands_oss.go | 2 + command/intention/match/match.go | 100 +++++++++++++++++++++++++++++++ 2 files changed, 102 insertions(+) create mode 100644 command/intention/match/match.go diff --git a/command/commands_oss.go b/command/commands_oss.go index 6452a4f6f..f166e82d7 100644 --- a/command/commands_oss.go +++ b/command/commands_oss.go @@ -17,6 +17,7 @@ import ( ixncreate "github.com/hashicorp/consul/command/intention/create" ixndelete "github.com/hashicorp/consul/command/intention/delete" ixnget "github.com/hashicorp/consul/command/intention/get" + ixnmatch "github.com/hashicorp/consul/command/intention/match" "github.com/hashicorp/consul/command/join" "github.com/hashicorp/consul/command/keygen" "github.com/hashicorp/consul/command/keyring" @@ -76,6 +77,7 @@ func init() { Register("intention create", func(ui cli.Ui) (cli.Command, error) { return ixncreate.New(ui), nil }) Register("intention delete", func(ui cli.Ui) (cli.Command, error) { return ixndelete.New(ui), nil }) Register("intention get", func(ui cli.Ui) (cli.Command, error) { return ixnget.New(ui), nil }) + Register("intention match", func(ui cli.Ui) (cli.Command, error) { return ixnmatch.New(ui), nil }) Register("join", func(ui cli.Ui) (cli.Command, error) { return join.New(ui), nil }) Register("keygen", func(ui cli.Ui) (cli.Command, error) { return keygen.New(ui), nil }) Register("keyring", func(ui cli.Ui) (cli.Command, error) { return keyring.New(ui), nil }) diff --git a/command/intention/match/match.go b/command/intention/match/match.go new file mode 100644 index 000000000..651ff8fef --- /dev/null +++ b/command/intention/match/match.go @@ -0,0 +1,100 @@ +package match + +import ( + "flag" + "fmt" + "io" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/command/flags" + "github.com/mitchellh/cli" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + // flags + flagSource bool + flagDestination bool + + // testStdin is the input for testing. 
+ testStdin io.Reader +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + c.flags.BoolVar(&c.flagSource, "source", false, + "Match intentions with the given source.") + c.flags.BoolVar(&c.flagDestination, "destination", false, + "Match intentions with the given destination.") + + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.ServerFlags()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + return 2 + } + + args = c.flags.Args() + if len(args) != 1 { + c.UI.Error(fmt.Sprintf("Error: command requires exactly one argument: src or dst")) + return 2 + } + + // Create and test the HTTP client + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err)) + return 2 + } + + // Match the intention + matches, _, err := client.Connect().IntentionMatch(&api.IntentionMatch{ + By: api.IntentionMatchDestination, + Names: []string{args[0]}, + }, nil) + if err != nil { + c.UI.Error(fmt.Sprintf("Error matching the connection: %s", err)) + return 2 + } + + for _, ixn := range matches[args[0]] { + c.UI.Output(ixn.String()) + } + + return 0 +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return c.help +} + +const synopsis = "Show intentions that match a source or destination." +const help = ` +Usage: consul intention match [options] SRC|DST + + Show the list of intentions that would be enforced for a given source + or destination. The intentions are listed in the order they would be + evaluated. + + $ consul intention match db + $ consul intention match -source web + +` From 8df851c1eac0baae067c53d2bda05464da73991d Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 15 May 2018 13:51:49 -0700 Subject: [PATCH 278/539] command/intention/get: tests --- command/intention/get/get_test.go | 124 ++++++++++++++++++++++++++++++ 1 file changed, 124 insertions(+) create mode 100644 command/intention/get/get_test.go diff --git a/command/intention/get/get_test.go b/command/intention/get/get_test.go new file mode 100644 index 000000000..2da243b43 --- /dev/null +++ b/command/intention/get/get_test.go @@ -0,0 +1,124 @@ +package get + +import ( + "strings" + "testing" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/mitchellh/cli" + "github.com/stretchr/testify/require" +) + +func TestCommand_noTabs(t *testing.T) { + t.Parallel() + if strings.ContainsRune(New(nil).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestCommand_Validation(t *testing.T) { + t.Parallel() + + ui := cli.NewMockUi() + c := New(ui) + + cases := map[string]struct { + args []string + output string + }{ + "0 args": { + []string{}, + "requires exactly 1 or 2", + }, + + "3 args": { + []string{"a", "b", "c"}, + "requires exactly 1 or 2", + }, + } + + for name, tc := range cases { + t.Run(name, func(t *testing.T) { + require := require.New(t) + + c.init() + + // Ensure our buffer is always clear + if ui.ErrorWriter != nil { + ui.ErrorWriter.Reset() + } + if ui.OutputWriter != nil { + ui.OutputWriter.Reset() + } + + require.Equal(1, c.Run(tc.args)) + output := ui.ErrorWriter.String() + require.Contains(output, tc.output) + }) + } +} + +func TestCommand_id(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + // Create the intention + 
var id string + { + var err error + id, _, err = client.Connect().IntentionCreate(&api.Intention{ + SourceName: "web", + DestinationName: "db", + Action: api.IntentionActionAllow, + }, nil) + require.NoError(err) + } + + // Get it + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + id, + } + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + require.Contains(ui.OutputWriter.String(), id) +} + +func TestCommand_srcDst(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + // Create the intention + var id string + { + var err error + id, _, err = client.Connect().IntentionCreate(&api.Intention{ + SourceName: "web", + DestinationName: "db", + Action: api.IntentionActionAllow, + }, nil) + require.NoError(err) + } + + // Get it + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "web", "db", + } + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + require.Contains(ui.OutputWriter.String(), id) +} From 15ce2643e596102532cf6755cc70d9aa113f1398 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 15 May 2018 22:10:43 -0700 Subject: [PATCH 279/539] command/intention/check: check tests --- command/intention/check/check_test.go | 109 ++++++++++++++++++++++++++ 1 file changed, 109 insertions(+) create mode 100644 command/intention/check/check_test.go diff --git a/command/intention/check/check_test.go b/command/intention/check/check_test.go new file mode 100644 index 000000000..d7ac1840a --- /dev/null +++ b/command/intention/check/check_test.go @@ -0,0 +1,109 @@ +package check + +import ( + "strings" + "testing" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/mitchellh/cli" + "github.com/stretchr/testify/require" +) + +func TestCommand_noTabs(t *testing.T) { + t.Parallel() + if strings.ContainsRune(New(nil).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestCommand_Validation(t *testing.T) { + t.Parallel() + + ui := cli.NewMockUi() + c := New(ui) + + cases := map[string]struct { + args []string + output string + }{ + "0 args": { + []string{}, + "requires exactly two", + }, + + "1 args": { + []string{"a"}, + "requires exactly two", + }, + + "3 args": { + []string{"a", "b", "c"}, + "requires exactly two", + }, + } + + for name, tc := range cases { + t.Run(name, func(t *testing.T) { + require := require.New(t) + + c.init() + + // Ensure our buffer is always clear + if ui.ErrorWriter != nil { + ui.ErrorWriter.Reset() + } + if ui.OutputWriter != nil { + ui.OutputWriter.Reset() + } + + require.Equal(2, c.Run(tc.args)) + output := ui.ErrorWriter.String() + require.Contains(output, tc.output) + }) + } +} + +func TestCommand(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + // Create the intention + { + _, _, err := client.Connect().IntentionCreate(&api.Intention{ + SourceName: "web", + DestinationName: "db", + Action: api.IntentionActionDeny, + }, nil) + require.NoError(err) + } + + // Get it + { + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "foo", "db", + } + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + require.Contains(ui.OutputWriter.String(), "Allow") + } + + { + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "web", "db", + } + require.Equal(1, 
c.Run(args), ui.ErrorWriter.String()) + require.Contains(ui.OutputWriter.String(), "Denied") + } +} From afbe0c3e6c67e48d5cf9c03180b7efc9ad68df4a Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 16 May 2018 08:41:12 -0700 Subject: [PATCH 280/539] command/intention/delete: tests --- command/intention/delete/delete_test.go | 99 +++++++++++++++++++++++++ 1 file changed, 99 insertions(+) create mode 100644 command/intention/delete/delete_test.go diff --git a/command/intention/delete/delete_test.go b/command/intention/delete/delete_test.go new file mode 100644 index 000000000..c5674f771 --- /dev/null +++ b/command/intention/delete/delete_test.go @@ -0,0 +1,99 @@ +package delete + +import ( + "strings" + "testing" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/mitchellh/cli" + "github.com/stretchr/testify/require" +) + +func TestCommand_noTabs(t *testing.T) { + t.Parallel() + if strings.ContainsRune(New(nil).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestCommand_Validation(t *testing.T) { + t.Parallel() + + ui := cli.NewMockUi() + c := New(ui) + + cases := map[string]struct { + args []string + output string + }{ + "0 args": { + []string{}, + "requires exactly 1 or 2", + }, + + "3 args": { + []string{"a", "b", "c"}, + "requires exactly 1 or 2", + }, + } + + for name, tc := range cases { + t.Run(name, func(t *testing.T) { + require := require.New(t) + + c.init() + + // Ensure our buffer is always clear + if ui.ErrorWriter != nil { + ui.ErrorWriter.Reset() + } + if ui.OutputWriter != nil { + ui.OutputWriter.Reset() + } + + require.Equal(1, c.Run(tc.args)) + output := ui.ErrorWriter.String() + require.Contains(output, tc.output) + }) + } +} + +func TestCommand(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + // Create the intention + { + _, _, err := client.Connect().IntentionCreate(&api.Intention{ + SourceName: "web", + DestinationName: "db", + Action: api.IntentionActionDeny, + }, nil) + require.NoError(err) + } + + // Delete it + { + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "web", "db", + } + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + require.Contains(ui.OutputWriter.String(), "deleted") + } + + // Find it (should be gone) + { + ixns, _, err := client.Connect().Intentions(nil) + require.NoError(err) + require.Len(ixns, 0) + } +} From f03fa81e6a3a878506e0db5201bdbf78282ec43a Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 16 May 2018 08:47:32 -0700 Subject: [PATCH 281/539] command/intention/match --- command/intention/match/match.go | 18 ++- command/intention/match/match_test.go | 151 ++++++++++++++++++++++++++ 2 files changed, 165 insertions(+), 4 deletions(-) create mode 100644 command/intention/match/match_test.go diff --git a/command/intention/match/match.go b/command/intention/match/match.go index 651ff8fef..5ee6fb463 100644 --- a/command/intention/match/match.go +++ b/command/intention/match/match.go @@ -51,24 +51,34 @@ func (c *cmd) Run(args []string) int { args = c.flags.Args() if len(args) != 1 { c.UI.Error(fmt.Sprintf("Error: command requires exactly one argument: src or dst")) - return 2 + return 1 + } + + if c.flagSource && c.flagDestination { + c.UI.Error(fmt.Sprintf("Error: only one of -source or -destination may be specified")) + return 1 + } + + by := api.IntentionMatchDestination + if c.flagSource { + by = api.IntentionMatchSource } // 
Create and test the HTTP client client, err := c.http.APIClient() if err != nil { c.UI.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err)) - return 2 + return 1 } // Match the intention matches, _, err := client.Connect().IntentionMatch(&api.IntentionMatch{ - By: api.IntentionMatchDestination, + By: by, Names: []string{args[0]}, }, nil) if err != nil { c.UI.Error(fmt.Sprintf("Error matching the connection: %s", err)) - return 2 + return 1 } for _, ixn := range matches[args[0]] { diff --git a/command/intention/match/match_test.go b/command/intention/match/match_test.go new file mode 100644 index 000000000..937fda3ec --- /dev/null +++ b/command/intention/match/match_test.go @@ -0,0 +1,151 @@ +package match + +import ( + "strings" + "testing" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/mitchellh/cli" + "github.com/stretchr/testify/require" +) + +func TestCommand_noTabs(t *testing.T) { + t.Parallel() + if strings.ContainsRune(New(nil).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestCommand_Validation(t *testing.T) { + t.Parallel() + + ui := cli.NewMockUi() + c := New(ui) + + cases := map[string]struct { + args []string + output string + }{ + "0 args": { + []string{}, + "requires exactly one", + }, + + "3 args": { + []string{"a", "b", "c"}, + "requires exactly one", + }, + + "both source and dest": { + []string{"-source", "-destination", "foo"}, + "only one of -source", + }, + } + + for name, tc := range cases { + t.Run(name, func(t *testing.T) { + require := require.New(t) + + c.init() + + // Ensure our buffer is always clear + if ui.ErrorWriter != nil { + ui.ErrorWriter.Reset() + } + if ui.OutputWriter != nil { + ui.OutputWriter.Reset() + } + + require.Equal(1, c.Run(tc.args)) + output := ui.ErrorWriter.String() + require.Contains(output, tc.output) + }) + } +} + +func TestCommand_matchDst(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + // Create some intentions + { + insert := [][]string{ + {"foo", "db"}, + {"web", "db"}, + {"*", "db"}, + } + + for _, v := range insert { + id, _, err := client.Connect().IntentionCreate(&api.Intention{ + SourceName: v[0], + DestinationName: v[1], + Action: api.IntentionActionDeny, + }, nil) + require.NoError(err) + require.NotEmpty(id) + } + } + + // Match it + { + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "db", + } + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + require.Contains(ui.OutputWriter.String(), "web") + require.Contains(ui.OutputWriter.String(), "db") + require.Contains(ui.OutputWriter.String(), "*") + } +} + +func TestCommand_matchSource(t *testing.T) { + t.Parallel() + + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + // Create some intentions + { + insert := [][]string{ + {"foo", "db"}, + {"web", "db"}, + {"*", "db"}, + } + + for _, v := range insert { + id, _, err := client.Connect().IntentionCreate(&api.Intention{ + SourceName: v[0], + DestinationName: v[1], + Action: api.IntentionActionDeny, + }, nil) + require.NoError(err) + require.NotEmpty(id) + } + } + + // Match it + { + ui := cli.NewMockUi() + c := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-source", + "foo", + } + require.Equal(0, c.Run(args), ui.ErrorWriter.String()) + require.Contains(ui.OutputWriter.String(), "db") + require.NotContains(ui.OutputWriter.String(), 
"web") + } +} From a316ba7f39fcf72f2f1ef59f82c1879f0168ec53 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 16 May 2018 08:53:33 -0700 Subject: [PATCH 282/539] api: IntentionUpdate API --- api/connect_intention.go | 17 +++++++++++++++++ api/connect_intention_test.go | 14 +++++++++++++- 2 files changed, 30 insertions(+), 1 deletion(-) diff --git a/api/connect_intention.go b/api/connect_intention.go index 3d10b219c..eb7973100 100644 --- a/api/connect_intention.go +++ b/api/connect_intention.go @@ -273,3 +273,20 @@ func (c *Connect) IntentionCreate(ixn *Intention, q *WriteOptions) (string, *Wri } return out.ID, wm, nil } + +// IntentionUpdate will update an existing intention. The ID in the given +// structure must be non-empty. +func (c *Connect) IntentionUpdate(ixn *Intention, q *WriteOptions) (*WriteMeta, error) { + r := c.c.newRequest("PUT", "/v1/connect/intentions/"+ixn.ID) + r.setWriteOptions(q) + r.obj = ixn + rtt, resp, err := requireOK(c.c.doRequest(r)) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + wm := &WriteMeta{} + wm.RequestTime = rtt + return wm, nil +} diff --git a/api/connect_intention_test.go b/api/connect_intention_test.go index 436d0de0c..83f86d3ab 100644 --- a/api/connect_intention_test.go +++ b/api/connect_intention_test.go @@ -6,7 +6,7 @@ import ( "github.com/stretchr/testify/require" ) -func TestAPI_ConnectIntentionCreateListGetDelete(t *testing.T) { +func TestAPI_ConnectIntentionCreateListGetUpdateDelete(t *testing.T) { t.Parallel() require := require.New(t) @@ -39,6 +39,18 @@ func TestAPI_ConnectIntentionCreateListGetDelete(t *testing.T) { require.Nil(err) require.Equal(ixn, actual) + // Update it + ixn.SourceNS = ixn.SourceNS + "-different" + _, err = connect.IntentionUpdate(ixn, nil) + require.NoError(err) + + // Get it + actual, _, err = connect.IntentionGet(id, nil) + require.NoError(err) + ixn.UpdatedAt = actual.UpdatedAt + ixn.ModifyIndex = actual.ModifyIndex + require.Equal(ixn, actual) + // Delete it _, err = connect.IntentionDelete(id, nil) require.Nil(err) From 0fe99f4f1477e8599b4d4ae1ab6e2832b6345884 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 16 May 2018 08:55:33 -0700 Subject: [PATCH 283/539] command/intention/create: -replace does an atomic change --- command/intention/create/create.go | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/command/intention/create/create.go b/command/intention/create/create.go index dd5f61565..de53c743d 100644 --- a/command/intention/create/create.go +++ b/command/intention/create/create.go @@ -45,8 +45,7 @@ func (c *cmd) init() { c.flags.BoolVar(&c.flagFile, "file", false, "Read intention data from one or more files.") c.flags.BoolVar(&c.flagReplace, "replace", false, - "Replace matching intentions. This is not an atomic operation. "+ - "If the insert fails, then the previous intention will still be deleted.") + "Replace matching intentions.") c.flags.Var((*flags.FlagMapValue)(&c.flagMeta), "meta", "Metadata to set on the intention, formatted as key=value. This flag "+ "may be specified multiple times to set multiple meta fields.") @@ -95,7 +94,7 @@ func (c *cmd) Run(args []string) int { for _, ixn := range ixns { // If replace is set to true, then find this intention and delete it. 
if c.flagReplace { - ixn, err := find.Find(ixn.SourceString(), ixn.DestinationString()) + oldIxn, err := find.Find(ixn.SourceString(), ixn.DestinationString()) if err != nil { c.UI.Error(fmt.Sprintf( "Error looking up intention for replacement with source %q "+ @@ -105,16 +104,22 @@ func (c *cmd) Run(args []string) int { err)) return 1 } - if ixn != nil { - if _, err := client.Connect().IntentionDelete(ixn.ID, nil); err != nil { + if oldIxn != nil { + // We set the ID of our intention so we overwrite it + ixn.ID = oldIxn.ID + + if _, err := client.Connect().IntentionUpdate(ixn, nil); err != nil { c.UI.Error(fmt.Sprintf( - "Error deleting intention for replacement with source %q "+ + "Error replacing intention with source %q "+ "and destination %q: %s", ixn.SourceString(), ixn.DestinationString(), err)) return 1 } + + // Continue since we don't want to try to insert a new intention + continue } } From 787ce3b26996d44c469ee065ac92f2b550a8661c Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 18 May 2018 21:03:10 -0700 Subject: [PATCH 284/539] agent: address feedback --- agent/agent_endpoint.go | 2 +- agent/consul/intention_endpoint.go | 4 +++- agent/intentions_endpoint.go | 2 +- 3 files changed, 5 insertions(+), 3 deletions(-) diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index b52abc732..5b9a5f8ec 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -1107,7 +1107,7 @@ func (s *HTTPServer) agentLocalBlockingQuery(resp http.ResponseWriter, hash stri // // POST /v1/agent/connect/authorize // -// Note: when this logic changes, consider if the Intention.Test RPC method +// Note: when this logic changes, consider if the Intention.Check RPC method // also needs to be updated. func (s *HTTPServer) AgentConnectAuthorize(resp http.ResponseWriter, req *http.Request) (interface{}, error) { // Fetch the token diff --git a/agent/consul/intention_endpoint.go b/agent/consul/intention_endpoint.go index a0d88352f..f363cafd3 100644 --- a/agent/consul/intention_endpoint.go +++ b/agent/consul/intention_endpoint.go @@ -293,7 +293,9 @@ func (s *Intention) Check( return err } - // Perform the ACL check + // Perform the ACL check. For Check we only require ServiceRead and + // NOT IntentionRead because the Check API only returns pass/fail and + // returns no other information about the intentions used. 
if prefix, ok := query.GetACLPrefix(); ok {
 		if rule != nil && !rule.ServiceRead(prefix) {
 			s.srv.logger.Printf("[WARN] consul.intention: test on intention '%s' denied due to ACLs", prefix)
diff --git a/agent/intentions_endpoint.go b/agent/intentions_endpoint.go
index 80ddedf24..4720e6f31 100644
--- a/agent/intentions_endpoint.go
+++ b/agent/intentions_endpoint.go
@@ -122,7 +122,7 @@ func (s *HTTPServer) IntentionMatch(resp http.ResponseWriter, req *http.Request)
 	return response, nil
 }
 
-// GET /v1/connect/intentions/test
+// GET /v1/connect/intentions/check
 func (s *HTTPServer) IntentionCheck(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
 	// Prepare args
 	args := &structs.IntentionQueryRequest{Check: &structs.IntentionQueryCheck{}}

From 1476745bdc912fefb3cc51710176e9b4dc8c642d Mon Sep 17 00:00:00 2001
From: Mitchell Hashimoto
Date: Tue, 22 May 2018 10:22:41 -0700
Subject: [PATCH 285/539] command/intention: address comment feedback

---
 command/intention/create/create.go | 2 +-
 command/intention/finder/finder.go | 4 ++++
 command/intention/intention.go     | 2 +-
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/command/intention/create/create.go b/command/intention/create/create.go
index de53c743d..834e2125e 100644
--- a/command/intention/create/create.go
+++ b/command/intention/create/create.go
@@ -92,7 +92,7 @@ func (c *cmd) Run(args []string) int {
 
 	// Go through and create each intention
 	for _, ixn := range ixns {
-		// If replace is set to true, then find this intention and delete it.
+		// If replace is set to true, then perform an update operation.
 		if c.flagReplace {
 			oldIxn, err := find.Find(ixn.SourceString(), ixn.DestinationString())
 			if err != nil {
diff --git a/command/intention/finder/finder.go b/command/intention/finder/finder.go
index e16bc02dc..1f7810cd8 100644
--- a/command/intention/finder/finder.go
+++ b/command/intention/finder/finder.go
@@ -13,6 +13,10 @@ import (
 // caches them once, and searches in-memory for this. For now this works since
 // even with a very large number of intentions, the size of the data gzipped
 // over HTTP will be relatively small.
+//
+// The Finder will only download the intentions one time. This struct is
+// not expected to be used over a long period of time. Though it may be
+// reused multiple times, the intentions list is only downloaded once.
 type Finder struct {
 	// Client is the API client to use for any requests.
 	Client *api.Client
diff --git a/command/intention/intention.go b/command/intention/intention.go
index 767e3ff1b..94de26472 100644
--- a/command/intention/intention.go
+++ b/command/intention/intention.go
@@ -28,7 +28,7 @@ const help = `
 Usage: consul intention [options] [args]
 
   This command has subcommands for interacting with intentions. Intentions
-  are the permissions for what services are allowed to communicate via
+  are permissions describing which services are allowed to communicate via
   Connect. Here are some simple examples, and more detailed examples are
   available in the subcommands or the documentation.

From 9249662c6c1703429d5d4e59ef6fcedbe937aa68 Mon Sep 17 00:00:00 2001
From: Mitchell Hashimoto
Date: Fri, 18 May 2018 23:27:02 -0700
Subject: [PATCH 286/539] agent: leaf endpoint accepts name, not service ID

This change is important so that requests can be made representing a
service that may not be registered with the same local agent.
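
For illustration, a minimal sketch of requesting a leaf cert by service
*name* through the Go API client. The `web` service name, the default
local agent address, and the client setup below are assumed for the
example and are not part of this change:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local agent with default settings.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Request a leaf certificate by service name. The service only needs to
	// exist in the catalog; it does not have to be registered with this
	// local agent. Equivalent to GET /v1/agent/connect/ca/leaf/web.
	leaf, _, err := client.Agent().ConnectCALeaf("web", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(leaf.Service)
}
```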
--- agent/agent.go | 23 +++---- agent/agent_endpoint.go | 24 +++---- agent/agent_endpoint_test.go | 118 +++++++++++++++++++++++++++++++++-- 3 files changed, 133 insertions(+), 32 deletions(-) diff --git a/agent/agent.go b/agent/agent.go index 622a105e8..6f909e5c7 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -2177,21 +2177,22 @@ func (a *Agent) verifyProxyToken(token, targetService, targetProxy string) (stri // that the service name of the matching proxy matches our target // service. if proxy != nil { - if proxy.Proxy.TargetServiceID != targetService { + // Get the target service since we only have the name. The nil + // check below should never be true since a proxy token always + // represents the existence of a local service. + target := a.State.Service(proxy.Proxy.TargetServiceID) + if target == nil { + return "", fmt.Errorf("proxy target service not found: %q", + proxy.Proxy.TargetServiceID) + } + + if target.Service != targetService { return "", acl.ErrPermissionDenied } // Resolve the actual ACL token used to register the proxy/service and // return that for use in RPC calls. - return a.State.ServiceToken(targetService), nil - } - - // Retrieve the service specified. This should always exist because - // we only call this function for proxies and leaf certs and both can - // only be called for local services. - service := a.State.Service(targetService) - if service == nil { - return "", fmt.Errorf("unknown service ID: %s", targetService) + return a.State.ServiceToken(proxy.Proxy.TargetServiceID), nil } // Doesn't match, we have to do a full token resolution. The required @@ -2202,7 +2203,7 @@ func (a *Agent) verifyProxyToken(token, targetService, targetProxy string) (stri if err != nil { return "", err } - if rule != nil && !rule.ServiceWrite(service.Service, nil) { + if rule != nil && !rule.ServiceWrite(targetService, nil) { return "", acl.ErrPermissionDenied } diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go index 5b9a5f8ec..239f962a1 100644 --- a/agent/agent_endpoint.go +++ b/agent/agent_endpoint.go @@ -908,16 +908,10 @@ func (s *HTTPServer) AgentConnectCARoots(resp http.ResponseWriter, req *http.Req // instance. This supports blocking queries to update the returned bundle. func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http.Request) (interface{}, error) { // Get the service ID. Note that this is the ID of a service instance. - id := strings.TrimPrefix(req.URL.Path, "/v1/agent/connect/ca/leaf/") - - // Retrieve the service specified - service := s.agent.State.Service(id) - if service == nil { - return nil, fmt.Errorf("unknown service ID: %s", id) - } + serviceName := strings.TrimPrefix(req.URL.Path, "/v1/agent/connect/ca/leaf/") args := cachetype.ConnectCALeafRequest{ - Service: service.Service, // Need name not ID + Service: serviceName, // Need name not ID } var qOpts structs.QueryOptions // Store DC in the ConnectCALeafRequest but query opts separately @@ -928,7 +922,7 @@ func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http. // Verify the proxy token. This will check both the local proxy token // as well as the ACL if the token isn't local. 
- effectiveToken, err := s.agent.verifyProxyToken(qOpts.Token, id, "") + effectiveToken, err := s.agent.verifyProxyToken(qOpts.Token, serviceName, "") if err != nil { return nil, err } @@ -983,12 +977,6 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http return "", nil, nil } - // Validate the ACL token - _, err := s.agent.verifyProxyToken(token, proxy.Proxy.TargetServiceID, id) - if err != nil { - return "", nil, err - } - // Lookup the target service as a convenience target := s.agent.State.Service(proxy.Proxy.TargetServiceID) if target == nil { @@ -999,6 +987,12 @@ func (s *HTTPServer) AgentConnectProxyConfig(resp http.ResponseWriter, req *http return "", nil, nil } + // Validate the ACL token + _, err := s.agent.verifyProxyToken(token, target.Service, id) + if err != nil { + return "", nil, err + } + // Watch the proxy for changes ws.Add(proxy.WatchCh) diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 9c10a61ff..62d095fba 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -2238,7 +2238,7 @@ func TestAgentConnectCALeafCert_aclDefaultDeny(t *testing.T) { require.Equal(200, resp.Code, "body: %s", resp.Body.String()) } - req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test-id", nil) + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test", nil) resp := httptest.NewRecorder() _, err := a.srv.AgentConnectCALeafCert(resp, req) require.Error(err) @@ -2284,7 +2284,7 @@ func TestAgentConnectCALeafCert_aclProxyToken(t *testing.T) { token := proxy.ProxyToken require.NotEmpty(token) - req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test-id?token="+token, nil) + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test?token="+token, nil) resp := httptest.NewRecorder() obj, err := a.srv.AgentConnectCALeafCert(resp, req) require.NoError(err) @@ -2355,7 +2355,7 @@ func TestAgentConnectCALeafCert_aclProxyTokenOther(t *testing.T) { token := proxy.ProxyToken require.NotEmpty(token) - req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test-id?token="+token, nil) + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test?token="+token, nil) resp := httptest.NewRecorder() _, err := a.srv.AgentConnectCALeafCert(resp, req) require.Error(err) @@ -2413,7 +2413,7 @@ func TestAgentConnectCALeafCert_aclServiceWrite(t *testing.T) { token = aclResp.ID } - req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test-id?token="+token, nil) + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test?token="+token, nil) resp := httptest.NewRecorder() obj, err := a.srv.AgentConnectCALeafCert(resp, req) require.NoError(err) @@ -2474,7 +2474,7 @@ func TestAgentConnectCALeafCert_aclServiceReadDeny(t *testing.T) { token = aclResp.ID } - req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test-id?token="+token, nil) + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test?token="+token, nil) resp := httptest.NewRecorder() _, err := a.srv.AgentConnectCALeafCert(resp, req) require.Error(err) @@ -2517,7 +2517,113 @@ func TestAgentConnectCALeafCert_good(t *testing.T) { } // List - req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/foo", nil) + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test", nil) + resp := httptest.NewRecorder() + obj, err := a.srv.AgentConnectCALeafCert(resp, req) + require.NoError(err) + + // Get the issued cert + issued, ok := obj.(*structs.IssuedCert) + assert.True(ok) + + // Verify that the cert is signed 
by the CA
+	requireLeafValidUnderCA(t, issued, ca1)
+
+	// Verify blocking index
+	assert.True(issued.ModifyIndex > 0)
+	assert.Equal(fmt.Sprintf("%d", issued.ModifyIndex),
+		resp.Header().Get("X-Consul-Index"))
+
+	// That should've been a cache miss, so no hit change
+	require.Equal(cacheHits, a.cache.Hits())
+
+	// Test caching
+	{
+		// Fetch it again
+		obj2, err := a.srv.AgentConnectCALeafCert(httptest.NewRecorder(), req)
+		require.NoError(err)
+		require.Equal(obj, obj2)
+
+		// Should cache hit this time and not make request
+		require.Equal(cacheHits+1, a.cache.Hits())
+		cacheHits++
+	}
+
+	// Test that caching is updated in the background
+	{
+		// Set a new CA
+		ca := connect.TestCAConfigSet(t, a, nil)
+
+		retry.Run(t, func(r *retry.R) {
+			// Try and sign again (note no index/wait arg since cache should update in
+			// background even if we aren't actively blocking)
+			obj, err := a.srv.AgentConnectCALeafCert(httptest.NewRecorder(), req)
+			r.Check(err)
+
+			issued2 := obj.(*structs.IssuedCert)
+			if issued.CertPEM == issued2.CertPEM {
+				r.Fatalf("leaf has not updated")
+			}
+
+			// Got a new leaf. Sanity check it's a whole new key as well as a different
+			// cert.
+			if issued.PrivateKeyPEM == issued2.PrivateKeyPEM {
+				r.Fatalf("new leaf has same private key as before")
+			}
+
+			// Verify that the cert is signed by the new CA
+			requireLeafValidUnderCA(t, issued2, ca)
+		})
+
+		// Should be a cache hit! The data should've updated in the cache
+		// in the background so this should've been fetched directly from
+		// the cache.
+		if v := a.cache.Hits(); v < cacheHits+1 {
+			t.Fatalf("expected at least one more cache hit, still at %d", v)
+		}
+		cacheHits = a.cache.Hits()
+	}
+}
+
+// Test we can request a leaf cert for a service we have permission for
+// but is not local to this agent.
+func TestAgentConnectCALeafCert_goodNotLocal(t *testing.T) {
+	t.Parallel()
+
+	assert := assert.New(t)
+	require := require.New(t)
+	a := NewTestAgent(t.Name(), "")
+	defer a.Shutdown()
+
+	// CA already set up by default by NewTestAgent but force a new one so we can
+	// verify it was signed easily.
+ ca1 := connect.TestCAConfigSet(t, a, nil) + + // Grab the initial cache hit count + cacheHits := a.cache.Hits() + + { + // Register a non-local service (central catalog) + args := &structs.RegisterRequest{ + Node: "foo", + Address: "127.0.0.1", + Service: &structs.NodeService{ + Service: "test", + Address: "127.0.0.1", + Port: 8080, + }, + } + req, _ := http.NewRequest("PUT", "/v1/catalog/register", jsonReader(args)) + resp := httptest.NewRecorder() + _, err := a.srv.CatalogRegister(resp, req) + require.NoError(err) + if !assert.Equal(200, resp.Code) { + t.Log("Body: ", resp.Body.String()) + } + } + + // List + req, _ := http.NewRequest("GET", "/v1/agent/connect/ca/leaf/test", nil) resp := httptest.NewRecorder() obj, err := a.srv.AgentConnectCALeafCert(resp, req) require.NoError(err) From b28e2b862237773d00025078d9e8c121629673b6 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 19 May 2018 00:11:51 -0700 Subject: [PATCH 287/539] connect/proxy: don't require proxy ID --- command/connect/proxy/proxy.go | 46 ++++++++++------ connect/proxy/config.go | 47 +++++++++-------- connect/proxy/config_test.go | 7 ++- connect/proxy/proxy.go | 52 +++---------------- connect/proxy/testdata/config-kitchensink.hcl | 3 +- connect/service.go | 36 ++++++------- connect/tls.go | 13 ++--- watch/funcs.go | 6 +-- watch/funcs_test.go | 2 +- 9 files changed, 91 insertions(+), 121 deletions(-) diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index b797177c5..83406e0fb 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -10,6 +10,7 @@ import ( "os" proxyAgent "github.com/hashicorp/consul/agent/proxy" + "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/command/flags" proxyImpl "github.com/hashicorp/consul/connect/proxy" @@ -112,22 +113,17 @@ func (c *cmd) Run(args []string) int { return 1 } - var p *proxyImpl.Proxy - if c.cfgFile != "" { - c.UI.Info("Configuring proxy locally from " + c.cfgFile) + // Get the proper configuration watcher + cfgWatcher, err := c.configWatcher(client) + if err != nil { + c.UI.Error(fmt.Sprintf("Error preparing configuration: %s", err)) + return 1 + } - p, err = proxyImpl.NewFromConfigFile(client, c.cfgFile, c.logger) - if err != nil { - c.UI.Error(fmt.Sprintf("Failed configuring from file: %s", err)) - return 1 - } - - } else { - p, err = proxyImpl.New(client, c.proxyID, c.logger) - if err != nil { - c.UI.Error(fmt.Sprintf("Failed configuring from agent: %s", err)) - return 1 - } + p, err := proxyImpl.New(client, cfgWatcher, c.logger) + if err != nil { + c.UI.Error(fmt.Sprintf("Failed initializing proxy: %s", err)) + return 1 } // Hook the shutdownCh up to close the proxy @@ -151,6 +147,26 @@ func (c *cmd) Run(args []string) int { return 0 } +func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) { + // Manual configuration file is specified. 
+ if c.cfgFile != "" { + cfg, err := proxyImpl.ParseConfigFile(c.cfgFile) + if err != nil { + return nil, err + } + return proxyImpl.NewStaticConfigWatcher(cfg), nil + } + + // Use the configured proxy ID + if c.proxyID == "" { + return nil, fmt.Errorf( + "-service or -proxy-id must be specified so that proxy can " + + "configure itself.") + } + + return proxyImpl.NewAgentConfigWatcher(client, c.proxyID, c.logger) +} + func (c *cmd) Synopsis() string { return synopsis } diff --git a/connect/proxy/config.go b/connect/proxy/config.go index 840afa896..025809a7b 100644 --- a/connect/proxy/config.go +++ b/connect/proxy/config.go @@ -19,18 +19,16 @@ import ( // different locations (e.g. command line, agent config endpoint, agent // certificate endpoints). type Config struct { - // ProxyID is the identifier for this proxy as registered in Consul. It's only - // guaranteed to be unique per agent. - ProxyID string `json:"proxy_id" hcl:"proxy_id"` - // Token is the authentication token provided for queries to the local agent. Token string `json:"token" hcl:"token"` - // ProxiedServiceID is the identifier of the service this proxy is representing. - ProxiedServiceID string `json:"proxied_service_id" hcl:"proxied_service_id"` - + // ProxiedServiceName is the name of the service this proxy is representing. + // This is the service _name_ and not the service _id_. This allows the + // proxy to represent services not present in the local catalog. + // // ProxiedServiceNamespace is the namespace of the service this proxy is // representing. + ProxiedServiceName string `json:"proxied_service_name" hcl:"proxied_service_name"` ProxiedServiceNamespace string `json:"proxied_service_namespace" hcl:"proxied_service_namespace"` // PublicListener configures the mTLS listener. @@ -39,28 +37,34 @@ type Config struct { // Upstreams configures outgoing proxies for remote connect services. Upstreams []UpstreamConfig `json:"upstreams" hcl:"upstreams"` - // DevCAFile allows passing the file path to PEM encoded root certificate - // bundle to be used in development instead of the ones supplied by Connect. - DevCAFile string `json:"dev_ca_file" hcl:"dev_ca_file"` - - // DevServiceCertFile allows passing the file path to PEM encoded service - // certificate (client and server) to be used in development instead of the - // ones supplied by Connect. + // DevCAFile, DevServiceCertFile, and DevServiceKeyFile allow configuring + // the certificate information from a static file. This is only for testing + // purposes. All or none must be specified. + DevCAFile string `json:"dev_ca_file" hcl:"dev_ca_file"` DevServiceCertFile string `json:"dev_service_cert_file" hcl:"dev_service_cert_file"` + DevServiceKeyFile string `json:"dev_service_key_file" hcl:"dev_service_key_file"` +} - // DevServiceKeyFile allows passing the file path to PEM encoded service - // private key to be used in development instead of the ones supplied by - // Connect. - DevServiceKeyFile string `json:"dev_service_key_file" hcl:"dev_service_key_file"` +// Service returns the *connect.Service structure represented by this config. +func (c *Config) Service(client *api.Client, logger *log.Logger) (*connect.Service, error) { + // If we aren't in dev mode, then we return the configured service. 
+ if c.DevCAFile == "" { + return connect.NewServiceWithLogger(c.ProxiedServiceName, client, logger) + } + + // Dev mode + return connect.NewDevServiceFromCertFiles(c.ProxiedServiceName, + logger, c.DevCAFile, c.DevServiceCertFile, c.DevServiceKeyFile) } // PublicListenerConfig contains the parameters needed for the incoming mTLS // listener. type PublicListenerConfig struct { // BindAddress is the host/IP the public mTLS listener will bind to. + // + // BindPort is the port the public listener will bind to. BindAddress string `json:"bind_address" hcl:"bind_address" mapstructure:"bind_address"` - - BindPort int `json:"bind_port" hcl:"bind_port" mapstructure:"bind_port"` + BindPort int `json:"bind_port" hcl:"bind_port" mapstructure:"bind_port"` // LocalServiceAddress is the host:port for the proxied application. This // should be on loopback or otherwise protected as it's plain TCP. @@ -265,9 +269,8 @@ func (w *AgentConfigWatcher) handler(blockVal watch.BlockingParamVal, // Create proxy config from the response cfg := &Config{ - ProxyID: w.proxyID, // Token should be already setup in the client - ProxiedServiceID: resp.TargetServiceID, + ProxiedServiceName: resp.TargetServiceName, ProxiedServiceNamespace: "default", } diff --git a/connect/proxy/config_test.go b/connect/proxy/config_test.go index 1473e8fea..87bb43c81 100644 --- a/connect/proxy/config_test.go +++ b/connect/proxy/config_test.go @@ -19,9 +19,8 @@ func TestParseConfigFile(t *testing.T) { require.Nil(t, err) expect := &Config{ - ProxyID: "foo", Token: "11111111-2222-3333-4444-555555555555", - ProxiedServiceID: "web", + ProxiedServiceName: "web", ProxiedServiceNamespace: "default", PublicListener: PublicListenerConfig{ BindAddress: "127.0.0.1", @@ -117,6 +116,7 @@ func TestUpstreamResolverFromClient(t *testing.T) { func TestAgentConfigWatcher(t *testing.T) { a := agent.NewTestAgent("agent_smith", "") + defer a.Shutdown() client := a.Client() agent := client.Agent() @@ -153,8 +153,7 @@ func TestAgentConfigWatcher(t *testing.T) { cfg := testGetConfigValTimeout(t, w, 500*time.Millisecond) expectCfg := &Config{ - ProxyID: w.proxyID, - ProxiedServiceID: "web", + ProxiedServiceName: "web", ProxiedServiceNamespace: "default", PublicListener: PublicListenerConfig{ BindAddress: "10.10.10.10", diff --git a/connect/proxy/proxy.go b/connect/proxy/proxy.go index 64e098825..fb382288f 100644 --- a/connect/proxy/proxy.go +++ b/connect/proxy/proxy.go @@ -11,7 +11,6 @@ import ( // Proxy implements the built-in connect proxy. type Proxy struct { - proxyID string client *api.Client cfgWatcher ConfigWatcher stopChan chan struct{} @@ -19,51 +18,17 @@ type Proxy struct { service *connect.Service } -// NewFromConfigFile returns a Proxy instance configured just from a local file. -// This is intended mostly for development and bypasses the normal mechanisms -// for fetching config and certificates from the local agent. 
-func NewFromConfigFile(client *api.Client, filename string, - logger *log.Logger) (*Proxy, error) { - cfg, err := ParseConfigFile(filename) - if err != nil { - return nil, err - } - - service, err := connect.NewDevServiceFromCertFiles(cfg.ProxiedServiceID, - logger, cfg.DevCAFile, cfg.DevServiceCertFile, - cfg.DevServiceKeyFile) - if err != nil { - return nil, err - } - - p := &Proxy{ - proxyID: cfg.ProxyID, - client: client, - cfgWatcher: NewStaticConfigWatcher(cfg), - stopChan: make(chan struct{}), - logger: logger, - service: service, - } - return p, nil -} - -// New returns a Proxy with the given id, consuming the provided (configured) -// agent. It is ready to Run(). -func New(client *api.Client, proxyID string, logger *log.Logger) (*Proxy, error) { - cw, err := NewAgentConfigWatcher(client, proxyID, logger) - if err != nil { - return nil, err - } - p := &Proxy{ - proxyID: proxyID, +// New returns a proxy with the given configuration source. +// +// The ConfigWatcher can be used to update the configuration of the proxy. +// Whenever a new configuration is detected, the proxy will reconfigure itself. +func New(client *api.Client, cw ConfigWatcher, logger *log.Logger) (*Proxy, error) { + return &Proxy{ client: client, cfgWatcher: cw, stopChan: make(chan struct{}), logger: logger, - // Can't load service yet as we only have the proxy's ID not the service's - // until initial config fetch happens. - } - return p, nil + }, nil } // Serve the proxy instance until a fatal error occurs or proxy is closed. @@ -80,8 +45,7 @@ func (p *Proxy) Serve() error { // Initial setup // Setup Service instance now we know target ID etc - service, err := connect.NewServiceWithLogger(newCfg.ProxiedServiceID, - p.client, p.logger) + service, err := cfg.Service(p.client, p.logger) if err != nil { return err } diff --git a/connect/proxy/testdata/config-kitchensink.hcl b/connect/proxy/testdata/config-kitchensink.hcl index fccfdffd0..de3472d0f 100644 --- a/connect/proxy/testdata/config-kitchensink.hcl +++ b/connect/proxy/testdata/config-kitchensink.hcl @@ -1,9 +1,8 @@ # Example proxy config with everything specified -proxy_id = "foo" token = "11111111-2222-3333-4444-555555555555" -proxied_service_id = "web" +proxied_service_name = "web" proxied_service_namespace = "default" # Assumes running consul in dev mode from the repo root... diff --git a/connect/service.go b/connect/service.go index af9fbfcb7..b00775c1c 100644 --- a/connect/service.go +++ b/connect/service.go @@ -27,16 +27,12 @@ import ( // service has been delivered valid certificates. Once built, document that here // too. type Service struct { - // serviceID is the unique ID for this service in the agent-local catalog. - // This is often but not always the service name. This is used to request - // Connect metadata. If the service with this ID doesn't exist on the local - // agent no error will be returned and the Service will retry periodically. - // This allows service startup and registration to happen in either order - // without coordination since they might be performed by separate processes. - serviceID string + // service is the name (not ID) for the Consul service. This is used to request + // Connect metadata. + service string // client is the Consul API client. It must be configured with an appropriate - // Token that has `service:write` policy on the provided ServiceID. If an + // Token that has `service:write` policy on the provided service. 
If an // insufficient token is provided, the Service will abort further attempts to // fetch certificates and print a loud error message. It will not Close() or // kill the process since that could lead to a crash loop in every service if @@ -74,13 +70,13 @@ func NewService(serviceID string, client *api.Client) (*Service, error) { } // NewServiceWithLogger starts the service with a specified log.Logger. -func NewServiceWithLogger(serviceID string, client *api.Client, +func NewServiceWithLogger(serviceName string, client *api.Client, logger *log.Logger) (*Service, error) { s := &Service{ - serviceID: serviceID, - client: client, - logger: logger, - tlsCfg: newDynamicTLSConfig(defaultTLSConfig()), + service: serviceName, + client: client, + logger: logger, + tlsCfg: newDynamicTLSConfig(defaultTLSConfig()), } // Set up root and leaf watches @@ -94,8 +90,8 @@ func NewServiceWithLogger(serviceID string, client *api.Client, s.rootsWatch.HybridHandler = s.rootsWatchHandler p, err = watch.Parse(map[string]interface{}{ - "type": "connect_leaf", - "service_id": s.serviceID, + "type": "connect_leaf", + "service": s.service, }) if err != nil { return nil, err @@ -123,12 +119,12 @@ func NewDevServiceFromCertFiles(serviceID string, logger *log.Logger, // NewDevServiceWithTLSConfig creates a Service using static TLS config passed. // It's mostly useful for testing. -func NewDevServiceWithTLSConfig(serviceID string, logger *log.Logger, +func NewDevServiceWithTLSConfig(serviceName string, logger *log.Logger, tlsCfg *tls.Config) (*Service, error) { s := &Service{ - serviceID: serviceID, - logger: logger, - tlsCfg: newDynamicTLSConfig(tlsCfg), + service: serviceName, + logger: logger, + tlsCfg: newDynamicTLSConfig(tlsCfg), } return s, nil } @@ -144,7 +140,7 @@ func NewDevServiceWithTLSConfig(serviceID string, logger *log.Logger, // error during renewal. The listener will be able to accept connections again // once connectivity is restored provided the client's Token is valid. func (s *Service) ServerTLSConfig() *tls.Config { - return s.tlsCfg.Get(newServerSideVerifier(s.client, s.serviceID)) + return s.tlsCfg.Get(newServerSideVerifier(s.client, s.service)) } // Dial connects to a remote Connect-enabled server. The passed Resolver is used diff --git a/connect/tls.go b/connect/tls.go index 6f14cd787..db96eb3de 100644 --- a/connect/tls.go +++ b/connect/tls.go @@ -112,9 +112,9 @@ func verifyServerCertMatchesURI(certs []*x509.Certificate, // newServerSideVerifier returns a verifierFunc that wraps the provided // api.Client to verify the TLS chain and perform AuthZ for the server end of -// the connection. The service name provided is used as the target serviceID +// the connection. The service name provided is used as the target service name // for the Authorization. -func newServerSideVerifier(client *api.Client, serviceID string) verifierFunc { +func newServerSideVerifier(client *api.Client, serviceName string) verifierFunc { return func(tlsCfg *tls.Config, rawCerts [][]byte) error { leaf, err := verifyChain(tlsCfg, rawCerts, false) if err != nil { @@ -142,14 +142,7 @@ func newServerSideVerifier(client *api.Client, serviceID string) verifierFunc { // Perform AuthZ req := &api.AgentAuthorizeParams{ - // TODO(banks): this is jank, we have a serviceID from the Service setup - // but this needs to be a service name as the target. For now we are - // relying on them usually being the same but this will break when they - // are not. 
We either need to make Authorize endpoint optionally accept - // IDs somehow or rethink this as it will require fetching the service - // name sometime ahead of accepting requests (maybe along with TLS certs?) - // which feels gross and will take extra plumbing to expose it to here. - Target: serviceID, + Target: serviceName, ClientCertURI: certURI.URI().String(), ClientCertSerial: connect.HexString(leaf.SerialNumber.Bytes()), } diff --git a/watch/funcs.go b/watch/funcs.go index 3b1b854ed..a87fd63f4 100644 --- a/watch/funcs.go +++ b/watch/funcs.go @@ -258,8 +258,8 @@ func connectRootsWatch(params map[string]interface{}) (WatcherFunc, error) { func connectLeafWatch(params map[string]interface{}) (WatcherFunc, error) { // We don't support stale since certs are cached locally in the agent. - var serviceID string - if err := assignValue(params, "service_id", &serviceID); err != nil { + var serviceName string + if err := assignValue(params, "service", &serviceName); err != nil { return nil, err } @@ -268,7 +268,7 @@ func connectLeafWatch(params map[string]interface{}) (WatcherFunc, error) { opts := makeQueryOptionsWithContext(p, false) defer p.cancelFunc() - leaf, meta, err := agent.ConnectCALeaf(serviceID, &opts) + leaf, meta, err := agent.ConnectCALeaf(serviceName, &opts) if err != nil { return nil, nil, err } diff --git a/watch/funcs_test.go b/watch/funcs_test.go index b304a803f..9632ef206 100644 --- a/watch/funcs_test.go +++ b/watch/funcs_test.go @@ -602,7 +602,7 @@ func TestConnectLeafWatch(t *testing.T) { //invoke := makeInvokeCh() invoke := make(chan error) - plan := mustParse(t, `{"type":"connect_leaf", "service_id":"web"}`) + plan := mustParse(t, `{"type":"connect_leaf", "service":"web"}`) plan.Handler = func(idx uint64, raw interface{}) { if raw == nil { return // ignore From 27aa0743ec6e6ca3912683ab7209c736bf98b5c0 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 19 May 2018 00:20:43 -0700 Subject: [PATCH 288/539] connect/proxy: use the right variable for loading the new service --- connect/proxy/proxy.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/connect/proxy/proxy.go b/connect/proxy/proxy.go index fb382288f..4b740e22f 100644 --- a/connect/proxy/proxy.go +++ b/connect/proxy/proxy.go @@ -45,7 +45,7 @@ func (p *Proxy) Serve() error { // Initial setup // Setup Service instance now we know target ID etc - service, err := cfg.Service(p.client, p.logger) + service, err := newCfg.Service(p.client, p.logger) if err != nil { return err } From 3e8ea58585a4bb603f9658cfe5ab96d4151dafc3 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 19 May 2018 00:43:38 -0700 Subject: [PATCH 289/539] command/connect/proxy: accept -service and -upstream --- command/connect/proxy/flag_upstreams.go | 54 ++++++++++ command/connect/proxy/flag_upstreams_test.go | 106 +++++++++++++++++++ command/connect/proxy/proxy.go | 28 ++++- 3 files changed, 186 insertions(+), 2 deletions(-) create mode 100644 command/connect/proxy/flag_upstreams.go create mode 100644 command/connect/proxy/flag_upstreams_test.go diff --git a/command/connect/proxy/flag_upstreams.go b/command/connect/proxy/flag_upstreams.go new file mode 100644 index 000000000..02ce3a3bb --- /dev/null +++ b/command/connect/proxy/flag_upstreams.go @@ -0,0 +1,54 @@ +package proxy + +import ( + "fmt" + "strconv" + "strings" + + "github.com/hashicorp/consul/connect/proxy" +) + +// FlagUpstreams implements the flag.Value interface and allows specifying +// the -upstream flag multiple times and keeping track of the name of 
the +// upstream and the local port. +// +// The syntax of the value is "name:addr" where addr can be "port" or +// "host:port". Examples: "db:8181", "db:127.0.0.10:8282", etc. +type FlagUpstreams map[string]proxy.UpstreamConfig + +func (f *FlagUpstreams) String() string { + return fmt.Sprintf("%v", *f) +} + +func (f *FlagUpstreams) Set(value string) error { + idx := strings.Index(value, ":") + if idx == -1 { + return fmt.Errorf("Upstream value should be name:addr in %q", value) + } + + addr := "" + name := value[:idx] + portRaw := value[idx+1:] + if idx := strings.Index(portRaw, ":"); idx != -1 { + addr = portRaw[:idx] + portRaw = portRaw[idx+1:] + } + + port, err := strconv.ParseInt(portRaw, 0, 0) + if err != nil { + return err + } + + if *f == nil { + *f = make(map[string]proxy.UpstreamConfig) + } + + (*f)[name] = proxy.UpstreamConfig{ + LocalBindAddress: addr, + LocalBindPort: int(port), + DestinationName: name, + DestinationType: "service", + } + + return nil +} diff --git a/command/connect/proxy/flag_upstreams_test.go b/command/connect/proxy/flag_upstreams_test.go new file mode 100644 index 000000000..d43c49d03 --- /dev/null +++ b/command/connect/proxy/flag_upstreams_test.go @@ -0,0 +1,106 @@ +package proxy + +import ( + "flag" + "testing" + + "github.com/hashicorp/consul/connect/proxy" + "github.com/stretchr/testify/require" +) + +func TestFlagUpstreams_impl(t *testing.T) { + var _ flag.Value = new(FlagUpstreams) +} + +func TestFlagUpstreams(t *testing.T) { + cases := []struct { + Name string + Input []string + Expected map[string]proxy.UpstreamConfig + Error string + }{ + { + "bad format", + []string{"foo"}, + nil, + "should be name:addr", + }, + + { + "port not int", + []string{"db:hello"}, + nil, + "invalid syntax", + }, + + { + "4 parts", + []string{"db:127.0.0.1:8181:foo"}, + nil, + "invalid syntax", + }, + + { + "single value", + []string{"db:8181"}, + map[string]proxy.UpstreamConfig{ + "db": proxy.UpstreamConfig{ + LocalBindPort: 8181, + DestinationName: "db", + DestinationType: "service", + }, + }, + "", + }, + + { + "address specified", + []string{"db:127.0.0.55:8181"}, + map[string]proxy.UpstreamConfig{ + "db": proxy.UpstreamConfig{ + LocalBindAddress: "127.0.0.55", + LocalBindPort: 8181, + DestinationName: "db", + DestinationType: "service", + }, + }, + "", + }, + + { + "repeat value, overwrite", + []string{"db:8181", "db:8282"}, + map[string]proxy.UpstreamConfig{ + "db": proxy.UpstreamConfig{ + LocalBindPort: 8282, + DestinationName: "db", + DestinationType: "service", + }, + }, + "", + }, + } + + for _, tc := range cases { + t.Run(tc.Name, func(t *testing.T) { + require := require.New(t) + + var actual map[string]proxy.UpstreamConfig + f := (*FlagUpstreams)(&actual) + + var err error + for _, input := range tc.Input { + err = f.Set(input) + // Note we only test the last error. This could make some + // test failures confusing but it shouldn't be too bad. + } + if tc.Error != "" { + require.Error(err) + require.Contains(err.Error(), tc.Error) + return + } + + require.Equal(tc.Expected, actual) + }) + } +} diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index 83406e0fb..fc9e65b86 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -42,6 +42,8 @@ type cmd struct { cfgFile string proxyID string pprofAddr string + service string + upstreams map[string]proxyImpl.UpstreamConfig } func (c *cmd) init() { @@ -66,6 +68,14 @@ func (c *cmd) init() { "Enable debugging via pprof. 
Providing a host:port (or just ':port') "+ "enables profiling HTTP endpoints on that address.") + c.flags.StringVar(&c.service, "service", "", + "Name of the service this proxy is representing.") + + c.flags.Var((*FlagUpstreams)(&c.upstreams), "upstream", + "Upstream service to support connecting to. The format should be "+ + "'name:addr', such as 'db:8181'. This will make 'db' available "+ + "on port 8181.") + c.http = &flags.HTTPFlags{} flags.Merge(c.flags, c.http.ClientFlags()) flags.Merge(c.flags, c.http.ServerFlags()) @@ -158,13 +168,27 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) } // Use the configured proxy ID - if c.proxyID == "" { + if c.proxyID != "" { + return proxyImpl.NewAgentConfigWatcher(client, c.proxyID, c.logger) + } + + // Otherwise, we're representing a manually specified service. + if c.service == "" { return nil, fmt.Errorf( "-service or -proxy-id must be specified so that proxy can " + "configure itself.") } - return proxyImpl.NewAgentConfigWatcher(client, c.proxyID, c.logger) + // Convert our upstreams to a slice of configurations + upstreams := make([]proxyImpl.UpstreamConfig, 0, len(c.upstreams)) + for _, u := range c.upstreams { + upstreams = append(upstreams, u) + } + + return proxyImpl.NewStaticConfigWatcher(&proxyImpl.Config{ + ProxiedServiceName: c.service, + Upstreams: upstreams, + }), nil } func (c *cmd) Synopsis() string { From b88023c607a7f5355b3acd401db5dd4d808d25a8 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 19 May 2018 00:46:06 -0700 Subject: [PATCH 290/539] connect/proxy: don't start public listener if 0 port --- connect/proxy/proxy.go | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/connect/proxy/proxy.go b/connect/proxy/proxy.go index 4b740e22f..b92e4e0b5 100644 --- a/connect/proxy/proxy.go +++ b/connect/proxy/proxy.go @@ -60,11 +60,15 @@ func (p *Proxy) Serve() error { p.logger.Printf("[DEBUG] leaf: %s roots: %s", leaf.URIs[0], bytes.Join(tcfg.RootCAs.Subjects(), []byte(","))) }() - newCfg.PublicListener.applyDefaults() - l := NewPublicListener(p.service, newCfg.PublicListener, p.logger) - err = p.startListener("public listener", l) - if err != nil { - return err + // Only start a listener if we have a port set. This allows + // the configuration to disable our public listener. 
+ if newCfg.PublicListener.BindPort != 0 { + newCfg.PublicListener.applyDefaults() + l := NewPublicListener(p.service, newCfg.PublicListener, p.logger) + err = p.startListener("public listener", l) + if err != nil { + return err + } } } From 99094d70b0f6713334c1d6e911cae8e0824484f8 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 19 May 2018 10:47:30 -0700 Subject: [PATCH 291/539] connect/proxy: add a full proxy test, parallel --- connect/proxy/config_test.go | 6 +++ connect/proxy/conn_test.go | 6 +++ connect/proxy/listener_test.go | 16 ++++--- connect/proxy/proxy_test.go | 79 ++++++++++++++++++++++++++++++++++ connect/proxy/testing.go | 19 +++++--- 5 files changed, 114 insertions(+), 12 deletions(-) create mode 100644 connect/proxy/proxy_test.go diff --git a/connect/proxy/config_test.go b/connect/proxy/config_test.go index 87bb43c81..2e214c6dd 100644 --- a/connect/proxy/config_test.go +++ b/connect/proxy/config_test.go @@ -15,6 +15,8 @@ import ( ) func TestParseConfigFile(t *testing.T) { + t.Parallel() + cfg, err := ParseConfigFile("testdata/config-kitchensink.hcl") require.Nil(t, err) @@ -54,6 +56,8 @@ func TestParseConfigFile(t *testing.T) { } func TestUpstreamResolverFromClient(t *testing.T) { + t.Parallel() + tests := []struct { name string cfg UpstreamConfig @@ -115,6 +119,8 @@ func TestUpstreamResolverFromClient(t *testing.T) { } func TestAgentConfigWatcher(t *testing.T) { + t.Parallel() + a := agent.NewTestAgent("agent_smith", "") defer a.Shutdown() diff --git a/connect/proxy/conn_test.go b/connect/proxy/conn_test.go index 4de428ad0..29dcdd3e3 100644 --- a/connect/proxy/conn_test.go +++ b/connect/proxy/conn_test.go @@ -64,6 +64,8 @@ func testConnPipelineSetup(t *testing.T) (net.Conn, net.Conn, *Conn, func()) { } func TestConn(t *testing.T) { + t.Parallel() + src, dst, c, stop := testConnPipelineSetup(t) defer stop() @@ -117,6 +119,8 @@ func TestConn(t *testing.T) { } func TestConnSrcClosing(t *testing.T) { + t.Parallel() + src, dst, c, stop := testConnPipelineSetup(t) defer stop() @@ -155,6 +159,8 @@ func TestConnSrcClosing(t *testing.T) { } func TestConnDstClosing(t *testing.T) { + t.Parallel() + src, dst, c, stop := testConnPipelineSetup(t) defer stop() diff --git a/connect/proxy/listener_test.go b/connect/proxy/listener_test.go index a0bc640d7..d63f5818b 100644 --- a/connect/proxy/listener_test.go +++ b/connect/proxy/listener_test.go @@ -15,23 +15,23 @@ import ( ) func TestPublicListener(t *testing.T) { + t.Parallel() + ca := agConnect.TestCA(t, nil) - ports := freeport.GetT(t, 2) + ports := freeport.GetT(t, 1) + + testApp := NewTestTCPServer(t) + defer testApp.Close() cfg := PublicListenerConfig{ BindAddress: "127.0.0.1", BindPort: ports[0], - LocalServiceAddress: TestLocalAddr(ports[1]), + LocalServiceAddress: testApp.Addr().String(), HandshakeTimeoutMs: 100, LocalConnectTimeoutMs: 100, } - testApp, err := NewTestTCPServer(t, cfg.LocalServiceAddress) - require.NoError(t, err) - defer testApp.Close() - svc := connect.TestService(t, "db", ca) - l := NewPublicListener(svc, cfg, log.New(os.Stderr, "", log.LstdFlags)) // Run proxy @@ -53,6 +53,8 @@ func TestPublicListener(t *testing.T) { } func TestUpstreamListener(t *testing.T) { + t.Parallel() + ca := agConnect.TestCA(t, nil) ports := freeport.GetT(t, 1) diff --git a/connect/proxy/proxy_test.go b/connect/proxy/proxy_test.go new file mode 100644 index 000000000..681f802c4 --- /dev/null +++ b/connect/proxy/proxy_test.go @@ -0,0 +1,79 @@ +package proxy + +import ( + "context" + "log" + "net" + "os" + "testing" + + 
"github.com/hashicorp/consul/agent" + agConnect "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/connect" + "github.com/hashicorp/consul/lib/freeport" + "github.com/hashicorp/consul/testutil/retry" + "github.com/stretchr/testify/require" +) + +func TestProxy_public(t *testing.T) { + t.Parallel() + + require := require.New(t) + ports := freeport.GetT(t, 1) + + a := agent.NewTestAgent(t.Name(), "") + defer a.Shutdown() + client := a.Client() + + // Register the service so we can get a leaf cert + _, err := client.Catalog().Register(&api.CatalogRegistration{ + Datacenter: "dc1", + Node: "local", + Address: "127.0.0.1", + Service: &api.AgentService{ + Service: "echo", + }, + }, nil) + require.NoError(err) + + // Start the backend service that is being proxied + testApp := NewTestTCPServer(t) + defer testApp.Close() + + // Start the proxy + p, err := New(client, NewStaticConfigWatcher(&Config{ + ProxiedServiceName: "echo", + PublicListener: PublicListenerConfig{ + BindAddress: "127.0.0.1", + BindPort: ports[0], + LocalServiceAddress: testApp.Addr().String(), + }, + }), testLogger(t)) + require.NoError(err) + defer p.Close() + go p.Serve() + + // Create a test connection to the proxy. We retry here a few times + // since this is dependent on the agent actually starting up and setting + // up the CA. + var conn net.Conn + svc, err := connect.NewService("echo", client) + require.NoError(err) + retry.Run(t, func(r *retry.R) { + conn, err = svc.Dial(context.Background(), &connect.StaticResolver{ + Addr: TestLocalAddr(ports[0]), + CertURI: agConnect.TestSpiffeIDService(t, "echo"), + }) + if err != nil { + r.Fatalf("err: %s", err) + } + }) + + // Connection works, test it is the right one + TestEchoConn(t, conn, "") +} + +func testLogger(t *testing.T) *log.Logger { + return log.New(os.Stderr, "", log.LstdFlags) +} diff --git a/connect/proxy/testing.go b/connect/proxy/testing.go index f986cfe50..6b1b2636f 100644 --- a/connect/proxy/testing.go +++ b/connect/proxy/testing.go @@ -7,6 +7,7 @@ import ( "net" "sync/atomic" + "github.com/hashicorp/consul/lib/freeport" "github.com/mitchellh/go-testing-interface" "github.com/stretchr/testify/require" ) @@ -26,17 +27,20 @@ type TestTCPServer struct { // NewTestTCPServer opens as a listening socket on the given address and returns // a TestTCPServer serving requests to it. The server is already started and can // be stopped by calling Close(). -func NewTestTCPServer(t testing.T, addr string) (*TestTCPServer, error) { +func NewTestTCPServer(t testing.T) *TestTCPServer { + port := freeport.GetT(t, 1) + addr := TestLocalAddr(port[0]) + l, err := net.Listen("tcp", addr) - if err != nil { - return nil, err - } + require.NoError(t, err) + log.Printf("test tcp server listening on %s", addr) s := &TestTCPServer{ l: l, } go s.accept() - return s, nil + + return s } // Close stops the server @@ -47,6 +51,11 @@ func (s *TestTCPServer) Close() { } } +// Addr returns the address that this server is listening on. 
+func (s *TestTCPServer) Addr() net.Addr { + return s.l.Addr() +} + func (s *TestTCPServer) accept() error { for { conn, err := s.l.Accept() From b531919181649b6b4983b8d84f9cd79717154ef1 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 19 May 2018 11:04:47 -0700 Subject: [PATCH 292/539] command/connect/proxy: tests for configuration --- command/connect/proxy/proxy.go | 30 ++++++++--- command/connect/proxy/proxy_test.go | 83 +++++++++++++++++++++++++++++ 2 files changed, 106 insertions(+), 7 deletions(-) create mode 100644 command/connect/proxy/proxy_test.go diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index fc9e65b86..5d219d3ef 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -8,6 +8,7 @@ import ( "net/http" _ "net/http/pprof" // Expose pprof if configured "os" + "sort" proxyAgent "github.com/hashicorp/consul/agent/proxy" "github.com/hashicorp/consul/api" @@ -44,6 +45,9 @@ type cmd struct { pprofAddr string service string upstreams map[string]proxyImpl.UpstreamConfig + + // test flags + testNoStart bool // don't start the proxy, just exit 0 } func (c *cmd) init() { @@ -86,6 +90,10 @@ func (c *cmd) Run(args []string) int { if err := c.flags.Parse(args); err != nil { return 1 } + if len(c.flags.Args()) > 0 { + c.UI.Error(fmt.Sprintf("Should have no non-flag arguments.")) + return 1 + } // Load the proxy ID and token from env vars if they're set if c.proxyID == "" { @@ -147,10 +155,11 @@ func (c *cmd) Run(args []string) int { c.UI.Output("Log data will now stream in as it occurs:\n") logGate.Flush() - // Run the proxy - err = p.Serve() - if err != nil { - c.UI.Error(fmt.Sprintf("Failed running proxy: %s", err)) + // Run the proxy unless our tests require we don't + if !c.testNoStart { + if err := p.Serve(); err != nil { + c.UI.Error(fmt.Sprintf("Failed running proxy: %s", err)) + } } c.UI.Output("Consul Connect proxy shutdown") @@ -179,10 +188,17 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) "configure itself.") } - // Convert our upstreams to a slice of configurations + // Convert our upstreams to a slice of configurations. We do this + // deterministically by alphabetizing the upstream keys. We do this so + // that tests can compare the upstream values. 
+ upstreamKeys := make([]string, 0, len(c.upstreams)) + for k := range c.upstreams { + upstreamKeys = append(upstreamKeys, k) + } + sort.Strings(upstreamKeys) upstreams := make([]proxyImpl.UpstreamConfig, 0, len(c.upstreams)) - for _, u := range c.upstreams { - upstreams = append(upstreams, u) + for _, k := range upstreamKeys { + upstreams = append(upstreams, c.upstreams[k]) } return proxyImpl.NewStaticConfigWatcher(&proxyImpl.Config{ diff --git a/command/connect/proxy/proxy_test.go b/command/connect/proxy/proxy_test.go new file mode 100644 index 000000000..b99fbfae1 --- /dev/null +++ b/command/connect/proxy/proxy_test.go @@ -0,0 +1,83 @@ +package proxy + +import ( + "testing" + "time" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/connect/proxy" + "github.com/mitchellh/cli" + "github.com/stretchr/testify/require" +) + +func TestCommandConfigWatcher(t *testing.T) { + t.Parallel() + + cases := []struct { + Name string + Flags []string + Test func(*testing.T, *proxy.Config) + }{ + { + "-service flag only", + []string{"-service", "web"}, + func(t *testing.T, cfg *proxy.Config) { + require.Equal(t, 0, cfg.PublicListener.BindPort) + require.Len(t, cfg.Upstreams, 0) + }, + }, + + { + "-service flag with upstreams", + []string{ + "-service", "web", + "-upstream", "db:1234", + "-upstream", "db2:2345", + }, + func(t *testing.T, cfg *proxy.Config) { + require.Equal(t, 0, cfg.PublicListener.BindPort) + require.Len(t, cfg.Upstreams, 2) + require.Equal(t, 1234, cfg.Upstreams[0].LocalBindPort) + require.Equal(t, 2345, cfg.Upstreams[1].LocalBindPort) + }, + }, + } + + for _, tc := range cases { + t.Run(tc.Name, func(t *testing.T) { + require := require.New(t) + + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + ui := cli.NewMockUi() + c := New(ui, make(chan struct{})) + c.testNoStart = true + + // Run and purposely fail the command + code := c.Run(append([]string{ + "-http-addr=" + a.HTTPAddr(), + }, tc.Flags...)) + require.Equal(0, code, ui.ErrorWriter.String()) + + // Get the configuration watcher + cw, err := c.configWatcher(client) + require.NoError(err) + tc.Test(t, testConfig(t, cw)) + }) + } +} + +func testConfig(t *testing.T, cw proxy.ConfigWatcher) *proxy.Config { + t.Helper() + + select { + case cfg := <-cw.Watch(): + return cfg + + case <-time.After(1 * time.Second): + t.Fatal("no configuration loaded") + return nil // satisfy compiler + } +} From a750254b28714100c9477deac74f17bb67fa74a1 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 19 May 2018 11:20:20 -0700 Subject: [PATCH 293/539] command/connect/proxy: can set public listener from flags --- command/connect/proxy/proxy.go | 45 ++++++++++++++++++++++++----- command/connect/proxy/proxy_test.go | 26 +++++++++++++++++ 2 files changed, 64 insertions(+), 7 deletions(-) diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index 5d219d3ef..0de7d6670 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -5,10 +5,12 @@ import ( "fmt" "io" "log" + "net" "net/http" _ "net/http/pprof" // Expose pprof if configured "os" "sort" + "strconv" proxyAgent "github.com/hashicorp/consul/agent/proxy" "github.com/hashicorp/consul/api" @@ -39,12 +41,14 @@ type cmd struct { logger *log.Logger // flags - logLevel string - cfgFile string - proxyID string - pprofAddr string - service string - upstreams map[string]proxyImpl.UpstreamConfig + logLevel string + cfgFile string + proxyID string + pprofAddr string + service string + serviceAddr 
string + upstreams map[string]proxyImpl.UpstreamConfig + listen string // test flags testNoStart bool // don't start the proxy, just exit 0 @@ -78,7 +82,15 @@ func (c *cmd) init() { c.flags.Var((*FlagUpstreams)(&c.upstreams), "upstream", "Upstream service to support connecting to. The format should be "+ "'name:addr', such as 'db:8181'. This will make 'db' available "+ - "on port 8181.") + "on port 8181. This can be repeated multiple times.") + + c.flags.StringVar(&c.serviceAddr, "service-addr", "", + "Address of the local service to proxy. Only useful if -listen "+ + "and -service are both set.") + + c.flags.StringVar(&c.listen, "listen", "", + "Address to listen for inbound connections to the proxied service. "+ + "Must be specified with -service and -service-addr.") c.http = &flags.HTTPFlags{} flags.Merge(c.flags, c.http.ClientFlags()) @@ -201,8 +213,27 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) upstreams = append(upstreams, c.upstreams[k]) } + // Parse out our listener if we have one + var listener proxyImpl.PublicListenerConfig + if c.listen != "" { + host, portRaw, err := net.SplitHostPort(c.listen) + if err != nil { + return nil, err + } + + port, err := strconv.ParseInt(portRaw, 0, 0) + if err != nil { + return nil, err + } + + listener.BindAddress = host + listener.BindPort = int(port) + listener.LocalServiceAddress = c.serviceAddr + } + return proxyImpl.NewStaticConfigWatcher(&proxyImpl.Config{ ProxiedServiceName: c.service, + PublicListener: listener, Upstreams: upstreams, }), nil } diff --git a/command/connect/proxy/proxy_test.go b/command/connect/proxy/proxy_test.go index b99fbfae1..9970a6d7a 100644 --- a/command/connect/proxy/proxy_test.go +++ b/command/connect/proxy/proxy_test.go @@ -41,6 +41,32 @@ func TestCommandConfigWatcher(t *testing.T) { require.Equal(t, 2345, cfg.Upstreams[1].LocalBindPort) }, }, + + { + "-service flag with -service-addr", + []string{"-service", "web"}, + func(t *testing.T, cfg *proxy.Config) { + // -service-addr has no affect since -listen isn't set + require.Equal(t, 0, cfg.PublicListener.BindPort) + require.Len(t, cfg.Upstreams, 0) + }, + }, + + { + "-service, -service-addr, -listen", + []string{ + "-service", "web", + "-service-addr", "127.0.0.1:1234", + "-listen", ":4567", + }, + func(t *testing.T, cfg *proxy.Config) { + require.Len(t, cfg.Upstreams, 0) + + require.Equal(t, "", cfg.PublicListener.BindAddress) + require.Equal(t, 4567, cfg.PublicListener.BindPort) + require.Equal(t, "127.0.0.1:1234", cfg.PublicListener.LocalServiceAddress) + }, + }, } for _, tc := range cases { From 01c35641584faf132f6ba1fefbd5b57365bd0126 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 19 May 2018 11:22:00 -0700 Subject: [PATCH 294/539] command/connect/proxy: -service-addr required for -listen --- command/connect/proxy/proxy.go | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index 0de7d6670..fd6e9c3b3 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -226,6 +226,12 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) return nil, err } + if c.serviceAddr == "" { + return nil, fmt.Errorf( + "-service-addr must be specified with -listen so the proxy " + + "knows the backend service address.") + } + listener.BindAddress = host listener.BindPort = int(port) listener.LocalServiceAddress = c.serviceAddr From 82ba16775772e5ca51bd397dc62bddf8cca552c1 Mon Sep 17 00:00:00 2001 From: 
Mitchell Hashimoto Date: Sat, 19 May 2018 11:34:00 -0700 Subject: [PATCH 295/539] command/connect/proxy: detailed help --- command/connect/proxy/proxy.go | 29 ++++++++++++++++++++++++++++- 1 file changed, 28 insertions(+), 1 deletion(-) diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index fd6e9c3b3..b6b5dcc30 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -254,7 +254,34 @@ func (c *cmd) Help() string { const synopsis = "Runs a Consul Connect proxy" const help = ` -Usage: consul proxy [options] +Usage: consul connect proxy [options] Starts a Consul Connect proxy and runs until an interrupt is received. + The proxy can be used to accept inbound connections for a service, + wrap outbound connections to upstream services, or both. This enables + a non-Connect-aware application to use Connect. + + The proxy requires service:write permissions for the service it represents. + + Consul can automatically start and manage this proxy by specifying the + "proxy" configuration within your service definition. + + The example below shows how to start a local proxy for establishing outbound + connections to "db" representing the frontend service. Once running, any + process that creates a TCP connection to the specified port (8181) will + establish a mutual TLS connection to "db" identified as "frontend". + + $ consul connect proxy -service frontend -upstream db:8181 + + The next example starts a local proxy that also accepts inbound connections + on port 8443, authorizes the connection, then proxies it to port 8080: + + $ consul connect proxy \ + -service frontend \ + -service-addr 127.0.0.1:8080 \ + -listen ':8443' + + A proxy can accept both inbound connections as well as proxy to upstream + services by specifying both the "-listen" and "-upstream" flags. + ` From 351a9585e4cecd8622891857e7a9648bff7b7be7 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 19 May 2018 15:47:25 -0700 Subject: [PATCH 296/539] command/connect/proxy: output information when starting similar to agent --- command/connect/proxy/proxy.go | 36 +++++++++++++++++++++++++++++++--- 1 file changed, 33 insertions(+), 3 deletions(-) diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index b6b5dcc30..f6a5576fa 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -23,6 +23,13 @@ import ( ) func New(ui cli.Ui, shutdownCh <-chan struct{}) *cmd { + ui = &cli.PrefixedUi{ + OutputPrefix: "==> ", + InfoPrefix: " ", + ErrorPrefix: "==> ", + Ui: ui, + } + c := &cmd{UI: ui, shutdownCh: shutdownCh} c.init() return c @@ -49,6 +56,7 @@ type cmd struct { serviceAddr string upstreams map[string]proxyImpl.UpstreamConfig listen string + register bool // test flags testNoStart bool // don't start the proxy, just exit 0 @@ -92,6 +100,10 @@ func (c *cmd) init() { "Address to listen for inbound connections to the proxied service. "+ "Must be specified with -service and -service-addr.") + c.flags.BoolVar(&c.register, "register", false, + "Self-register with the local Consul agent. Only useful with "+ + "-listen.") + c.http = &flags.HTTPFlags{} flags.Merge(c.flags, c.http.ClientFlags()) flags.Merge(c.flags, c.http.ServerFlags()) @@ -143,6 +155,10 @@ func (c *cmd) Run(args []string) int { return 1 } + // Output this first since the config watcher below will output + // other information. 
+ c.UI.Output("Consul Connect proxy starting...") + // Get the proper configuration watcher cfgWatcher, err := c.configWatcher(client) if err != nil { @@ -162,8 +178,7 @@ func (c *cmd) Run(args []string) int { p.Close() }() - c.UI.Output("Consul Connect proxy starting") - + c.UI.Info("") c.UI.Output("Log data will now stream in as it occurs:\n") logGate.Flush() @@ -185,11 +200,15 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) if err != nil { return nil, err } + + c.UI.Info("Configuration mode: File") return proxyImpl.NewStaticConfigWatcher(cfg), nil } // Use the configured proxy ID if c.proxyID != "" { + c.UI.Info("Configuration mode: Agent API") + c.UI.Info(fmt.Sprintf(" Proxy ID: %s", c.proxyID)) return proxyImpl.NewAgentConfigWatcher(client, c.proxyID, c.logger) } @@ -200,6 +219,9 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) "configure itself.") } + c.UI.Info("Configuration mode: Flags") + c.UI.Info(fmt.Sprintf(" Service: %s", c.service)) + // Convert our upstreams to a slice of configurations. We do this // deterministically by alphabetizing the upstream keys. We do this so // that tests can compare the upstream values. @@ -210,7 +232,12 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) sort.Strings(upstreamKeys) upstreams := make([]proxyImpl.UpstreamConfig, 0, len(c.upstreams)) for _, k := range upstreamKeys { - upstreams = append(upstreams, c.upstreams[k]) + config := c.upstreams[k] + + c.UI.Info(fmt.Sprintf( + " Upstream: %s => %s:%d", + k, config.LocalBindAddress, config.LocalBindPort)) + upstreams = append(upstreams, config) } // Parse out our listener if we have one @@ -232,9 +259,12 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) "knows the backend service address.") } + c.UI.Info(fmt.Sprintf(" Public listener: %s:%d => %s", host, int(port), c.serviceAddr)) listener.BindAddress = host listener.BindPort = int(port) listener.LocalServiceAddress = c.serviceAddr + } else { + c.UI.Info(fmt.Sprintf(" Public listener: Disabled")) } return proxyImpl.NewStaticConfigWatcher(&proxyImpl.Config{ From 021782c36b77028ead192795eae4885f5bd45aa3 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 20 May 2018 10:04:29 -0700 Subject: [PATCH 297/539] command/connect/proxy: register monitor and -register flag --- command/connect/proxy/proxy.go | 64 ++++++- command/connect/proxy/register.go | 293 ++++++++++++++++++++++++++++++ 2 files changed, 349 insertions(+), 8 deletions(-) create mode 100644 command/connect/proxy/register.go diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index f6a5576fa..96c6a75cb 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -57,6 +57,7 @@ type cmd struct { upstreams map[string]proxyImpl.UpstreamConfig listen string register bool + registerId string // test flags testNoStart bool // don't start the proxy, just exit 0 @@ -104,6 +105,9 @@ func (c *cmd) init() { "Self-register with the local Consul agent. Only useful with "+ "-listen.") + c.flags.StringVar(&c.registerId, "register-id", "", + "ID suffix for the service. 
Use this to disambiguate with other proxies.") + c.http = &flags.HTTPFlags{} flags.Merge(c.flags, c.http.ClientFlags()) flags.Merge(c.flags, c.http.ServerFlags()) @@ -178,6 +182,18 @@ func (c *cmd) Run(args []string) int { p.Close() }() + // Register the service if we requested it + if c.register { + monitor, err := c.registerMonitor(client) + if err != nil { + c.UI.Error(fmt.Sprintf("Failed initializing registration: %s", err)) + return 1 + } + + go monitor.Run() + defer monitor.Close() + } + c.UI.Info("") c.UI.Output("Log data will now stream in as it occurs:\n") logGate.Flush() @@ -243,12 +259,7 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) // Parse out our listener if we have one var listener proxyImpl.PublicListenerConfig if c.listen != "" { - host, portRaw, err := net.SplitHostPort(c.listen) - if err != nil { - return nil, err - } - - port, err := strconv.ParseInt(portRaw, 0, 0) + host, port, err := c.listenParts() if err != nil { return nil, err } @@ -259,9 +270,9 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) "knows the backend service address.") } - c.UI.Info(fmt.Sprintf(" Public listener: %s:%d => %s", host, int(port), c.serviceAddr)) + c.UI.Info(fmt.Sprintf(" Public listener: %s:%d => %s", host, port, c.serviceAddr)) listener.BindAddress = host - listener.BindPort = int(port) + listener.BindPort = port listener.LocalServiceAddress = c.serviceAddr } else { c.UI.Info(fmt.Sprintf(" Public listener: Disabled")) @@ -274,6 +285,43 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) }), nil } +// registerMonitor returns the registration monitor ready to be started. +func (c *cmd) registerMonitor(client *api.Client) (*RegisterMonitor, error) { + if c.service == "" || c.listen == "" { + return nil, fmt.Errorf("-register may only be specified with -service and -listen") + } + + host, port, err := c.listenParts() + if err != nil { + return nil, err + } + + m := NewRegisterMonitor() + m.Logger = c.logger + m.Client = client + m.Service = c.service + m.IDSuffix = c.registerId + m.LocalAddress = host + m.LocalPort = port + return m, nil +} + +// listenParts returns the host and port parts of the -listen flag. The +// -listen flag must be non-empty prior to calling this. +func (c *cmd) listenParts() (string, int, error) { + host, portRaw, err := net.SplitHostPort(c.listen) + if err != nil { + return "", 0, err + } + + port, err := strconv.ParseInt(portRaw, 0, 0) + if err != nil { + return "", 0, err + } + + return host, int(port), nil +} + func (c *cmd) Synopsis() string { return synopsis } diff --git a/command/connect/proxy/register.go b/command/connect/proxy/register.go new file mode 100644 index 000000000..09f7e0fd4 --- /dev/null +++ b/command/connect/proxy/register.go @@ -0,0 +1,293 @@ +package proxy + +import ( + "fmt" + "log" + "os" + "sync" + "time" + + "github.com/hashicorp/consul/api" +) + +const ( + // RegisterReconcilePeriod is how often the monitor will attempt to + // reconcile the expected service state with the remote Consul server. + RegisterReconcilePeriod = 30 * time.Second + + // RegisterTTLPeriod is the TTL setting for the health check of the + // service. The monitor will automatically pass the health check + // three times per this period to be more resilient to failures. + RegisterTTLPeriod = 30 * time.Second +) + +// RegisterMonitor registers the proxy with the local Consul agent with a TTL +// health check that is kept alive. 
+//
+// This struct should be initialized with NewRegisterMonitor instead of being
+// allocated directly. Using this struct without calling NewRegisterMonitor
+// will result in panics.
+type RegisterMonitor struct {
+	// Logger is the logger for the monitor.
+	Logger *log.Logger
+
+	// Client is the API client to a specific Consul agent. This agent is
+	// where the service will be registered.
+	Client *api.Client
+
+	// Service is the name of the service being proxied.
+	Service string
+
+	// LocalAddress and LocalPort are the address and port of the proxy
+	// itself, NOT the service being proxied.
+	LocalAddress string
+	LocalPort    int
+
+	// IDSuffix is a unique ID that is appended to the end of the service
+	// name. This helps the service be unique. By default the service ID
+	// is just the proxied service name followed by "-proxy".
+	IDSuffix string
+
+	// The fields below are related to timing settings. See the default
+	// constants for more documentation on what they set.
+	ReconcilePeriod time.Duration
+	TTLPeriod       time.Duration
+
+	// lock is held while reading/writing any internal state of the monitor.
+	// cond is a condition variable on lock that is broadcasted for runState
+	// changes.
+	lock *sync.Mutex
+	cond *sync.Cond
+
+	// runState is the current state of the monitor. To read this the
+	// lock must be held. The condition variable cond can be waited on
+	// for changes to this value.
+	runState registerRunState
+}
+
+// registerRunState is the state of the RegisterMonitor.
+//
+// This is a basic state machine with the following transitions:
+//
+// * idle => running, stopped
+// * running => stopping, stopped
+// * stopping => stopped
+// * stopped => <>
+//
+type registerRunState uint8
+
+const (
+	registerStateIdle registerRunState = iota
+	registerStateRunning
+	registerStateStopping
+	registerStateStopped
+)
+
+// NewRegisterMonitor initializes a RegisterMonitor. After initialization,
+// the exported fields should be configured as desired. To start the monitor,
+// execute Run in a goroutine.
+func NewRegisterMonitor() *RegisterMonitor {
+	var lock sync.Mutex
+	return &RegisterMonitor{
+		Logger:          log.New(os.Stderr, "", log.LstdFlags), // default logger
+		ReconcilePeriod: RegisterReconcilePeriod,
+		TTLPeriod:       RegisterTTLPeriod,
+		lock:            &lock,
+		cond:            sync.NewCond(&lock),
+	}
+}
+
+// Run should be started in a goroutine and will keep Consul updated
+// in the background with the state of this proxy. If registration fails
+// this will continue to retry.
+func (r *RegisterMonitor) Run() {
+	// Grab the lock and set our state. If we're not idle, then we return
+	// immediately since the monitor is only allowed to run once.
+	r.lock.Lock()
+	if r.runState != registerStateIdle {
+		r.lock.Unlock()
+		return
+	}
+	r.runState = registerStateRunning
+	r.lock.Unlock()
+
+	// Start a goroutine that just waits for a stop request
+	stopCh := make(chan struct{})
+	go func() {
+		defer close(stopCh)
+		r.lock.Lock()
+		defer r.lock.Unlock()
+
+		// We wait for anything not running, just so we're more resilient
+		// in the face of state machine issues. Basically any state change
+		// will cause us to quit.
+		for r.runState == registerStateRunning {
+			r.cond.Wait()
+		}
+	}()
+
+	// When we exit, we set the state to stopped and broadcast to any
+	// waiting Close functions that they can return.
+	defer func() {
+		r.lock.Lock()
+		r.runState = registerStateStopped
+		r.cond.Broadcast()
+		r.lock.Unlock()
+	}()
+
+	// Run the first registration optimistically. 
If this fails then its + // okay since we'll just retry shortly. + r.register() + + // Create the timers for trigger events. We don't use tickers because + // we don't want the events to pile on. + reconcileTimer := time.NewTimer(r.ReconcilePeriod) + heartbeatTimer := time.NewTimer(r.TTLPeriod / 3) + + for { + select { + case <-reconcileTimer.C: + r.register() + reconcileTimer.Reset(r.ReconcilePeriod) + + case <-heartbeatTimer.C: + r.heartbeat() + heartbeatTimer.Reset(r.TTLPeriod / 3) + + case <-stopCh: + r.Logger.Printf("[INFO] proxy: stop request received, deregistering") + r.deregister() + return + } + } +} + +// register queries the Consul agent to determine if we've already registered. +// If we haven't or the registered service differs from what we're trying to +// register, then we attempt to register our service. +func (r *RegisterMonitor) register() { + catalog := r.Client.Catalog() + serviceID := r.serviceID() + serviceName := r.serviceName() + + // Determine the current state of this service in Consul + var currentService *api.CatalogService + services, _, err := catalog.Service( + serviceName, "", + &api.QueryOptions{AllowStale: true}) + if err == nil { + for _, service := range services { + if serviceID == service.ServiceID { + currentService = service + break + } + } + } + + // If we have a matching service, then do nothing + if currentService != nil { + r.Logger.Printf("[DEBUG] proxy: service already registered, not re-registering") + return + } + + // If we're here, then we're registering the service. + err = r.Client.Agent().ServiceRegister(&api.AgentServiceRegistration{ + Kind: api.ServiceKindConnectProxy, + ProxyDestination: r.Service, + ID: serviceID, + Name: serviceName, + Address: r.LocalAddress, + Port: r.LocalPort, + Check: &api.AgentServiceCheck{ + CheckID: r.checkID(), + Name: "proxy heartbeat", + TTL: "30s", + Notes: "Built-in proxy will heartbeat this check.", + Status: "passing", + }, + }) + if err != nil { + r.Logger.Printf("[WARN] proxy: Failed to register Consul service: %s", err) + return + } + + r.Logger.Printf("[INFO] proxy: registered Consul service: %s", serviceID) +} + +// heartbeat just pings the TTL check for our service. +func (r *RegisterMonitor) heartbeat() { + // Trigger the health check passing. We don't need to retry this + // since we do a couple tries within the TTL period. + if err := r.Client.Agent().PassTTL(r.checkID(), ""); err != nil { + r.Logger.Printf("[WARN] proxy: heartbeat failed: %s", err) + } +} + +// deregister deregisters the service. +func (r *RegisterMonitor) deregister() { + // Basic retry loop, no backoff for now. But we want to retry a few + // times just in case there are basic ephemeral issues. + for i := 0; i < 3; i++ { + err := r.Client.Agent().ServiceDeregister(r.serviceID()) + if err == nil { + return + } + + r.Logger.Printf("[WARN] proxy: service deregister failed: %s", err) + time.Sleep(500 * time.Millisecond) + } +} + +// Close stops the register goroutines and deregisters the service. Once +// Close is called, the monitor can no longer be used again. It is safe to +// call Close multiple times and concurrently. +func (r *RegisterMonitor) Close() error { + r.lock.Lock() + defer r.lock.Unlock() + + for { + switch r.runState { + case registerStateIdle: + // Idle so just set it to stopped and return. We notify + // the condition variable in case others are waiting. 
+			r.runState = registerStateStopped
+			r.cond.Broadcast()
+			return nil
+
+		case registerStateRunning:
+			// Set the state to stopping and broadcast to all waiters,
+			// since Run is sitting on cond.Wait.
+			r.runState = registerStateStopping
+			r.cond.Broadcast()
+			r.cond.Wait() // Wait on the stopping event
+
+		case registerStateStopping:
+			// Still stopping, wait...
+			r.cond.Wait()
+
+		case registerStateStopped:
+			// Stopped, target state reached
+			return nil
+		}
+	}
+}
+
+// serviceID returns the unique ID for this proxy service.
+func (r *RegisterMonitor) serviceID() string {
+	id := fmt.Sprintf("%s-proxy", r.Service)
+	if r.IDSuffix != "" {
+		id += "-" + r.IDSuffix
+	}
+
+	return id
+}
+
+// serviceName returns the non-unique name of this proxy service.
+func (r *RegisterMonitor) serviceName() string {
+	return fmt.Sprintf("%s-proxy", r.Service)
+}
+
+// checkID is the unique ID for the registered health check.
+func (r *RegisterMonitor) checkID() string {
+	return fmt.Sprintf("%s-ttl", r.serviceID())
+}
From 771842255abf9a321de813d2fac677962ced6551 Mon Sep 17 00:00:00 2001
From: Mitchell Hashimoto
Date: Tue, 22 May 2018 10:33:14 -0700
Subject: [PATCH 298/539] address comment feedback

---
 agent/agent_endpoint.go        | 3 ++-
 command/connect/proxy/proxy.go | 2 ++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/agent/agent_endpoint.go b/agent/agent_endpoint.go
index 239f962a1..1318e8f86 100644
--- a/agent/agent_endpoint.go
+++ b/agent/agent_endpoint.go
@@ -907,7 +907,8 @@ func (s *HTTPServer) AgentConnectCARoots(resp http.ResponseWriter, req *http.Req
 // AgentConnectCALeafCert returns the certificate bundle for a service
 // instance. This supports blocking queries to update the returned bundle.
 func (s *HTTPServer) AgentConnectCALeafCert(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
-	// Get the service ID. Note that this is the ID of a service instance.
+	// Get the service name. Note that this is the name of the service,
+	// not the ID of the service instance.
 	serviceName := strings.TrimPrefix(req.URL.Path, "/v1/agent/connect/ca/leaf/")
 
 	args := cachetype.ConnectCALeafRequest{
diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go
index 96c6a75cb..91df04025 100644
--- a/command/connect/proxy/proxy.go
+++ b/command/connect/proxy/proxy.go
@@ -340,6 +340,8 @@ Usage: consul connect proxy [options]
   a non-Connect-aware application to use Connect.
 
   The proxy requires service:write permissions for the service it represents.
+  The token may be passed via the CLI or the CONSUL_TOKEN environment
+  variable.
 
   Consul can automatically start and manage this proxy by specifying the
   "proxy" configuration within your service definition.
From 510a8a6a6c76121d66e26f9b085a73077b9b65f9 Mon Sep 17 00:00:00 2001
From: Mitchell Hashimoto
Date: Tue, 22 May 2018 10:45:37 -0700
Subject: [PATCH 299/539] connect/proxy: remove dev CA settings

---
 connect/proxy/config.go                       | 16 +---------------
 connect/proxy/config_test.go                  |  3 ---
 connect/proxy/testdata/config-kitchensink.hcl |  5 -----
 3 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/connect/proxy/config.go b/connect/proxy/config.go
index 025809a7b..b0e020ab2 100644
--- a/connect/proxy/config.go
+++ b/connect/proxy/config.go
@@ -36,25 +36,11 @@ type Config struct {
 
 	// Upstreams configures outgoing proxies for remote connect services. 
Upstreams []UpstreamConfig `json:"upstreams" hcl:"upstreams"` - - // DevCAFile, DevServiceCertFile, and DevServiceKeyFile allow configuring - // the certificate information from a static file. This is only for testing - // purposes. All or none must be specified. - DevCAFile string `json:"dev_ca_file" hcl:"dev_ca_file"` - DevServiceCertFile string `json:"dev_service_cert_file" hcl:"dev_service_cert_file"` - DevServiceKeyFile string `json:"dev_service_key_file" hcl:"dev_service_key_file"` } // Service returns the *connect.Service structure represented by this config. func (c *Config) Service(client *api.Client, logger *log.Logger) (*connect.Service, error) { - // If we aren't in dev mode, then we return the configured service. - if c.DevCAFile == "" { - return connect.NewServiceWithLogger(c.ProxiedServiceName, client, logger) - } - - // Dev mode - return connect.NewDevServiceFromCertFiles(c.ProxiedServiceName, - logger, c.DevCAFile, c.DevServiceCertFile, c.DevServiceKeyFile) + return connect.NewServiceWithLogger(c.ProxiedServiceName, client, logger) } // PublicListenerConfig contains the parameters needed for the incoming mTLS diff --git a/connect/proxy/config_test.go b/connect/proxy/config_test.go index 2e214c6dd..d395c9767 100644 --- a/connect/proxy/config_test.go +++ b/connect/proxy/config_test.go @@ -47,9 +47,6 @@ func TestParseConfigFile(t *testing.T) { ConnectTimeoutMs: 10000, }, }, - DevCAFile: "connect/testdata/ca1-ca-consul-internal.cert.pem", - DevServiceCertFile: "connect/testdata/ca1-svc-web.cert.pem", - DevServiceKeyFile: "connect/testdata/ca1-svc-web.key.pem", } require.Equal(t, expect, cfg) diff --git a/connect/proxy/testdata/config-kitchensink.hcl b/connect/proxy/testdata/config-kitchensink.hcl index de3472d0f..2276190ca 100644 --- a/connect/proxy/testdata/config-kitchensink.hcl +++ b/connect/proxy/testdata/config-kitchensink.hcl @@ -5,11 +5,6 @@ token = "11111111-2222-3333-4444-555555555555" proxied_service_name = "web" proxied_service_namespace = "default" -# Assumes running consul in dev mode from the repo root... -dev_ca_file = "connect/testdata/ca1-ca-consul-internal.cert.pem" -dev_service_cert_file = "connect/testdata/ca1-svc-web.cert.pem" -dev_service_key_file = "connect/testdata/ca1-svc-web.key.pem" - public_listener { bind_address = "127.0.0.1" bind_port= "9999" From 118aa0f00a7381b3ae2c12b059aeafa2771f4a53 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 22 May 2018 11:40:08 -0700 Subject: [PATCH 300/539] command/connect/proxy: register monitor tests --- command/connect/proxy/register.go | 7 +- command/connect/proxy/register_test.go | 106 +++++++++++++++++++++++++ 2 files changed, 111 insertions(+), 2 deletions(-) create mode 100644 command/connect/proxy/register_test.go diff --git a/command/connect/proxy/register.go b/command/connect/proxy/register.go index 09f7e0fd4..332607b8a 100644 --- a/command/connect/proxy/register.go +++ b/command/connect/proxy/register.go @@ -184,8 +184,11 @@ func (r *RegisterMonitor) register() { } } - // If we have a matching service, then do nothing - if currentService != nil { + // If we have a matching service, then we verify if we need to reregister + // by comparing if it matches what we expect. 
+ if currentService != nil && + currentService.ServiceAddress == r.LocalAddress && + currentService.ServicePort == r.LocalPort { r.Logger.Printf("[DEBUG] proxy: service already registered, not re-registering") return } diff --git a/command/connect/proxy/register_test.go b/command/connect/proxy/register_test.go new file mode 100644 index 000000000..3a7354247 --- /dev/null +++ b/command/connect/proxy/register_test.go @@ -0,0 +1,106 @@ +package proxy + +import ( + "testing" + "time" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/testutil/retry" + "github.com/stretchr/testify/require" +) + +func TestRegisterMonitor_good(t *testing.T) { + t.Parallel() + require := require.New(t) + + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + m, service := testMonitor(t, client) + defer m.Close() + + // Verify the settings + require.Equal(api.ServiceKindConnectProxy, service.Kind) + require.Equal("foo", service.ProxyDestination) + require.Equal("127.0.0.1", service.Address) + require.Equal(1234, service.Port) + + // Stop should deregister the service + require.NoError(m.Close()) + services, err := client.Agent().Services() + require.NoError(err) + require.NotContains(services, m.serviceID()) +} + +func TestRegisterMonitor_heartbeat(t *testing.T) { + t.Parallel() + require := require.New(t) + + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + client := a.Client() + + m, _ := testMonitor(t, client) + defer m.Close() + + // Get the check and verify that it is passing + checks, err := client.Agent().Checks() + require.NoError(err) + require.Contains(checks, m.checkID()) + require.Equal("passing", checks[m.checkID()].Status) + + // Purposely fail the TTL check, verify it becomes healthy again + require.NoError(client.Agent().FailTTL(m.checkID(), "")) + retry.Run(t, func(r *retry.R) { + checks, err := client.Agent().Checks() + if err != nil { + r.Fatalf("err: %s", err) + } + + check, ok := checks[m.checkID()] + if !ok { + r.Fatal("check not found") + } + + if check.Status != "passing" { + r.Fatalf("check status is bad: %s", check.Status) + } + }) +} + +// testMonitor creates a RegisterMonitor, configures it, and starts it. +// It waits until the service appears in the catalog and then returns. 
+func testMonitor(t *testing.T, client *api.Client) (*RegisterMonitor, *api.AgentService) { + // Setup the monitor + m := NewRegisterMonitor() + m.Client = client + m.Service = "foo" + m.LocalAddress = "127.0.0.1" + m.LocalPort = 1234 + + // We want shorter periods so we can test things + m.ReconcilePeriod = 400 * time.Millisecond + m.TTLPeriod = 200 * time.Millisecond + + // Start the monitor + go m.Run() + + // The service should be registered + var service *api.AgentService + retry.Run(t, func(r *retry.R) { + services, err := client.Agent().Services() + if err != nil { + r.Fatalf("err: %s", err) + } + + var ok bool + service, ok = services[m.serviceID()] + if !ok { + r.Fatal("service not found") + } + }) + + return m, service +} From 4d46bba2c4ed26236349ad7a8c65480d2df611af Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Wed, 23 May 2018 14:43:40 -0700 Subject: [PATCH 301/539] Support giving the duration as a string in CA config --- agent/connect/ca/provider.go | 2 +- agent/connect/ca/provider_consul.go | 16 +---- agent/connect/ca/provider_consul_config.go | 77 ++++++++++++++++++++++ agent/connect/ca/provider_consul_test.go | 2 +- agent/connect_ca_endpoint.go | 23 +++++++ 5 files changed, 103 insertions(+), 17 deletions(-) create mode 100644 agent/connect/ca/provider_consul_config.go diff --git a/agent/connect/ca/provider.go b/agent/connect/ca/provider.go index d557d289c..0fdd3e41b 100644 --- a/agent/connect/ca/provider.go +++ b/agent/connect/ca/provider.go @@ -1,4 +1,4 @@ -package connect +package ca import ( "crypto/x509" diff --git a/agent/connect/ca/provider_consul.go b/agent/connect/ca/provider_consul.go index 20641a16c..afe99db79 100644 --- a/agent/connect/ca/provider_consul.go +++ b/agent/connect/ca/provider_consul.go @@ -1,4 +1,4 @@ -package connect +package ca import ( "bytes" @@ -15,7 +15,6 @@ import ( "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/consul/state" "github.com/hashicorp/consul/agent/structs" - "github.com/mitchellh/mapstructure" ) type ConsulProvider struct { @@ -111,19 +110,6 @@ func NewConsulProvider(rawConfig map[string]interface{}, delegate ConsulProvider return provider, nil } -func ParseConsulCAConfig(raw map[string]interface{}) (*structs.ConsulCAProviderConfig, error) { - var config structs.ConsulCAProviderConfig - if err := mapstructure.WeakDecode(raw, &config); err != nil { - return nil, fmt.Errorf("error decoding config: %s", err) - } - - if config.PrivateKey == "" && config.RootCert != "" { - return nil, fmt.Errorf("must provide a private key when providing a root cert") - } - - return &config, nil -} - // Return the active root CA and generate a new one if needed func (c *ConsulProvider) ActiveRoot() (string, error) { state := c.delegate.State() diff --git a/agent/connect/ca/provider_consul_config.go b/agent/connect/ca/provider_consul_config.go new file mode 100644 index 000000000..e0112b3e2 --- /dev/null +++ b/agent/connect/ca/provider_consul_config.go @@ -0,0 +1,77 @@ +package ca + +import ( + "fmt" + "reflect" + "time" + + "github.com/hashicorp/consul/agent/structs" + "github.com/mitchellh/mapstructure" +) + +func ParseConsulCAConfig(raw map[string]interface{}) (*structs.ConsulCAProviderConfig, error) { + var config structs.ConsulCAProviderConfig + decodeConf := &mapstructure.DecoderConfig{ + DecodeHook: ParseDurationFunc(), + ErrorUnused: true, + Result: &config, + WeaklyTypedInput: true, + } + + decoder, err := mapstructure.NewDecoder(decodeConf) + if err != nil { + return nil, err + } + + if err := 
decoder.Decode(raw); err != nil { + return nil, fmt.Errorf("error decoding config: %s", err) + } + + if config.PrivateKey == "" && config.RootCert != "" { + return nil, fmt.Errorf("must provide a private key when providing a root cert") + } + + return &config, nil +} + +// ParseDurationFunc is a mapstructure hook for decoding a string or +// []uint8 into a time.Duration value. +func ParseDurationFunc() mapstructure.DecodeHookFunc { + uint8ToString := func(bs []uint8) string { + b := make([]byte, len(bs)) + for i, v := range bs { + b[i] = byte(v) + } + return string(b) + } + + return func( + f reflect.Type, + t reflect.Type, + data interface{}) (interface{}, error) { + var v time.Duration + if t != reflect.TypeOf(v) { + return data, nil + } + + switch { + case f.Kind() == reflect.String: + if dur, err := time.ParseDuration(data.(string)); err != nil { + return nil, err + } else { + v = dur + } + return v, nil + case f == reflect.SliceOf(reflect.TypeOf(uint8(0))): + s := uint8ToString(data.([]uint8)) + if dur, err := time.ParseDuration(s); err != nil { + return nil, err + } else { + v = dur + } + return v, nil + default: + return data, nil + } + } +} diff --git a/agent/connect/ca/provider_consul_test.go b/agent/connect/ca/provider_consul_test.go index 9f8cc04b4..c3b375fc2 100644 --- a/agent/connect/ca/provider_consul_test.go +++ b/agent/connect/ca/provider_consul_test.go @@ -1,4 +1,4 @@ -package connect +package ca import ( "fmt" diff --git a/agent/connect_ca_endpoint.go b/agent/connect_ca_endpoint.go index 979005df1..f7f83b13a 100644 --- a/agent/connect_ca_endpoint.go +++ b/agent/connect_ca_endpoint.go @@ -47,6 +47,7 @@ func (s *HTTPServer) ConnectCAConfigurationGet(resp http.ResponseWriter, req *ht var reply structs.CAConfiguration err := s.agent.RPC("ConnectCA.ConfigurationGet", &args, &reply) + fixupConfig(&reply) return reply, err } @@ -67,3 +68,25 @@ func (s *HTTPServer) ConnectCAConfigurationSet(resp http.ResponseWriter, req *ht err := s.agent.RPC("ConnectCA.ConfigurationSet", &args, &reply) return nil, err } + +// A hack to fix up the config types inside of the map[string]interface{} +// so that they get formatted correctly during json.Marshal. Without this, +// duration values given as text like "24h" end up getting output back +// to the user in base64-encoded form. +func fixupConfig(conf *structs.CAConfiguration) { + if conf.Provider == structs.ConsulCAProvider { + if v, ok := conf.Config["RotationPeriod"]; ok { + if raw, ok := v.([]uint8); ok { + conf.Config["RotationPeriod"] = uint8ToString(raw) + } + } + } +} + +func uint8ToString(bs []uint8) string { + b := make([]byte, len(bs)) + for i, v := range bs { + b[i] = byte(v) + } + return string(b) +} From 1a1090aebfef1b0141530a5ba13404fb6be5b66c Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Wed, 23 May 2018 14:44:24 -0700 Subject: [PATCH 302/539] Add client api support for CA config endpoints --- api/connect_ca.go | 72 ++++++++++++++++++++++++++++++++++++++++++ api/connect_ca_test.go | 41 ++++++++++++++++++++++++ 2 files changed, 113 insertions(+) diff --git a/api/connect_ca.go b/api/connect_ca.go index ed0ac5e8f..c43339969 100644 --- a/api/connect_ca.go +++ b/api/connect_ca.go @@ -1,9 +1,44 @@ package api import ( + "fmt" "time" + + "github.com/mitchellh/mapstructure" ) +// CAConfig is the structure for the Connect CA configuration. +type CAConfig struct { + // Provider is the CA provider implementation to use. + Provider string + + // Configuration is arbitrary configuration for the provider. 
This + // should only contain primitive values and containers (such as lists + // and maps). + Config map[string]interface{} + + CreateIndex uint64 + ModifyIndex uint64 +} + +// ConsulCAProviderConfig is the config for the built-in Consul CA provider. +type ConsulCAProviderConfig struct { + PrivateKey string + RootCert string + RotationPeriod time.Duration +} + +// ParseConsulCAConfig takes a raw config map and returns a parsed +// ConsulCAProviderConfig. +func ParseConsulCAConfig(raw map[string]interface{}) (*ConsulCAProviderConfig, error) { + var config ConsulCAProviderConfig + if err := mapstructure.WeakDecode(raw, &config); err != nil { + return nil, fmt.Errorf("error decoding config: %s", err) + } + + return &config, nil +} + // CARootList is the structure for the results of listing roots. type CARootList struct { ActiveRootID string @@ -79,3 +114,40 @@ func (h *Connect) CARoots(q *QueryOptions) (*CARootList, *QueryMeta, error) { } return &out, qm, nil } + +// CAGetConfig returns the current CA configuration. +func (h *Connect) CAGetConfig(q *QueryOptions) (*CAConfig, *QueryMeta, error) { + r := h.c.newRequest("GET", "/v1/connect/ca/configuration") + r.setQueryOptions(q) + rtt, resp, err := requireOK(h.c.doRequest(r)) + if err != nil { + return nil, nil, err + } + defer resp.Body.Close() + + qm := &QueryMeta{} + parseQueryMeta(resp, qm) + qm.RequestTime = rtt + + var out CAConfig + if err := decodeBody(resp, &out); err != nil { + return nil, nil, err + } + return &out, qm, nil +} + +// CASetConfig sets the current CA configuration. +func (h *Connect) CASetConfig(conf *CAConfig, q *WriteOptions) (*WriteMeta, error) { + r := h.c.newRequest("PUT", "/v1/connect/ca/configuration") + r.setWriteOptions(q) + r.obj = conf + rtt, resp, err := requireOK(h.c.doRequest(r)) + if err != nil { + return nil, err + } + defer resp.Body.Close() + + wm := &WriteMeta{} + wm.RequestTime = rtt + return wm, nil +} diff --git a/api/connect_ca_test.go b/api/connect_ca_test.go index 36fb12b56..9cfaa9a0a 100644 --- a/api/connect_ca_test.go +++ b/api/connect_ca_test.go @@ -2,6 +2,9 @@ package api import ( "testing" + "time" + + "github.com/pascaldekloe/goe/verify" "github.com/hashicorp/consul/testutil" "github.com/hashicorp/consul/testutil/retry" @@ -51,3 +54,41 @@ func TestAPI_ConnectCARoots_list(t *testing.T) { }) } + +func TestAPI_ConnectCAConfig_get_set(t *testing.T) { + t.Parallel() + + c, s := makeClient(t) + defer s.Stop() + + expected := &ConsulCAProviderConfig{ + RotationPeriod: 90 * 24 * time.Hour, + } + + // This fails occasionally if server doesn't have time to bootstrap CA so + // retry + retry.Run(t, func(r *retry.R) { + connect := c.Connect() + + conf, _, err := connect.CAGetConfig(nil) + r.Check(err) + if conf.Provider != "consul" { + r.Fatalf("expected default provider, got %q", conf.Provider) + } + parsed, err := ParseConsulCAConfig(conf.Config) + r.Check(err) + verify.Values(r, "", parsed, expected) + + // Change a config value and update + conf.Config["RotationPeriod"] = 120 * 24 * time.Hour + _, err = connect.CASetConfig(conf, nil) + r.Check(err) + + updated, _, err := connect.CAGetConfig(nil) + r.Check(err) + expected.RotationPeriod = 120 * 24 * time.Hour + parsed, err = ParseConsulCAConfig(updated.Config) + r.Check(err) + verify.Values(r, "", parsed, expected) + }) +} From 96f4ff961cf9cc513caaa59be602bd635a5a8ebe Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Wed, 23 May 2018 14:44:41 -0700 Subject: [PATCH 303/539] Add CA CLI commands for getting/setting config --- 
command/commands_oss.go | 6 ++ command/connect/ca/ca.go | 44 +++++++++ command/connect/ca/ca_test.go | 13 +++ command/connect/ca/get/connect_ca_get.go | 81 ++++++++++++++++ command/connect/ca/get/connect_ca_get_test.go | 35 +++++++ command/connect/ca/set/connect_ca_set.go | 96 +++++++++++++++++++ command/connect/ca/set/connect_ca_set_test.go | 51 ++++++++++ .../ca/set/test-fixtures/ca_config.json | 8 ++ 8 files changed, 334 insertions(+) create mode 100644 command/connect/ca/ca.go create mode 100644 command/connect/ca/ca_test.go create mode 100644 command/connect/ca/get/connect_ca_get.go create mode 100644 command/connect/ca/get/connect_ca_get_test.go create mode 100644 command/connect/ca/set/connect_ca_set.go create mode 100644 command/connect/ca/set/connect_ca_set_test.go create mode 100644 command/connect/ca/set/test-fixtures/ca_config.json diff --git a/command/commands_oss.go b/command/commands_oss.go index f166e82d7..8e95282aa 100644 --- a/command/commands_oss.go +++ b/command/commands_oss.go @@ -7,6 +7,9 @@ import ( catlistnodes "github.com/hashicorp/consul/command/catalog/list/nodes" catlistsvc "github.com/hashicorp/consul/command/catalog/list/services" "github.com/hashicorp/consul/command/connect" + "github.com/hashicorp/consul/command/connect/ca" + caget "github.com/hashicorp/consul/command/connect/ca/get" + caset "github.com/hashicorp/consul/command/connect/ca/set" "github.com/hashicorp/consul/command/connect/proxy" "github.com/hashicorp/consul/command/event" "github.com/hashicorp/consul/command/exec" @@ -67,6 +70,9 @@ func init() { Register("catalog nodes", func(ui cli.Ui) (cli.Command, error) { return catlistnodes.New(ui), nil }) Register("catalog services", func(ui cli.Ui) (cli.Command, error) { return catlistsvc.New(ui), nil }) Register("connect", func(ui cli.Ui) (cli.Command, error) { return connect.New(), nil }) + Register("connect ca", func(ui cli.Ui) (cli.Command, error) { return ca.New(), nil }) + Register("connect ca get-config", func(ui cli.Ui) (cli.Command, error) { return caget.New(ui), nil }) + Register("connect ca set-config", func(ui cli.Ui) (cli.Command, error) { return caset.New(ui), nil }) Register("connect proxy", func(ui cli.Ui) (cli.Command, error) { return proxy.New(ui, MakeShutdownCh()), nil }) Register("event", func(ui cli.Ui) (cli.Command, error) { return event.New(ui), nil }) Register("exec", func(ui cli.Ui) (cli.Command, error) { return exec.New(ui, MakeShutdownCh()), nil }) diff --git a/command/connect/ca/ca.go b/command/connect/ca/ca.go new file mode 100644 index 000000000..918f4a254 --- /dev/null +++ b/command/connect/ca/ca.go @@ -0,0 +1,44 @@ +package ca + +import ( + "github.com/hashicorp/consul/command/flags" + "github.com/mitchellh/cli" +) + +func New() *cmd { + return &cmd{} +} + +type cmd struct{} + +func (c *cmd) Run(args []string) int { + return cli.RunResultHelp +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return flags.Usage(help, nil) +} + +const synopsis = "Interact with the Consul Connect CA" +const help = ` +Usage: consul connect ca [options] [args] + + This command has subcommands for interacting with Consul Connect's CA. + + Here are some simple examples, and more detailed examples are available + in the subcommands or the documentation. + + Get the configuration: + + $ consul connect ca get-config + + Update the configuration: + + $ consul connect ca set-config -config-file ca.json + + For more examples, ask for subcommand help or view the documentation. 
+` diff --git a/command/connect/ca/ca_test.go b/command/connect/ca/ca_test.go new file mode 100644 index 000000000..31febd342 --- /dev/null +++ b/command/connect/ca/ca_test.go @@ -0,0 +1,13 @@ +package ca + +import ( + "strings" + "testing" +) + +func TestCatalogCommand_noTabs(t *testing.T) { + t.Parallel() + if strings.ContainsRune(New().Help(), '\t') { + t.Fatal("help has tabs") + } +} diff --git a/command/connect/ca/get/connect_ca_get.go b/command/connect/ca/get/connect_ca_get.go new file mode 100644 index 000000000..c255bbcb8 --- /dev/null +++ b/command/connect/ca/get/connect_ca_get.go @@ -0,0 +1,81 @@ +package get + +import ( + "encoding/json" + "flag" + "fmt" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/command/flags" + "github.com/mitchellh/cli" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.ServerFlags()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + if err == flag.ErrHelp { + return 0 + } + c.UI.Error(fmt.Sprintf("Failed to parse args: %v", err)) + return 1 + } + + // Set up a client. + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error initializing client: %s", err)) + return 1 + } + + // Fetch the current configuration. + opts := &api.QueryOptions{ + AllowStale: c.http.Stale(), + } + config, _, err := client.Connect().CAGetConfig(opts) + if err != nil { + c.UI.Error(fmt.Sprintf("Error querying CA configuration: %s", err)) + return 1 + } + output, err := json.MarshalIndent(config, "", "\t") + if err != nil { + c.UI.Error(fmt.Sprintf("Error formatting CA configuration: %s", err)) + } + c.UI.Output(string(output)) + + return 0 +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return c.help +} + +const synopsis = "Display the current Connect CA configuration" +const help = ` +Usage: consul connect ca get-config [options] + + Displays the current Connect CA configuration. +` diff --git a/command/connect/ca/get/connect_ca_get_test.go b/command/connect/ca/get/connect_ca_get_test.go new file mode 100644 index 000000000..660c6a29b --- /dev/null +++ b/command/connect/ca/get/connect_ca_get_test.go @@ -0,0 +1,35 @@ +package get + +import ( + "strings" + "testing" + + "github.com/hashicorp/consul/agent" + "github.com/mitchellh/cli" +) + +func TestConnectCAGetConfigCommand_noTabs(t *testing.T) { + t.Parallel() + if strings.ContainsRune(New(cli.NewMockUi()).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestConnectCAGetConfigCommand(t *testing.T) { + t.Parallel() + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + + ui := cli.NewMockUi() + c := New(ui) + args := []string{"-http-addr=" + a.HTTPAddr()} + + code := c.Run(args) + if code != 0 { + t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) + } + output := strings.TrimSpace(ui.OutputWriter.String()) + if !strings.Contains(output, `"Provider": "consul"`) { + t.Fatalf("bad: %s", output) + } +} diff --git a/command/connect/ca/set/connect_ca_set.go b/command/connect/ca/set/connect_ca_set.go new file mode 100644 index 000000000..e98d5923e --- /dev/null +++ b/command/connect/ca/set/connect_ca_set.go @@ -0,0 +1,96 @@ +package set + +import ( + "encoding/json" + "flag" + "fmt" + "io/ioutil" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/command/flags" + "github.com/mitchellh/cli" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + // flags + configFile flags.StringValue +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + c.flags.Var(&c.configFile, "config-file", + "The path to the config file to use.") + + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.ServerFlags()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + if err == flag.ErrHelp { + return 0 + } + c.UI.Error(fmt.Sprintf("Failed to parse args: %v", err)) + return 1 + } + + // Set up a client. + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error initializing client: %s", err)) + return 1 + } + + if c.configFile.String() == "" { + c.UI.Error("The -config-file flag is required") + return 1 + } + + bytes, err := ioutil.ReadFile(c.configFile.String()) + if err != nil { + c.UI.Error(fmt.Sprintf("Error reading config file: %s", err)) + return 1 + } + + var config api.CAConfig + if err := json.Unmarshal(bytes, &config); err != nil { + c.UI.Error(fmt.Sprintf("Error parsing config file: %s", err)) + return 1 + } + + // Set the new configuration. + if _, err := client.Connect().CASetConfig(&config, nil); err != nil { + c.UI.Error(fmt.Sprintf("Error setting CA configuration: %s", err)) + return 1 + } + c.UI.Output("Configuration updated!") + return 0 +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return c.help +} + +const synopsis = "Modify the current Connect CA configuration" +const help = ` +Usage: consul connect ca set-config [options] + + Modifies the current Connect CA configuration. +` diff --git a/command/connect/ca/set/connect_ca_set_test.go b/command/connect/ca/set/connect_ca_set_test.go new file mode 100644 index 000000000..095d21d17 --- /dev/null +++ b/command/connect/ca/set/connect_ca_set_test.go @@ -0,0 +1,51 @@ +package set + +import ( + "strings" + "testing" + "time" + + "github.com/stretchr/testify/require" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/agent/connect/ca" + "github.com/hashicorp/consul/agent/structs" + "github.com/mitchellh/cli" +) + +func TestConnectCASetConfigCommand_noTabs(t *testing.T) { + t.Parallel() + if strings.ContainsRune(New(cli.NewMockUi()).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestConnectCASetConfigCommand(t *testing.T) { + t.Parallel() + require := require.New(t) + a := agent.NewTestAgent(t.Name(), ``) + defer a.Shutdown() + + ui := cli.NewMockUi() + c := New(ui) + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-config-file=test-fixtures/ca_config.json", + } + + code := c.Run(args) + if code != 0 { + t.Fatalf("bad: %d. 
%#v", code, ui.ErrorWriter.String()) + } + + req := structs.DCSpecificRequest{ + Datacenter: "dc1", + } + var reply structs.CAConfiguration + require.NoError(a.RPC("ConnectCA.ConfigurationGet", &req, &reply)) + require.Equal("consul", reply.Provider) + + parsed, err := ca.ParseConsulCAConfig(reply.Config) + require.NoError(err) + require.Equal(24*time.Hour, parsed.RotationPeriod) +} diff --git a/command/connect/ca/set/test-fixtures/ca_config.json b/command/connect/ca/set/test-fixtures/ca_config.json new file mode 100644 index 000000000..d29b25e8d --- /dev/null +++ b/command/connect/ca/set/test-fixtures/ca_config.json @@ -0,0 +1,8 @@ +{ + "Provider": "consul", + "Config": { + "PrivateKey": "", + "RootCert": "", + "RotationPeriod": "24h" + } +} \ No newline at end of file From 33d1d01374fbb1da8998d692d84f58f150eedc63 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 25 May 2018 10:27:04 -0700 Subject: [PATCH 304/539] Clarify CA commands' help text --- command/connect/ca/ca.go | 7 ++++--- command/connect/ca/get/connect_ca_get.go | 4 ++-- command/connect/ca/set/connect_ca_set.go | 2 +- 3 files changed, 7 insertions(+), 6 deletions(-) diff --git a/command/connect/ca/ca.go b/command/connect/ca/ca.go index 918f4a254..9e9df7ad6 100644 --- a/command/connect/ca/ca.go +++ b/command/connect/ca/ca.go @@ -23,11 +23,12 @@ func (c *cmd) Help() string { return flags.Usage(help, nil) } -const synopsis = "Interact with the Consul Connect CA" +const synopsis = "Interact with the Consul Connect Certificate Authority (CA)" const help = ` -Usage: consul connect ca [options] [args] +Usage: consul connect ca [options] [args] - This command has subcommands for interacting with Consul Connect's CA. + This command has subcommands for interacting with Consul Connect's + Certificate Authority (CA). Here are some simple examples, and more detailed examples are available in the subcommands or the documentation. diff --git a/command/connect/ca/get/connect_ca_get.go b/command/connect/ca/get/connect_ca_get.go index c255bbcb8..26bcb5824 100644 --- a/command/connect/ca/get/connect_ca_get.go +++ b/command/connect/ca/get/connect_ca_get.go @@ -73,9 +73,9 @@ func (c *cmd) Help() string { return c.help } -const synopsis = "Display the current Connect CA configuration" +const synopsis = "Display the current Connect Certificate Authority (CA) configuration" const help = ` Usage: consul connect ca get-config [options] - Displays the current Connect CA configuration. + Displays the current Connect Certificate Authority (CA) configuration. ` diff --git a/command/connect/ca/set/connect_ca_set.go b/command/connect/ca/set/connect_ca_set.go index e98d5923e..696b894c0 100644 --- a/command/connect/ca/set/connect_ca_set.go +++ b/command/connect/ca/set/connect_ca_set.go @@ -92,5 +92,5 @@ const synopsis = "Modify the current Connect CA configuration" const help = ` Usage: consul connect ca set-config [options] - Modifies the current Connect CA configuration. + Modifies the current Connect Certificate Authority (CA) configuration. 
` From 54bc937feda62c17f09f452650e599821b594432 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Fri, 25 May 2018 10:28:18 -0700 Subject: [PATCH 305/539] Re-use uint8ToString --- agent/connect/ca/provider_consul_config.go | 18 +++++++++--------- agent/connect_ca_endpoint.go | 17 +++++++---------- 2 files changed, 16 insertions(+), 19 deletions(-) diff --git a/agent/connect/ca/provider_consul_config.go b/agent/connect/ca/provider_consul_config.go index e0112b3e2..9eae88610 100644 --- a/agent/connect/ca/provider_consul_config.go +++ b/agent/connect/ca/provider_consul_config.go @@ -37,14 +37,6 @@ func ParseConsulCAConfig(raw map[string]interface{}) (*structs.ConsulCAProviderC // ParseDurationFunc is a mapstructure hook for decoding a string or // []uint8 into a time.Duration value. func ParseDurationFunc() mapstructure.DecodeHookFunc { - uint8ToString := func(bs []uint8) string { - b := make([]byte, len(bs)) - for i, v := range bs { - b[i] = byte(v) - } - return string(b) - } - return func( f reflect.Type, t reflect.Type, @@ -63,7 +55,7 @@ func ParseDurationFunc() mapstructure.DecodeHookFunc { } return v, nil case f == reflect.SliceOf(reflect.TypeOf(uint8(0))): - s := uint8ToString(data.([]uint8)) + s := Uint8ToString(data.([]uint8)) if dur, err := time.ParseDuration(s); err != nil { return nil, err } else { @@ -75,3 +67,11 @@ func ParseDurationFunc() mapstructure.DecodeHookFunc { } } } + +func Uint8ToString(bs []uint8) string { + b := make([]byte, len(bs)) + for i, v := range bs { + b[i] = byte(v) + } + return string(b) +} diff --git a/agent/connect_ca_endpoint.go b/agent/connect_ca_endpoint.go index f7f83b13a..49851baac 100644 --- a/agent/connect_ca_endpoint.go +++ b/agent/connect_ca_endpoint.go @@ -4,6 +4,7 @@ import ( "fmt" "net/http" + "github.com/hashicorp/consul/agent/connect/ca" "github.com/hashicorp/consul/agent/structs" ) @@ -47,8 +48,12 @@ func (s *HTTPServer) ConnectCAConfigurationGet(resp http.ResponseWriter, req *ht var reply structs.CAConfiguration err := s.agent.RPC("ConnectCA.ConfigurationGet", &args, &reply) + if err != nil { + return nil, err + } + fixupConfig(&reply) - return reply, err + return reply, nil } // PUT /v1/connect/ca/configuration @@ -77,16 +82,8 @@ func fixupConfig(conf *structs.CAConfiguration) { if conf.Provider == structs.ConsulCAProvider { if v, ok := conf.Config["RotationPeriod"]; ok { if raw, ok := v.([]uint8); ok { - conf.Config["RotationPeriod"] = uint8ToString(raw) + conf.Config["RotationPeriod"] = ca.Uint8ToString(raw) } } } } - -func uint8ToString(bs []uint8) string { - b := make([]byte, len(bs)) - for i, v := range bs { - b[i] = byte(v) - } - return string(b) -} From 8dbe0017bbe26d734bcfea54e0dbf7dcf87c3672 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Tue, 29 May 2018 14:07:40 -0700 Subject: [PATCH 306/539] Starting Docs (#46) * website: first stab at Connect docs * website: lots more various stuff (bad commit messages) * website: getting started page for Connect * website: intentions * website: intention APIs * website: agent API docs * website: document agent/catalog proxy kind service values * website: /v1/catalog/connect/:service * website: intention CLI docs * website: custom proxy docs * website: remove dedicated getting started guide * website: add docs for CA API endpoints * website: add docs for connect ca commands * website: add proxy CLI docs * website: clean up proxy command, add dev docs * website: todo pages * website: connect security --- website/report.xml | 36 ++ website/source/api/agent/connect.html.md | 293 
++++++++++++++ website/source/api/agent/service.html.md | 13 + website/source/api/catalog.html.md | 80 ++++ website/source/api/connect.html.md | 21 + website/source/api/connect/ca.html.md | 150 +++++++ website/source/api/connect/intentions.html.md | 383 ++++++++++++++++++ website/source/assets/stylesheets/_inner.scss | 2 +- website/source/docs/agent/dns.html.md | 31 +- website/source/docs/commands/connect.html.md | 43 ++ .../docs/commands/connect/ca.html.md.erb | 91 +++++ .../docs/commands/connect/proxy.html.md.erb | 82 ++++ .../source/docs/commands/intention.html.md | 54 +++ .../docs/commands/intention/check.html.md.erb | 37 ++ .../commands/intention/create.html.md.erb | 51 +++ .../commands/intention/delete.html.md.erb | 36 ++ .../docs/commands/intention/get.html.md.erb | 33 ++ .../docs/commands/intention/match.html.md.erb | 38 ++ website/source/docs/connect/ca.html.md | 11 + .../source/docs/connect/configuration.html.md | 41 ++ website/source/docs/connect/dev.html.md | 67 +++ website/source/docs/connect/index.html.md | 43 ++ .../source/docs/connect/intentions.html.md | 134 ++++++ website/source/docs/connect/native.html.md | 11 + website/source/docs/connect/proxies.html.md | 176 ++++++++ .../docs/connect/proxies/integrate.html.md | 63 +++ website/source/docs/connect/security.html.md | 96 +++++ .../intro/getting-started/connect.html.md | 192 +++++++++ website/source/layouts/api.erb | 14 + website/source/layouts/docs.erb | 64 +++ website/source/layouts/intro.erb | 3 + 31 files changed, 2383 insertions(+), 6 deletions(-) create mode 100644 website/report.xml create mode 100644 website/source/api/agent/connect.html.md create mode 100644 website/source/api/connect.html.md create mode 100644 website/source/api/connect/ca.html.md create mode 100644 website/source/api/connect/intentions.html.md create mode 100644 website/source/docs/commands/connect.html.md create mode 100644 website/source/docs/commands/connect/ca.html.md.erb create mode 100644 website/source/docs/commands/connect/proxy.html.md.erb create mode 100644 website/source/docs/commands/intention.html.md create mode 100644 website/source/docs/commands/intention/check.html.md.erb create mode 100644 website/source/docs/commands/intention/create.html.md.erb create mode 100644 website/source/docs/commands/intention/delete.html.md.erb create mode 100644 website/source/docs/commands/intention/get.html.md.erb create mode 100644 website/source/docs/commands/intention/match.html.md.erb create mode 100644 website/source/docs/connect/ca.html.md create mode 100644 website/source/docs/connect/configuration.html.md create mode 100644 website/source/docs/connect/dev.html.md create mode 100644 website/source/docs/connect/index.html.md create mode 100644 website/source/docs/connect/intentions.html.md create mode 100644 website/source/docs/connect/native.html.md create mode 100644 website/source/docs/connect/proxies.html.md create mode 100644 website/source/docs/connect/proxies/integrate.html.md create mode 100644 website/source/docs/connect/security.html.md create mode 100644 website/source/intro/getting-started/connect.html.md diff --git a/website/report.xml b/website/report.xml new file mode 100644 index 000000000..3cee16339 --- /dev/null +++ b/website/report.xml @@ -0,0 +1,36 @@ + + + + Feature Extraction + + + TCPFLOW + 1.4.5 + + 4.2.1 (4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)) + -D_THREAD_SAFE -pthread -I/usr/local/include -I/usr/local/include -DUTC_OFFSET=+0100 + -g -D_THREAD_SAFE -pthread -g -O3 -MD -Wpointer-arith 
-Wmissing-declarations -Wmissing-prototypes -Wshadow -Wwrite-strings -Wcast-align -Waggregate-return -Wbad-function-cast -Wcast-qual -Wundef -Wredundant-decls -Wdisabled-optimization -Wfloat-equal -Wmultichar -Wc++-compat -Wmissing-noreturn -Wall -Wstrict-prototypes -MD -D_FORTIFY_SOURCE=2 -Wpointer-arith -Wmissing-declarations -Wmissing-prototypes -Wshadow -Wwrite-strings -Wcast-align -Waggregate-return -Wbad-function-cast -Wcast-qual -Wundef -Wredundant-decls -Wdisabled-optimization -Wfloat-equal -Wmultichar -Wc++-compat -Wmissing-noreturn -Wall -Wstrict-prototypes + -g -D_THREAD_SAFE -pthread -g -O3 -Wall -MD -D_FORTIFY_SOURCE=2 -Wpointer-arith -Wshadow -Wwrite-strings -Wcast-align -Wredundant-decls -Wdisabled-optimization -Wfloat-equal -Wmultichar -Wmissing-noreturn -Woverloaded-virtual -Wsign-promo -funit-at-a-time -Weffc++ -std=c++11 -Wall -MD -D_FORTIFY_SOURCE=2 -Wpointer-arith -Wshadow -Wwrite-strings -Wcast-align -Wredundant-decls -Wdisabled-optimization -Wfloat-equal -Wmultichar -Wmissing-noreturn -Woverloaded-virtual -Wsign-promo -funit-at-a-time -Weffc++ + -L/usr/local/lib -L/usr/local/lib + -lpcap -lbz2 -lexpat -lsqlite3 -lssl -lcrypto -lssl -lcrypto -ldl -lz + 2017-09-21T17:38:31 + + + + + Darwin + 17.5.0 + Darwin Kernel Version 17.5.0: Fri Apr 13 19:32:32 PDT 2018; root:xnu-4570.51.2~1/RELEASE_X86_64 + Mitchells-MacBook-Pro.local + x86_64 + tcpflow -i lo0 -c port 8181 + 0 + 2018-05-23T20:06:07Z + + + + + 0 diff --git a/website/source/api/agent/connect.html.md b/website/source/api/agent/connect.html.md new file mode 100644 index 000000000..cb2cccc9d --- /dev/null +++ b/website/source/api/agent/connect.html.md @@ -0,0 +1,293 @@ +--- +layout: api +page_title: Connect - Agent - HTTP API +sidebar_current: api-agent-connect +description: |- + The /agent/connect endpoints interact with Connect with agent-local operations. +--- + +# Connect - Agent HTTP API + +The `/agent/connect` endpoints interact with [Connect](/docs/connect/index.html) +with agent-local operations. + +These endpoints may mirror the [non-agent Connect endpoints](/api/connect.html) +in some cases. Almost all agent-local Connect endpoints perform local caching +to optimize performance of Connect without having to make requests to the server. + +## Authorize + +This endpoint tests whether a connection attempt is authorized between +two services. This is the primary API that must be implemented by +[proxies](/docs/connect/proxies.html) or +[native integrations](/docs/connect/native.html) +that wish to integrate with Connect. Prior to calling this API, it is expected +that the client TLS certificate has been properly verified against the +current CA roots. + +The implementation of this API uses locally cached data +and doesn't require any request forwarding to a server. Therefore, the +response typically occurs in microseconds to impose minimal overhead on the +connection attempt. + +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `POST` | `/agent/connect/authorize` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | ------------------------ | +| `NO` | `none` | `service:write` | + +### Parameters + +- `Target` `(string: )` - The name of the service that is being + requested. 
+ +- `ClientCertURI` `(string: )` - The unique identifier for the + requesting client. This is currently the URI SAN from the TLS client + certificate. + +- `ClientCertSerial` `(string: )` - The colon-hex-encoded serial + number for the requesting client cert. This is used to check against + revocation lists. + +### Sample Payload + +```json +{ + "Target": "db", + "ClientCertURI": "spiffe://dc1-7e567ac2-551d-463f-8497-f78972856fc1.consul/ns/default/dc/dc1/svc/web", + "ClientCertSerial": "04:00:00:00:00:01:15:4b:5a:c3:94" +} +``` + +### Sample Request + +```text +$ curl \ + --request POST \ + --data @payload.json \ + https://consul.rocks/v1/agent/connect/authorize +``` + +### Sample Response + +```json +{ + "Authorized": true, + "Reason": "Matched intention: web => db (allow)" +} +``` + +## Certificate Authority (CA) Roots + +This endpoint returns the trusted certificate authority (CA) root certificates. +This is used by [proxies](/docs/connect/proxies.html) or +[native integrations](/docs/connect/native.html) to verify served client +or server certificates are valid. + +This is equivalent to the [non-Agent Connect endpoint](/api/connect.html), +but the response of this request is cached locally at the agent. This allows +for very fast response times and for fail open behavior if the server is +unavailable. This endpoint should be used by proxies and native integrations. + +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `GET` | `/agent/connect/ca/roots` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | -------------------------- | +| `YES` | `all` | `none` | + +### Sample Request + +```text +$ curl \ + https://consul.rocks/v1/connect/ca/roots +``` + +### Sample Response + +```json +{ + "ActiveRootID": "15:bf:3a:7d:ff:ea:c1:8c:46:67:6c:db:b8:81:18:36:ad:e5:d0:c7", + "Roots": [ + { + "ID": "15:bf:3a:7d:ff:ea:c1:8c:46:67:6c:db:b8:81:18:36:ad:e5:d0:c7", + "Name": "Consul CA Root Cert", + "SerialNumber": 7, + "SigningKeyID": "31:66:3a:39:31:3a:63:61:3a:34:31:3a:38:66:3a:61:63:3a:36:37:3a:62:66:3a:35:39:3a:63:32:3a:66:61:3a:34:65:3a:37:35:3a:35:63:3a:64:38:3a:66:30:3a:35:35:3a:64:65:3a:62:65:3a:37:35:3a:62:38:3a:33:33:3a:33:31:3a:64:35:3a:32:34:3a:62:30:3a:30:34:3a:62:33:3a:65:38:3a:39:37:3a:35:62:3a:37:65", + "NotBefore": "2018-05-21T16:33:28Z", + "NotAfter": "2028-05-18T16:33:28Z", + "RootCert": "-----BEGIN 
CERTIFICATE-----\nMIICmDCCAj6gAwIBAgIBBzAKBggqhkjOPQQDAjAWMRQwEgYDVQQDEwtDb25zdWwg\nQ0EgNzAeFw0xODA1MjExNjMzMjhaFw0yODA1MTgxNjMzMjhaMBYxFDASBgNVBAMT\nC0NvbnN1bCBDQSA3MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAER0qlxjnRcMEr\niSGlH7G7dYU7lzBEmLUSMZkyBbClmyV8+e8WANemjn+PLnCr40If9cmpr7RnC9Qk\nGTaLnLiF16OCAXswggF3MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/\nMGgGA1UdDgRhBF8xZjo5MTpjYTo0MTo4ZjphYzo2NzpiZjo1OTpjMjpmYTo0ZTo3\nNTo1YzpkODpmMDo1NTpkZTpiZTo3NTpiODozMzozMTpkNToyNDpiMDowNDpiMzpl\nODo5Nzo1Yjo3ZTBqBgNVHSMEYzBhgF8xZjo5MTpjYTo0MTo4ZjphYzo2NzpiZjo1\nOTpjMjpmYTo0ZTo3NTo1YzpkODpmMDo1NTpkZTpiZTo3NTpiODozMzozMTpkNToy\nNDpiMDowNDpiMzplODo5Nzo1Yjo3ZTA/BgNVHREEODA2hjRzcGlmZmU6Ly8xMjRk\nZjVhMC05ODIwLTc2YzMtOWFhOS02ZjYyMTY0YmExYzIuY29uc3VsMD0GA1UdHgEB\n/wQzMDGgLzAtgisxMjRkZjVhMC05ODIwLTc2YzMtOWFhOS02ZjYyMTY0YmExYzIu\nY29uc3VsMAoGCCqGSM49BAMCA0gAMEUCIQDzkkI7R+0U12a+zq2EQhP/n2mHmta+\nfs2hBxWIELGwTAIgLdO7RRw+z9nnxCIA6kNl//mIQb+PGItespiHZKAz74Q=\n-----END CERTIFICATE-----\n", + "IntermediateCerts": null, + "Active": true, + "CreateIndex": 8, + "ModifyIndex": 8 + } + ] +} +``` + +## Service Leaf Certificate + +This endpoint returns the leaf certificate representing a single service. +This certificate is used as a server certificate for accepting inbound +connections and is also used as the client certificate for establishing +outbound connections to other services. + +The agent generates a CSR locally and calls the +[CA sign API](/api/connect/ca.html) to sign it. The resulting certificate +is cached and returned by this API until it is near expiry or the root +certificates change. + +This API supports blocking queries. The blocking query will block until +a new certificate is necessary because the existing certificate will expire +or the root certificate is being rotated. This blocking behavior allows +clients to efficiently wait for certificate rotations. + +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `GET` | `/agent/connect/ca/leaf/:service` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | -------------------------- | +| `YES` | `all` | `service:write` | + +### Parameters + +- `Service` `(string: )` - The name of the service for the leaf + certificate. This is specified in the URL. The service does not need to + exist in the catalog, but the proper ACL permissions must be available. 
+ +### Sample Request + +```text +$ curl \ + https://consul.rocks/v1/connect/ca/leaf/web +``` + +### Sample Response + +```json +{ + "SerialNumber": "08", + "CertPEM": "-----BEGIN CERTIFICATE-----\nMIIChjCCAi2gAwIBAgIBCDAKBggqhkjOPQQDAjAWMRQwEgYDVQQDEwtDb25zdWwg\nQ0EgNzAeFw0xODA1MjExNjMzMjhaFw0xODA1MjQxNjMzMjhaMA4xDDAKBgNVBAMT\nA3dlYjBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABJdLqRKd1SRycFOFceMHOBZK\nQW8HHO8jZ5C8dRswD+IwTd/otJPiaPrVzGOAi4MsaEUgDMemvN1jiywHt3II08mj\nggFyMIIBbjAOBgNVHQ8BAf8EBAMCA7gwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsG\nAQUFBwMBMAwGA1UdEwEB/wQCMAAwaAYDVR0OBGEEXzFmOjkxOmNhOjQxOjhmOmFj\nOjY3OmJmOjU5OmMyOmZhOjRlOjc1OjVjOmQ4OmYwOjU1OmRlOmJlOjc1OmI4OjMz\nOjMxOmQ1OjI0OmIwOjA0OmIzOmU4Ojk3OjViOjdlMGoGA1UdIwRjMGGAXzFmOjkx\nOmNhOjQxOjhmOmFjOjY3OmJmOjU5OmMyOmZhOjRlOjc1OjVjOmQ4OmYwOjU1OmRl\nOmJlOjc1OmI4OjMzOjMxOmQ1OjI0OmIwOjA0OmIzOmU4Ojk3OjViOjdlMFkGA1Ud\nEQRSMFCGTnNwaWZmZTovLzExMTExMTExLTIyMjItMzMzMy00NDQ0LTU1NTU1NTU1\nNTU1NS5jb25zdWwvbnMvZGVmYXVsdC9kYy9kYzEvc3ZjL3dlYjAKBggqhkjOPQQD\nAgNHADBEAiBS8kH3UERhBPHM/CQV/jXKLr0kReLqCdq1jZxc8Aq7hQIgFIus/ZX0\nOM/X3Yc1xb/qJiiEVzXcaz3oVFULOzrNAwk=\n-----END CERTIFICATE-----\n", + "PrivateKeyPEM": "-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIAOGglbwY8HdD3LFX6Bc94co2pzeFTto8ebWoML5E+QfoAoGCCqGSM49\nAwEHoUQDQgAEl0upEp3VJHJwU4Vx4wc4FkpBbwcc7yNnkLx1GzAP4jBN3+i0k+Jo\n+tXMY4CLgyxoRSAMx6a83WOLLAe3cgjTyQ==\n-----END EC PRIVATE KEY-----\n", + "Service": "web", + "ServiceURI": "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc1/svc/web", + "ValidAfter": "2018-05-21T16:33:28Z", + "ValidBefore": "2018-05-24T16:33:28Z", + "CreateIndex": 5, + "ModifyIndex": 5 +} +``` + +- `SerialNumber` `string` - Monotonically increasing 64-bit serial number + representing all certificates issued by this Consul cluster. + +- `CertPEM` `(string)` - The PEM-encoded certificate. + +- `PrivateKeyPEM` `(string)` - The PEM-encoded private key for this certificate. + +- `Service` `(string)` - The name of the service that this certificate identifies. + +- `ServiceURI` `(string)` - The URI SAN for this service. + +- `ValidAfter` `(string)` - The time after which the certificate is valid. + Used with `ValidBefore` this can determine the validity period of the certificate. + +- `ValidBefore` `(string)` - The time before which the certificate is valid. + Used with `ValidAfter` this can determine the validity period of the certificate. + +## Managed Proxy Configuration + +This endpoint returns the configuration for a +[managed proxy](/docs/connect/proxies.html). +Ths endpoint is only useful for _managed proxies_ and not relevant +for unmanaged proxies. + +Managed proxy configuration is set in the service definition. When Consul +starts the managed proxy, it provides the service ID and ACL token. The proxy +is expected to call this endpoint to retrieve its configuration. It may use +a blocking query to detect any configuration changes. + +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `GET` | `/agent/connect/proxy/:id` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). 
+ +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | -------------------------- | +| `YES` | `all` | `service:write, proxy token` | + +### Parameters + +- `ID` `(string: )` - The ID (not the name) of the proxy service + in the local agent catalog. For managed proxies, this is provided in the + `CONSUL_PROXY_ID` environment variable by Consul. + +### Sample Request + +```text +$ curl \ + https://consul.rocks/v1/connect/proxy/web-proxy +``` + +### Sample Response + +```json +{ + "ProxyServiceID": "web-proxy", + "TargetServiceID": "web", + "TargetServiceName": "web", + "ContentHash": "cffa5f4635b134b9", + "ExecMode": "daemon", + "Command": [ + "/usr/local/bin/consul", + "connect", + "proxy" + ], + "Config": { + "bind_address": "127.0.0.1", + "bind_port": 20199, + "local_service_address": "127.0.0.1:8181" + } +} +``` + +- `ProxyServiceID` `string` - The ID of the proxy service. + +- `TargetServiceID` `(string)` - The ID of the target service the proxy represents. + +- `TargetServiceName` `(string)` - The name of the target service the proxy represents. + +- `ContentHash` `(string)` - The content hash of the response used for hash-based + blocking queries. + +- `ExecMode` `(string)` - The execution mode of the managed proxy. + +- `Command` `(array)` - The command for the managed proxy. + +- `Config` `(map)` - The configuration for the managed proxy. This + is a map of primitive values (including arrays and maps) that is set by the + user. diff --git a/website/source/api/agent/service.html.md b/website/source/api/agent/service.html.md index c7b100111..143f89b62 100644 --- a/website/source/api/agent/service.html.md +++ b/website/source/api/agent/service.html.md @@ -70,6 +70,9 @@ The agent is responsible for managing the status of its local services, and for sending updates about its local services to the servers to keep the global catalog in sync. +For "connect-proxy" kind services, the `service:write` ACL for the +`ProxyDestination` value is also required to register the service. + | Method | Path | Produces | | ------ | ---------------------------- | -------------------------- | | `PUT` | `/agent/service/register` | `application/json` | @@ -104,6 +107,16 @@ The table below shows this endpoint's support for - `Port` `(int: 0)` - Specifies the port of the service. +- `Kind` `(string: "")` - The kind of service. Defaults to "" which is a + typical Consul service. This value may also be "connect-proxy" for + services that are [Connect-capable](/docs/connect/index.html) + proxies representing another service. + +- `ProxyDestination` `(string: "")` - For "connect-proxy" `Kind` services, + this must be set to the name of the service that the proxy represents. This + service doesn't need to be registered, but the caller must have an ACL token + with permissions for this service. + - `Check` `(Check: nil)` - Specifies a check. Please see the [check documentation](/api/agent/check.html) for more information about the accepted fields. If you don't provide a name or id for the check then they diff --git a/website/source/api/catalog.html.md b/website/source/api/catalog.html.md index 4bbb612c6..5504bb28e 100644 --- a/website/source/api/catalog.html.md +++ b/website/source/api/catalog.html.md @@ -479,6 +479,86 @@ $ curl \ - `ServiceTags` is a list of tags for the service +- `ServiceKind` is the kind of service, usually "". See the Agent + registration API for more information. 
+ +- `ServiceProxyDestination` is the name of the service that is being proxied, + for "connect-proxy" type services. + +## List Nodes for Connect-capable Service + +This endpoint returns the nodes providing a +[Connect-capable](/docs/connect/index.html) service in a given datacenter. +This will include both proxies and native integrations. A service may +register both Connect-capable and incapable services at the same time, +so this endpoint may be used to filter only the Connect-capable endpoints. + +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `GET` | `/catalog/connect/:service` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | ------------------------ | +| `YES` | `all` | `node:read,service:read` | + +### Parameters + +- `service` `(string: )` - Specifies the name of the service for which + to list nodes. This is specified as part of the URL. + +- `dc` `(string: "")` - Specifies the datacenter to query. This will default to + the datacenter of the agent being queried. This is specified as part of the + URL as a query parameter. + +### Sample Request + +```text +$ curl \ + https://consul.rocks/v1/catalog/connect/my-service +``` + +### Sample Response + +```json +[ + { + "ID": "40e4a748-2192-161a-0510-9bf59fe950b5", + "Node": "foobar", + "Address": "192.168.10.10", + "Datacenter": "dc1", + "TaggedAddresses": { + "lan": "192.168.10.10", + "wan": "10.0.10.10" + }, + "NodeMeta": { + "somekey": "somevalue" + }, + "CreateIndex": 51, + "ModifyIndex": 51, + "ServiceAddress": "172.17.0.3", + "ServiceEnableTagOverride": false, + "ServiceID": "32a2a47f7992:nodea:5000", + "ServiceName": "foobar", + "ServiceKind": "connect-proxy", + "ServiceProxyDestination": "my-service", + "ServicePort": 5000, + "ServiceMeta": { + "foobar_meta_value": "baz" + }, + "ServiceTags": [ + "tacos" + ] + } +] +``` + +The fields are the same as listing nodes for a service. + ## List Services for Node This endpoint returns the node's registered services. diff --git a/website/source/api/connect.html.md b/website/source/api/connect.html.md new file mode 100644 index 000000000..95ea596b1 --- /dev/null +++ b/website/source/api/connect.html.md @@ -0,0 +1,21 @@ +--- +layout: api +page_title: Connect - HTTP API +sidebar_current: api-connect +description: |- + The `/connect` endpoints provide access to Connect-related operations for intentions and the certificate authority. +--- + +# Connect HTTP Endpoint + +The `/connect` endpoints provide access to +[Connect-related](/docs/connect/index.html) operations for +intentions and the certificate authority. + +There are also Connect-related endpoints in the +[Agent](/api/agent.html) and [Catalog](/api/catalog.html) APIs. For example, +the API for requesting a TLS certificate for a service is part of the agent +APIs. And the catalog API has an endpoint for finding all Connect-capable +services in the catalog. + +Please choose a sub-section in the navigation for more information. 
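For programmatic consumers, a rough Go sketch of querying the catalog endpoint above could look like the following. This is illustrative only: it assumes a local agent on its default HTTP address (`127.0.0.1:8500`), a registered Connect-capable service named `my-service`, and it decodes only the fields shown in the sample response.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// connectEntry holds the subset of fields from
// /v1/catalog/connect/:service that this sketch prints.
type connectEntry struct {
	Node           string
	Address        string
	ServiceName    string
	ServiceKind    string
	ServiceAddress string
	ServicePort    int
}

func main() {
	// Assumes a local agent; adjust the address and service name as needed.
	resp, err := http.Get("http://127.0.0.1:8500/v1/catalog/connect/my-service")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var entries []connectEntry
	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
		log.Fatal(err)
	}

	// Each entry is a Connect-capable endpoint: either a proxy for the
	// service or a natively integrated instance.
	for _, e := range entries {
		fmt.Printf("%s on %s (%s) -> %s:%d kind=%q\n",
			e.ServiceName, e.Node, e.Address, e.ServiceAddress, e.ServicePort, e.ServiceKind)
	}
}
```

A real integration would more likely use the Go `api` client package and blocking queries, but the raw HTTP shape is the same.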
diff --git a/website/source/api/connect/ca.html.md b/website/source/api/connect/ca.html.md new file mode 100644 index 000000000..522dd0927 --- /dev/null +++ b/website/source/api/connect/ca.html.md @@ -0,0 +1,150 @@ +--- +layout: api +page_title: Certificate Authority - Connect - HTTP API +sidebar_current: api-connect-ca +description: |- + The /connect/ca endpoints provide tools for interacting with Connect's + Certificate Authority mechanism via Consul's HTTP API. +--- + +# Certificate Authority (CA) - Connect HTTP API + +The `/connect/ca` endpoints provide tools for interacting with Connect's +Certificate Authority mechanism. + +## List CA Root Certificates + +This endpoint returns the current list of trusted CA root certificates in +the cluster. + +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `GET` | `/connect/ca/roots` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | ---------------- | +| `YES` | `all` | `operator:read` | + +### Sample Request + +```text +$ curl \ + https://consul.rocks/v1/connect/ca/roots +``` + +### Sample Response + +```json +{ + "ActiveRootID": "c7:bd:55:4b:64:80:14:51:10:a4:b9:b9:d7:e0:75:3f:86:ba:bb:24", + "TrustDomain": "7f42f496-fbc7-8692-05ed-334aa5340c1e.consul", + "Roots": [ + { + "ID": "c7:bd:55:4b:64:80:14:51:10:a4:b9:b9:d7:e0:75:3f:86:ba:bb:24", + "Name": "Consul CA Root Cert", + "SerialNumber": 7, + "SigningKeyID": "32:64:3a:30:39:3a:35:64:3a:38:34:3a:62:39:3a:38:39:3a:34:62:3a:64:64:3a:65:33:3a:38:38:3a:62:62:3a:39:63:3a:65:32:3a:62:32:3a:36:39:3a:38:31:3a:31:66:3a:34:62:3a:61:36:3a:66:64:3a:34:64:3a:64:66:3a:65:65:3a:37:34:3a:36:33:3a:66:33:3a:37:34:3a:35:35:3a:63:61:3a:62:30:3a:62:35:3a:36:35", + "NotBefore": "2018-05-25T21:39:23Z", + "NotAfter": "2028-05-22T21:39:23Z", + "RootCert": "-----BEGIN CERTIFICATE-----\nMIICmDCCAj6gAwIBAgIBBzAKBggqhkjOPQQDAjAWMRQwEgYDVQQDEwtDb25zdWwg\nQ0EgNzAeFw0xODA1MjUyMTM5MjNaFw0yODA1MjIyMTM5MjNaMBYxFDASBgNVBAMT\nC0NvbnN1bCBDQSA3MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEq4S32Pu0/VL4\nG75gvdyQuAhqMZFsfBRwD3pgvblgZMeJc9KDosxnPR+W34NXtMD/860NNVJIILln\n9lLhIjWPQqOCAXswggF3MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/\nMGgGA1UdDgRhBF8yZDowOTo1ZDo4NDpiOTo4OTo0YjpkZDplMzo4ODpiYjo5Yzpl\nMjpiMjo2OTo4MToxZjo0YjphNjpmZDo0ZDpkZjplZTo3NDo2MzpmMzo3NDo1NTpj\nYTpiMDpiNTo2NTBqBgNVHSMEYzBhgF8yZDowOTo1ZDo4NDpiOTo4OTo0YjpkZDpl\nMzo4ODpiYjo5YzplMjpiMjo2OTo4MToxZjo0YjphNjpmZDo0ZDpkZjplZTo3NDo2\nMzpmMzo3NDo1NTpjYTpiMDpiNTo2NTA/BgNVHREEODA2hjRzcGlmZmU6Ly83ZjQy\nZjQ5Ni1mYmM3LTg2OTItMDVlZC0zMzRhYTUzNDBjMWUuY29uc3VsMD0GA1UdHgEB\n/wQzMDGgLzAtgis3ZjQyZjQ5Ni1mYmM3LTg2OTItMDVlZC0zMzRhYTUzNDBjMWUu\nY29uc3VsMAoGCCqGSM49BAMCA0gAMEUCIBBBDOWXWApx4S6bHJ49AW87Nw8uQ/gJ\nJ6lvm3HzEQw2AiEA4PVqWt+z8fsQht0cACM42kghL97SgDSf8rgCqfLYMng=\n-----END CERTIFICATE-----\n", + "IntermediateCerts": null, + "Active": true, + "CreateIndex": 8, + "ModifyIndex": 8 + } + ] +} +``` + +## Get CA Configuration + +This endpoint returns the current CA configuration. 
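As a rough sketch (assuming a local agent on its default HTTP address `127.0.0.1:8500` and using only the Go standard library), the configuration can be fetched and decoded as shown below; the endpoint details and a curl example follow.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// caConfig mirrors the top-level fields of the sample response below.
type caConfig struct {
	Provider    string
	Config      map[string]interface{}
	CreateIndex uint64
	ModifyIndex uint64
}

func main() {
	resp, err := http.Get("http://127.0.0.1:8500/v1/connect/ca/configuration")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var cfg caConfig
	if err := json.NewDecoder(resp.Body).Decode(&cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("provider=%s rotation_period=%v\n", cfg.Provider, cfg.Config["RotationPeriod"])
}
```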
+ +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `GET` | `/connect/ca/configuration` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | --------------- | +| `YES` | `all` | `operator:read` | + +### Sample Request + +```text +$ curl \ + https://consul.rocks/v1/connect/ca/configuration +``` + +### Sample Response + +```json +{ + "Provider": "consul", + "Config": { + "PrivateKey": null, + "RootCert": null, + "RotationPeriod": "2160h" + }, + "CreateIndex": 5, + "ModifyIndex": 5 +} +``` + +## Update CA Configuration + +This endpoint updates the configuration for the CA. If this results in a +new root certificate being used, the [Root Rotation] +(/docs/guides/connect-ca.html#rotation) process will be triggered. + +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `PUT` | `/connect/ca/configuration` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | --------------- | +| `NO` | `none` | `operator:write`| + +### Parameters + +- `Provider` `(string: )` - Specifies the CA provider type to use. + +- `Config` `(map[string]string: )` - The raw configuration to use +for the chosen provider. For more information on configuring the Connect CA +providers, see [Provider Config](/docs/connect/ca.html). + +### Sample Payload + +```json +{ + "Provider": "consul", + "Config": { + "PrivateKey": "-----BEGIN RSA PRIVATE KEY-----...", + "RootCert": "-----BEGIN CERTIFICATE-----...", + "RotationPeriod": "720h" + } +} +``` + +### Sample Request + +```text +$ curl \ + --request PUT \ + --data @payload.json \ + https://consul.rocks/v1/connect/ca/configuration +``` \ No newline at end of file diff --git a/website/source/api/connect/intentions.html.md b/website/source/api/connect/intentions.html.md new file mode 100644 index 000000000..fc45e5fd7 --- /dev/null +++ b/website/source/api/connect/intentions.html.md @@ -0,0 +1,383 @@ +--- +layout: api +page_title: Intentions - Connect - HTTP API +sidebar_current: api-connect-intentions +description: |- + The /connect/intentions endpoint provide tools for managing intentions via + Consul's HTTP API. +--- + +# Intentions - Connect HTTP API + +The `/connect/intentions` endpoint provide tools for managing +[intentions](/docs/connect/intentions.html). + +## Create Intention + +This endpoint creates a new intention and returns its ID if it was created +successfully. + +The name and destination pair must be unique. If another intention matches +the name and destination, the creation will fail. You must either update the +existing intention or delete it prior to creating a new one. 
+ +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `POST` | `/connect/intentions` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | ---------------- | +| `NO` | `none` | `service:write` | + +### Parameters + +- `SourceName` `(string: )` - The source of the intention. + For a `SourceType` of `consul` this is the name of a Consul service. The + service doesn't need to be registered. + +- `DestinationName` `(string: )` - The destination of the intention. + The intention destination is always a Consul service, unlike the source. + The service doesn't need to be registered. + +- `SourceType` `(string: )` - The type for the `SourceName` value. + This can be only "consul" today to represent a Consul service. + +- `Action` `(string: )` - This is one of "allow" or "deny" for + the action that should be taken if this intention matches a request. + +- `Description` `(string: nil)` - Description for the intention. This is not + used for anything by Consul, but is presented in API responses to assist + tooling. + +- `Meta` `(map: nil)` - Specifies arbitrary KV metadata pairs. + +### Sample Payload + +```json +{ + "SourceName": "web", + "DestinationName": "db", + "SourceType": "consul", + "Action": "allow" +} +``` + +### Sample Request + +```text +$ curl \ + --request POST \ + --data @payload.json \ + https://consul.rocks/v1/connect/intentions +``` + +### Sample Response + +```json +{ + "ID": "8f246b77-f3e1-ff88-5b48-8ec93abf3e05" +} +``` + +## Read Specific Intention + +This endpoint reads a specific intention. + +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `GET` | `/connect/intentions/:uuid` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | --------------- | +| `YES` | `all` | `service:read` | + +### Parameters + +- `uuid` `(string: )` - Specifies the UUID of the intention to read. This + is specified as part of the URL. + +### Sample Request + +```text +$ curl \ + https://consul.rocks/v1/connect/intentions/e9ebc19f-d481-42b1-4871-4d298d3acd5c +``` + +### Sample Response + +```json +{ + "ID": "e9ebc19f-d481-42b1-4871-4d298d3acd5c", + "Description": "", + "SourceNS": "default", + "SourceName": "web", + "DestinationNS": "default", + "DestinationName": "db", + "SourceType": "consul", + "Action": "allow", + "DefaultAddr": "", + "DefaultPort": 0, + "Meta": {}, + "CreatedAt": "2018-05-21T16:41:27.977155457Z", + "UpdatedAt": "2018-05-21T16:41:27.977157724Z", + "CreateIndex": 11, + "ModifyIndex": 11 +} +``` + +## List Intentions + +This endpoint lists all intentions. + +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `GET` | `/connect/intentions` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). 
+ +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | --------------- | +| `YES` | `all` | `service:read` | + +### Sample Request + +```text +$ curl \ + https://consul.rocks/v1/connect/intentions +``` + +### Sample Response + +```json +[ + { + "ID": "e9ebc19f-d481-42b1-4871-4d298d3acd5c", + "Description": "", + "SourceNS": "default", + "SourceName": "web", + "DestinationNS": "default", + "DestinationName": "db", + "SourceType": "consul", + "Action": "allow", + "DefaultAddr": "", + "DefaultPort": 0, + "Meta": {}, + "CreatedAt": "2018-05-21T16:41:27.977155457Z", + "UpdatedAt": "2018-05-21T16:41:27.977157724Z", + "CreateIndex": 11, + "ModifyIndex": 11 + } +] +``` + +## Update Intention + +This endpoint updates an intention with the given values. + +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `PUT` | `/connect/intentions/:uuid` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | ---------------- | +| `NO` | `none` | `service:write` | + +### Parameters + +- `uuid` `(string: )` - Specifies the UUID of the intention to update. This + is specified as part of the URL. + +- Other parameters are identical to creating an intention. + +### Sample Payload + +```json +{ + "SourceName": "web", + "DestinationName": "other-db", + "SourceType": "consul", + "Action": "allow" +} +``` + +### Sample Request + +```text +$ curl \ + --request PUT \ + --data @payload.json \ + https://consul.rocks/v1/connect/intentions/e9ebc19f-d481-42b1-4871-4d298d3acd5c +``` + +## Delete Intention + +This endpoint deletes a specific intention. + +| Method | Path | Produces | +| -------- | ---------------------------- | -------------------------- | +| `DELETE` | `/connect/intentions/:uuid` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | ---------------- | +| `NO` | `none` | `service:write` | + +### Parameters + +- `uuid` `(string: )` - Specifies the UUID of the intention to delete. This + is specified as part of the URL. + +### Sample Request + +```text +$ curl \ + --request DELETE \ + https://consul.rocks/v1/connect/intentions/e9ebc19f-d481-42b1-4871-4d298d3acd5c +``` + +## Check Intention Result + +This endpoint evaluates the intentions for a specific source and destination +and returns whether the connection would be authorized or not given the +current Consul configuration and set of intentions. + +This endpoint will work even if the destination service has +`intention = "deny"` specifically set, because the resulting API response +does not contain any information about the intention itself. + + +| Method | Path | Produces | +| ------ | ---------------------------- | -------------------------- | +| `GET` | `/connect/intentions/check` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). 
+ +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | ---------------- | +| `NO` | `none` | `service:read` | + +### Parameters + +- `source` `(string: )` - Specifies the source service. This + is specified as part of the URL. + +- `destination` `(string: )` - Specifies the destination service. This + is specified as part of the URL. + +### Sample Request + +```text +$ curl \ + https://consul.rocks/v1/connect/intentions/check?source=web&destination=db +``` + +### Sample Response + +```json +{ + "Allowed": true +} +``` + +- `Allowed` is true if the connection would be allowed, false otherwise. + +## List Matching Intentions + +This endpoint lists the intentions that match a given source or destination. +The intentions in the response are in evaluation order. + +| Method | Path | Produces | +| ------ | ------------------------------ | -------------------------- | +| `GET` | `/connect/intentions/match` | `application/json` | + +The table below shows this endpoint's support for +[blocking queries](/api/index.html#blocking-queries), +[consistency modes](/api/index.html#consistency-modes), and +[required ACLs](/api/index.html#acls). + +| Blocking Queries | Consistency Modes | ACL Required | +| ---------------- | ----------------- | --------------- | +| `NO` | `none` | `service:read` | + +### Parameters + +- `by` `(string: )` - Specifies whether to match the "name" value + by "source" or "destination". + +- `name` `(string: )` - Specifies a name to match. This parameter + can be repeated for batching multiple matches. + +### Sample Request + +```text +$ curl \ + https://consul.rocks/v1/connect/intentions/match?by=source&name=web +``` + +### Sample Response + +```json +{ + "web": [ + { + "ID": "ed16f6a6-d863-1bec-af45-96bbdcbe02be", + "Description": "", + "SourceNS": "default", + "SourceName": "web", + "DestinationNS": "default", + "DestinationName": "db", + "SourceType": "consul", + "Action": "deny", + "DefaultAddr": "", + "DefaultPort": 0, + "Meta": {}, + "CreatedAt": "2018-05-21T16:41:33.296693825Z", + "UpdatedAt": "2018-05-21T16:41:33.296694288Z", + "CreateIndex": 12, + "ModifyIndex": 12 + }, + { + "ID": "e9ebc19f-d481-42b1-4871-4d298d3acd5c", + "Description": "", + "SourceNS": "default", + "SourceName": "web", + "DestinationNS": "default", + "DestinationName": "*", + "SourceType": "consul", + "Action": "allow", + "DefaultAddr": "", + "DefaultPort": 0, + "Meta": {}, + "CreatedAt": "2018-05-21T16:41:27.977155457Z", + "UpdatedAt": "2018-05-21T16:41:27.977157724Z", + "CreateIndex": 11, + "ModifyIndex": 11 + } + ] +} +``` diff --git a/website/source/assets/stylesheets/_inner.scss b/website/source/assets/stylesheets/_inner.scss index 1fd41e3f3..a9f0076f9 100644 --- a/website/source/assets/stylesheets/_inner.scss +++ b/website/source/assets/stylesheets/_inner.scss @@ -60,7 +60,7 @@ h3, h4 { color: $body-font-color; - margin-top: 54px; + margin-top: 35px; margin-bottom: $font-size; line-height: 1.3; } diff --git a/website/source/docs/agent/dns.html.md b/website/source/docs/agent/dns.html.md index 9c6fd1fcf..3d3f1c866 100644 --- a/website/source/docs/agent/dns.html.md +++ b/website/source/docs/agent/dns.html.md @@ -57,8 +57,8 @@ we can instead use `foo.node.consul.` This convention allows for terse syntax where appropriate while supporting queries of nodes in remote datacenters as necessary. 
-For a node lookup, the only records returned are A and AAAA records -containing the IP address, and TXT records containing the +For a node lookup, the only records returned are A and AAAA records +containing the IP address, and TXT records containing the `node_meta` values of the node. ```text @@ -85,9 +85,9 @@ foo.node.consul. 0 IN TXT "value only" consul. 0 IN SOA ns.consul. postmaster.consul. 1392836399 3600 600 86400 0 ``` -By default the TXT records value will match the node's metadata key-value -pairs according to [RFC1464](https://www.ietf.org/rfc/rfc1464.txt). -Alternatively, the TXT record will only include the node's metadata value when the +By default the TXT records value will match the node's metadata key-value +pairs according to [RFC1464](https://www.ietf.org/rfc/rfc1464.txt). +Alternatively, the TXT record will only include the node's metadata value when the node's metadata key starts with `rfc1035-`. ## Service Lookups @@ -207,6 +207,27 @@ Both A and SRV records are supported. SRV records provide the port that a servic registered on, enabling clients to avoid relying on well-known ports. SRV records are only served if the client specifically requests them. +### Connect-Capable Service Lookups + +To find Connect-capable services: + + .connect. + +This will find all [Connect-capable](/docs/connect/index.html) +endpoints for the given `service`. A Connect-capable endpoint may be +both a proxy for a service or a natively integrated Connect application. +The DNS interface does not differentiate the two. + +Most services will use a [proxy](/docs/connect/proxies.html) that handles +service discovery automatically and therefore won't use this DNS format. +This DNS format is primarily useful for [Connect-native](/docs/connect/native.html) +applications. + +This endpoint currently only finds services within the same datacenter +and doesn't support tags. This DNS interface will be expanded over time. +If you need more complex behavior, please use the +[catalog API](/api/catalog.html). + ### UDP Based DNS Queries When the DNS query is performed using UDP, Consul will truncate the results diff --git a/website/source/docs/commands/connect.html.md b/website/source/docs/commands/connect.html.md new file mode 100644 index 000000000..3264cc2ba --- /dev/null +++ b/website/source/docs/commands/connect.html.md @@ -0,0 +1,43 @@ +--- +layout: "docs" +page_title: "Commands: Connect" +sidebar_current: "docs-commands-connect" +--- + +# Consul Connect + +Command: `consul connect` + +The `connect` command is used to interact with Connect +[Connect](/docs/connect/intentions.html) subsystems. It exposes commands for +running the built-in mTLS proxy and viewing/updating the Certificate Authority +(CA) configuration. This command is available in Consul 1.2 and later. + +## Usage + +Usage: `consul connect ` + +For the exact documentation for your Consul version, run `consul connect -h` to view +the complete list of subcommands. + +```text +Usage: consul connect [options] [args] + + This command has subcommands for interacting with Consul Connect. + + Here are some simple examples, and more detailed examples are available + in the subcommands or the documentation. + + Run the built-in Connect mTLS proxy + + $ consul connect proxy + + For more examples, ask for subcommand help or view the documentation. 
+ +Subcommands: + ca Interact with the Consul Connect Certificate Authority (CA) + proxy Runs a Consul Connect proxy +``` + +For more information, examples, and usage about a subcommand, click on the name +of the subcommand in the sidebar. \ No newline at end of file diff --git a/website/source/docs/commands/connect/ca.html.md.erb b/website/source/docs/commands/connect/ca.html.md.erb new file mode 100644 index 000000000..6f77b478c --- /dev/null +++ b/website/source/docs/commands/connect/ca.html.md.erb @@ -0,0 +1,91 @@ +--- +layout: "docs" +page_title: "Commands: Connect CA" +sidebar_current: "docs-commands-connect-ca" +description: > + The connect CA subcommand is used to view and modify the Connect Certificate + Authority (CA) configuration. +--- + +# Consul Connect Certificate Authority (CA) + +Command: `consul connect ca` + +The CA connect command is used to interact with Consul Connect's Certificate Authority +subsystem. The command can be used to view or modify the current CA configuration. See the +[Connect CA Guide](/docs/guides/connect-ca.html) for more information. + +```text +Usage: consul connect ca [options] [args] + + This command has subcommands for interacting with Consul Connect's + Certificate Authority (CA). + + Here are some simple examples, and more detailed examples are available + in the subcommands or the documentation. + + Get the configuration: + + $ consul connect ca get-config + + Update the configuration: + + $ consul connect ca set-config -config-file ca.json + + For more examples, ask for subcommand help or view the documentation. + +Subcommands: + get-config Display the current Connect Certificate Authority (CA) configuration + set-config Modify the current Connect CA configuration +``` + +## get-config + +This command displays the current CA configuration. + +Usage: `consul connect ca get-config [options]` + +#### API Options + +<%= partial "docs/commands/http_api_options_client" %> +<%= partial "docs/commands/http_api_options_server" %> + +The output looks like this: + +``` +{ + "Provider": "consul", + "Config": { + "PrivateKey": null, + "RootCert": null, + "RotationPeriod": "2160h" + }, + "CreateIndex": 5, + "ModifyIndex": 197 +} +``` + +## set-config + +Modifies the current CA configuration. If this results in a new root certificate +being used, the [Root Rotation](/docs/guides/connect-ca.html#rotation) process +will be triggered. + +Usage: `consul connect ca set-config [options]` + +#### API Options + +<%= partial "docs/commands/http_api_options_client" %> +<%= partial "docs/commands/http_api_options_server" %> + +#### Command Options + +* `-config-file` - (required) Specifies a JSON-formatted file to use for the new configuration. + +The output looks like this: + +``` +Configuration updated! +``` + +The return code will indicate success or failure. diff --git a/website/source/docs/commands/connect/proxy.html.md.erb b/website/source/docs/commands/connect/proxy.html.md.erb new file mode 100644 index 000000000..c9264b4bb --- /dev/null +++ b/website/source/docs/commands/connect/proxy.html.md.erb @@ -0,0 +1,82 @@ +--- +layout: "docs" +page_title: "Commands: Connect Proxy" +sidebar_current: "docs-commands-connect-proxy" +description: > + The connect proxy subcommand is used to run the built-in mTLS proxy for Connect. +--- + +# Consul Connect Proxy + +Command: `consul connect proxy` + +The connect proxy command is used to run Consul's built-in mTLS proxy for +use with Connect. 
This can be used in production to enable a Connect-unaware +application to accept and establish Connect-based connections. This proxy +can also be used in development to establish Connect-based connections. + + +## Usage + +Usage: `consul connect proxy [options]` + +#### API Options + +<%= partial "docs/commands/http_api_options_client" %> +<%= partial "docs/commands/http_api_options_server" %> + +#### Proxy Options + +* `-service` - Name of the service this proxy is representing. This service + doesn't need to actually exist in the Consul catalog, but proper ACL + permissions (`service:write`) are required. + +* `-upstream` - Upstream service to support connecting to. The format should be + 'name:addr', such as 'db:8181'. This will make 'db' available on port 8181. + When a regular TCP connection is made to port 8181, the proxy will service + discover "db" and establish a Connect mTLS connection identifying as + the `-service` value. This flag can be repeated multiple times. + +* `-listen` - Address to listen for inbound connections to the proxied service. + Must be specified with -service and -service-addr. If this isn't specified, + an inbound listener is not started. + +* `-service-addr` - Address of the local service to proxy. Required for + `-listen`. + +* `-register` - Self-register with the local Consul agent, making this + proxy available as Connect-capable service in the catalog. This is only + useful with `-listen`. + +* `-register-id` - Optional ID suffix for the service when `-register` is set + to disambiguate the service ID. By default the service ID is "-proxy" + where `` is the `-service` value. + +* `-log-level` - Specifies the log level. + +* `-pprof-addr` - Enable debugging via pprof. Providing a host:port (or just ':port') + enables profiling HTTP endpoints on that address. + +* `-proxy-id` - The proxy's ID on the local agent. This is only useful to + test the managed proxy mode. + +## Examples + +The example below shows how to start a local proxy for establishing outbound +connections to "db" representing the frontend service. Once running, any +process that creates a TCP connection to the specified port (8181) will +establish a mutual TLS connection to "db" identified as "frontend". + +```text +$ consul connect proxy -service frontend -upstream db:8181 +``` + +The next example starts a local proxy that also accepts inbound connections +on port 8443, authorizes the connection, then proxies it to port 8080: + +```text +$ consul connect proxy \ + -service frontend \ + -service-addr 127.0.0.1:8080 \ + -listen ':8443' +``` diff --git a/website/source/docs/commands/intention.html.md b/website/source/docs/commands/intention.html.md new file mode 100644 index 000000000..aab479144 --- /dev/null +++ b/website/source/docs/commands/intention.html.md @@ -0,0 +1,54 @@ +--- +layout: "docs" +page_title: "Commands: Intention" +sidebar_current: "docs-commands-intention" +--- + +# Consul Intention + +Command: `consul intention` + +The `intention` command is used to interact with Connect +[intentions](/docs/connect/intentions.html). It exposes commands for +creating, updating, reading, deleting, checking, and managing intentions. +This command is available in Consul 1.2 and later. + +Intentions may also be managed via the [HTTP API](/api/connect/intentions.html). + +## Usage + +Usage: `consul intention ` + +For the exact documentation for your Consul version, run `consul intention -h` to view +the complete list of subcommands. + +```text +Usage: consul intention [options] [args] + + ... 
+ +Subcommands: + check Check whether a connection between two services is allowed. + create Create intentions for service connections. + delete Delete an intention. + get Show information about an intention. + match Show intentions that match a source or destination. +``` + +For more information, examples, and usage about a subcommand, click on the name +of the subcommand in the sidebar. + +## Basic Examples + +Create an intention to allow "web" to talk to "db": + + $ consul intention create web db + +Test whether a "web" is allowed to connect to "db": + + $ consul intention check web db + +Find all intentions for communicating to the "db" service: + + $ consul intention match db + diff --git a/website/source/docs/commands/intention/check.html.md.erb b/website/source/docs/commands/intention/check.html.md.erb new file mode 100644 index 000000000..48a4b3c9b --- /dev/null +++ b/website/source/docs/commands/intention/check.html.md.erb @@ -0,0 +1,37 @@ +--- +layout: "docs" +page_title: "Commands: Intention Check" +sidebar_current: "docs-commands-intention-check" +--- + +# Consul Intention Check + +Command: `consul intention check` + +The `intention check` command checks whether a connection attempt between +two services would be authorized given the current set of intentions and +Consul configuration. + +This command requires less ACL permissions than other intention-related +tasks because no information about the intention is revealed. Therefore, +callers only need to have `service:read` access for the destination. Richer +commands like [match](/docs/commands/intention/match.html) require full +intention read permissions and don't evaluate the result. + +## Usage + +Usage: `consul intention check [options] SRC DST` + +#### API Options + +<%= partial "docs/commands/http_api_options_client" %> + +## Examples + +```text +$ consul intention check web db +Denied + +$ consul intention check web billing +Allowed +``` diff --git a/website/source/docs/commands/intention/create.html.md.erb b/website/source/docs/commands/intention/create.html.md.erb new file mode 100644 index 000000000..d127baaf9 --- /dev/null +++ b/website/source/docs/commands/intention/create.html.md.erb @@ -0,0 +1,51 @@ +--- +layout: "docs" +page_title: "Commands: Intention Create" +sidebar_current: "docs-commands-intention-create" +--- + +# Consul Intention Create + +Command: `consul intention create` + +The `intention create` command creates or updates an intention. + +## Usage + +Usage: `consul intention create [options] SRC DST` +Usage: `consul intention create [options] -f FILE...` + +#### API Options + +<%= partial "docs/commands/http_api_options_client" %> + +#### Intention Create Options + +* `-allow` - Set the action to "allow" for intentions. This is the default. + +* `-deny` - Set the action to "deny" for intentions. This cannot be specified + with `-allow`. + +* `-file` - Read intention data one or more files specified by the command + line arguments, instead of source/destination pairs. + +* `-meta key=value` - Specify arbitrary KV metadata to associate with the + intention. + +* `-replace` - Replace any matching intention. The replacement is done + atomically per intention. 
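The command manages the same objects as the [intentions HTTP API](/api/connect/intentions.html). As a rough, illustrative sketch (assuming a local agent on `127.0.0.1:8500`; the field names are taken from that API's documentation), the equivalent of `consul intention create web db` in Go might be:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Build the same payload the API documentation describes for
	// creating an "allow" intention from web to db.
	body, err := json.Marshal(map[string]string{
		"SourceName":      "web",
		"DestinationName": "db",
		"SourceType":      "consul",
		"Action":          "allow",
	})
	if err != nil {
		log.Fatal(err)
	}

	resp, err := http.Post(
		"http://127.0.0.1:8500/v1/connect/intentions",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// A successful create returns the new intention's ID.
	var out struct{ ID string }
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println("created intention", out.ID)
}
```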
+ +## Examples + +Create an intention `web => db`: + + $ consul intention create web db + +Create intentions from a set of files: + + $ consul intention create -file one.json two.json + +Create intentions from a directory using shell expansion: + + $ consul intention create -file intentions/*.json + diff --git a/website/source/docs/commands/intention/delete.html.md.erb b/website/source/docs/commands/intention/delete.html.md.erb new file mode 100644 index 000000000..7f93ffc74 --- /dev/null +++ b/website/source/docs/commands/intention/delete.html.md.erb @@ -0,0 +1,36 @@ +--- +layout: "docs" +page_title: "Commands: Intention Delete" +sidebar_current: "docs-commands-intention-delete" +--- + +# Consul Intention Delete + +Command: `consul intention delete` + +The `intention delete` command deletes a matching intention. + +## Usage + +Usage: + + * `consul intention delete [options] SRC DST` + * `consul intention delete [options] ID` + +#### API Options + +<%= partial "docs/commands/http_api_options_client" %> + +## Examples + +Delete an intention from "web" to "db" with any action: + +```text +$ consul intention delete web db +``` + +Delete an intention by unique ID: + +```text +$ consul intention delete 4ffed935-439c-695d-4f51-f4fc0b12a7a7 +``` diff --git a/website/source/docs/commands/intention/get.html.md.erb b/website/source/docs/commands/intention/get.html.md.erb new file mode 100644 index 000000000..fb2ae5e45 --- /dev/null +++ b/website/source/docs/commands/intention/get.html.md.erb @@ -0,0 +1,33 @@ +--- +layout: "docs" +page_title: "Commands: Intention Get" +sidebar_current: "docs-commands-intention-get" +--- + +# Consul Intention Get + +Command: `consul intention get` + +The `intention get` command shows a single intention. + +## Usage + +Usage: + + * `consul intention get [options] SRC DST` + * `consul intention get [options] ID` + +#### API Options + +<%= partial "docs/commands/http_api_options_client" %> + +## Examples + +```text +$ consul intention get web db +Source: web +Destination: db +Action: deny +ID: 20edfa56-9cd4-51db-8c22-db09fdec61ef +Created At: Thursday, 24-May-18 17:07:49 PDT +``` diff --git a/website/source/docs/commands/intention/match.html.md.erb b/website/source/docs/commands/intention/match.html.md.erb new file mode 100644 index 000000000..d42280380 --- /dev/null +++ b/website/source/docs/commands/intention/match.html.md.erb @@ -0,0 +1,38 @@ +--- +layout: "docs" +page_title: "Commands: Intention Match" +sidebar_current: "docs-commands-intention-match" +--- + +# Consul Intention Match + +Command: `consul intention match` + +The `intention match` command shows the list of intentions that match +a given source or destination. The list of intentions is listed in evaluation +order: the first intention that matches a request would be evaluated. + +The [check](/docs/commands/intention/check.html) command can be used to +check whether a connection would be authorized between any two services. + +## Usage + +Usage: `consul intention match [options] SRC_OR_DST` + +#### API Options + +<%= partial "docs/commands/http_api_options_client" %> + +#### Intention Match Options + +* `-destination` - Match by destination. + +* `-source` - Match by source. 
+ +## Examples + +```text +$ consul intention match -source web +web => db (deny) +web => * (allow) +``` diff --git a/website/source/docs/connect/ca.html.md b/website/source/docs/connect/ca.html.md new file mode 100644 index 000000000..424f83486 --- /dev/null +++ b/website/source/docs/connect/ca.html.md @@ -0,0 +1,11 @@ +--- +layout: "docs" +page_title: "Connect - Certificate Management" +sidebar_current: "docs-connect-ca" +description: |- + TODO +--- + +# Connect Certificate Management + +TODO diff --git a/website/source/docs/connect/configuration.html.md b/website/source/docs/connect/configuration.html.md new file mode 100644 index 000000000..9bce25247 --- /dev/null +++ b/website/source/docs/connect/configuration.html.md @@ -0,0 +1,41 @@ +--- +layout: "docs" +page_title: "Connect - Configuration" +sidebar_current: "docs-connect-config" +description: |- + A Connect-aware proxy enables unmodified applications to use Connect. A per-service proxy sidecar transparently handles inbound and outbound service connections, automatically wrapping and verifying TLS connections. +--- + +# Connect Configuration + +There are many configuration options exposed for Connect. The only option +that must be set is the "enabled" option on Consul Servers to enable Connect. +All other configurations are optional and have reasonable defaults. + +## Enable Connect on the Cluster + +The first step to use Connect is to enable Connect for your Consul +cluster. By default, Connect is disabled. Enabling Connect requires changing +the configuration of only your Consul _servers_ (not client agents). To enable +Connect, add the following to a new or existing +[server configuration file](/docs/agent/options.html). In HCL: + +```hcl +connect { + enabled = true +} +``` + +This will enable Connect and configure your Consul cluster to use the +built-in certificate authority for creating and managing certificates. +You may also configure Consul to use an external +[certificate management system](/docs/connect/ca.html), such as +[Vault](https://vaultproject.io). + +No agent-wide configuration is necessary for non-server agents. Services +and proxies may always register with Connect settings, but they will fail to +retrieve or verify any TLS certificates. This causes all Connect-based +connection attempts to fail until Connect is enabled on the server agents. + +-> **Note:** Connect is enabled by default when running Consul in +dev mode with `consul agent -dev`. diff --git a/website/source/docs/connect/dev.html.md b/website/source/docs/connect/dev.html.md new file mode 100644 index 000000000..b160b4018 --- /dev/null +++ b/website/source/docs/connect/dev.html.md @@ -0,0 +1,67 @@ +--- +layout: "docs" +page_title: "Connect - Development and Debugging" +sidebar_current: "docs-connect-dev" +description: |- + It is often necessary to connect to a service for development or debugging. If a service only exposes a Connect listener, then we need a way to establish a mutual TLS connection to the service. The `consul connect proxy` command can be used for this task on any machine with access to a Consul agent (local or remote). +--- + +# Developing and Debugging Connect Services + +It is often necessary to connect to a service for development or debugging. +If a service only exposes a Connect listener, then we need a way to establish +a mutual TLS connection to the service. The +[`consul connect proxy` command](/docs/commands/connect/proxy.html) can be used +for this task on any machine with access to a Consul agent (local or remote). 
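
For example, because the command talks to Consul over the standard HTTP API, it can be pointed at a remote agent with the usual `-http-addr` flag. The address, service name, and upstream below are illustrative values only:

```sh
# Run a local proxy against a remote Consul agent (values are illustrative).
$ consul connect proxy \
    -http-addr=10.0.1.109:8500 \
    -service operator-debug \
    -upstream db:9191
```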
+ +Restricting access to services only via Connect ensures that the only way to +connect to a service is through valid authorization of the +[intentions](/docs/connect/intentions.html). This can extend to developers +and operators, too. + +## Connecting to Connect-only Services + +As an example, let's assume that we have a PostgreSQL database running that +we want to connect to via `psql`, but the only non-loopback listener is +via Connect. Let's also assume that we have an ACL token to identify as +`operator-mitchellh`. We can start a local proxy: + +```sh +$ consul connect proxy \ + -service operator-mitchellh \ + -upstream postgresql:8181 +``` + +This works because the source `-service` does not need to be registered +in the local Consul catalog. However, to retrieve a valid identifying +certificate, the ACL token must have `service:write` permissions. This +can be used as a sort of "virtual service" to represent people, too. In +the example above, the proxy is identifying as `operator-mitchellh`. + +With the proxy running, we can now use `psql` like normal: + +``` +$ psql -h 127.0.0.1 -p 8181 -U mitchellh mydb +> +``` + +This `psql` session is now happening through our local proxy via an +authorized mutual TLS connection to the PostgreSQL service in our Consul +catalog. + +### Masquerading as a Service + +You can also easily masquerade as any source service by setting the +`-service` value to any service. Note that the proper ACL permissions are +required to perform this task. + +For example, if you have an ACL token that allows `service:write` for +`web` and you want to connect to the `postgresql` service as "web", you +can start a proxy like so: + +```sh +$ consul connect proxy \ + -service web \ + -upstream postgresql:8181 +``` + diff --git a/website/source/docs/connect/index.html.md b/website/source/docs/connect/index.html.md new file mode 100644 index 000000000..0fa103a22 --- /dev/null +++ b/website/source/docs/connect/index.html.md @@ -0,0 +1,43 @@ +--- +layout: "docs" +page_title: "Connect (Service Segmentation)" +sidebar_current: "docs-connect-index" +description: |- + Consul Connect provides service-to-service connection authorization and encryption using mutual TLS. +--- + +# Connect + +Consul Connect provides service-to-service connection authorization +and encryption using mutual TLS. Applications can use +[sidecar proxies](/docs/connect/proxies.html) +to automatically establish TLS connections for inbound and outbound connections +without being aware of Connect at all. Applications may also +[natively integrate with Connect](/docs/connect/native.html) +for optimal performance and security. + +## How it Works + +TODO + +## Eliminating East-West Firewalls + +East-west firewalls are the typical tool for network security in a static world. +East-west is the transfer of data from server to server within a datacenter, +versus North-south traffic which describes end user to server communications. + +These firewalls wrap services with ingress/egress policies. This perimeter-based +approach is difficult to scale in a dynamic world with dozens or hundreds of +services or where machines may be frequently created or destroyed. Firewalls +create a sprawl of rules for each service instance that quickly becomes +overly difficult to maintain. + +Service security in a dynamic world is best solved through service-to-service +authentication and authorization. 
Instead of IP-based network security, +services can be deployed to low-trust networks and rely on service-identity-based +security with in-transit data encryption. + +Connect enables service segmentation by securing service-to-service +communications through mutual TLS and transparent proxying on zero-trust +networks. This allows direct service communication without relying on firewalls +for east-west traffic security.
diff --git a/website/source/docs/connect/intentions.html.md b/website/source/docs/connect/intentions.html.md new file mode 100644 index 000000000..6c0c61d03 --- /dev/null +++ b/website/source/docs/connect/intentions.html.md @@ -0,0 +1,134 @@ +--- +layout: "docs" +page_title: "Connect - Intentions" +sidebar_current: "docs-connect-intentions" +description: |- + Intentions define access control for services via Connect and are used to control which services may establish connections. Intentions can be managed via the API, CLI, or UI. +--- + +# Intentions + +Intentions define access control for services via Connect and are used +to control which services may establish connections. Intentions can be +managed via the API, CLI, or UI. + +Intentions are enforced by the [proxy](/docs/connect/proxies.html) +or [natively integrated application](/docs/connect/native.html) on +inbound connections. After verifying the TLS client certificate, the +[authorize API endpoint](#) is called which verifies the connection +is allowed by testing the intentions. If authorize returns false, the +connection must be terminated. + +The default intention behavior is defined by the default +[ACL policy](/docs/guides/acls.html). If the default ACL policy is "allow all", +then all Connect connections are allowed by default. If the default ACL policy +is "deny all", then all Connect connections are denied by default. + +## Intention Basics + +Intentions can be managed via the +[API](#), +[CLI](#), +or UI. Please see the respective documentation for each for full details +on options, flags, etc. +Below is an example that shows the basic attributes +of an intention. The full data model of an intention can be found in the +[API documentation](#). + +``` +$ consul intention create -deny web db +Created: web => db (deny) +``` + +The intention above is a deny intention with a source of "web" and +destination of "db". This says that connections from web to db are not +allowed and the connection will be rejected. + +### Wildcard Intentions + +An intention source or destination may also be the special wildcard +value `*`. This matches _any_ value and is used as a catch-all. Example: + +``` +$ consul intention create -deny web '*' +Created: web => * (deny) +``` + +This example says that the "web" service cannot connect to _any_ service. + +### Metadata + +Arbitrary string key/value data may be associated with intentions. This +is unused by Consul but can be used by external systems or for visibility +in the UI. + +``` +$ consul intention create \ + -deny \ + -meta description='Hello there' \ + web db +... + +$ consul intention get web db +Source: web +Destination: db +Action: deny +ID: 31449e02-c787-f7f4-aa92-72b5d9b0d9ec +Meta[description]: Hello there +Created At: Friday, 25-May-18 02:07:51 CEST +``` + +## Precedence and Match Order + +Intentions are matched in an implicit order based on specificity, preferring +deny over allow. The full precedence table is shown below and is evaluated +top to bottom.
+ +TODO + +## Intention Management Permissions + +Intention management can be protected by [ACLs](/docs/guides/acls.html). +Permissions for intentions are _destination-oriented_, meaning the ACLs +for managing intentions are looked up based on the destination value +of the intention, not the source. + +Intention permissions are first inherited from `service` management permissions. +For example, the ACL below would allow _read_ access to intentions with a +destination starting with "web": + +```hcl +service "web" { + policy = "read" +} +``` + +ACLs may also specify service-specific intention permissions. In the example +below, the ACL token may register a "web"-prefixed service but _may not_ read or write +intentions: + +```hcl +service "web" { + policy = "read" + intention = "deny" +} +``` + +## Performance and Intention Updates + +The intentions for services registered with a Consul agent are cached +locally on that agent. They are then updated via a background blocking query +against the Consul servers. + +Connect connection attempts require only local agent +communication for authorization and generally impose only impose microseconds +of latency to the connection. All actions in the data path of connections +require only local data to ensure minimal performance overhead. + +Updates to intentions are propagated nearly instantly to agents since agents +maintain a continuous blocking query in the background for intention updates +for registered services. + +Because all the intention data is cached locally, the agents can fail open. +Even if the agents are severed completely from the Consul servers, inbound +connection authorization continues to work for a configured amount of time. diff --git a/website/source/docs/connect/native.html.md b/website/source/docs/connect/native.html.md new file mode 100644 index 000000000..4f31271c3 --- /dev/null +++ b/website/source/docs/connect/native.html.md @@ -0,0 +1,11 @@ +--- +layout: "docs" +page_title: "Connect - Native Application Integration" +sidebar_current: "docs-connect-native" +description: |- + TODO +--- + +# Connect-Native App Integration + +TODO diff --git a/website/source/docs/connect/proxies.html.md b/website/source/docs/connect/proxies.html.md new file mode 100644 index 000000000..b588c6665 --- /dev/null +++ b/website/source/docs/connect/proxies.html.md @@ -0,0 +1,176 @@ +--- +layout: "docs" +page_title: "Connect - Proxies" +sidebar_current: "docs-connect-proxies" +description: |- + A Connect-aware proxy enables unmodified applications to use Connect. A per-service proxy sidecar transparently handles inbound and outbound service connections, automatically wrapping and verifying TLS connections. +--- + +# Connect Proxies + +A Connect-aware proxy enables unmodified applications to use Connect. +A per-service proxy sidecar transparently handles inbound and outbound +service connections, automatically wrapping and verifying TLS connections. + +When a proxy is used, the actual service being proxied should only accept +connections on a loopback address. This requires all external connections +to be established via the Connect protocol to provide authentication and +authorization. + +Consul supports both _managed_ and _unmanaged_ proxies. A managed proxy +is started, configured, and stopped by Consul. An unmanaged proxy is the +responsibility of the user, like any other Consul service. + +## Managed Proxies + +Managed proxies are started, configured, and stopped by Consul. 
They are +enabled via basic configurations within the +[service definition](/docs/agent/services.html). +This is the easiest way to start a proxy and allows Consul users to begin +using Connect with only a small configuration change. + +Managed proxies also offer the best security. Managed proxies are given +a unique proxy-specific ACL token that allows read-only access to Connect +information for the specific service the proxy is representing. This ACL +token is more restrictive than can be currently expressed manually in +an ACL policy. + +The default managed proxy is a basic proxy built-in to Consul and written +in Go. Having a basic built-in proxy allows Consul to have a sane default +with performance that is good enough for most workloads. In some basic +benchmarks, the service-to-service communication over the built-in proxy +could sustain 5 Gbps with a per-hop latency of less than X microseconds. Therefore, +the performance impact of even the basic built-in proxy is minimal. + +Consul will be +integrating with advanced proxies in the near future to support more complex +configurations and higher performance. The configuration below is all for +the built-in proxy. + +### Minimal Configuration + +Managed proxies are configured within a +[service definition](/docs/agent/services.html). The simplest possible +managed proxy configuration is an empty configuration. This enables the +default managed proxy and starts a listener for that service: + +```json +{ + "service": "redis", + "connect": { "proxy": {} } +} +``` + +The listener is started on random port within the configured Connect +port range. It can be discovered using the +[DNS interface](/docs/agent/dns.html#connect-capable-service-lookups) +or +[Catalog API](#). +In most cases, service-to-service communication is established by +a proxy configured with upstreams (described below), which handle the +discovery transparently. + +### Upstream Configuration + +To transparently discover and establish Connect-based connections to +dependencies, they must be configured with a static port on the managed +proxy configuration: + +```json +{ + "service": "web", + "connect": { + "proxy": { + "config": { + "upstreams": [{ + "destination_name": "redis", + "local_bind_port": 1234 + }] + } + } + } +} +``` + +In the example above, +"redis" is configured as an upstream with static port 1234 for service "web". +When a TCP connection is established on port 1234, the proxy +will find Connect-compatible "redis" services via Consul service discovery +and establish a TLS connection identifying as "web". + +~> **Security:** Any application that can communicate to the configured +static port will be able to masquerade as the source service ("web" in the +example above). You must either trust any loopback access on that port or +use namespacing techniques provided by your operating system. + +### Prepared Query Upstreams + +The upstream destination may also be a +[prepared query](/api/query.html). +This allows complex service discovery behavior such as connecting to +the nearest neighbor or filtering by tags. + +For example, given a prepared query named "nearest-redis" that is +configured to route to the nearest Redis instance, an upstream can be +configured to route to this query. In the example below, any TCP connection +to port 1234 will attempt a Connect-based connection to the nearest Redis +service. 
+ +```json +{ + "service": "web", + "connect": { + "proxy": { + "config": { + "upstreams": [{ + "destination_name": "nearest-redis", + "destination_type": "prepared_query", + "local_bind_port": 1234 + }] + } + } + } +} +``` + +### Dynamic Upstreams + +If an application requires dynamic dependencies that are only available +at runtime, it must currently [natively integrate](/docs/connect/native.html) +with Connect. After natively integrating, the HTTP API or +[DNS interface](/docs/agent/dns.html#connect-capable-service-lookups) +can be used. + +## Unmanaged Proxies + +Unmanaged proxies are regular Consul services that are registered as a +proxy type and declare the service they represent. The proxy process must +be started, configured, and stopped manually by some external process such +as an operator or scheduler. + +To declare a service as a proxy, the service definition must contain +at least two additional fields: + + * `Kind` (string) must be set to `connect-proxy`. This declares that the + service is a proxy type. + + * `ProxyDestination` (string) must be set to the service that this proxy + is representing. + + * `Port` must be set so that other Connect services can discover the exact + address for connections. `Address` is optional if the service is being + registered against an agent, since it'll inherit the node address. + +Example: + +```json +{ + "Name": "redis-proxy", + "Kind": "connect-proxy", + "ProxyDestination": "redis", + "Port": 8181 +} +``` + +With this service registered, any Connect proxies searching for a +Connect-capable endpoint for "redis" will find this proxy. diff --git a/website/source/docs/connect/proxies/integrate.html.md b/website/source/docs/connect/proxies/integrate.html.md new file mode 100644 index 000000000..63a6cc5a6 --- /dev/null +++ b/website/source/docs/connect/proxies/integrate.html.md @@ -0,0 +1,63 @@ +--- +layout: "docs" +page_title: "Connect - Proxy Integration" +sidebar_current: "docs-connect-proxies-integrate" +description: |- + A Connect-aware proxy enables unmodified applications to use Connect. A per-service proxy sidecar transparently handles inbound and outbound service connections, automatically wrapping and verifying TLS connections. +--- + +# Connect Custom Proxy Integration + +Any proxy can be extended to support Connect. Consul ships with a built-in +proxy for a good development and out of the box experience, but understand +that production users will require other proxy solutions. + +A proxy must serve one or both of the following two roles: it must accept +inbound connections or establish outbound connections identified as a +particular service. One or both of these may be implemented depending on +the case, although generally both must be supported. + +## Accepting Inbound Connections + +For inbound connections, the proxy must accept TLS connections on some port. +The certificate served should be created by the +[`/v1/agent/connect/ca/leaf/`](/api/agent/connect.html) API endpoint. +The client certificate should be validated against the root certificates +provided by the +[`/v1/agent/connect/ca/roots`](/api/agent/connect.html) endpoint. +After validating the client certificate from the caller, the proxy should +call the +[`/v1/agent/connect/authorize`](/api/agent/connect.html) endpoint to +authorize the connection. + +All of these API endpoints operate on agent-local data that is updated +in the background. 
The leaf and roots should be updated in the background +by the proxy, but the authorize endpoint is expected to be called in the +connection path. The endpoints introduce only microseconds of additional +latency on the connection. + +The leaf and root cert endpoints support blocking queries. These should be +used if possible to get near-immediate updates for root cert rotations, +leaf expiry, etc. + +## Establishing Outbound Connections + +For outbound connections, the proxy should communicate to a +Connect-capable endpoint for a service and provide a client certificate +from the +[`/v1/agent/connect/ca/leaf/`](/api/agent/connect.html) API endpoint. +The certificate served by the remote endpoint can be verified against the +root certificates from the +[`/v1/agent/connect/ca/roots`](/api/agent/connect.html) endpoint. + +## Managed Mode Support + +If the proxy could run as a managed proxy, then it should accept the following +two environment variables that Consul populates on process startup. These +are both required to make the necessary API requests for configuration. + + * `CONSUL_PROXY_TOKEN` - The ACL token to use for all requests to proxy-related + API endpoints. + + * `CONSUL_PROXY_ID` - The service ID for requesting configuration for the + proxy from [`/v1/agent/connect/proxy/`](/api/agent/connect.html). diff --git a/website/source/docs/connect/security.html.md b/website/source/docs/connect/security.html.md new file mode 100644 index 000000000..a4a365eac --- /dev/null +++ b/website/source/docs/connect/security.html.md @@ -0,0 +1,96 @@ +--- +layout: "docs" +page_title: "Connect - Security" +sidebar_current: "docs-connect-security" +description: |- + TODO +--- + +# Connect Security + +Connect enables secure service-to-service communication over mutual TLS. This +provides both in-transit data encryption as well as authorization. This page +will document how to secure Connect. For a full security model reference, +see the dedicated [Consul security model](/docs/internals/security.html) page. + +Connect will function in any Consul configuration. However, unless the checklist +below is satisfied, Connect is not providing the security guarantees it was +built for. The checklist below can be incrementally adopted towards full +security if you prefer to operate in less secure models initially. + +~> **Warning**: The checklist below should not be considered exhaustive. Please +read and understand the [Consul security model](/docs/internals/security.html) +in depth to assess whether your deployment satisfies the security requirements +of Consul. + +## Checklist + +### ACLs Enabled with Default Deny + +Consul must be configured to use ACLs with a default deny policy. This forces +all requests to have explicit anonymous access or provide an ACL token. The +configuration also forces all service-to-service communication to be explicitly +whitelisted via an allow [intention](/docs/connect/intentions.html). + +To learn how to enable ACLs, please see the +[guide on ACLs](/docs/guides/acl.html). + +**If ACLs are enabled but are in default allow mode**, then services will be +able to communicate by default. Additionally, if a proper anonymous token +is not configured, this may allow anyone to edit intentions. We do not recommend +this. **If ACLs are not enabled**, deny intentions will still be enforced, but anyone +may edit intentions. This renders the security of the created intentions +effectively useless. 
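
As a reference point, the server configuration for this item typically includes the ACL-related fields sketched below. This is a minimal sketch with illustrative values; the authoritative list of options lives in the ACL guide and the agent configuration reference, and may differ by Consul version:

```hcl
# Minimal sketch for server agents; values are illustrative.
acl_datacenter     = "dc1"
acl_default_policy = "deny"
acl_down_policy    = "extend-cache"
```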
+ +### TCP and UDP Encryption Enabled + +TCP and UDP encryption must be enabled to prevent plaintext communication +between Consul agents. At a minimum, `verify_outgoing` should be enabled +to verify server authenticity with each server having a unique TLS certificate. +`verify_incoming` provides additional agent verification, but doesn't directly +affect Connect since requests must also always contain a valid ACL token. +Clients calling Consul APIs should be forced over encrypted connections. + +See the [Consul agent encryption page](/docs/agent/encryption.html) to +learn more about configuring agent encryption. + +**If encryption is not enabled**, a malicious actor can sniff network +traffic or perform a man-in-the-middle attack to steal ACL tokens, always +authorize connections, etc. + +### Prevent Unauthorized Access to the Config and Data Directories + +The configuration and data directories of the Consul agent on both +clients and servers should be protected from unauthorized access. This +protection must be done outside of Consul via access control systems provided +by your target operating system. + +The [full Consul security model](/docs/internals/security.html) explains the +risk of unauthorized access for both client agents and server agents. In +general, the blast radius of unauthorized access for client agent directories +is much smaller than servers. However, both must be protected against +unauthorized access. + +### Prevent Non-Connect Traffic to Services + +For services that are using +[proxies](/docs/connect/proxies.html) +(are not [natively integrated](/docs/connect/native.html)), +network access via their unencrypted listeners must be restricted +to only the proxy. This requires at a minimum restricting the listener +to bind to loopback only. More complex solutions may involve using +network namespacing techniques provided by the underlying operating system. + +For scenarios where multiple services are running on the same machine +without isolation, these services must all be trusted. We call this the +**trusted multi-tenenacy** deployment model. Any service could theoretically +connect to any other service via the loopback listener, bypassing Connect +completely. In this scenario, all services must be trusted _or_ isolation +mechanisms must be used. + +For developer or operator access to a service, we recommend +using a local Connect proxy. This is documented in the +[development and debugging guide](/docs/connect/dev.html). + +**If non-proxy traffic can communicate with the service**, this traffic +will not be encrypted or authorized via Connect. diff --git a/website/source/intro/getting-started/connect.html.md b/website/source/intro/getting-started/connect.html.md new file mode 100644 index 000000000..20dde36fa --- /dev/null +++ b/website/source/intro/getting-started/connect.html.md @@ -0,0 +1,192 @@ +--- +layout: "intro" +page_title: "Consul Connect" +sidebar_current: "gettingstarted-connect" +description: |- + Connect is a feature of Consul that provides service-to-service connection authorization and encryption using mutual TLS. This ensures that all service communication in your datacenter is encrypted and that the rules of what services can communicate is centrally managed with Consul. +--- + +# Connect + +We've now registered our first service with Consul and we've shown how you +can use the HTTP API or DNS interface to query the address and directly connect +to that service. 
Consul also provides a feature called **Connect** for +automatically connecting via an encrypted TLS connection and authorizing +which services are allowed to connect to each other. + +Applications do not need to be modified at all to use Connect. +[Sidecar proxies](/docs/connect/proxies.html) can be used +to automatically establish TLS connections for inbound and outbound connections +without being aware of Connect at all. Applications may also +[natively integrate with Connect](/docs/connect/native.html) +for optimal performance and security. + +-> **Security note:** The getting started guide will show Connect features +and focus on ease of use with a dev-mode agent. We will _not setup_ Connect in a +production-recommended secure way. Please read the Connect reference +documentation on security best practices to understand the tradeoffs. + +## Starting a Connect-unaware Service + +Let's begin by starting a service that is unaware of Connect all. To +keep it simple, let's just use `socat` to start a basic echo service. This +service will accept TCP connections and echo back any data sent to it. If +`socat` isn't installed on your machine, it should be easily available via +a package manager. + +```sh +$ socat -v tcp-l:8181,fork exec:"/bin/cat" +``` + +You can verify it is working by using `nc` to connect directly to it. Once +connected, type some text and press enter. The text you typed should be +echoed back: + +``` +$ nc 127.0.0.1 8181 +hello +hello +echo +echo +``` + +`socat` is a decades-old Unix utility and our process is configured to +only accept a basic TCP connection. It has no concept of encryption, the +TLS protocol, etc. This can be representative of an existing service in +your datacenter such as a database, backend web service, etc. + +## Registering the Service with Consul and Connect + +Next, let's register the service with Consul. We'll do this by writing +a new service definition. This is the same as the previous step in the +getting started guide, except this time we'll also configure Connect. + +```sh +$ cat < Consul Connect proxy starting... + Configuration mode: Flags + Service: web + Upstream: socat => :9191 + Public listener: Disabled + +... +``` + +With that running, we can verify it works by establishing a connection: + +``` +$ nc 127.0.0.1 9191 +hello +hello +``` + +**The connection between proxies is now encrypted and authorized.** +We're now communicating to the "socat" service via a TLS connection. +The local connections to/from the proxy are unencrypted, but in production +these will be loopback-only connections. Any traffic in and out of the +machine is always encrypted. + +## Registering a Dependent Service + +We previously established a connection by directly running +`consul connect proxy`. Realistically, services need to establish connections +to dependencies over Connect. Let's register a service "web" that registers +"socat" as an upstream dependency: + +```sh +$ cat < **Security note:** The Connect security model requires trusting +loopback connections when proxies are in use. To further secure this, +tools like network namespacing may be used. + +## Controlling Access with Intentions + +Intentions are used to define which services may communicate. Our connections +above succeeded because in a development mode agent, the ACL system is "allow +all" by default. 
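
We can confirm this before changing anything by asking Consul whether the connection would be authorized. With the default "allow all" policy we expect output along these lines:

```sh
$ consul intention check web socat
Allowed
```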
+ +Let's insert a rule to deny access from web to socat: + +```sh +$ consul intention create -deny web socat +Created: web => socat (deny) +``` + +With the proxy processes running that we set up previously, connection +attempts now fail: + +```sh +$ nc 127.0.0.1 9191 +$ +``` + +Try deleting the intention (or updating it to allow) and attempting the +connection again. Intentions allow services to be segmented via a centralized +control plane (Consul). To learn more, read the reference documentation on +[intentions](/docs/connect/intentions.html). + +## Next Steps + +We've now configured a service on a single agent and used Connect for +automatic connection authorization and encryption. This is a great feature +highlight but let's explore the full value of Consul by [setting up our +first cluster](/intro/getting-started/join.html)!
diff --git a/website/source/layouts/api.erb b/website/source/layouts/api.erb index 72dac9af5..8bd46460d 100644 --- a/website/source/layouts/api.erb +++ b/website/source/layouts/api.erb @@ -22,11 +22,25 @@ > Services + > + Connect + > Catalog + > + Connect + + > Coordinates
diff --git a/website/source/layouts/docs.erb b/website/source/layouts/docs.erb index d503bdb5d..2fc7320ff 100644 --- a/website/source/layouts/docs.erb +++ b/website/source/layouts/docs.erb @@ -70,6 +70,17 @@ + > + connect + + > event @@ -82,6 +93,27 @@ > info + > + intention + + + > join @@ -216,6 +248,38 @@ + > + Connect (Service Segmentation) + + 
> diff --git a/website/source/layouts/intro.erb b/website/source/layouts/intro.erb index ce5e3d400..bec020c32 100644 --- a/website/source/layouts/intro.erb +++ b/website/source/layouts/intro.erb @@ -47,6 +47,9 @@ > Services + > + Connect + > Consul Cluster From ce984e57d291474bbcda3c8964d8666d9b085792 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Wed, 30 May 2018 20:57:42 -0700 Subject: [PATCH 307/539] website: how it works --- website/source/docs/connect/index.html.md | 47 ++++++++++++++++++++++- 1 file changed, 46 insertions(+), 1 deletion(-) diff --git a/website/source/docs/connect/index.html.md b/website/source/docs/connect/index.html.md index 0fa103a22..3b2db17bb 100644 --- a/website/source/docs/connect/index.html.md +++ b/website/source/docs/connect/index.html.md @@ -16,9 +16,54 @@ without being aware of Connect at all. Applications may also [natively integrate with Connect](/docs/connect/native.html) for optimal performance and security. +Connect enables deployment best-practices with service-to-service encryption +everywhere and identity-based authorization. Rather than authorizing host-based +access with IP address access rules, Connect uses the registered service +identity to enforce access control with [intentions](/docs/connect/intentions.html). +This makes it much easier to reason about access control and also enables +services to freely move, such as in a scheduled environment with software +such as Kubernetes or Nomad. Additionally, intention enforcement can be done +regardless of the underlying network, so Connect works with physical networks, +cloud networks, software-defined networks, cross-cloud, and more. + ## How it Works -TODO +The core of Connect is based on [mutual TLS](https://en.wikipedia.org/wiki/Mutual_authentication). + +Connect provides each service with an identity encoded as a TLS certificate. +This certificate is used to establish and accept connections to and from other +services. The identity is encoded in the TLS certificate in compliance with +the [SPIFFE X.509 Identity Document](https://github.com/spiffe/spiffe/blob/master/standards/X509-SVID.md). +This enables Connect services to establish and accept connections with +other SPIFFE-compliant systems. + +The client service verifies the destination service certificate +against the [public CA bundle](/api/connect/ca.html#list-ca-root-certificates). +This is very similar to a typical HTTPS web browser connection. In addition +to this, the client provides its own client certificate to show its +identity to the destination service. If the connection handshake succeeds, +the connection is encrypted and authorized. + +The destination service verifies the client certificate +against the [public CA bundle](/api/connect/ca.html#list-ca-root-certificates). +After verifying the certificate, it must also call the +[authorization API](/api/agent/connect.html#authorize) to authorize +the connection against the configured set of Consul intentions. +If the authorization API responds successfully, the connection is established. +Otherwise, the connection is rejected. + +To generate and distribute certificates, Consul has a built-in CA that +requires no other dependencies, and +also ships with built-in support for [Vault](#). The PKI system is pluggable +and can be [extended](#) to support any system. + +All APIs required for Connect typically respond in microseconds and impose +minimal overhead to existing services. 
This is because the Connect-related +APIs are all made to the local Consul agent over a loopback interface, and all +[agent Connect endpoints](/api/agent/connect.html) implement +local caching, background updating, and support blocking queries. As a result, +most API calls operate on purely local in-memory data and can respond +in microseconds. ## Eliminating East-West Firewalls From 1e980076b003b246b24137897700f30bae2c7eeb Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Fri, 1 Jun 2018 09:24:48 -0700 Subject: [PATCH 308/539] website: remove sergmentation from sidebar we'll move east-west to a "use case" section, didnt' feel right in the reference docs. --- website/source/docs/connect/index.html.md | 22 ---------------------- website/source/layouts/docs.erb | 2 +- 2 files changed, 1 insertion(+), 23 deletions(-) diff --git a/website/source/docs/connect/index.html.md b/website/source/docs/connect/index.html.md index 3b2db17bb..3b459688b 100644 --- a/website/source/docs/connect/index.html.md +++ b/website/source/docs/connect/index.html.md @@ -64,25 +64,3 @@ APIs are all made to the local Consul agent over a loopback interface, and all local caching, background updating, and support blocking queries. As a result, most API calls operate on purely local in-memory data and can respond in microseconds. - -## Eliminating East-West Firewalls - -East-west firewalls are the typical tool for network security in a static world. -East-west is the transfer of data from server to server within a datacenter, -versus North-south traffic which describes end user to server communications. - -These firewalls wrap services with ingress/egress policies. This perimeter-based -approach is difficult to scale in a dynamic world with dozens or hundreds of -services or where machines may be frequently created or destroyed. Firewalls -create a sprawl of rules for each service instance that quickly becomes -overly difficult to maintain. - -Service security in a dynamic world is best solved through service-to-service -authentication and authorization. Instead of IP-based network security, -services can be deployed to low-trust networks and rely on service-identity -based security over in-transit data encryption. - -Connect enables service segmentation by securing service-to-service -communications through mutual TLS and transparent proxying on zero-trust -networks. This allows direct service communication without relying on firewalls -for east-west traffic security. diff --git a/website/source/layouts/docs.erb b/website/source/layouts/docs.erb index 2fc7320ff..04cc16714 100644 --- a/website/source/layouts/docs.erb +++ b/website/source/layouts/docs.erb @@ -249,7 +249,7 @@ > - Connect (Service Segmentation) + Connect {{/if}} From bf27d1ada2fc49b965a7c49809d71b59cc2c7c7a Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sat, 16 Jun 2018 20:10:22 -0700 Subject: [PATCH 417/539] website: clearly note beta for Connect --- website/source/docs/connect/index.html.md | 4 ++++ website/source/layouts/docs.erb | 2 +- 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/website/source/docs/connect/index.html.md b/website/source/docs/connect/index.html.md index ff79b48c0..aa30460b8 100644 --- a/website/source/docs/connect/index.html.md +++ b/website/source/docs/connect/index.html.md @@ -26,6 +26,10 @@ such as Kubernetes or Nomad. 
Additionally, intention enforcement can be done regardless of the underlying network, so Connect works with physical networks, cloud networks, software-defined networks, cross-cloud, and more. +-> **Beta:** Connect was introduced in Consul 1.2 and should be considered +beta quality. We're working hard to quickly address any reported bugs and +we hope to be remove the beta tag before the end of 2018. + ## How it Works The core of Connect is based on [mutual TLS](https://en.wikipedia.org/wiki/Mutual_authentication). diff --git a/website/source/layouts/docs.erb b/website/source/layouts/docs.erb index 769bcfb4b..8674df55e 100644 --- a/website/source/layouts/docs.erb +++ b/website/source/layouts/docs.erb @@ -249,7 +249,7 @@ > - Connect + Connect (Beta) +
From d6bbf9f6e3849d85a18e9664288dff353e1582e2 Mon Sep 17 00:00:00 2001 From: Jack Pearkes Date: Sat, 23 Jun 2018 11:08:22 -0700 Subject: [PATCH 524/539] website: update cli output content --- website/source/configuration.html.erb | 74 +++++++++++++++------------ website/source/discovery.html.erb | 68 ++++++++++++------------ website/source/index.html.erb | 34 ++++++------ website/source/segmentation.html.erb | 43 +++++++--------- 4 files changed, 111 insertions(+), 108 deletions(-) diff --git a/website/source/configuration.html.erb b/website/source/configuration.html.erb index 7f8115753..c243843cf 100644 --- a/website/source/configuration.html.erb +++ b/website/source/configuration.html.erb @@ -11,13 +11,13 @@ description: |-

    Service configuration made easy

    Feature rich key/value store to easily configure services

    @@ -92,17 +92,27 @@ description: |-
    -
    - $ curl \ - --request POST \ +
    + $ curl http://localhost:8500/v1/txn \ + --request PUT \ --data \ -'{ - "Name": "api", - "Service": { - "Service": "api", - "Tags": ["v1.2.3"], - "Failover": { - "Datacenters": ["dc1", "dc2"] + '[ + { + "KV": { + "Verb": "set", + "Key": "lock", + "Value": "MQ==" + } + }, + { + "KV": { + "Verb": "cas", + "Index": 10, + "Key": "configuration", + "Value": "c29tZS1jb25maWc=" + } + } + ]'
    @@ -126,16 +136,17 @@ description: |-
    - $ curl \ - --request POST \ - --data \ -'{ - "Name": "api", - "Service": { - "Service": "api", - "Tags": ["v1.2.3"], - "Failover": { - "Datacenters": ["dc1", "dc2"] + $ curl http://localhost:8500/v1/kv/web/config/rate_limit?wait=1m&index=229 +[ + { + "LockIndex": 0, + "Key": "web/config/rate_limit", + "Flags": 0, + "Value": "NjAw", + "CreateIndex": 229, + "ModifyIndex": 234 + } +]
    @@ -159,16 +170,11 @@ description: |-
    - $ curl \ - --request POST \ - --data \ -'{ - "Name": "api", - "Service": { - "Service": "api", - "Tags": ["v1.2.3"], - "Failover": { - "Datacenters": ["dc1", "dc2"] + $ consul watch \ + -type=key \ + -key=web/config/rate_limit \ + /usr/local/bin/record-rate-limit.sh +
    @@ -242,13 +248,13 @@ description: |-

    Ready to get started?

    - + Download - Explore docs + Explore docs
    diff --git a/website/source/discovery.html.erb b/website/source/discovery.html.erb index 21d020585..4509f088e 100644 --- a/website/source/discovery.html.erb +++ b/website/source/discovery.html.erb @@ -12,13 +12,13 @@ description: |-

    Service registry, integrated health checks, and DNS and HTTP interfaces enable any service to discover and be discovered by other services

    @@ -90,17 +90,21 @@ description: |-
    -
    - $ curl \ - --request POST \ - --data \ -'{ - "Name": "api", - "Service": { - "Service": "api", - "Tags": ["v1.2.3"], - "Failover": { - "Datacenters": ["dc1", "dc2"] +
    +$ dig web-frontend.service.consul. ANY + +; <<>> DiG 9.8.3-P1 <<>> web-frontend.service.consul. ANY +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29981 +;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0 + +;; QUESTION SECTION: +;web-frontend.service.consul. IN ANY + +;; ANSWER SECTION: +web-frontend.service.consul. 0 IN A 10.0.3.83 +web-frontend.service.consul. 0 IN A 10.0.1.109
    @@ -124,16 +128,20 @@ description: |-
    - $ curl \ - --request POST \ - --data \ -'{ - "Name": "api", - "Service": { - "Service": "api", - "Tags": ["v1.2.3"], - "Failover": { - "Datacenters": ["dc1", "dc2"] + $ curl http://localhost:8500/v1/health/service/web?index=11&wait=30s +{ + ... + "Node": "10-0-1-109", + "CheckID": "service:web", + "Name": "Service 'web' check", + "Status": "critical", + "ServiceID": "web", + "ServiceName": "web", + "CreateIndex": 10, + "ModifyIndex": 20 + ... +} +
    @@ -157,16 +165,10 @@ description: |-
    - $ curl \ - --request POST \ - --data \ -'{ - "Name": "api", - "Service": { - "Service": "api", - "Tags": ["v1.2.3"], - "Failover": { - "Datacenters": ["dc1", "dc2"] + $ curl http://localhost:8500/v1/catalog/datacenters +["dc1", "dc2"] +$ curl http://localhost:8500/v1/catalog/nodes?dc=dc2 +...
    diff --git a/website/source/index.html.erb b/website/source/index.html.erb index 827eb0743..50512aa94 100644 --- a/website/source/index.html.erb +++ b/website/source/index.html.erb @@ -11,20 +11,20 @@ description: |-
    - - New Consul 1.0 release. Get the details + + New HashiCorp Consul 1.2: Service Mesh. Read the blog post

    Service Mesh Made Easy

    Consul is a distributed service mesh to connect, secure, and configure services across any runtime platform and public or private cloud

    - + Download - Get Started + Get Started
    @@ -160,16 +160,18 @@ description: |-
    - $ curl \ - --request POST \ - --data \ -'{ - "Name": "api", - "Service": { - "Service": "api", - "Tags": ["v1.2.3"], - "Failover": { - "Datacenters": ["dc1", "dc2"] + $ curl http://localhost:8500/v1/kv/deployment +[ + { + "LockIndex": 1, + "Session": "1c3f5836-4df4-0e26-6697-90dcce78acd9", + "Value": "Zm9v", + "Flags": 0, + "Key": "deployment", + "CreateIndex": 13, + "ModifyIndex": 19 + } +]
    @@ -249,7 +251,7 @@ description: |-

    Consul Open Source addresses the technical complexity of connecting services across distributed infrastructure.

    - + diff --git a/website/source/segmentation.html.erb b/website/source/segmentation.html.erb index fbb55583b..639aa3c4c 100644 --- a/website/source/segmentation.html.erb +++ b/website/source/segmentation.html.erb @@ -12,13 +12,13 @@ description: |-

    Service segmentation made easy

    Secure service-to-service communication with automatic TLS encryption and identity-based authorization

    @@ -99,17 +99,19 @@ description: |-
    -
    - $ curl \ - --request POST \ - --data \ -'{ - "Name": "api", - "Service": { - "Service": "api", - "Tags": ["v1.2.3"], - "Failover": { - "Datacenters": ["dc1", "dc2"] +
    $ consul connect proxy \ + -service web \ + -service-addr 127.0.0.1:80 \ + -listen 10.0.1.109:7200 +==> Consul Connect proxy starting... + Configuration mode: Flags + Service: web + Public listener: 10.0.1.109:7200 => 127.0.0.1:80 + +==> Log data will now stream in as it occurs: + + 2018/06/23 09:33:51 [INFO] public listener starting on 10.0.1.109:7200 + 2018/06/23 09:33:51 [INFO] proxy loaded config and ready to serve
    @@ -155,16 +157,7 @@ description: |-
    - $ curl \ - --request POST \ - --data \ -'{ - "Name": "api", - "Service": { - "Service": "api", - "Tags": ["v1.2.3"], - "Failover": { - "Datacenters": ["dc1", "dc2"] + TODO
    @@ -175,13 +168,13 @@ description: |-

    Ready to get started?

    - + Download - Explore docs + Explore docs
    From e491abb1341c7e2a314bda8a9102dde84e203a40 Mon Sep 17 00:00:00 2001 From: Paul Banks Date: Sun, 24 Jun 2018 13:35:39 +0100 Subject: [PATCH 525/539] Fix some doc typos. --- .../source/docs/guides/connect-production.md | 20 +++++++++++-------- 1 file changed, 12 insertions(+), 8 deletions(-) diff --git a/website/source/docs/guides/connect-production.md b/website/source/docs/guides/connect-production.md index 1bba85dc0..8bb66539a 100644 --- a/website/source/docs/guides/connect-production.md +++ b/website/source/docs/guides/connect-production.md @@ -49,7 +49,7 @@ Connect relies on to ensure it's security properties. A service's identity, in the form of an x.509 certificate, will only be issued to an API client that has `service:write` permission for that service. In other words, any client that has permission to _register_ an instance of a service -will be able to identify as that service and access all of resources that that +will be able to identify as that service and access all of the resources that that service is allowed to access. A secure ACL setup must meet these criteria: @@ -77,13 +77,13 @@ sufficient for ACL tokens to only be unique per _service_ and shared between instances. It is much better though if ACL tokens are unique per service _instance_ because -it limit the blast radius of a compromise. +it limits the blast radius of a compromise. A future release of Connect will support revoking specific certificates that have been issued. For example if a single node in a datacenter has been compromised, it will be possible to find all certificates issued to the agent on that node and revoke them. This will block all access to the intruder without -taking unaffected instances of the service(s) on that node offline too. +taking instances of the service(s) on other nodes offline too. While this will work with service-unique tokens, there is nothing stopping an attacker from obtaining certificates while spoofing the agent ID or other @@ -103,15 +103,19 @@ Vault](https://www.vaultproject.io/docs/secrets/consul/index.html). ## Configure Agent Transport Encryption Consul's gossip (UDP) and RPC (TCP) communications need to be encrypted -otherwise attackers may be able to see tokens and private keys while in flight -between the server and client agents or between client agent and application. +otherwise attackers may be able to see ACL tokens while in flight +between the server and client agents (RPC) or between client agent and +application (HTTP). Certificate private keys never leave the host they +are used on but are delivered to the application or proxy over local +HTTP so local agent traffic should be encrypted where potentially +untrusted parties might be able to observe localhost agent API traffic. Follow the [encryption documentation](/docs/agent/encryption.html) to ensure -both gossip encryption and RPC TLS are configured securely. +both gossip encryption and RPC/HTTP TLS are configured securely. For now client and server TLS certificates are still managed by manual configuration. In the future we plan to automate more of that with the same -mechanisms connect offers to user applications. +mechanisms Connect offers to user applications. ## Bootstrap Certificate Authority @@ -202,4 +206,4 @@ integrate](/docs/connect/native.html) with connect. If using any kind of proxy for connect, the application must ensure no untrusted connections can be made to it's unprotected listening port. 
This is typically done by binding to `localhost` and only allowing loopback traffic, but may also -be achieved using firewall rules or network namespacing. \ No newline at end of file +be achieved using firewall rules or network namespacing. From d9e779aa2e7f92af2b19559131588d466857cbd6 Mon Sep 17 00:00:00 2001 From: Jack Pearkes Date: Sun, 24 Jun 2018 15:11:54 -0700 Subject: [PATCH 526/539] website: fix two links on discovery page --- website/source/discovery.html.erb | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/website/source/discovery.html.erb b/website/source/discovery.html.erb index 4509f088e..5afc8442a 100644 --- a/website/source/discovery.html.erb +++ b/website/source/discovery.html.erb @@ -242,13 +242,13 @@ $ curl http://localhost:8500/v1/catalog/nodes?dc=dc2

    Ready to get started?

    - + Download - Explore docs + Explore docs
    From 8f890e5d7e69b353d0c4579b8504a44e0034938a Mon Sep 17 00:00:00 2001 From: RJ Spiker Date: Mon, 25 Jun 2018 09:41:18 -0600 Subject: [PATCH 527/539] website - some js fixes to make sure scripts are firing (#108) --- .../source/assets/javascripts/animations.js | 1934 +++++++++-------- .../javascripts/consul-connect/carousel.js | 9 +- 2 files changed, 974 insertions(+), 969 deletions(-) diff --git a/website/source/assets/javascripts/animations.js b/website/source/assets/javascripts/animations.js index 452b44a3f..f21f5dc7d 100644 --- a/website/source/assets/javascripts/animations.js +++ b/website/source/assets/javascripts/animations.js @@ -1,998 +1,1002 @@ -var qs = document.querySelector.bind(document) -var qsa = document.querySelectorAll.bind(document) +document.addEventListener('turbolinks:load', initializeAnimations) -// -// home page -// +function initializeAnimations() { + var qs = document.querySelector.bind(document) + var qsa = document.querySelectorAll.bind(document) -var $indexDynamic = qs('#index-dynamic-animation') -if ($indexDynamic) { - var initiated = false - var observer = new IntersectionObserver( - function(entries) { - if (!initiated && entries[0].isIntersecting) { - $indexDynamic.classList.add('active') - var lines = qsa( - '#lines-origin-aws > *, #lines-origin-azure > *, #lines-origin-gcp > *' - ) - setTimeout(function() { - timer = setInterval(function() { - lines[parseInt(Math.random() * lines.length)].classList.toggle( - 'off' - ) - }, 800) - }, 3000) - initiated = true + // + // home page + // + + var $indexDynamic = qs('#index-dynamic-animation') + if ($indexDynamic) { + var initiated = false + var observer = new IntersectionObserver( + function(entries) { + if (!initiated && entries[0].isIntersecting) { + $indexDynamic.classList.add('active') + var lines = qsa( + '#lines-origin-aws > *, #lines-origin-azure > *, #lines-origin-gcp > *' + ) + setTimeout(function() { + timer = setInterval(function() { + lines[parseInt(Math.random() * lines.length)].classList.toggle( + 'off' + ) + }, 800) + }, 3000) + initiated = true + } + }, + { threshold: 0.5 } + ) + observer.observe($indexDynamic) + } + + // + // configuration page + // + + var $configChallenge = qs('#configuration-challenge-animation') + var $configSolution = qs('#configuration-solution-animation') + + if ($configChallenge) { + // challenge animation + + var configChallengeTimeline = new TimelineLite({ + onComplete: function() { + configChallengeTimeline.restart() + configSolutionTimeline.restart() } - }, - { threshold: 0.5 } - ) - observer.observe($indexDynamic) -} - -// -// configuration page -// - -var $configChallenge = qs('#configuration-challenge-animation') -var $configSolution = qs('#configuration-solution-animation') - -if ($configChallenge) { - // challenge animation - - var configChallengeTimeline = new TimelineLite({ - onComplete: function() { - configChallengeTimeline.restart() - configSolutionTimeline.restart() - } - }) - - var line1 = qs('#c-line-1') - var line2 = qs('#c-line-2') - var line3 = qs('#c-line-3') - var line4 = qs('#c-line-4') - var line5 = qs('#c-line-5') - var line6 = qs('#c-line-6') - var line7 = qs('#c-line-7') - var line8 = qs('#c-line-8') - var box1 = qs('#c-box-1') - var box2 = qs('#c-box-2') - var box3 = qs('#c-box-3') - var box4 = qs('#c-box-4') - var box5 = qs('#c-box-5') - var box6 = qs('#c-box-6') - var box7 = qs('#c-box-7') - var box8 = qs('#c-box-8') - var progressBar = qs('#c-loading-bar > rect:last-child') - var cog = qs('#c-configuration-server > g > path') - - 
configChallengeTimeline - .to(box1, 1, {}) - .staggerTo( - [line1, line2, line3, line4, line5, line6, line7, line8], - 1.5, - { css: { strokeDashoffset: 0 } }, - 0.3, - 'start' - ) - .staggerTo( - [box1, box2, box3, box4, box5, box6, box7, box8], - 0.3, - { opacity: 1 }, - 0.3, - '-=2.5' - ) - .fromTo( - progressBar, - 3.5, - { attr: { width: 0 } }, - { attr: { width: 40 } }, - 'start' - ) - .to( - cog, - 3.5, - { rotation: 360, svgOrigin: '136px 127px', ease: Power1.easeOut }, - 'start' - ) - .call(function () { - configSolutionTimeline.resume(configSolutionTimeline.time()) }) - .to(line1, 2, {}) - .to( - [line1, line2, line3, line4, line5, line6, line7, line8, progressBar], - 0.5, - { opacity: 0 }, - 'reset' + + var line1 = qs('#c-line-1') + var line2 = qs('#c-line-2') + var line3 = qs('#c-line-3') + var line4 = qs('#c-line-4') + var line5 = qs('#c-line-5') + var line6 = qs('#c-line-6') + var line7 = qs('#c-line-7') + var line8 = qs('#c-line-8') + var box1 = qs('#c-box-1') + var box2 = qs('#c-box-2') + var box3 = qs('#c-box-3') + var box4 = qs('#c-box-4') + var box5 = qs('#c-box-5') + var box6 = qs('#c-box-6') + var box7 = qs('#c-box-7') + var box8 = qs('#c-box-8') + var progressBar = qs('#c-loading-bar > rect:last-child') + var cog = qs('#c-configuration-server > g > path') + + configChallengeTimeline + .to(box1, 1, {}) + .staggerTo( + [line1, line2, line3, line4, line5, line6, line7, line8], + 1.5, + { css: { strokeDashoffset: 0 } }, + 0.3, + 'start' + ) + .staggerTo( + [box1, box2, box3, box4, box5, box6, box7, box8], + 0.3, + { opacity: 1 }, + 0.3, + '-=2.5' + ) + .fromTo( + progressBar, + 3.5, + { attr: { width: 0 } }, + { attr: { width: 40 } }, + 'start' + ) + .to( + cog, + 3.5, + { rotation: 360, svgOrigin: '136px 127px', ease: Power1.easeOut }, + 'start' + ) + .call(function () { + configSolutionTimeline.resume(configSolutionTimeline.time()) + }) + .to(line1, 2, {}) + .to( + [line1, line2, line3, line4, line5, line6, line7, line8, progressBar], + 0.5, + { opacity: 0 }, + 'reset' + ) + .to( + [box1, box2, box3, box4, box5, box6, box7, box8], + 0.5, + { opacity: 0.5 }, + 'reset' + ) + .pause() + + // solution animation + + var configSolutionTimeline = new TimelineLite() + + var lines = qsa( + '#s-line-1, #s-line-2, #s-line-3, #s-line-4, #s-line-5, #s-line-6, #s-line-7, #s-line-8' ) - .to( - [box1, box2, box3, box4, box5, box6, box7, box8], - 0.5, - { opacity: 0.5 }, - 'reset' + var dots = qs('#s-dots') + var boxes = qsa( + '#s-service-box-1, #s-service-box-2, #s-service-box-3, #s-service-box-4, #s-service-box-5, #s-service-box-6, #s-service-box-7, #s-service-box-8' ) - .pause() + var progress = qs('#s-progress-indicator') - // solution animation + configSolutionTimeline + .to(boxes, 1, {}) + .to(lines, 1, { css: { strokeDashoffset: 0 } }, 'start') + .to(boxes, 0.5, { opacity: 1 }, '-=0.4') + .fromTo( + progress, + 1, + { attr: { width: 0 } }, + { attr: { width: 40 } }, + 'start' + ) + .to(dots, 0.25, { opacity: 1 }, '-=0.5') + .addPause() + .to(progress, 2, {}) + .to(lines, 0.5, { opacity: 0 }, 'reset') + .to(boxes, 0.5, { opacity: 0.5 }, 'reset') + .to(progress, 0.5, { opacity: 0 }, 'reset') + .to(dots, 0.5, { opacity: 0 }, 'reset') + .pause() - var configSolutionTimeline = new TimelineLite() + // kick off + $configChallenge.classList.add('active') + $configSolution.classList.add('active') + configChallengeTimeline.play() + configSolutionTimeline.play() + } - var lines = qsa( - '#s-line-1, #s-line-2, #s-line-3, #s-line-4, #s-line-5, #s-line-6, #s-line-7, #s-line-8' - ) - var dots 
= qs('#s-dots') - var boxes = qsa( - '#s-service-box-1, #s-service-box-2, #s-service-box-3, #s-service-box-4, #s-service-box-5, #s-service-box-6, #s-service-box-7, #s-service-box-8' - ) - var progress = qs('#s-progress-indicator') + // + // discovery page + // - configSolutionTimeline - .to(boxes, 1, {}) - .to(lines, 1, { css: { strokeDashoffset: 0 } }, 'start') - .to(boxes, 0.5, { opacity: 1 }, '-=0.4') - .fromTo( - progress, - 1, - { attr: { width: 0 } }, - { attr: { width: 40 } }, - 'start' + var $discoveryChallenge = qs('#discovery-challenge-animation') + var $discoverySolution = qs('#discovery-solution-animation') + + if ($discoveryChallenge) { + // challenge animation + var discoveryChallengeTimeline = new TimelineLite({ + onComplete: function() { + discoveryChallengeTimeline.restart() + discoverySolutionTimeline.restart() + } + }) + + // First, we get each of the elements we need to animate + var box = qs('#c-active-box') + var leftPlacement = qs('#c-box-left-placement') + var rightPlacement = qs('#c-box-right-placement') + var leftConnectionLines = qsa( + '#c-line-top-left > *, #c-line-bottom-left > *, #c-line-horizontal-left > *, #c-line-vertical-down > *' ) - .to(dots, 0.25, { opacity: 1 }, '-=0.5') - .addPause() - .to(progress, 2, {}) - .to(lines, 0.5, { opacity: 0 }, 'reset') - .to(boxes, 0.5, { opacity: 0.5 }, 'reset') - .to(progress, 0.5, { opacity: 0 }, 'reset') - .to(dots, 0.5, { opacity: 0 }, 'reset') - .pause() + var rightConnectionLines = qsa( + '#c-line-top-right > *, #c-line-bottom-right > *, #c-line-horizontal-left > *, #c-line-vertical-down > *, #c-line-horizontal-right > *' + ) + var leftConnectionTop = qs('#c-line-top-left') + var leftConnectionBottom = qs('#c-line-bottom-left') + var rightHorizontalConnection = qs('#c-line-horizontal-right') + var rightConnectionTop = qs('#c-line-top-right') + var rightConnectionBottom = qs('#c-line-bottom-right') + var rightConnectionLinesStroke = qsa( + '#c-line-top-right > *, #c-line-bottom-right > *, #c-line-horizontal-right > *, #c-line-horizontal-left > *, #c-line-vertical-down > *' + ) + var leftConnectionLinesStroke = qsa( + '#c-line-top-left > *, #c-line-bottom-left > *, #c-line-horizontal-left > *, #c-line-vertical-down > *' + ) + var brokenLinkLeft = qs('#c-broken-link-left') + var brokenLinkRight = qs('#c-broken-link-right') + var computer = qs('#c-computer') + var codeLines = qs('#c-computer > g') + var toLoadBalancerDown = qsa( + '#c-computer-to-load-balancers #c-arrow-down, #c-computer-to-load-balancers #c-circle' + ) + var toLoadBalancerRight = qs('#c-computer-to-load-balancers #c-arrow-right') + var toLoadBalancerLeft = qs('#c-computer-to-load-balancers #c-arrow-left') + var toLoadBalancerRest = qs('#c-computer-to-load-balancers #c-edit-box') + var progressBars = qsa( + '#c-load-balancer-left > #c-progress-bar, #c-load-balancer-right > #c-progress-bar-2, #c-load-balancer-middle > #c-progress-bar-3' + ) + var progressBarsBars = qsa( + '#c-load-balancer-left > #c-progress-bar > *:last-child, #c-load-balancer-right > #c-progress-bar-2 > *:last-child, #c-load-balancer-middle > #c-progress-bar-3 > *:last-child' + ) + var farLeftBoxBorder = qs('#c-box-far-left > path') - // kick off - $configChallenge.classList.add('active') - $configSolution.classList.add('active') - configChallengeTimeline.play() - configSolutionTimeline.play() -} - -// -// discovery page -// - -var $discoveryChallenge = qs('#discovery-challenge-animation') -var $discoverySolution = qs('#discovery-solution-animation') - -if ($discoveryChallenge) { - 
// challenge animation - var discoveryChallengeTimeline = new TimelineLite({ - onComplete: function() { - discoveryChallengeTimeline.restart() - discoverySolutionTimeline.restart() - } - }) - - // First, we get each of the elements we need to animate - var box = qs('#c-active-box') - var leftPlacement = qs('#c-box-left-placement') - var rightPlacement = qs('#c-box-right-placement') - var leftConnectionLines = qsa( - '#c-line-top-left > *, #c-line-bottom-left > *, #c-line-horizontal-left > *, #c-line-vertical-down > *' - ) - var rightConnectionLines = qsa( - '#c-line-top-right > *, #c-line-bottom-right > *, #c-line-horizontal-left > *, #c-line-vertical-down > *, #c-line-horizontal-right > *' - ) - var leftConnectionTop = qs('#c-line-top-left') - var leftConnectionBottom = qs('#c-line-bottom-left') - var rightHorizontalConnection = qs('#c-line-horizontal-right') - var rightConnectionTop = qs('#c-line-top-right') - var rightConnectionBottom = qs('#c-line-bottom-right') - var rightConnectionLinesStroke = qsa( - '#c-line-top-right > *, #c-line-bottom-right > *, #c-line-horizontal-right > *, #c-line-horizontal-left > *, #c-line-vertical-down > *' - ) - var leftConnectionLinesStroke = qsa( - '#c-line-top-left > *, #c-line-bottom-left > *, #c-line-horizontal-left > *, #c-line-vertical-down > *' - ) - var brokenLinkLeft = qs('#c-broken-link-left') - var brokenLinkRight = qs('#c-broken-link-right') - var computer = qs('#c-computer') - var codeLines = qs('#c-computer > g') - var toLoadBalancerDown = qsa( - '#c-computer-to-load-balancers #c-arrow-down, #c-computer-to-load-balancers #c-circle' - ) - var toLoadBalancerRight = qs('#c-computer-to-load-balancers #c-arrow-right') - var toLoadBalancerLeft = qs('#c-computer-to-load-balancers #c-arrow-left') - var toLoadBalancerRest = qs('#c-computer-to-load-balancers #c-edit-box') - var progressBars = qsa( - '#c-load-balancer-left > #c-progress-bar, #c-load-balancer-right > #c-progress-bar-2, #c-load-balancer-middle > #c-progress-bar-3' - ) - var progressBarsBars = qsa( - '#c-load-balancer-left > #c-progress-bar > *:last-child, #c-load-balancer-right > #c-progress-bar-2 > *:last-child, #c-load-balancer-middle > #c-progress-bar-3 > *:last-child' - ) - var farLeftBoxBorder = qs('#c-box-far-left > path') - - // Then, we run each step of the animation using GSAP's TimelineLine, a - // fantastic way to set up a series of complex movements - discoveryChallengeTimeline - .to(box, 1, {}) - // box moves to new position - .to(box, 1, { css: { transform: 'translate(96px, 48px)' } }) - .to(leftPlacement, 0.5, { css: { opacity: 1 } }, '-=1') - .to(rightPlacement, 0.25, { css: { opacity: 0 } }, '-=0.25') - // connection lines turn black - .to(leftConnectionLines, 0.5, { css: { stroke: '#000' } }) - .to(farLeftBoxBorder, 0.5, { css: { fill: '#000' } }, '-=0.5') - // broken link appears - .to( - leftConnectionTop, - 0.1, - { - css: { strokeDashoffset: 6 }, + // Then, we run each step of the animation using GSAP's TimelineLine, a + // fantastic way to set up a series of complex movements + discoveryChallengeTimeline + .to(box, 1, {}) + // box moves to new position + .to(box, 1, { css: { transform: 'translate(96px, 48px)' } }) + .to(leftPlacement, 0.5, { css: { opacity: 1 } }, '-=1') + .to(rightPlacement, 0.25, { css: { opacity: 0 } }, '-=0.25') + // connection lines turn black + .to(leftConnectionLines, 0.5, { css: { stroke: '#000' } }) + .to(farLeftBoxBorder, 0.5, { css: { fill: '#000' } }, '-=0.5') + // broken link appears + .to( + leftConnectionTop, + 0.1, + { + css: { 
strokeDashoffset: 6 }, + ease: Linear.easeNone + }, + '-=0.3' + ) + .to(brokenLinkLeft, 0.2, { css: { opacity: 1 } }, '-=0.15') + // computer appears and code is written + .to(computer, 0.5, { css: { opacity: 1 } }) + .staggerFrom( + codeLines, + 0.4, + { + css: { transform: 'translate(-64px, 0)', opacity: 0 } + }, + 0.1 + ) + .to(codeLines, 0.3, { + css: { transform: 'translate(0, 0)', opacity: 1 } + }) + // code moves to load balancers + .to(toLoadBalancerRest, 0.4, { css: { opacity: 1 } }) + .to(toLoadBalancerLeft, 0.2, { css: { opacity: 1 } }, 'loadBalancerArrows') + .to(toLoadBalancerRight, 0.2, { css: { opacity: 1 } }, 'loadBalancerArrows') + .to(toLoadBalancerDown, 0.2, { css: { opacity: 1 } }, 'loadBalancerArrows') + // load balancers progress bars, old broken link fades out + .to(progressBars, 0.2, { css: { opacity: 1 } }) + .staggerFromTo( + progressBarsBars, + 1.5, + { attr: { width: 0 } }, + { attr: { width: 40 } }, + 0.3 + ) + .to( + [] + .concat(toLoadBalancerRest) + .concat([].slice.call(toLoadBalancerDown)) + .concat([ + toLoadBalancerRight, + toLoadBalancerLeft, + brokenLinkLeft, + leftConnectionTop, + leftConnectionBottom + ]), + 0.5, + { css: { opacity: 0 } }, + '-=0.75' + ) + .to(computer, 0.5, { css: { opacity: .12 } }, '-=0.75') + .to(progressBars, 0.5, { css: { opacity: 0 } }) + // new connection is drawn + .to(rightHorizontalConnection, 0.3, { css: { strokeDashoffset: 0 } }) + .to(rightConnectionTop, 0.2, { + css: { strokeDashoffset: 0 }, ease: Linear.easeNone - }, - '-=0.3' - ) - .to(brokenLinkLeft, 0.2, { css: { opacity: 1 } }, '-=0.15') - // computer appears and code is written - .to(computer, 0.5, { css: { opacity: 1 } }) - .staggerFrom( - codeLines, - 0.4, - { - css: { transform: 'translate(-64px, 0)', opacity: 0 } - }, - 0.1 - ) - .to(codeLines, 0.3, { - css: { transform: 'translate(0, 0)', opacity: 1 } - }) - // code moves to load balancers - .to(toLoadBalancerRest, 0.4, { css: { opacity: 1 } }) - .to(toLoadBalancerLeft, 0.2, { css: { opacity: 1 } }, 'loadBalancerArrows') - .to(toLoadBalancerRight, 0.2, { css: { opacity: 1 } }, 'loadBalancerArrows') - .to(toLoadBalancerDown, 0.2, { css: { opacity: 1 } }, 'loadBalancerArrows') - // load balancers progress bars, old broken link fades out - .to(progressBars, 0.2, { css: { opacity: 1 } }) - .staggerFromTo( - progressBarsBars, - 1.5, - { attr: { width: 0 } }, - { attr: { width: 40 } }, - 0.3 - ) - .to( - [] - .concat(toLoadBalancerRest) - .concat([].slice.call(toLoadBalancerDown)) - .concat([ - toLoadBalancerRight, - toLoadBalancerLeft, - brokenLinkLeft, - leftConnectionTop, - leftConnectionBottom - ]), - 0.5, - { css: { opacity: 0 } }, - '-=0.75' - ) - .to(computer, 0.5, { css: { opacity: .12 } }, '-=0.75') - .to(progressBars, 0.5, { css: { opacity: 0 } }) - // new connection is drawn - .to(rightHorizontalConnection, 0.3, { css: { strokeDashoffset: 0 } }) - .to(rightConnectionTop, 0.2, { - css: { strokeDashoffset: 0 }, - ease: Linear.easeNone - }) - .to(rightConnectionBottom, 0.3, { - css: { strokeDashoffset: 0 }, - ease: Linear.easeNone - }) - // connection lines turn blue - .to( - rightConnectionLinesStroke, - 0.5, - { css: { stroke: '#3969ED' } }, - '-=0.3' - ) - .to(farLeftBoxBorder, 0.5, { css: { fill: '#3969ED' } }, '-=0.5') - // wait three seconds - .to(box, 3, {}) - // box moves back to original position - .to(box, 1, { css: { transform: 'translate(0, 0)' } }, 'loop2') - .to(leftPlacement, 0.25, { css: { opacity: 0 } }, '-=0.25') - .to(rightPlacement, 0.5, { css: { opacity: 1 } }, '-=0.5') - // connection 
lines turn black - .to(rightConnectionLines, 0.5, { css: { stroke: '#000' } }) - .to(farLeftBoxBorder, 0.5, { css: { fill: '#000' } }, '-=0.5') - // broken link appears - .to( - rightConnectionTop, - 0.1, - { - css: { strokeDashoffset: 6 }, + }) + .to(rightConnectionBottom, 0.3, { + css: { strokeDashoffset: 0 }, ease: Linear.easeNone - }, - '-=0.3' - ) - .to(brokenLinkRight, 0.2, { css: { opacity: 1 } }, '-=0.15') - // computer appears and code is written - .from(codeLines, 0.1, { css: { opacity: 0 } }) - .to(computer, 0.5, { css: { opacity: 1 } }, '-=0.1') - .staggerFromTo( - codeLines, - 0.4, - { css: { transform: 'translate(-64px, 0)', opacity: 0 } }, - { css: { transform: 'translate(0, 0)', opacity: 1 } }, - 0.1 - ) - // code moves to load balancers - .to(toLoadBalancerRest, 0.4, { css: { opacity: 1 } }) - .to(toLoadBalancerLeft, 0.2, { css: { opacity: 1 } }, 'loadBalancerArrows2') - .to( - toLoadBalancerRight, - 0.2, - { css: { opacity: 1 } }, - 'loadBalancerArrows2' - ) - .to(toLoadBalancerDown, 0.2, { css: { opacity: 1 } }, 'loadBalancerArrows2') - // load balancers progress bars, old broken link fades out - .to(progressBarsBars, 0.1, { attr: { width: 0 } }) - .to(progressBars, 0.2, { attr: { opacity: 1 } }) - .staggerFromTo( - progressBarsBars, - 1.5, - { css: { width: 0 } }, - { css: { width: 40 } }, - 0.3 - ) - .to( - [] - .concat(toLoadBalancerRest) - .concat([].slice.call(toLoadBalancerDown)) - .concat([ - toLoadBalancerRight, - toLoadBalancerLeft, - brokenLinkRight, - rightConnectionTop, - rightConnectionBottom, - rightHorizontalConnection - ]), - 0.5, - { css: { opacity: 0 } }, - '-=0.75' - ) - .to(computer, 0.5, { css: { opacity: .12 } }, '-=0.75') - .to(progressBars, 0.5, { css: { opacity: 0 } }) - // new connection is drawn - .to(leftConnectionTop, 0.01, { css: { strokeDashoffset: 17 } }) - .to(leftConnectionBottom, 0.01, { css: { strokeDashoffset: 56 } }) - .to([leftConnectionTop, leftConnectionBottom], 0.01, { - css: { opacity: 1 } + }) + // connection lines turn blue + .to( + rightConnectionLinesStroke, + 0.5, + { css: { stroke: '#3969ED' } }, + '-=0.3' + ) + .to(farLeftBoxBorder, 0.5, { css: { fill: '#3969ED' } }, '-=0.5') + // wait three seconds + .to(box, 3, {}) + // box moves back to original position + .to(box, 1, { css: { transform: 'translate(0, 0)' } }, 'loop2') + .to(leftPlacement, 0.25, { css: { opacity: 0 } }, '-=0.25') + .to(rightPlacement, 0.5, { css: { opacity: 1 } }, '-=0.5') + // connection lines turn black + .to(rightConnectionLines, 0.5, { css: { stroke: '#000' } }) + .to(farLeftBoxBorder, 0.5, { css: { fill: '#000' } }, '-=0.5') + // broken link appears + .to( + rightConnectionTop, + 0.1, + { + css: { strokeDashoffset: 6 }, + ease: Linear.easeNone + }, + '-=0.3' + ) + .to(brokenLinkRight, 0.2, { css: { opacity: 1 } }, '-=0.15') + // computer appears and code is written + .from(codeLines, 0.1, { css: { opacity: 0 } }) + .to(computer, 0.5, { css: { opacity: 1 } }, '-=0.1') + .staggerFromTo( + codeLines, + 0.4, + { css: { transform: 'translate(-64px, 0)', opacity: 0 } }, + { css: { transform: 'translate(0, 0)', opacity: 1 } }, + 0.1 + ) + // code moves to load balancers + .to(toLoadBalancerRest, 0.4, { css: { opacity: 1 } }) + .to(toLoadBalancerLeft, 0.2, { css: { opacity: 1 } }, 'loadBalancerArrows2') + .to( + toLoadBalancerRight, + 0.2, + { css: { opacity: 1 } }, + 'loadBalancerArrows2' + ) + .to(toLoadBalancerDown, 0.2, { css: { opacity: 1 } }, 'loadBalancerArrows2') + // load balancers progress bars, old broken link fades out + .to(progressBarsBars, 
0.1, { attr: { width: 0 } }) + .to(progressBars, 0.2, { attr: { opacity: 1 } }) + .staggerFromTo( + progressBarsBars, + 1.5, + { css: { width: 0 } }, + { css: { width: 40 } }, + 0.3 + ) + .to( + [] + .concat(toLoadBalancerRest) + .concat([].slice.call(toLoadBalancerDown)) + .concat([ + toLoadBalancerRight, + toLoadBalancerLeft, + brokenLinkRight, + rightConnectionTop, + rightConnectionBottom, + rightHorizontalConnection + ]), + 0.5, + { css: { opacity: 0 } }, + '-=0.75' + ) + .to(computer, 0.5, { css: { opacity: .12 } }, '-=0.75') + .to(progressBars, 0.5, { css: { opacity: 0 } }) + // new connection is drawn + .to(leftConnectionTop, 0.01, { css: { strokeDashoffset: 17 } }) + .to(leftConnectionBottom, 0.01, { css: { strokeDashoffset: 56 } }) + .to([leftConnectionTop, leftConnectionBottom], 0.01, { + css: { opacity: 1 } + }) + .to(leftConnectionTop, 0.2, { + css: { strokeDashoffset: 0 }, + ease: Linear.easeNone + }) + .to(leftConnectionBottom, 0.3, { + css: { strokeDashoffset: 0 }, + ease: Linear.easeNone + }) + // connection lines turn blue + .to(leftConnectionLinesStroke, 0.5, { css: { stroke: '#3969ED' } }, '-=0.3') + .to(farLeftBoxBorder, 0.5, { css: { fill: '#3969ED' } }, '-=0.5') + .call(function () { + discoverySolutionTimeline.resume(discoverySolutionTimeline.time()) + }) + .to(box, 2, {}) + .pause() + + // solution animation + var discoverySolutionTimeline = new TimelineLite() + + var inactiveBox = qs('#s-active-service-1') + var inactiveBoxStroke = qs('#s-active-service-1 > path') + var activeBox = qs('#s-active-service-2') + var activeBoxStroke = qs('#s-active-service-2 > path') + var leftPlacement = qs('#s-dotted-service-box-2') + var rightPlacement = qs('#s-dotted-service-box-3') + var leftConnectionLine = qs('#s-connected-line-1') + var rightConnectionLine = qs('#s-connected-line-2') + var dottedLineLeft = qs('#s-dotted-line-left') + var dottedLineRight = qs('#s-dotted-lines-right') + var dottedLineRightPrimary = qs('#s-dotted-lines-right > path:nth-child(2)') + var dottedLineRightAlt = qs('#s-dotted-lines-right > path:last-child') + var syncLeft = qs('#s-dynamic-sync-left') + var syncRight = qs('#s-dynamic-sync-right') + var syncSpinnerLeft = qs('#s-dynamic-sync-left > path') + var syncSpinnerRight = qs('#s-dynamic-sync-right > path') + + discoverySolutionTimeline + .to(activeBox, 1, {}) + // box moves + .to(activeBox, 0.5, { x: 96, y: 48 }) + .to(leftPlacement, 0.25, { css: { opacity: 1 } }, '-=0.5') + .to(rightPlacement, 0.25, { css: { opacity: 0 } }, '-=0.1') + // connection is broken + .to(leftConnectionLine, 0.75, { css: { strokeDashoffset: 222 } }, '-=0.5') + // box color changes to black + .to(activeBoxStroke, 0.25, { css: { fill: '#000' } }, '-=0.4') + .to(inactiveBoxStroke, 0.25, { css: { fill: '#000' } }, '-=0.4') + // right sync lines appear + .to(dottedLineRight, 0.4, { css: { opacity: 1 } }) + .to(syncRight, 0.2, { css: { opacity: 1 } }, '-=0.2') + .to(syncSpinnerRight, 1, { rotation: 360, svgOrigin: '232px 127px' }) + // left sync lines appear + .to(dottedLineLeft, 0.4, { css: { opacity: 1 } }, '-=0.6') + .to(syncLeft, 0.2, { css: { opacity: 1 } }, '-=0.2') + .to(syncSpinnerLeft, 1, { rotation: 360, svgOrigin: '88px 127px' }) + // connection is redrawn + .to(rightConnectionLine, 0.75, { css: { strokeDashoffset: 0 } }) + // right sync lines disappear + .to(dottedLineRight, 0.4, { css: { opacity: 0 } }, '-=1.2') + .to(syncRight, 0.2, { css: { opacity: 0 } }, '-=1.2') + // left sync lines disappear + .to(dottedLineLeft, 0.4, { css: { opacity: 0 } }, '-=0.5') + 
.to(syncLeft, 0.2, { css: { opacity: 0 } }, '-=0.5') + // box color changes to pink + .to(activeBoxStroke, 0.25, { css: { fill: '#ca2171' } }, '-=0.2') + .to(inactiveBoxStroke, 0.25, { css: { fill: '#ca2171' } }, '-=0.2') + // wait three seconds + .to(activeBox, 3, {}) + // box moves + .to(activeBox, 0.5, { x: 0, y: 0 }) + .to(leftPlacement, 0.25, { css: { opacity: 0 } }, '-=0.1') + .to(rightPlacement, 0.25, { css: { opacity: 1 } }, '-=0.5') + // connection is broken + .to(rightConnectionLine, 0.75, { css: { strokeDashoffset: 270 } }, '-=0.5') + // box color changes to black + .to(activeBoxStroke, 0.25, { css: { fill: '#000' } }, '-=0.4') + .to(inactiveBoxStroke, 0.25, { css: { fill: '#000' } }, '-=0.4') + // right sync lines appear + .to(dottedLineRightAlt, 0.01, { css: { opacity: 1 } }) + .to(dottedLineRightPrimary, 0.01, { css: { opacity: 0 } }) + .to(dottedLineRight, 0.4, { css: { opacity: 1 } }) + .to(syncRight, 0.2, { css: { opacity: 1 } }, '-=0.2') + .fromTo( + syncSpinnerRight, + 1, + { rotation: 0 }, + { rotation: 360, svgOrigin: '232px 127px' } + ) + // left sync lines appear + .to(dottedLineLeft, 0.4, { css: { opacity: 1 } }, '-=0.6') + .to(syncLeft, 0.2, { css: { opacity: 1 } }, '-=0.2') + .fromTo( + syncSpinnerLeft, + 1, + { rotation: 0 }, + { rotation: 360, svgOrigin: '88px 127px' } + ) + // connection is redrawn + .to(leftConnectionLine, 0.75, { css: { strokeDashoffset: 0 } }) + // right sync lines disappear + .to(dottedLineRight, 0.4, { css: { opacity: 0 } }, '-=1.2') + .to(syncRight, 0.2, { css: { opacity: 0 } }, '-=1.2') + // left sync lines disappear + .to(dottedLineLeft, 0.4, { css: { opacity: 0 } }, '-=0.5') + .to(syncLeft, 0.2, { css: { opacity: 0 } }, '-=0.5') + // box color changes to pink + .to(activeBoxStroke, 0.25, { css: { fill: '#ca2171' } }, '-=0.2') + .to(inactiveBoxStroke, 0.25, { css: { fill: '#ca2171' } }, '-=0.2') + .addPause() + // wait three seconds + .to(activeBox, 2, {}) + .pause() + + // kick it off + $discoveryChallenge.classList.add('active') + $discoverySolution.classList.add('active') + discoveryChallengeTimeline.play() + discoverySolutionTimeline.play() + } + + // + // discovery page + // + + var $segmentationChallenge = qs('#segmentation-challenge-animation') + var $segmentationSolution = qs('#segmentation-solution-animation') + + if ($segmentationChallenge) { + // challenge animation + var segmentationChallengeTimeline = new TimelineLite({ + onComplete: function() { + segmentationChallengeTimeline.restart() + segmentationSolutionTimeline.restart() + } }) - .to(leftConnectionTop, 0.2, { - css: { strokeDashoffset: 0 }, - ease: Linear.easeNone - }) - .to(leftConnectionBottom, 0.3, { - css: { strokeDashoffset: 0 }, - ease: Linear.easeNone - }) - // connection lines turn blue - .to(leftConnectionLinesStroke, 0.5, { css: { stroke: '#3969ED' } }, '-=0.3') - .to(farLeftBoxBorder, 0.5, { css: { fill: '#3969ED' } }, '-=0.5') - .call(function () { - discoverySolutionTimeline.resume(discoverySolutionTimeline.time()) - }) - .to(box, 2, {}) - .pause() - // solution animation - var discoverySolutionTimeline = new TimelineLite() + var computerUpdatePath = qs('#c-firewall-updates #c-update_path') + var computerUpdateBox = qs('#c-firewall-updates #c-edit') + var computer = qs('#c-computer') + var progressBars = qsa( + '#c-progress-indicator, #c-progress-indicator-2, #c-progress-indicator-3' + ) + var progressBarBars = qsa( + '#c-progress-indicator > rect:last-child, #c-progress-indicator-2 > rect:last-child, #c-progress-indicator-3 > rect:last-child' + ) + var 
brokenLinks = qsa('#c-broken-link-1, #c-broken-link-2, #c-broken-link-3') + var box2 = qs('#c-box-2') + var box2Border = qs('#c-box-2 > path') + var box4 = qs('#c-box-4') + var box4Border = qs('#c-box-4 > path') + var box6 = qs('#c-box-6') + var box6Border = qs('#c-box-6 > path') + var box7 = qs('#c-box-7') + var box7Border = qs('#c-box-7 > path') + var path1a = qs('#c-path-1 > *:nth-child(2)') + var path1b = qs('#c-path-1 > *:nth-child(3)') + var path1c = qs('#c-path-1 > *:nth-child(1)') + var path2a = qs('#c-path-2 > *:nth-child(1)') + var path2b = qs('#c-path-2 > *:nth-child(3)') + var path2c = qs('#c-path-2 > *:nth-child(2)') + var path3a = qs('#c-path-3 > *:nth-child(2)') + var path3b = qs('#c-path-3 > *:nth-child(3)') + var path3c = qs('#c-path-3 > *:nth-child(1)') - var inactiveBox = qs('#s-active-service-1') - var inactiveBoxStroke = qs('#s-active-service-1 > path') - var activeBox = qs('#s-active-service-2') - var activeBoxStroke = qs('#s-active-service-2 > path') - var leftPlacement = qs('#s-dotted-service-box-2') - var rightPlacement = qs('#s-dotted-service-box-3') - var leftConnectionLine = qs('#s-connected-line-1') - var rightConnectionLine = qs('#s-connected-line-2') - var dottedLineLeft = qs('#s-dotted-line-left') - var dottedLineRight = qs('#s-dotted-lines-right') - var dottedLineRightPrimary = qs('#s-dotted-lines-right > path:nth-child(2)') - var dottedLineRightAlt = qs('#s-dotted-lines-right > path:last-child') - var syncLeft = qs('#s-dynamic-sync-left') - var syncRight = qs('#s-dynamic-sync-right') - var syncSpinnerLeft = qs('#s-dynamic-sync-left > path') - var syncSpinnerRight = qs('#s-dynamic-sync-right > path') - - discoverySolutionTimeline - .to(activeBox, 1, {}) - // box moves - .to(activeBox, 0.5, { x: 96, y: 48 }) - .to(leftPlacement, 0.25, { css: { opacity: 1 } }, '-=0.5') - .to(rightPlacement, 0.25, { css: { opacity: 0 } }, '-=0.1') - // connection is broken - .to(leftConnectionLine, 0.75, { css: { strokeDashoffset: 222 } }, '-=0.5') - // box color changes to black - .to(activeBoxStroke, 0.25, { css: { fill: '#000' } }, '-=0.4') - .to(inactiveBoxStroke, 0.25, { css: { fill: '#000' } }, '-=0.4') - // right sync lines appear - .to(dottedLineRight, 0.4, { css: { opacity: 1 } }) - .to(syncRight, 0.2, { css: { opacity: 1 } }, '-=0.2') - .to(syncSpinnerRight, 1, { rotation: 360, svgOrigin: '232px 127px' }) - // left sync lines appear - .to(dottedLineLeft, 0.4, { css: { opacity: 1 } }, '-=0.6') - .to(syncLeft, 0.2, { css: { opacity: 1 } }, '-=0.2') - .to(syncSpinnerLeft, 1, { rotation: 360, svgOrigin: '88px 127px' }) - // connection is redrawn - .to(rightConnectionLine, 0.75, { css: { strokeDashoffset: 0 } }) - // right sync lines disappear - .to(dottedLineRight, 0.4, { css: { opacity: 0 } }, '-=1.2') - .to(syncRight, 0.2, { css: { opacity: 0 } }, '-=1.2') - // left sync lines disappear - .to(dottedLineLeft, 0.4, { css: { opacity: 0 } }, '-=0.5') - .to(syncLeft, 0.2, { css: { opacity: 0 } }, '-=0.5') - // box color changes to pink - .to(activeBoxStroke, 0.25, { css: { fill: '#ca2171' } }, '-=0.2') - .to(inactiveBoxStroke, 0.25, { css: { fill: '#ca2171' } }, '-=0.2') - // wait three seconds - .to(activeBox, 3, {}) - // box moves - .to(activeBox, 0.5, { x: 0, y: 0 }) - .to(leftPlacement, 0.25, { css: { opacity: 0 } }, '-=0.1') - .to(rightPlacement, 0.25, { css: { opacity: 1 } }, '-=0.5') - // connection is broken - .to(rightConnectionLine, 0.75, { css: { strokeDashoffset: 270 } }, '-=0.5') - // box color changes to black - .to(activeBoxStroke, 0.25, { css: { fill: '#000' 
} }, '-=0.4') - .to(inactiveBoxStroke, 0.25, { css: { fill: '#000' } }, '-=0.4') - // right sync lines appear - .to(dottedLineRightAlt, 0.01, { css: { opacity: 1 } }) - .to(dottedLineRightPrimary, 0.01, { css: { opacity: 0 } }) - .to(dottedLineRight, 0.4, { css: { opacity: 1 } }) - .to(syncRight, 0.2, { css: { opacity: 1 } }, '-=0.2') - .fromTo( - syncSpinnerRight, - 1, - { rotation: 0 }, - { rotation: 360, svgOrigin: '232px 127px' } - ) - // left sync lines appear - .to(dottedLineLeft, 0.4, { css: { opacity: 1 } }, '-=0.6') - .to(syncLeft, 0.2, { css: { opacity: 1 } }, '-=0.2') - .fromTo( - syncSpinnerLeft, - 1, - { rotation: 0 }, - { rotation: 360, svgOrigin: '88px 127px' } - ) - // connection is redrawn - .to(leftConnectionLine, 0.75, { css: { strokeDashoffset: 0 } }) - // right sync lines disappear - .to(dottedLineRight, 0.4, { css: { opacity: 0 } }, '-=1.2') - .to(syncRight, 0.2, { css: { opacity: 0 } }, '-=1.2') - // left sync lines disappear - .to(dottedLineLeft, 0.4, { css: { opacity: 0 } }, '-=0.5') - .to(syncLeft, 0.2, { css: { opacity: 0 } }, '-=0.5') - // box color changes to pink - .to(activeBoxStroke, 0.25, { css: { fill: '#ca2171' } }, '-=0.2') - .to(inactiveBoxStroke, 0.25, { css: { fill: '#ca2171' } }, '-=0.2') - .addPause() - // wait three seconds - .to(activeBox, 2, {}) - .pause() - - // kick it off - $discoveryChallenge.classList.add('active') - $discoverySolution.classList.add('active') - discoveryChallengeTimeline.play() - discoverySolutionTimeline.play() -} - -// -// discovery page -// - -var $segmentationChallenge = qs('#segmentation-challenge-animation') -var $segmentationSolution = qs('#segmentation-solution-animation') - -if ($segmentationChallenge) { - // challenge animation - var segmentationChallengeTimeline = new TimelineLite({ - onComplete: function() { - segmentationChallengeTimeline.restart() - segmentationSolutionTimeline.restart() - } - }) - - var computerUpdatePath = qs('#c-firewall-updates #c-update_path') - var computerUpdateBox = qs('#c-firewall-updates #c-edit') - var computer = qs('#c-computer') - var progressBars = qsa( - '#c-progress-indicator, #c-progress-indicator-2, #c-progress-indicator-3' - ) - var progressBarBars = qsa( - '#c-progress-indicator > rect:last-child, #c-progress-indicator-2 > rect:last-child, #c-progress-indicator-3 > rect:last-child' - ) - var brokenLinks = qsa('#c-broken-link-1, #c-broken-link-2, #c-broken-link-3') - var box2 = qs('#c-box-2') - var box2Border = qs('#c-box-2 > path') - var box4 = qs('#c-box-4') - var box4Border = qs('#c-box-4 > path') - var box6 = qs('#c-box-6') - var box6Border = qs('#c-box-6 > path') - var box7 = qs('#c-box-7') - var box7Border = qs('#c-box-7 > path') - var path1a = qs('#c-path-1 > *:nth-child(2)') - var path1b = qs('#c-path-1 > *:nth-child(3)') - var path1c = qs('#c-path-1 > *:nth-child(1)') - var path2a = qs('#c-path-2 > *:nth-child(1)') - var path2b = qs('#c-path-2 > *:nth-child(3)') - var path2c = qs('#c-path-2 > *:nth-child(2)') - var path3a = qs('#c-path-3 > *:nth-child(2)') - var path3b = qs('#c-path-3 > *:nth-child(3)') - var path3c = qs('#c-path-3 > *:nth-child(1)') - - segmentationChallengeTimeline - .to(box2, 1, {}) - // box 4 and 6 appear - .to(box4Border, 0.4, { css: { fill: '#000' } }, 'box4-in') - .fromTo( - box4, - 0.3, - { scale: 0, rotation: 200, opacity: 0, svgOrigin: '291px 41px' }, - { scale: 1, rotation: 360, opacity: 1 }, - 'box4-in' - ) - .to(box6Border, 0.4, { css: { fill: '#000' } }, '-=0.2') - .fromTo( - box6, - 0.3, - { scale: 0, rotation: 200, opacity: 0, 
svgOrigin: '195px 289px' }, - { scale: 1, rotation: 360, opacity: 1 }, - '-=0.4' - ) - // wait for a moment - .to(box2, 1, {}) - // computer appears and sends updates to firewalls - .to(computer, 0.5, { opacity: 1 }) - .to(computerUpdateBox, 0.3, { opacity: 1 }, '-=0.2') - .to(computerUpdatePath, 0.3, { opacity: 1 }, '-=0.2') - // firewall progress bars - .to(progressBarBars, 0.01, { attr: { width: 0 } }) - .to(progressBars, 0.2, { opacity: 1 }) - .staggerTo(progressBarBars, 0.6, { attr: { width: 40 } }, 0.2) - // connection 1 made - .to(path1a, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) - .to(path1b, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) - .to(path1c, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) - // progress bars and firewall update lines fade out - .to(progressBars, 0.7, { opacity: 0 }, 'resetComputer1') - .to(computerUpdateBox, 0.7, { opacity: 0 }, 'resetComputer1') - .to(computerUpdatePath, 0.7, { opacity: 0 }, 'resetComputer1') - // connection turns blue - .to( - [path1a, path1b, path1c], - 0.5, - { css: { stroke: '#3969ED' } }, - 'resetComputer1' - ) - .to( - [box4Border, box6Border], - 0.5, - { css: { fill: '#3969ED' } }, - 'resetComputer1' - ) - // second connection draws - .to( - path2a, - 0.3, - { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }, - '-=0.3' - ) - .to(path2b, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) - .to(path2c, 0.2, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) - // second connection turns blue - .to([path2a, path2b, path2c], 0.5, { css: { stroke: '#3969ED' } }, '-=0.1') - .to(box7Border, 0.5, { css: { fill: '#3969ED' } }, '-=0.3') - // wait a moment - .to(box2, 2, {}) - // blue elements fade back to gray - .to( - [path1a, path1b, path1c, path2a, path2b, path2c], - 0.5, - { - css: { stroke: '#b5b8c4' } - }, - 'colorReset1' - ) - .to( - [box7Border, box4Border, box6Border], - 0.5, - { css: { fill: '#b5b8c4' } }, - 'colorReset1' - ) - // box 2 appears - .to(box2Border, 0.4, { css: { fill: '#000' } }, 'colorReset1') - .fromTo( - box2, - 0.3, - { scale: 0, rotation: 200, opacity: 0, svgOrigin: '195px 42px' }, - { scale: 1, rotation: 360, opacity: 1 }, - '-=0.4' - ) - // wait a moment - .to(box2, 1, {}) - // computer updates firewalls - .to(computerUpdateBox, 0.3, { opacity: 1 }, '-=0.2') - .to(computerUpdatePath, 0.3, { opacity: 1 }, '-=0.2') - // firewall progress bars - .to(progressBarBars, 0.01, { width: 0 }) - .to(progressBars, 0.2, { opacity: 1 }) - .staggerTo(progressBarBars, 0.6, { width: 40 }, 0.2) - // third connection made - .to(path3a, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) - .to(path3b, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) - .to(path3c, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) - // progress bars & computer arrows fade out - .to(progressBars, 0.5, { opacity: 0 }, 'computerReset2') - .to(computerUpdateBox, 0.5, { opacity: 0 }, 'computerReset2') - .to(computerUpdatePath, 0.5, { opacity: 0 }, 'computerReset2') - // third connection turns blue - .to( - [path3a, path3b, path3c], - 0.5, - { css: { stroke: '#3969ED' } }, - 'computerReset2' - ) - .to( - [box2Border, box7Border], - 0.5, - { css: { fill: '#3969ED' } }, - 'computerReset2' - ) - // wait a bit - .to(box2, 2, {}) - // third connection turns back to gray - .to( - [path3a, path3b, path3c], - 0.5, - { css: { stroke: '#b5b8c4' } }, - 'colorReset2' - ) - .to( - [box2Border, box7Border], - 0.5, - { css: { fill: '#b5b8c4' } }, - 'colorReset2' - ) 
- // boxes 2, 4, and 6 disappear - .to( - [box2, box4, box6], - 0.6, - { scale: 0, rotation: 200, opacity: 0 }, - '-=0.4' - ) - // lines turn red and broken links appear - .to( - [path1a, path1b, path1c, path2a, path2b, path2c, path3a, path3b, path3c], - 0.3, - { css: { stroke: '#ED4168' } }, - '-=0.2' - ) - .to(brokenLinks, 0.3, { opacity: 1 }, '-=0.3') - // wait a moment - .to(box2, 1, {}) - // code sent to firewalls - .to(computerUpdateBox, 0.3, { opacity: 1 }) - .to(computerUpdatePath, 0.3, { opacity: 1 }) - // firewall progress bars - .to(progressBarBars, 0.01, { width: 0 }) - .to(progressBars, 0.2, { opacity: 1 }) - .staggerTo(progressBarBars, 0.6, { width: 40 }, 0.2) - .to(box2, 0.5, {}) - // faulty connections removed - .to( - [ - path1a, - path1b, - path1c, + segmentationChallengeTimeline + .to(box2, 1, {}) + // box 4 and 6 appear + .to(box4Border, 0.4, { css: { fill: '#000' } }, 'box4-in') + .fromTo( + box4, + 0.3, + { scale: 0, rotation: 200, opacity: 0, svgOrigin: '291px 41px' }, + { scale: 1, rotation: 360, opacity: 1 }, + 'box4-in' + ) + .to(box6Border, 0.4, { css: { fill: '#000' } }, '-=0.2') + .fromTo( + box6, + 0.3, + { scale: 0, rotation: 200, opacity: 0, svgOrigin: '195px 289px' }, + { scale: 1, rotation: 360, opacity: 1 }, + '-=0.4' + ) + // wait for a moment + .to(box2, 1, {}) + // computer appears and sends updates to firewalls + .to(computer, 0.5, { opacity: 1 }) + .to(computerUpdateBox, 0.3, { opacity: 1 }, '-=0.2') + .to(computerUpdatePath, 0.3, { opacity: 1 }, '-=0.2') + // firewall progress bars + .to(progressBarBars, 0.01, { attr: { width: 0 } }) + .to(progressBars, 0.2, { opacity: 1 }) + .staggerTo(progressBarBars, 0.6, { attr: { width: 40 } }, 0.2) + // connection 1 made + .to(path1a, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) + .to(path1b, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) + .to(path1c, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) + // progress bars and firewall update lines fade out + .to(progressBars, 0.7, { opacity: 0 }, 'resetComputer1') + .to(computerUpdateBox, 0.7, { opacity: 0 }, 'resetComputer1') + .to(computerUpdatePath, 0.7, { opacity: 0 }, 'resetComputer1') + // connection turns blue + .to( + [path1a, path1b, path1c], + 0.5, + { css: { stroke: '#3969ED' } }, + 'resetComputer1' + ) + .to( + [box4Border, box6Border], + 0.5, + { css: { fill: '#3969ED' } }, + 'resetComputer1' + ) + // second connection draws + .to( path2a, - path2b, - path2c, - path3a, - path3b, - path3c - ].concat(brokenLinks), - 0.7, - { opacity: 0 } - ) - // progress bars and connection arrows fade out - .to(progressBars, 0.5, { opacity: 0 }, 'computerReset3') - .to(computerUpdateBox, 0.5, { opacity: 0 }, 'computerReset3') - .to(computerUpdatePath, 0.5, { opacity: 0 }, 'computerReset3') - .to(computer, 0.5, { opacity: 0 }, 'computerReset3') - .call(function () { - segmentationSolutionTimeline.resume(segmentationSolutionTimeline.time()) - }) - // wait a moment before the loop - .to(box2, 1, {}) - .pause() - - // solution animation - var segmentationSolutionTimeline = new TimelineLite() - - // service boxes - var box1 = qs('#s-service-2') - var box1Border = qs('#s-service-2 > path') - var box1Lock = qs('#s-service-2 #s-secure-indicator-2') - var box2 = qs('#s-service-4') - var box2Border = qs('#s-service-4 > path') - var box2Lock = qs('#s-service-4 #s-secure-indicator-4') - var box3 = qs('#s-service-6') - var box3Border = qs('#s-service-6 > path') - var box3Lock = qs('#s-service-6 #s-secure-indicator-6') - - // connection 
paths - var path1a = qs('#s-connection-path-2') - var path1b = qs('#s-connection-path-8') - var path2a = qs('#s-connection-path-9') - var path2b = qs('#s-connection-path-10') - var path3a = qs('#s-connection-path-1') - var path3b = qs('#s-connection-path-4') - var path3c = qs('#s-connection-path-5') - var path3d = qs('#s-connection-path-6') - - // inbound consul updates - var inboundPathLower = qs('#s-consul-inbound-paths-lower') - var inboundUpdateLower = qs('#s-dynamic-update-inbound-lower') - var inboundUpdateLowerSpinner = qs('#s-dynamic-update-inbound-lower > path') - var inboundPathUpper = qs('#s-consul-inbound-paths-upper') - var inboundUpdateUpper = qs('#s-dynamic-update-inbound-upper') - var inboundUpdateUpperSpinner = qs('#s-dynamic-update-inbound-upper > path') - - // outbound consul updates - var outboundPathsLower = qsa( - '#s-consul-server-connection-lower, #s-consul-outbound-5, #s-consul-outbound-6, #s-consul-outbound-7' - ) - var outboundUpdateLower = qsa( - '#s-dynamic-update-outbound-ower, #s-tls-cert-lower' - ) - var outboundUpdateLowerSpinner = qs('#s-dynamic-update-outbound-ower > path') - var outboundPathsUpper1 = qsa( - '#s-consul-server-connection-upper, #s-consul-outbound-3, #s-consul-outbound-4' - ) - var outboundPathsUpper2 = qsa( - '#s-consul-server-connection-upper, #s-consul-outbound-1, #s-soncul-outbound-2' - ) - var outboundUpdateUpper = qsa( - '#s-tls-cert-upper, #s-dynamic-update-outbound-upper' - ) - var outboundUpdateUpperSpinner = qs('#s-dynamic-update-outbound-upper > path') - - segmentationSolutionTimeline - .to(box2, 1, {}) - // boxes 2 and 3 appear - .fromTo( - box2, - 0.3, - { scale: 0, rotation: 200, opacity: 0, svgOrigin: '281px 104px' }, - { scale: 1, rotation: 360, opacity: 1 } - ) - .fromTo( - box3, - 0.3, - { scale: 0, rotation: 200, opacity: 0, svgOrigin: '185px 226px' }, - { scale: 1, rotation: 360, opacity: 1 }, - '-=0.1' - ) - // wait a moment - .to(box1, 0.5, {}) - // consul speaks to each box that needs a connection made - .to(outboundPathsUpper1, 0.5, { opacity: 1 }) - .to(outboundPathsLower, 0.5, { opacity: 1 }, '-=0.3') - .to(outboundUpdateUpper, 0.3, { opacity: 1 }, '-=0.3') - .to(outboundUpdateLower, 0.3, { opacity: 1 }, '-=0.1') - .to( - outboundUpdateUpperSpinner, - 0.7, - { - rotation: 360, - svgOrigin: '44px 99px' - }, - '-=0.5' - ) - .to( - outboundUpdateLowerSpinner, - 0.7, - { - rotation: 360, - svgOrigin: '44px 246px' - }, - '-=0.3' - ) - // pink borders, locks, connections drawn, consul talk fades - .to(box2Lock, 0.3, { opacity: 1 }, 'connections-1') - .to(box2Border, 0.3, { fill: '#CA2270' }, 'connections-1') - .to(box3Lock, 0.3, { opacity: 1 }, 'connections-1') - .to(box3Border, 0.3, { fill: '#CA2270' }, 'connections-1') - .to(outboundPathsUpper1, 0.7, { opacity: 0 }, 'connections-1') - .to(outboundPathsLower, 0.7, { opacity: 0 }, 'connections-1') - .to(outboundUpdateUpper, 0.7, { opacity: 0 }, 'connections-1') - .to(outboundUpdateLower, 0.7, { opacity: 0 }, 'connections-1') - .to( - path1a, - 0.5, - { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, - 'connections-1' - ) - .to( - path1b, - 0.5, - { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, - 'connections-1' - ) - .to( - path2a, - 0.5, - { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, - 'connections-1' - ) - .to( - path2b, - 0.5, - { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, - 'connections-1' - ) - // wait a moment - .to(box1, 0.5, {}) - // box 1 appears - .fromTo( - box1, - 0.3, - { scale: 0, rotation: 200, opacity: 0, svgOrigin: '185px 104px' 
}, - { scale: 1, rotation: 360, opacity: 1 }, - '-=0.1' - ) - // wait a moment, previous paths fade ('#EEB9D1') - .to(box1, 0.5, {}, 'stage-1-complete') - .to(box2Border, 0.5, { fill: '#EEB9D1' }, 'stage-1-complete') - .to(box3Border, 0.5, { fill: '#EEB9D1' }, 'stage-1-complete') - .to(path1a, 0.5, { css: { stroke: '#EEB9D1' } }, 'stage-1-complete') - .to(path1b, 0.5, { css: { stroke: '#EEB9D1' } }, 'stage-1-complete') - .to(path2a, 0.5, { css: { stroke: '#EEB9D1' } }, 'stage-1-complete') - .to(path2b, 0.5, { css: { stroke: '#EEB9D1' } }, 'stage-1-complete') - // consul speaks to each box that needs a connection made - .to(outboundPathsUpper2, 0.5, { opacity: 1 }) - .to(outboundPathsLower, 0.5, { opacity: 1 }, '-=0.3') - .to(outboundUpdateUpper, 0.3, { opacity: 1 }, '-=0.3') - .to(outboundUpdateLower, 0.3, { opacity: 1 }, '-=0.1') - .to( - outboundUpdateUpperSpinner, - 0.7, - { - rotation: 720, - svgOrigin: '44px 99px' - }, - '-=0.5' - ) - .to( - outboundUpdateLowerSpinner, - 0.7, - { - rotation: 720, - svgOrigin: '44px 246px' - }, - '-=0.3' - ) - // connections drawn - .to(box1Lock, 0.3, { opacity: 1 }, 'connections-2') - .to(box1Border, 0.3, { fill: '#CA2270' }, 'connections-2') - .to( - path3a, - 0.5, - { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, - 'connections-2' - ) - .to( - path3b, - 0.5, - { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, - 'connections-2' - ) - .to( - path3c, - 0.5, - { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, - 'connections-2' - ) - .to( - path3d, - 0.5, - { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, - 'connections-2' - ) - .to(box1, 0.7, {}, 'stage-2-complete') - .to(outboundPathsUpper2, 0.7, { opacity: 0 }, 'stage-2-complete') - .to(outboundPathsLower, 0.7, { opacity: 0 }, 'stage-2-complete') - .to(outboundUpdateUpper, 0.7, { opacity: 0 }, 'stage-2-complete') - .to(outboundUpdateLower, 0.7, { opacity: 0 }, 'stage-2-complete') - .to(box1Border, 0.5, { fill: '#EEB9D1' }, 'path-fade-2') - .to(path3a, 0.5, { css: { stroke: '#EEB9D1' } }, 'path-fade-2') - .to(path3b, 0.5, { css: { stroke: '#EEB9D1' } }, 'path-fade-2') - .to(path3c, 0.5, { css: { stroke: '#EEB9D1' } }, 'path-fade-2') - .to(path3d, 0.5, { css: { stroke: '#EEB9D1' } }, 'path-fade-2') - // wait a moment - .to(box1, 1, {}) - // all new boxes and connections fade - .to( - [ - box1, + 0.3, + { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }, + '-=0.3' + ) + .to(path2b, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) + .to(path2c, 0.2, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) + // second connection turns blue + .to([path2a, path2b, path2c], 0.5, { css: { stroke: '#3969ED' } }, '-=0.1') + .to(box7Border, 0.5, { css: { fill: '#3969ED' } }, '-=0.3') + // wait a moment + .to(box2, 2, {}) + // blue elements fade back to gray + .to( + [path1a, path1b, path1c, path2a, path2b, path2c], + 0.5, + { + css: { stroke: '#b5b8c4' } + }, + 'colorReset1' + ) + .to( + [box7Border, box4Border, box6Border], + 0.5, + { css: { fill: '#b5b8c4' } }, + 'colorReset1' + ) + // box 2 appears + .to(box2Border, 0.4, { css: { fill: '#000' } }, 'colorReset1') + .fromTo( box2, - box3, - path1a, - path1b, - path2a, - path2b, - path3a, - path3b, - path3c, - path3d - ], - 0.5, - { opacity: 0.3 } + 0.3, + { scale: 0, rotation: 200, opacity: 0, svgOrigin: '195px 42px' }, + { scale: 1, rotation: 360, opacity: 1 }, + '-=0.4' + ) + // wait a moment + .to(box2, 1, {}) + // computer updates firewalls + .to(computerUpdateBox, 0.3, { opacity: 1 }, '-=0.2') + .to(computerUpdatePath, 0.3, 
{ opacity: 1 }, '-=0.2') + // firewall progress bars + .to(progressBarBars, 0.01, { width: 0 }) + .to(progressBars, 0.2, { opacity: 1 }) + .staggerTo(progressBarBars, 0.6, { width: 40 }, 0.2) + // third connection made + .to(path3a, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) + .to(path3b, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) + .to(path3c, 0.3, { css: { strokeDashoffset: 0 }, ease: Linear.easeNone }) + // progress bars & computer arrows fade out + .to(progressBars, 0.5, { opacity: 0 }, 'computerReset2') + .to(computerUpdateBox, 0.5, { opacity: 0 }, 'computerReset2') + .to(computerUpdatePath, 0.5, { opacity: 0 }, 'computerReset2') + // third connection turns blue + .to( + [path3a, path3b, path3c], + 0.5, + { css: { stroke: '#3969ED' } }, + 'computerReset2' + ) + .to( + [box2Border, box7Border], + 0.5, + { css: { fill: '#3969ED' } }, + 'computerReset2' + ) + // wait a bit + .to(box2, 2, {}) + // third connection turns back to gray + .to( + [path3a, path3b, path3c], + 0.5, + { css: { stroke: '#b5b8c4' } }, + 'colorReset2' + ) + .to( + [box2Border, box7Border], + 0.5, + { css: { fill: '#b5b8c4' } }, + 'colorReset2' + ) + // boxes 2, 4, and 6 disappear + .to( + [box2, box4, box6], + 0.6, + { scale: 0, rotation: 200, opacity: 0 }, + '-=0.4' + ) + // lines turn red and broken links appear + .to( + [path1a, path1b, path1c, path2a, path2b, path2c, path3a, path3b, path3c], + 0.3, + { css: { stroke: '#ED4168' } }, + '-=0.2' + ) + .to(brokenLinks, 0.3, { opacity: 1 }, '-=0.3') + // wait a moment + .to(box2, 1, {}) + // code sent to firewalls + .to(computerUpdateBox, 0.3, { opacity: 1 }) + .to(computerUpdatePath, 0.3, { opacity: 1 }) + // firewall progress bars + .to(progressBarBars, 0.01, { width: 0 }) + .to(progressBars, 0.2, { opacity: 1 }) + .staggerTo(progressBarBars, 0.6, { width: 40 }, 0.2) + .to(box2, 0.5, {}) + // faulty connections removed + .to( + [ + path1a, + path1b, + path1c, + path2a, + path2b, + path2c, + path3a, + path3b, + path3c + ].concat(brokenLinks), + 0.7, + { opacity: 0 } + ) + // progress bars and connection arrows fade out + .to(progressBars, 0.5, { opacity: 0 }, 'computerReset3') + .to(computerUpdateBox, 0.5, { opacity: 0 }, 'computerReset3') + .to(computerUpdatePath, 0.5, { opacity: 0 }, 'computerReset3') + .to(computer, 0.5, { opacity: 0 }, 'computerReset3') + .call(function () { + segmentationSolutionTimeline.resume(segmentationSolutionTimeline.time()) + }) + // wait a moment before the loop + .to(box2, 1, {}) + .pause() + + // solution animation + var segmentationSolutionTimeline = new TimelineLite() + + // service boxes + var box1 = qs('#s-service-2') + var box1Border = qs('#s-service-2 > path') + var box1Lock = qs('#s-service-2 #s-secure-indicator-2') + var box2 = qs('#s-service-4') + var box2Border = qs('#s-service-4 > path') + var box2Lock = qs('#s-service-4 #s-secure-indicator-4') + var box3 = qs('#s-service-6') + var box3Border = qs('#s-service-6 > path') + var box3Lock = qs('#s-service-6 #s-secure-indicator-6') + + // connection paths + var path1a = qs('#s-connection-path-2') + var path1b = qs('#s-connection-path-8') + var path2a = qs('#s-connection-path-9') + var path2b = qs('#s-connection-path-10') + var path3a = qs('#s-connection-path-1') + var path3b = qs('#s-connection-path-4') + var path3c = qs('#s-connection-path-5') + var path3d = qs('#s-connection-path-6') + + // inbound consul updates + var inboundPathLower = qs('#s-consul-inbound-paths-lower') + var inboundUpdateLower = qs('#s-dynamic-update-inbound-lower') + 
var inboundUpdateLowerSpinner = qs('#s-dynamic-update-inbound-lower > path') + var inboundPathUpper = qs('#s-consul-inbound-paths-upper') + var inboundUpdateUpper = qs('#s-dynamic-update-inbound-upper') + var inboundUpdateUpperSpinner = qs('#s-dynamic-update-inbound-upper > path') + + // outbound consul updates + var outboundPathsLower = qsa( + '#s-consul-server-connection-lower, #s-consul-outbound-5, #s-consul-outbound-6, #s-consul-outbound-7' ) - // faded boxes speak to consul - .to(inboundPathLower, 0.5, { opacity: 1 }, 'inbound') - .to(inboundPathUpper, 0.5, { opacity: 1 }, 'inbound') - .to(inboundUpdateLower, 0.5, { opacity: 1 }, 'inbound') - .to(inboundUpdateUpper, 0.5, { opacity: 1 }, 'inbound') - .to( - inboundUpdateLowerSpinner, - 0.7, - { - rotation: 360, - svgOrigin: '44px 237px' - }, - '-=0.3' + var outboundUpdateLower = qsa( + '#s-dynamic-update-outbound-ower, #s-tls-cert-lower' ) - .to( - inboundUpdateUpperSpinner, - 0.7, - { - rotation: 360, - svgOrigin: '44px 91px' - }, - '-=0.3' + var outboundUpdateLowerSpinner = qs('#s-dynamic-update-outbound-ower > path') + var outboundPathsUpper1 = qsa( + '#s-consul-server-connection-upper, #s-consul-outbound-3, #s-consul-outbound-4' ) - // consul removes faded boxes and connections - .to( - [ - box1, + var outboundPathsUpper2 = qsa( + '#s-consul-server-connection-upper, #s-consul-outbound-1, #s-soncul-outbound-2' + ) + var outboundUpdateUpper = qsa( + '#s-tls-cert-upper, #s-dynamic-update-outbound-upper' + ) + var outboundUpdateUpperSpinner = qs('#s-dynamic-update-outbound-upper > path') + + segmentationSolutionTimeline + .to(box2, 1, {}) + // boxes 2 and 3 appear + .fromTo( box2, + 0.3, + { scale: 0, rotation: 200, opacity: 0, svgOrigin: '281px 104px' }, + { scale: 1, rotation: 360, opacity: 1 } + ) + .fromTo( box3, + 0.3, + { scale: 0, rotation: 200, opacity: 0, svgOrigin: '185px 226px' }, + { scale: 1, rotation: 360, opacity: 1 }, + '-=0.1' + ) + // wait a moment + .to(box1, 0.5, {}) + // consul speaks to each box that needs a connection made + .to(outboundPathsUpper1, 0.5, { opacity: 1 }) + .to(outboundPathsLower, 0.5, { opacity: 1 }, '-=0.3') + .to(outboundUpdateUpper, 0.3, { opacity: 1 }, '-=0.3') + .to(outboundUpdateLower, 0.3, { opacity: 1 }, '-=0.1') + .to( + outboundUpdateUpperSpinner, + 0.7, + { + rotation: 360, + svgOrigin: '44px 99px' + }, + '-=0.5' + ) + .to( + outboundUpdateLowerSpinner, + 0.7, + { + rotation: 360, + svgOrigin: '44px 246px' + }, + '-=0.3' + ) + // pink borders, locks, connections drawn, consul talk fades + .to(box2Lock, 0.3, { opacity: 1 }, 'connections-1') + .to(box2Border, 0.3, { fill: '#CA2270' }, 'connections-1') + .to(box3Lock, 0.3, { opacity: 1 }, 'connections-1') + .to(box3Border, 0.3, { fill: '#CA2270' }, 'connections-1') + .to(outboundPathsUpper1, 0.7, { opacity: 0 }, 'connections-1') + .to(outboundPathsLower, 0.7, { opacity: 0 }, 'connections-1') + .to(outboundUpdateUpper, 0.7, { opacity: 0 }, 'connections-1') + .to(outboundUpdateLower, 0.7, { opacity: 0 }, 'connections-1') + .to( path1a, + 0.5, + { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, + 'connections-1' + ) + .to( path1b, + 0.5, + { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, + 'connections-1' + ) + .to( path2a, + 0.5, + { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, + 'connections-1' + ) + .to( path2b, + 0.5, + { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, + 'connections-1' + ) + // wait a moment + .to(box1, 0.5, {}) + // box 1 appears + .fromTo( + box1, + 0.3, + { scale: 0, rotation: 200, opacity: 0, 
svgOrigin: '185px 104px' }, + { scale: 1, rotation: 360, opacity: 1 }, + '-=0.1' + ) + // wait a moment, previous paths fade ('#EEB9D1') + .to(box1, 0.5, {}, 'stage-1-complete') + .to(box2Border, 0.5, { fill: '#EEB9D1' }, 'stage-1-complete') + .to(box3Border, 0.5, { fill: '#EEB9D1' }, 'stage-1-complete') + .to(path1a, 0.5, { css: { stroke: '#EEB9D1' } }, 'stage-1-complete') + .to(path1b, 0.5, { css: { stroke: '#EEB9D1' } }, 'stage-1-complete') + .to(path2a, 0.5, { css: { stroke: '#EEB9D1' } }, 'stage-1-complete') + .to(path2b, 0.5, { css: { stroke: '#EEB9D1' } }, 'stage-1-complete') + // consul speaks to each box that needs a connection made + .to(outboundPathsUpper2, 0.5, { opacity: 1 }) + .to(outboundPathsLower, 0.5, { opacity: 1 }, '-=0.3') + .to(outboundUpdateUpper, 0.3, { opacity: 1 }, '-=0.3') + .to(outboundUpdateLower, 0.3, { opacity: 1 }, '-=0.1') + .to( + outboundUpdateUpperSpinner, + 0.7, + { + rotation: 720, + svgOrigin: '44px 99px' + }, + '-=0.5' + ) + .to( + outboundUpdateLowerSpinner, + 0.7, + { + rotation: 720, + svgOrigin: '44px 246px' + }, + '-=0.3' + ) + // connections drawn + .to(box1Lock, 0.3, { opacity: 1 }, 'connections-2') + .to(box1Border, 0.3, { fill: '#CA2270' }, 'connections-2') + .to( path3a, + 0.5, + { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, + 'connections-2' + ) + .to( path3b, + 0.5, + { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, + 'connections-2' + ) + .to( path3c, + 0.5, + { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, + 'connections-2' + ) + .to( path3d, - inboundPathLower, - inboundPathUpper, - inboundUpdateLower, - inboundUpdateUpper - ], - 0.5, - { opacity: 0.0 } - ) - .addPause() - // wait a moment before the loop - .to(box1, 1, {}) - .pause() + 0.5, + { css: { strokeDashoffset: 0, stroke: '#CA2270' } }, + 'connections-2' + ) + .to(box1, 0.7, {}, 'stage-2-complete') + .to(outboundPathsUpper2, 0.7, { opacity: 0 }, 'stage-2-complete') + .to(outboundPathsLower, 0.7, { opacity: 0 }, 'stage-2-complete') + .to(outboundUpdateUpper, 0.7, { opacity: 0 }, 'stage-2-complete') + .to(outboundUpdateLower, 0.7, { opacity: 0 }, 'stage-2-complete') + .to(box1Border, 0.5, { fill: '#EEB9D1' }, 'path-fade-2') + .to(path3a, 0.5, { css: { stroke: '#EEB9D1' } }, 'path-fade-2') + .to(path3b, 0.5, { css: { stroke: '#EEB9D1' } }, 'path-fade-2') + .to(path3c, 0.5, { css: { stroke: '#EEB9D1' } }, 'path-fade-2') + .to(path3d, 0.5, { css: { stroke: '#EEB9D1' } }, 'path-fade-2') + // wait a moment + .to(box1, 1, {}) + // all new boxes and connections fade + .to( + [ + box1, + box2, + box3, + path1a, + path1b, + path2a, + path2b, + path3a, + path3b, + path3c, + path3d + ], + 0.5, + { opacity: 0.3 } + ) + // faded boxes speak to consul + .to(inboundPathLower, 0.5, { opacity: 1 }, 'inbound') + .to(inboundPathUpper, 0.5, { opacity: 1 }, 'inbound') + .to(inboundUpdateLower, 0.5, { opacity: 1 }, 'inbound') + .to(inboundUpdateUpper, 0.5, { opacity: 1 }, 'inbound') + .to( + inboundUpdateLowerSpinner, + 0.7, + { + rotation: 360, + svgOrigin: '44px 237px' + }, + '-=0.3' + ) + .to( + inboundUpdateUpperSpinner, + 0.7, + { + rotation: 360, + svgOrigin: '44px 91px' + }, + '-=0.3' + ) + // consul removes faded boxes and connections + .to( + [ + box1, + box2, + box3, + path1a, + path1b, + path2a, + path2b, + path3a, + path3b, + path3c, + path3d, + inboundPathLower, + inboundPathUpper, + inboundUpdateLower, + inboundUpdateUpper + ], + 0.5, + { opacity: 0.0 } + ) + .addPause() + // wait a moment before the loop + .to(box1, 1, {}) + .pause() - // kick it off - 
$segmentationChallenge.classList.add('active') - $segmentationSolution.classList.add('active') - segmentationChallengeTimeline.play() - segmentationSolutionTimeline.play() -} + // kick it off + $segmentationChallenge.classList.add('active') + $segmentationSolution.classList.add('active') + segmentationChallengeTimeline.play() + segmentationSolutionTimeline.play() + } +} \ No newline at end of file diff --git a/website/source/assets/javascripts/consul-connect/carousel.js b/website/source/assets/javascripts/consul-connect/carousel.js index c42e9b255..605417bbf 100644 --- a/website/source/assets/javascripts/consul-connect/carousel.js +++ b/website/source/assets/javascripts/consul-connect/carousel.js @@ -1,3 +1,6 @@ +var qs = document.querySelector.bind(document) +var qsa = document.querySelectorAll.bind(document) + // siema carousels var dots = qsa('.g-carousel .pagination li') var carousel = new Siema({ @@ -20,15 +23,13 @@ var carousel = new Siema({ }) // on previous button click -document - .querySelector('.g-carousel .prev') +qs('.g-carousel .prev') .addEventListener('click', function() { carousel.prev() }) // on next button click -document - .querySelector('.g-carousel .next') +qs('.g-carousel .next') .addEventListener('click', function() { carousel.next() }) From df03db47ceede7ba7a5434cc59e61729a7dd4cd0 Mon Sep 17 00:00:00 2001 From: John Cowen Date: Mon, 25 Jun 2018 11:33:07 +0100 Subject: [PATCH 528/539] tenenacy > tenancy --- website/source/docs/connect/security.html.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/source/docs/connect/security.html.md b/website/source/docs/connect/security.html.md index 7171173d3..182d875dc 100644 --- a/website/source/docs/connect/security.html.md +++ b/website/source/docs/connect/security.html.md @@ -85,7 +85,7 @@ network namespacing techniques provided by the underlying operating system. For scenarios where multiple services are running on the same machine without isolation, these services must all be trusted. We call this the -**trusted multi-tenenacy** deployment model. Any service could theoretically +**trusted multi-tenancy** deployment model. Any service could theoretically connect to any other service via the loopback listener, bypassing Connect completely. In this scenario, all services must be trusted _or_ isolation mechanisms must be used. 
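The "trusted multi-tenancy" caveat above is easy to see concretely: a service that listens on loopback for its sidecar proxy has no way, by itself, to tell the proxy apart from any other local process. The snippet below is an illustrative sketch only and is not part of any patch in this series; the port, messages, and file name are made up, and no Consul or Connect APIs are used.

```js
// loopback-bypass.js: illustrative sketch, not code from this patch series.
// A service bound to 127.0.0.1 for its proxy can be reached directly by any
// co-located process, skipping the proxy (and its mTLS and authorization).
var http = require('http')

// The "real" service: it assumes only the local proxy will ever call it.
var service = http.createServer(function (req, res) {
  res.end('sensitive response\n')
})

service.listen(8181, '127.0.0.1', function () {
  // Any untrusted process on the same host can do this:
  http.get('http://127.0.0.1:8181/', function (res) {
    var body = ''
    res.on('data', function (chunk) {
      body += chunk
    })
    res.on('end', function () {
      console.log('reached the service without the proxy:', body.trim())
      service.close()
    })
  })
})
```

This is the scenario the paragraph above guards against: either every co-located service is trusted, or OS-level isolation such as separate network namespaces keeps services from sharing a loopback interface.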
From bfc628da0a863ab47631fd822f6cc6525d06472f Mon Sep 17 00:00:00 2001 From: Jack Pearkes Date: Sun, 24 Jun 2018 14:24:01 -0700 Subject: [PATCH 529/539] website: fix scrolling/loading issue on iOS --- website/source/assets/stylesheets/_global.scss | 4 ---- 1 file changed, 4 deletions(-) diff --git a/website/source/assets/stylesheets/_global.scss b/website/source/assets/stylesheets/_global.scss index 10319049c..3f8bf49df 100755 --- a/website/source/assets/stylesheets/_global.scss +++ b/website/source/assets/stylesheets/_global.scss @@ -1,6 +1,4 @@ html { - height: 100%; - min-height: 100%; text-rendering: optimizeLegibility; -webkit-font-smoothing: antialiased; overflow-x: hidden; @@ -13,8 +11,6 @@ body { font-size: $font-size; font-family: $font-family-open-sans; font-weight: $font-weight-reg; - height: 100%; - min-height: 100%; overflow-x: hidden; } From 0a8a54287760f93d4b7000c95f23060fd942ddc8 Mon Sep 17 00:00:00 2001 From: Mike Wickett Date: Mon, 25 Jun 2018 11:26:35 -0400 Subject: [PATCH 530/539] Adds small polyfill for classlist because IE does not support it on SVG elements --- .../javascripts/consul-connect/vendor/classlist-polyfill.min.js | 2 ++ website/source/layouts/layout.erb | 1 + 2 files changed, 3 insertions(+) create mode 100644 website/source/assets/javascripts/consul-connect/vendor/classlist-polyfill.min.js diff --git a/website/source/assets/javascripts/consul-connect/vendor/classlist-polyfill.min.js b/website/source/assets/javascripts/consul-connect/vendor/classlist-polyfill.min.js new file mode 100644 index 000000000..d866a3046 --- /dev/null +++ b/website/source/assets/javascripts/consul-connect/vendor/classlist-polyfill.min.js @@ -0,0 +1,2 @@ +/*! @source http://purl.eligrey.com/github/classList.js/blob/master/classList.js */ +"document" in self && ("classList" in document.createElement("_") && (!document.createElementNS || "classList" in document.createElementNS("http://www.w3.org/2000/svg", "g")) || !function (t) { "use strict"; if ("Element" in t) { var e = "classList", n = "prototype", i = t.Element[n], s = Object, r = String[n].trim || function () { return this.replace(/^\s+|\s+$/g, "") }, o = Array[n].indexOf || function (t) { for (var e = 0, n = this.length; n > e; e++)if (e in this && this[e] === t) return e; return -1 }, c = function (t, e) { this.name = t, this.code = DOMException[t], this.message = e }, a = function (t, e) { if ("" === e) throw new c("SYNTAX_ERR", "The token must not be empty."); if (/\s/.test(e)) throw new c("INVALID_CHARACTER_ERR", "The token must not contain space characters."); return o.call(t, e) }, l = function (t) { for (var e = r.call(t.getAttribute("class") || ""), n = e ? e.split(/\s+/) : [], i = 0, s = n.length; s > i; i++)this.push(n[i]); this._updateClassName = function () { t.setAttribute("class", this.toString()) } }, u = l[n] = [], h = function () { return new l(this) }; if (c[n] = Error[n], u.item = function (t) { return this[t] || null }, u.contains = function (t) { return ~a(this, t + "") }, u.add = function () { var t, e = arguments, n = 0, i = e.length, s = !1; do t = e[n] + "", ~a(this, t) || (this.push(t), s = !0); while (++n < i); s && this._updateClassName() }, u.remove = function () { var t, e, n = arguments, i = 0, s = n.length, r = !1; do for (t = n[i] + "", e = a(this, t); ~e;)this.splice(e, 1), r = !0, e = a(this, t); while (++i < s); r && this._updateClassName() }, u.toggle = function (t, e) { var n = this.contains(t), i = n ? 
e !== !0 && "remove" : e !== !1 && "add"; return i && this[i](t), e === !0 || e === !1 ? e : !n }, u.replace = function (t, e) { var n = a(t + ""); ~n && (this.splice(n, 1, e), this._updateClassName()) }, u.toString = function () { return this.join(" ") }, s.defineProperty) { var f = { get: h, enumerable: !0, configurable: !0 }; try { s.defineProperty(i, e, f) } catch (p) { void 0 !== p.number && -2146823252 !== p.number || (f.enumerable = !1, s.defineProperty(i, e, f)) } } else s[n].__defineGetter__ && i.__defineGetter__(e, h) } }(self), function () { "use strict"; var t = document.createElement("_"); if (t.classList.add("c1", "c2"), !t.classList.contains("c2")) { var e = function (t) { var e = DOMTokenList.prototype[t]; DOMTokenList.prototype[t] = function (t) { var n, i = arguments.length; for (n = 0; i > n; n++)t = arguments[n], e.call(this, t) } }; e("add"), e("remove") } if (t.classList.toggle("c3", !1), t.classList.contains("c3")) { var n = DOMTokenList.prototype.toggle; DOMTokenList.prototype.toggle = function (t, e) { return 1 in arguments && !this.contains(t) == !e ? e : n.call(this, t) } } "replace" in document.createElement("_").classList || (DOMTokenList.prototype.replace = function (t, e) { var n = this.toString().split(" "), i = n.indexOf(t + ""); ~i && (n = n.slice(i), this.remove.apply(this, n), this.add(e), this.add.apply(this, n.slice(1))) }), t = null }()); \ No newline at end of file diff --git a/website/source/layouts/layout.erb b/website/source/layouts/layout.erb index c06221c06..f96d9a60f 100644 --- a/website/source/layouts/layout.erb +++ b/website/source/layouts/layout.erb @@ -34,6 +34,7 @@ <%= javascript_include_tag "consul-connect/vendor/intersection-observer-polyfill", defer: true %> <%= javascript_include_tag "consul-connect/vendor/siema.min", defer: true %> + <%= javascript_include_tag "consul-connect/vendor/classlist-polyfill.min", defer: true %> <%= javascript_include_tag "application", defer: true %> From e5d7e2ec47d3b91da4c4e2f123ee7c7e05199801 Mon Sep 17 00:00:00 2001 From: Mike Wickett Date: Mon, 25 Jun 2018 13:41:31 -0400 Subject: [PATCH 531/539] Small tweaks to video playback --- .../assets/javascripts/consul-connect/home-hero.js | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/website/source/assets/javascripts/consul-connect/home-hero.js b/website/source/assets/javascripts/consul-connect/home-hero.js index 92ead8aac..94d9ed9fc 100644 --- a/website/source/assets/javascripts/consul-connect/home-hero.js +++ b/website/source/assets/javascripts/consul-connect/home-hero.js @@ -27,8 +27,10 @@ function initialiateVideoChange(index) { ) loadingBar.style.transitionDuration = '0s' - // reset the current video - $$videos[currentIndex].currentTime = 0 + // reset the current video + if (!isNaN($$videos[currentIndex].duration)) { + $$videos[currentIndex].currentTime = 0 + } $$videoControls[currentIndex].classList.remove('playing') // stop deactivation @@ -59,7 +61,8 @@ function playVideo(index, wrapper) { $$videoControls[index].querySelector( '.progress-bar span' - ).style.transitionDuration = `${Math.ceil($$videos[index].duration / playbackRate)}s` + ).style.transitionDuration = + Math.ceil($$videos[index].duration / playbackRate).toString() + 's' // set the currentIndex to be that of the current video's index currentIndex = index @@ -88,4 +91,4 @@ for (var i = 0; i < $$videoControls.length; i++) { // go to first video to start this thing if ($$videos.length > 0) { initialiateVideoChange(0) -} \ No newline at end of file +} From 
08776304fb93334b0b845d1000c8bcf0c22a5481 Mon Sep 17 00:00:00 2001 From: RJ Spiker Date: Mon, 25 Jun 2018 11:59:58 -0600 Subject: [PATCH 532/539] website - some visual updates including css bug fixes and image updates (#111) --- .../distributed-locks-and-semaphores.png | 3 - .../assets/images/consul-connect/feature.jpg | 3 - .../assets/images/consul-connect/grid_1.png | 3 + .../assets/images/consul-connect/grid_2.png | 3 + .../assets/images/consul-connect/grid_3.png | 3 + .../logos/consul-enterprise-logo.svg | 7 +++ .../consul-connect/logos/consul-logo.svg | 7 +++ .../consul-connect/open-and-extensible.png | 3 - .../images/consul-connect/svgs/semaphores.svg | 55 +++++++++++++++++++ .../workflows-not-technologies.png | 3 - .../consul-connect/components/_logo-grid.scss | 5 -- .../components/_text-asset.scss | 12 ++++ .../consul-connect/pages/_home.scss | 13 +++++ website/source/configuration.html.erb | 2 +- website/source/index.html.erb | 14 ++--- website/source/segmentation.html.erb | 20 +------ 16 files changed, 113 insertions(+), 43 deletions(-) delete mode 100644 website/source/assets/images/consul-connect/distributed-locks-and-semaphores.png delete mode 100644 website/source/assets/images/consul-connect/feature.jpg create mode 100644 website/source/assets/images/consul-connect/grid_1.png create mode 100644 website/source/assets/images/consul-connect/grid_2.png create mode 100644 website/source/assets/images/consul-connect/grid_3.png create mode 100644 website/source/assets/images/consul-connect/logos/consul-enterprise-logo.svg create mode 100644 website/source/assets/images/consul-connect/logos/consul-logo.svg delete mode 100644 website/source/assets/images/consul-connect/open-and-extensible.png create mode 100644 website/source/assets/images/consul-connect/svgs/semaphores.svg delete mode 100644 website/source/assets/images/consul-connect/workflows-not-technologies.png diff --git a/website/source/assets/images/consul-connect/distributed-locks-and-semaphores.png b/website/source/assets/images/consul-connect/distributed-locks-and-semaphores.png deleted file mode 100644 index 0183b765c..000000000 --- a/website/source/assets/images/consul-connect/distributed-locks-and-semaphores.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:37be667f00897fa8f4e867e646ba94e23a4cfcc9c084ced2763db2628ea7c031 -size 37864 diff --git a/website/source/assets/images/consul-connect/feature.jpg b/website/source/assets/images/consul-connect/feature.jpg deleted file mode 100644 index 83411ef7a..000000000 --- a/website/source/assets/images/consul-connect/feature.jpg +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:c971e3e33bbae50b240ff28d42b582b8b43887f171558182caa570e46b120675 -size 84628 diff --git a/website/source/assets/images/consul-connect/grid_1.png b/website/source/assets/images/consul-connect/grid_1.png new file mode 100644 index 000000000..2db1d1e75 --- /dev/null +++ b/website/source/assets/images/consul-connect/grid_1.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a885d9643e2ae7202980d482d6f60ed6a3699d401929618127aea724706297f5 +size 103170 diff --git a/website/source/assets/images/consul-connect/grid_2.png b/website/source/assets/images/consul-connect/grid_2.png new file mode 100644 index 000000000..251c3cc3f --- /dev/null +++ b/website/source/assets/images/consul-connect/grid_2.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:0d79543fac463cf99e46978226661a2217118cdc3cac408f267c404350dec430 +size 129232 diff --git a/website/source/assets/images/consul-connect/grid_3.png b/website/source/assets/images/consul-connect/grid_3.png new file mode 100644 index 000000000..9de85b9b6 --- /dev/null +++ b/website/source/assets/images/consul-connect/grid_3.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea17c3140f3d67f1c3ac81491481743959141d6ed8957f904ef871fad28824f4 +size 131975 diff --git a/website/source/assets/images/consul-connect/logos/consul-enterprise-logo.svg b/website/source/assets/images/consul-connect/logos/consul-enterprise-logo.svg new file mode 100644 index 000000000..c5bff249d --- /dev/null +++ b/website/source/assets/images/consul-connect/logos/consul-enterprise-logo.svg @@ -0,0 +1,7 @@ + + Consul Enterprise + + + + + diff --git a/website/source/assets/images/consul-connect/logos/consul-logo.svg b/website/source/assets/images/consul-connect/logos/consul-logo.svg new file mode 100644 index 000000000..daef751a6 --- /dev/null +++ b/website/source/assets/images/consul-connect/logos/consul-logo.svg @@ -0,0 +1,7 @@ + + Consul + + + + + diff --git a/website/source/assets/images/consul-connect/open-and-extensible.png b/website/source/assets/images/consul-connect/open-and-extensible.png deleted file mode 100644 index 3de665b2f..000000000 --- a/website/source/assets/images/consul-connect/open-and-extensible.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3b24b9839f50e786960e841e8b842e692eba80917c0d9407276f8888a571ebfa -size 34458 diff --git a/website/source/assets/images/consul-connect/svgs/semaphores.svg b/website/source/assets/images/consul-connect/svgs/semaphores.svg new file mode 100644 index 000000000..afefb803c --- /dev/null +++ b/website/source/assets/images/consul-connect/svgs/semaphores.svg @@ -0,0 +1,55 @@ + + + Distributed Locks and Semaphores + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + LEADER + + + FOLLOWER + + + FOLLOWER + + + + + diff --git a/website/source/assets/images/consul-connect/workflows-not-technologies.png b/website/source/assets/images/consul-connect/workflows-not-technologies.png deleted file mode 100644 index dfdb2b3b8..000000000 --- a/website/source/assets/images/consul-connect/workflows-not-technologies.png +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:11f9fb25bd8da00e492893eb751e5f3be3cc4b56eaaabd81e612798d4eaa194a -size 24096 diff --git a/website/source/assets/stylesheets/consul-connect/components/_logo-grid.scss b/website/source/assets/stylesheets/consul-connect/components/_logo-grid.scss index b3f3c98b1..25b48531b 100644 --- a/website/source/assets/stylesheets/consul-connect/components/_logo-grid.scss +++ b/website/source/assets/stylesheets/consul-connect/components/_logo-grid.scss @@ -14,13 +14,8 @@ width: 50%; @media (min-width: 768px) { - padding: 5px $site-gutter-padding; width: 33%; } - - @media (min-width: 992px) { - padding: 25px $site-gutter-padding; - } } img { diff --git a/website/source/assets/stylesheets/consul-connect/components/_text-asset.scss b/website/source/assets/stylesheets/consul-connect/components/_text-asset.scss index 7af8785d6..127df53be 100644 --- a/website/source/assets/stylesheets/consul-connect/components/_text-asset.scss +++ b/website/source/assets/stylesheets/consul-connect/components/_text-asset.scss @@ -36,6 +36,10 @@ margin-bottom: -120px; } + & > div:last-child { + justify-content: unset; + } + img { 
width: 145%; } @@ -87,6 +91,10 @@ } } + &:last-child { + justify-content: center; + } + & > img { width: 100%; @@ -95,6 +103,10 @@ } } + & > svg { + max-width: 100%; + } + &.code-sample > div { box-shadow: 0 40px 48px -20px rgba(63, 68, 85, 0.4); color: $white; diff --git a/website/source/assets/stylesheets/consul-connect/pages/_home.scss b/website/source/assets/stylesheets/consul-connect/pages/_home.scss index c936b661a..5e496cde6 100644 --- a/website/source/assets/stylesheets/consul-connect/pages/_home.scss +++ b/website/source/assets/stylesheets/consul-connect/pages/_home.scss @@ -174,6 +174,10 @@ width: 42vw; } + @media (min-width: 1725px) { + width: 725px; + } + & > div { align-items: center; color: #d2d4dc; @@ -248,6 +252,11 @@ width: 42vw; } + @media (min-width: 1725px) { + padding-top: calc((725px * 0.63569) + 38px); + width: 725px; + } + & > div { background: #0e1016; border-radius: 3px 3px 0 0; @@ -392,6 +401,10 @@ width: 50%; } + & > svg { + width: 135px; + } + &:first-child { background: $consul-red; position: relative; diff --git a/website/source/configuration.html.erb b/website/source/configuration.html.erb index c243843cf..55eeb9e1a 100644 --- a/website/source/configuration.html.erb +++ b/website/source/configuration.html.erb @@ -195,7 +195,7 @@ description: |-
    - Service Registry + <%= inline_svg 'consul-connect/svgs/semaphores.svg', height: 383 %>
    diff --git a/website/source/index.html.erb b/website/source/index.html.erb index 50512aa94..aed1f2b25 100644 --- a/website/source/index.html.erb +++ b/website/source/index.html.erb @@ -111,7 +111,7 @@ description: |-
    - + Service Discovery

    Service Discovery for connectivity

    Service Registry enables services to register and discover each other.

    @@ -121,7 +121,7 @@ description: |-
    - + Service Segmentation

    Service Segmentation for security

    Secure service-to-service communication with automatic TLS encryption and identity-based authorization.

    @@ -131,7 +131,7 @@ description: |-
    - + Service Configuration

    Service Configuration for runtime configuration

    Feature rich Key/Value store to easily configure services.

    @@ -189,7 +189,7 @@ description: |-
    - Workflows, not Technologies + Run and Connect Anywhere
    @@ -209,7 +209,7 @@ description: |-
    - Open and Extensible + Extend and Integrate
    @@ -247,7 +247,7 @@ description: |-
    - Consul + <%= inline_svg 'consul-connect/logos/consul-logo.svg' %>

    Consul Open Source addresses the technical complexity of connecting services across distributed infrastructure.

    @@ -263,7 +263,7 @@ description: |-
    - Consul Enterprise + <%= inline_svg 'consul-connect/logos/consul-enterprise-logo.svg' %>

    Consul Enterprise addresses the organizational complexity of large user bases and compliance requirements with collaboration and governance features.

    diff --git a/website/source/segmentation.html.erb b/website/source/segmentation.html.erb index 639aa3c4c..da7c28bba 100644 --- a/website/source/segmentation.html.erb +++ b/website/source/segmentation.html.erb @@ -96,24 +96,8 @@ description: |-

    -
    -
    - -
    $ consul connect proxy \ - -service web \ - -service-addr 127.0.0.1:80 \ - -listen 10.0.1.109:7200 -==> Consul Connect proxy starting... - Configuration mode: Flags - Service: web - Public listener: 10.0.1.109:7200 => 127.0.0.1:80 - -==> Log data will now stream in as it occurs: - - 2018/06/23 09:33:51 [INFO] public listener starting on 10.0.1.109:7200 - 2018/06/23 09:33:51 [INFO] proxy loaded config and ready to serve -
    -
    +
    + Secure services across any runtime platform
    From 3e68e37080f972f266caa3df9d55f1ba1b00aed0 Mon Sep 17 00:00:00 2001 From: Jack Pearkes Date: Mon, 25 Jun 2018 10:55:47 -0700 Subject: [PATCH 533/539] website: add an example of TLS encryption --- website/source/segmentation.html.erb | 26 ++++++++++++++++++++++++-- 1 file changed, 24 insertions(+), 2 deletions(-) diff --git a/website/source/segmentation.html.erb b/website/source/segmentation.html.erb index da7c28bba..1b54c2af2 100644 --- a/website/source/segmentation.html.erb +++ b/website/source/segmentation.html.erb @@ -140,8 +140,30 @@ description: |-
    -
    - TODO +
    $ consul connect proxy -service web \ + -service-addr 127.0.0.1:8000 + -listen 10.0.1.109:7200 +==> Consul Connect proxy starting... + Configuration mode: Flags + Service: web + Public listener: 10.0.1.109:7200 => 127.0.0.1:8000 +... +$ tshark -V \ + -Y "ssl.handshake.certificate" \ + -O "ssl" \ + -f "dst port 7200" +Frame 39: 899 bytes on wire (7192 bits), 899 bytes captured (7192 bits) on interface 0 +Internet Protocol Version 4, Src: 10.0.1.110, Dst: 10.0.1.109 +Transmission Control Protocol, Src Port: 61918, Dst Port: 7200, Seq: 136, Ack: 916, Len: 843 +Secure Sockets Layer + TLSv1.2 Record Layer: Handshake Protocol: Certificate + Version: TLS 1.2 (0x0303) + Handshake Protocol: Certificate + RDNSequence item: 1 item (id-at-commonName=Consul CA 7) + RelativeDistinguishedName item (id-at-commonName=Consul CA 7) + Id: 2.5.4.3 (id-at-commonName) + DirectoryString: printableString (1) + printableString: Consul CA 7
    From 99562a015368fd2618e9778f7bcb8461c0fb83c1 Mon Sep 17 00:00:00 2001 From: Mitchell Hashimoto Date: Sun, 24 Jun 2018 16:55:26 -0500 Subject: [PATCH 534/539] website: split out CA docs by provider type --- website/source/docs/connect/ca.html.md | 199 ++++++------------ website/source/docs/connect/ca/consul.html.md | 129 ++++++++++++ website/source/docs/connect/ca/vault.html.md | 88 ++++++++ website/source/layouts/docs.erb | 8 + 4 files changed, 286 insertions(+), 138 deletions(-) create mode 100644 website/source/docs/connect/ca/consul.html.md create mode 100644 website/source/docs/connect/ca/vault.html.md diff --git a/website/source/docs/connect/ca.html.md b/website/source/docs/connect/ca.html.md index 06c465b98..3cc1e9bb9 100644 --- a/website/source/docs/connect/ca.html.md +++ b/website/source/docs/connect/ca.html.md @@ -8,30 +8,53 @@ description: |- # Connect Certificate Management -The certificate management in Connect is done centrally through the Consul +Certificate management in Connect is done centrally through the Consul servers using the configured CA (Certificate Authority) provider. A CA provider -controls the active root certificate and performs leaf certificate signing for -proxies to use for mutual TLS. Currently, the only supported provider is the -built-in Consul CA, which generates and stores the root certificate and key on -the Consul servers and can be configured with a custom key/certificate if needed. +manages root and intermediate certificates and performs certificate signing +operations. The Consul leader orchestrates CA provider operations as necessary, +such as when a service needs a new certificate or during CA rotation events. -The CA provider is initialized either on cluster bootstrapping, or (if Connect is -disabled initially) when a leader server is elected that has Connect enabled. -During the cluster's initial bootstrapping, the CA provider can be configured -through the [Agent configuration](docs/agent/options.html#connect_ca_config) -and afterward can only be updated through the [Update CA Configuration endpoint] -(/api/connect/ca.html#update-ca-configuration). +The CA provider abstraction enables Consul to support multiple systems for +storing and signing certificates. Consul ships with a +[built-in CA](/docs/connect/ca/consul.html) which generates and stores the +root certificate and private key on the Consul servers. Consul also also +built-in support for +[Vault as a CA](/docs/connect/ca/vault.html). With Vault, the root certificate +and private key material remain with the Vault cluster. A future version of +Consul will support pluggable CA systems using external binaries. -### Consul CA (Certificate Authority) Provider +## CA Bootstrapping -By default, if no provider is configured when Connect is enabled, the Consul -provider will be used and a private key/root certificate will be generated -and used as the active root certificate for the cluster. To see this in action, -start Consul in [dev mode](/docs/agent/options.html#_dev) and query the -[list CA Roots endpoint](/api/connect/ca.html#list-ca-root-certificates): +CA initialization happens automatically when a new Consul leader is elected +as long as +[Connect is enabled](/docs/connect/configuration.html#enable-connect-on-the-cluster) +and the CA system hasn't already been initialized. This initialization process +will generate the initial root certificates and setup the internal Consul server +state. 
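As a quick illustration of this bootstrap, a throwaway agent plus the roots endpoint is enough to watch the built-in CA come up (assuming a build in which dev mode enables Connect; output will differ per cluster):

```bash
# Dev mode enables Connect, so the leader bootstraps the built-in CA
# automatically on election.
$ consul agent -dev

# In a second terminal, the freshly generated root certificate is listed:
$ curl http://localhost:8500/v1/connect/ca/roots
```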
+ +For the initial bootstrap, the CA provider can be configured through the +[Agent configuration](docs/agent/options.html#connect_ca_config). After +initialization, the CA can only be updated through the +[Update CA Configuration API endpoint](/api/connect/ca.html#update-ca-configuration). +If a CA is already initialized, any changes to the CA configuration in the +agent configuration file (including removing the configuration completely) +will have no effect. + +If no specific provider is configured when Connect is enabled, the built-in +Consul CA provider will be used and a private key and root certificate will +be generated automatically. + +## Viewing Root Certificates + +Root certificates can be queried with the +[list CA Roots endpoint](/api/connect/ca.html#list-ca-root-certificates). +With this endpoint, you can see the list of currently trusted root certificates. +When a cluster first initializes, this will only list one trusted root. Multiple +roots may appear as part of +[rotation](#). ```bash -$ curl localhost:8500/v1/connect/ca/roots +$ curl http://localhost:8500/v1/connect/ca/roots { "ActiveRootID": "31:6c:06:fb:49:94:42:d5:e4:55:cc:2e:27:b3:b2:2e:96:67:3e:7e", "TrustDomain": "36cb52cd-4058-f811-0432-6798a240c5d3.consul", @@ -53,17 +76,15 @@ $ curl localhost:8500/v1/connect/ca/roots } ``` -#### Specifying a Private Key and Root Certificate +## CA Configuration -The above root certificate has been automatically generated during the cluster's -bootstrap, but it is possible to configure the Consul CA provider to use a specific -private key and root certificate. - -To view the current CA configuration, use the [Get CA Configuration endpoint] -(/api/connect/ca.html#get-ca-configuration): +After initialization, the CA provider configuration can be viewed with the +[Get CA Configuration API endpoint](/api/connect/ca.html#get-ca-configuration). +Consul will filter sensitive values from this endpoint depending on the +provider in use, so the configuration may not be complete. ```bash -$ curl localhost:8500/v1/connect/ca/configuration +$ curl http://localhost:8500/v1/connect/ca/configuration { "Provider": "consul", "Config": { @@ -74,66 +95,23 @@ $ curl localhost:8500/v1/connect/ca/configuration } ``` -This is the default Connect CA configuration if nothing is explicitly set when -Connect is enabled - the PrivateKey and RootCert fields have not been set, so those have -been generated (as seen above in the roots list). +The CA provider can be reconfigured using the +[Update CA Configuration API endpoint](/api/connect/ca.html#update-ca-configuration). +Specific options for reconfiguration can be found in the specific +CA provider documentation in the sidebar to the left. -There are two ways to have the Consul CA use a custom private key and root certificate: -either through the `ca_config` section of the [Agent configuration] -(docs/agent/options.html#connect_ca_config) (which can only be used during the cluster's -initial bootstrap) or through the [Update CA Configuration endpoint] -(/api/connect/ca.html#update-ca-configuration). - -Currently consul requires that root certificates are valid [SPIFFE SVID Signing certificates] -(https://github.com/spiffe/spiffe/blob/master/standards/X509-SVID.md) and that the URI encoded -in the SAN is the cluster identifier created at bootstrap with the ".consul" TLD. In this -example, we will set the URI SAN to `spiffe://36cb52cd-4058-f811-0432-6798a240c5d3.consul`. 
- -In order to use the Update CA Configuration HTTP endpoint, the private key and certificate -must be passed via JSON: - -```bash -$ jq -n --arg key "$(cat root.key)" --arg cert "$(cat root.crt)" ' -{ - "Provider": "consul", - "Config": { - "PrivateKey": $key, - "RootCert": $cert, - "RotationPeriod": "2160h" - } -}' > ca_config.json -``` - -The resulting `ca_config.json` file can then be used to update the active root certificate: - -```bash -$ cat ca_config.json -{ - "Provider": "consul", - "Config": { - "PrivateKey": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEArqiy1c3pbT3cSkjdEM1APALUareU...", - "RootCert": "-----BEGIN CERTIFICATE-----\nMIIDijCCAnKgAwIBAgIJAOFZ66em1qC7MA0GCSqGSIb3...", - "RotationPeriod": "2160h" - } -} - -$ curl --request PUT --data @ca_config.json localhost:8500/v1/connect/ca/configuration - -... - -[INFO] connect: CA rotated to new root under provider "consul" -``` - -The cluster is now using the new private key and root certificate. Updating the CA config -this way also triggered a certificate rotation, which will be covered in the next section. - -#### Root Certificate Rotation +## Root Certificate Rotation Whenever the CA's configuration is updated in a way that causes the root key to change, a special rotation process will be triggered in order to smoothly transition to -the new certificate. +the new certificate. This rotation is automatically orchestrated by Consul. -First, an intermediate CA certificate is requested from the new root, which is then +This also automatically occurs when a completely different CA provider is +configured (since this changes the root key). Therefore, this automatic rotation +process can also be used to cleanly transition between CA providers. For example, +updating Connect to use Vault instead of the built-in CA. + +During rotation, an intermediate CA certificate is requested from the new root, which is then cross-signed by the old root. This cross-signed certificate is then distributed alongside any newly-generated leaf certificates used by the proxies once the new root becomes active, and provides a chain of trust back to the old root certificate in the @@ -145,9 +123,9 @@ certificate or CA provider has been set up, the new root becomes the active one and is immediately used for signing any new incoming certificate requests. If we check the [list CA roots endpoint](/api/connect/ca.html#list-ca-root-certificates) -after the config update in the previous section, we can see both the old and new root +after updating the configuration with a new root certificate, we can see both the old and new root certificates are present, and the currently active root has an intermediate certificate -which has been generated and cross-signed automatically by the old root during the +which has been generated and cross-signed automatically by the old root during the rotation process: ```bash @@ -190,58 +168,3 @@ $ curl localhost:8500/v1/connect/ca/roots The old root certificate will be automatically removed once enough time has elapsed for any leaf certificates signed by it to expire. - -### External CA (Certificate Authority) Providers - -#### Vault - -Currently, the only supported external CA (Certificate Authority) provider is Vault. The -Vault provider can be used by setting the `ca_provider = "vault"` field in the Connect -configuration: - -```hcl -connect { - enabled = true - ca_provider = "vault" - ca_config { - address = "http://localhost:8200" - token = "..." 
- root_pki_path = "connect-root" - intermediate_pki_path = "connect-intermediate" - } -} -``` - -The `root_pki_path` can be set to either a new or existing PKI backend; if no CA has been -initialized at the path, a new root CA will be generated. From this root PKI, Connect will -generate an intermediate CA at `intermediate_pki_path`. This intermediate CA is used so that -Connect can manage its lifecycle/rotation - it will never touch or overwrite any existing data -at `root_pki_path`. The intermediate CA is used for signing leaf certificates used by the -services and proxies in Connect to verify identity. - -To update the configuration for the Vault provider, the process is the same as for the Consul CA -provider above: use the [Update CA Configuration endpoint](/api/connect/ca.html#update-ca-configuration) -or the `consul connect ca set-config` command: - -```bash -$ cat ca_config.json -{ - "Provider": "vault", - "Config": { - "Address": "http://localhost:8200", - "Token": "...", - "RootPKIPath": "connect-root-2", - "IntermediatePKIPath": "connect-intermediate" - } -} - -$ consul connect ca set-config -config-file=ca_config.json - -... - -[INFO] connect: CA rotated to new root under provider "vault" -``` - -If the PKI backend at `connect-root-2` in this case has a different root certificate (or if it's -unmounted and hasn't been initialized), the rotation process will be triggered, as described above -in the [Root Certificate Rotation](#root-certificate-rotation) section. diff --git a/website/source/docs/connect/ca/consul.html.md b/website/source/docs/connect/ca/consul.html.md new file mode 100644 index 000000000..5d7257e1a --- /dev/null +++ b/website/source/docs/connect/ca/consul.html.md @@ -0,0 +1,129 @@ +--- +layout: "docs" +page_title: "Connect - Certificate Management" +sidebar_current: "docs-connect-ca-consul" +description: |- + Consul ships with a built-in CA system so that Connect can be easily enabled out of the box. The built-in CA generates and stores the root certificate and private key on Consul servers. It can also be configured with a custom certificate and private key if needed. +--- + +# Built-In CA + +Consul ships with a built-in CA system so that Connect can be +easily enabled out of the box. The built-in CA generates and stores the +root certificate and private key on Consul servers. It can also be +configured with a custom certificate and private key if needed. + +If Connect is enabled and no CA provider is specified, the built-in +CA is the default provider used. The provider can be +[updated and rotated](/docs/connect/ca.html#root-certificate-rotation) +at any point to migrate to a new provider. + +-> This page documents the specifics of the built-in CA provider. +Please read the [certificate management overview](/docs/connect/ca.html) +page first to understand how Consul manages certificates with configurable +CA providers. + +## Configuration + +The built-in CA provider has no required configuration. Enabling Connect +alone will configure the built-in CA provider and will automatically generate +a root certificate and private key: + +```hcl +connect { + enabled = true +} +``` + +A number of optional configuration options are supported. The +first key is the value used in API calls while the second key (after the `/`) +is used if configuring in an agent configuration file. + + * `PrivateKey` / `private_key` (`string: ""`) - A PEM-encoded private key + for signing operations. 
This must match the private key used for the root + certificate if it is manually specified. If this is blank, a private key + is automatically generated. + + * `RootCert` / `root_cert` (`string: ""`) - A PEM-encoded root certificate + to use. If this is blank, a root certificate is automatically generated + using the private key specified. If this is specified, the certificate + must be a valid + [SPIFFE SVID signing certificate](https://github.com/spiffe/spiffe/blob/master/standards/X509-SVID.md) + and the URI in the SAN must match the cluster identifier created at + bootstrap with the ".consul" TLD. + +## Specifying a Custom Private Key and Root Certificate + +By default, a root certificate and private key will be automatically +generated during the cluster's bootstrap. It is possible to configure +the Consul CA provider to use a specific private key and root certificate. +This is particularly useful if you have an external PKI system that doesn't +currently integrate with Consul directly. + +To view the current CA configuration, use the [Get CA Configuration endpoint] +(/api/connect/ca.html#get-ca-configuration): + +```bash +$ curl localhost:8500/v1/connect/ca/configuration +{ + "Provider": "consul", + "Config": { + "RotationPeriod": "2160h" + }, + "CreateIndex": 5, + "ModifyIndex": 5 +} +``` + +This is the default Connect CA configuration if nothing is explicitly set when +Connect is enabled - the PrivateKey and RootCert fields have not been set, so those have +been generated (as seen above in the roots list). + +There are two ways to have the Consul CA use a custom private key and root certificate: +either through the `ca_config` section of the [Agent configuration] +(docs/agent/options.html#connect_ca_config) (which can only be used during the cluster's +initial bootstrap) or through the [Update CA Configuration endpoint] +(/api/connect/ca.html#update-ca-configuration). + +Currently consul requires that root certificates are valid [SPIFFE SVID Signing certificates] +(https://github.com/spiffe/spiffe/blob/master/standards/X509-SVID.md) and that the URI encoded +in the SAN is the cluster identifier created at bootstrap with the ".consul" TLD. In this +example, we will set the URI SAN to `spiffe://36cb52cd-4058-f811-0432-6798a240c5d3.consul`. + +In order to use the Update CA Configuration HTTP endpoint, the private key and certificate +must be passed via JSON: + +```bash +$ jq -n --arg key "$(cat root.key)" --arg cert "$(cat root.crt)" ' +{ + "Provider": "consul", + "Config": { + "PrivateKey": $key, + "RootCert": $cert, + "RotationPeriod": "2160h" + } +}' > ca_config.json +``` + +The resulting `ca_config.json` file can then be used to update the active root certificate: + +```bash +$ cat ca_config.json +{ + "Provider": "consul", + "Config": { + "PrivateKey": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEArqiy1c3pbT3cSkjdEM1APALUareU...", + "RootCert": "-----BEGIN CERTIFICATE-----\nMIIDijCCAnKgAwIBAgIJAOFZ66em1qC7MA0GCSqGSIb3...", + "RotationPeriod": "2160h" + } +} + +$ curl --request PUT --data @ca_config.json localhost:8500/v1/connect/ca/configuration + +... + +[INFO] connect: CA rotated to new root under provider "consul" +``` + +The cluster is now using the new private key and root certificate. Updating the CA config +this way also triggered a certificate rotation. 
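The same update can also be applied through the CLI wrapper for this endpoint, `consul connect ca set-config`, which is documented with the other `consul connect ca` subcommands. A rough equivalent of the `curl` call above, reusing the `ca_config.json` file built with `jq`, would be:

```bash
# CLI equivalent of the PUT to /v1/connect/ca/configuration above,
# reusing the ca_config.json file generated with jq.
$ consul connect ca set-config -config-file=ca_config.json

# Inspect the provider and (filtered) configuration that is now active.
$ consul connect ca get-config
```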
diff --git a/website/source/docs/connect/ca/vault.html.md b/website/source/docs/connect/ca/vault.html.md new file mode 100644 index 000000000..43440bc58 --- /dev/null +++ b/website/source/docs/connect/ca/vault.html.md @@ -0,0 +1,88 @@ +--- +layout: "docs" +page_title: "Connect - Certificate Management" +sidebar_current: "docs-connect-ca-vault" +description: |- + Consul can be used with Vault to manage and sign certificates. The Vault CA provider uses the Vault PKI secrets engine to generate and sign certificates. +--- + +# Vault as a Connect CA + +Consul can be used with [Vault](https://www.vaultproject.io) to +manage and sign certificates. +The Vault CA provider uses the +[Vault PKI secrets engine](https://www.vaultproject.io/docs/secrets/pki/index.html) +to generate and sign certificates. + +-> This page documents the specifics of the built-in CA provider. +Please read the [certificate management overview](/docs/connect/ca.html) +page first to understand how Consul manages certificates with configurable +CA providers. + +## Requirements + +Prior to using Vault as a CA provider for Consul, the following requirements +must be met: + + * **Vault 0.10.3 or later.** Consul uses URI SANs in the PKI engine which + were introduced in Vault 0.10.3. Prior versions of Vault are not + compatible with Connect. + +## Configuration + +The Vault CA is enabled by setting the `ca_provider` to `"vault"` and +setting the required configuration values. An example configuration +is shown below: + +```hcl +connect { + enabled = true + ca_provider = "vault" + ca_config { + address = "http://localhost:8200" + token = "..." + root_pki_path = "connect-root" + intermediate_pki_path = "connect-intermediate" + } +} +``` + +The set of configuration options is listed below. The +first key is the value used in API calls while the second key (after the `/`) +is used if configuring in an agent configuration file. + + * `Address` / `address` (`string: `) - The address of the Vault + server. + + * `Token` / `token` (`string: `) - A token for accessing Vault. + This is write-only and will not be exposed when reading the CA configuration. + This token must have proper privileges for the PKI paths configured. + + * `RootPKIPath` / `root_pki_path` (`string: `) - The path to + a PKI secrets engine for the root certificate. If the path doesn't + exist, Consul will attempt to mount and configure this automatically. + + * `IntermediatePKIPath` / `intermediate_pki_path` (`string: `) - + The path to a PKI secrets engine for the generated intermediate certificate. + This certificate will be signed by the configured root PKI path. If this + path doesn't exist, Consul will attempt to mount and configure this + automatically. + +## Root and Intermediate PKI Paths + +The Vault CA provider uses two separately configured +[PKI secrets engines](https://www.vaultproject.io/docs/secrets/pki/index.html) +for managing Connect certificates. + +The `RootPKIPath` is the PKI engine for the root certificate. Consul will +use this root certificate to sign the intermediate certificate. Consul will +never attempt to write or modify any data within the root PKI path. + +The `IntermediatePKIPath` is the PKI engine used for storing the intermediate +signed with the root certificate. The intermediate is used to sign all leaf +certificates and Consul may periodically generate new intermediates for +automatic rotation. Therefore, Consul requires write access to this path. + +If either path does not exist, then Consul will attempt to mount and +initialize it. 
This requires additional privileges by the Vault token in use. +If the paths already exist, Consul will use them as configured. diff --git a/website/source/layouts/docs.erb b/website/source/layouts/docs.erb index 5df44a3ec..c59f79bdc 100644 --- a/website/source/layouts/docs.erb +++ b/website/source/layouts/docs.erb @@ -267,6 +267,14 @@ > Certificate Management + > Native App Integration From 6dfc0d848bf51434b81caf5e2e209cef3fdd2c82 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Mon, 25 Jun 2018 11:12:53 -0700 Subject: [PATCH 535/539] website: correct a few last things in CA docs --- website/source/docs/connect/ca.html.md | 2 +- website/source/docs/connect/ca/consul.html.md | 3 ++- 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/website/source/docs/connect/ca.html.md b/website/source/docs/connect/ca.html.md index 3cc1e9bb9..b255b8ac1 100644 --- a/website/source/docs/connect/ca.html.md +++ b/website/source/docs/connect/ca.html.md @@ -17,7 +17,7 @@ such as when a service needs a new certificate or during CA rotation events. The CA provider abstraction enables Consul to support multiple systems for storing and signing certificates. Consul ships with a [built-in CA](/docs/connect/ca/consul.html) which generates and stores the -root certificate and private key on the Consul servers. Consul also also +root certificate and private key on the Consul servers. Consul also has built-in support for [Vault as a CA](/docs/connect/ca/vault.html). With Vault, the root certificate and private key material remain with the Vault cluster. A future version of diff --git a/website/source/docs/connect/ca/consul.html.md b/website/source/docs/connect/ca/consul.html.md index 5d7257e1a..ea48850a9 100644 --- a/website/source/docs/connect/ca/consul.html.md +++ b/website/source/docs/connect/ca/consul.html.md @@ -50,7 +50,8 @@ is used if configuring in an agent configuration file. must be a valid [SPIFFE SVID signing certificate](https://github.com/spiffe/spiffe/blob/master/standards/X509-SVID.md) and the URI in the SAN must match the cluster identifier created at - bootstrap with the ".consul" TLD. + bootstrap with the ".consul" TLD. The cluster identifier can be found + using the [CA List Roots endpoint](/api/connect/ca.html#list-ca-root-certificates). ## Specifying a Custom Private Key and Root Certificate From 934fa52c9862e9638aa8b6c273231632c50d1bb6 Mon Sep 17 00:00:00 2001 From: Jack Pearkes Date: Mon, 25 Jun 2018 11:23:32 -0700 Subject: [PATCH 536/539] website: getting started next/previous step change --- website/source/intro/getting-started/join.html.md | 3 +-- website/source/intro/getting-started/services.html.md | 4 ++-- 2 files changed, 3 insertions(+), 4 deletions(-) diff --git a/website/source/intro/getting-started/join.html.md b/website/source/intro/getting-started/join.html.md index 01ee74e78..73203ba01 100644 --- a/website/source/intro/getting-started/join.html.md +++ b/website/source/intro/getting-started/join.html.md @@ -12,8 +12,7 @@ description: > # Consul Cluster We've started our first agent and registered and queried a service on that -agent. This showed how easy it is to use Consul but didn't show how this could -be extended to a scalable, production-grade service discovery infrastructure. +agent. Additionally, we've configured Consul Connect to automatically authorize and encrypt connections between services. This showed how easy it is to use Consul but didn't show how this could be extended to a scalable, production-grade service mesh infrastructure. 
In this step, we'll create our first real cluster with multiple members. When a Consul agent is started, it begins without knowledge of any other node: diff --git a/website/source/intro/getting-started/services.html.md b/website/source/intro/getting-started/services.html.md index 7046e2575..000ceaa1a 100644 --- a/website/source/intro/getting-started/services.html.md +++ b/website/source/intro/getting-started/services.html.md @@ -163,5 +163,5 @@ dynamically. ## Next Steps We've now configured a single agent and registered a service. This is good -progress, but let's explore the full value of Consul by [setting up our -first cluster](/intro/getting-started/join.html)! +progress, but let's explore the full value of Consul by learning how to +[automatically encrypt and authorize service-to service communication](/intro/getting-started/connect.html) with Consul Connect. From 3db81ca030bf0eb9fbc0ef1f195c30f6d36a1a32 Mon Sep 17 00:00:00 2001 From: Jack Pearkes Date: Mon, 25 Jun 2018 11:34:26 -0700 Subject: [PATCH 537/539] website: minor example and copy fix for multi-dc --- website/source/discovery.html.erb | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/website/source/discovery.html.erb b/website/source/discovery.html.erb index 5afc8442a..0facebd1c 100644 --- a/website/source/discovery.html.erb +++ b/website/source/discovery.html.erb @@ -155,7 +155,7 @@ web-frontend.service.consul. 0 IN A 10.0.1.109

    Multi Datacenter

    -

    Consul supports to multiple datacenters out of the box with no complicated configuration. Look up services in other datacenters or keep the request local. Advanced features like Prepared Queries enable automatic failover to other datacenters.

    +

    Consul supports multiple datacenters out of the box with no complicated configuration. Look up services in other datacenters or keep the request local. Advanced features like Prepared Queries enable automatic failover to other datacenters.

    Learn more

    @@ -168,6 +168,19 @@ web-frontend.service.consul. 0 IN A 10.0.1.109$ curl http://localhost:8500/v1/catalog/datacenters ["dc1", "dc2"] $ curl http://localhost:8500/v1/catalog/nodes?dc=dc2 +[ + { + "ID": "7081dcdf-fdc0-0432-f2e8-a357d36084e1", + "Node": "10-0-1-109", + "Address": "10.0.1.109", + "Datacenter": "dc2", + "TaggedAddresses": { + "lan": "10.0.1.109", + "wan": "10.0.1.109" + }, + "CreateIndex": 112, + "ModifyIndex": 125 + }, ...
    From 3127fccec4126a2b377e2df7030a0fd4f61211c0 Mon Sep 17 00:00:00 2001 From: Jack Pearkes Date: Mon, 25 Jun 2018 11:36:36 -0700 Subject: [PATCH 538/539] website: whitespace fix --- website/source/discovery.html.erb | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/website/source/discovery.html.erb b/website/source/discovery.html.erb index 0facebd1c..f8023ac8c 100644 --- a/website/source/discovery.html.erb +++ b/website/source/discovery.html.erb @@ -166,8 +166,7 @@ web-frontend.service.consul. 0 IN A 10.0.1.109
    $ curl http://localhost:8500/v1/catalog/datacenters -["dc1", "dc2"] -$ curl http://localhost:8500/v1/catalog/nodes?dc=dc2 +["dc1", "dc2"]$ curl http://localhost:8500/v1/catalog/nodes?dc=dc2 [ { "ID": "7081dcdf-fdc0-0432-f2e8-a357d36084e1", From d3cec142d4d4ec7fb6484b9535a19421081065f3 Mon Sep 17 00:00:00 2001 From: Jack Pearkes Date: Mon, 25 Jun 2018 12:05:29 -0700 Subject: [PATCH 539/539] website: fix an assortment of broken links --- website/source/api/connect/ca.html.md | 4 ++-- website/source/docs/commands/connect/ca.html.md.erb | 4 ++-- website/source/docs/connect/ca.html.md | 2 +- website/source/docs/connect/ca/consul.html.md | 2 +- website/source/docs/connect/intentions.html.md | 2 +- 5 files changed, 7 insertions(+), 7 deletions(-) diff --git a/website/source/api/connect/ca.html.md b/website/source/api/connect/ca.html.md index 00fdda13c..d8bf80fed 100644 --- a/website/source/api/connect/ca.html.md +++ b/website/source/api/connect/ca.html.md @@ -14,7 +14,7 @@ Certificate Authority mechanism. ## List CA Root Certificates -This endpoint returns the current list of trusted CA root certificates in +This endpoint returns the current list of trusted CA root certificates in the cluster. | Method | Path | Produces | @@ -104,7 +104,7 @@ $ curl \ This endpoint updates the configuration for the CA. If this results in a new root certificate being used, the [Root Rotation] -(/docs/guides/connect-ca.html#rotation) process will be triggered. +(/docs/connect/ca.html#root-certificate-rotation) process will be triggered. | Method | Path | Produces | | ------ | ---------------------------- | -------------------------- | diff --git a/website/source/docs/commands/connect/ca.html.md.erb b/website/source/docs/commands/connect/ca.html.md.erb index e279d74fd..08874de05 100644 --- a/website/source/docs/commands/connect/ca.html.md.erb +++ b/website/source/docs/commands/connect/ca.html.md.erb @@ -13,7 +13,7 @@ Command: `consul connect ca` The CA connect command is used to interact with Consul Connect's Certificate Authority subsystem. The command can be used to view or modify the current CA configuration. See the -[Connect CA Guide](/docs/guides/connect-ca.html) for more information. +[Connect CA documentation](/docs/connect/ca.html) for more information. ```text Usage: consul connect ca [options] [args] @@ -64,7 +64,7 @@ The output looks like this: ## set-config Modifies the current CA configuration. If this results in a new root certificate -being used, the [Root Rotation](/docs/guides/connect-ca.html#rotation) process +being used, the [Root Rotation](/docs/connect/ca.html#root-certificate-rotation) process will be triggered. Usage: `consul connect ca set-config [options]` diff --git a/website/source/docs/connect/ca.html.md b/website/source/docs/connect/ca.html.md index b255b8ac1..74c412434 100644 --- a/website/source/docs/connect/ca.html.md +++ b/website/source/docs/connect/ca.html.md @@ -33,7 +33,7 @@ will generate the initial root certificates and setup the internal Consul server state. For the initial bootstrap, the CA provider can be configured through the -[Agent configuration](docs/agent/options.html#connect_ca_config). After +[Agent configuration](/docs/agent/options.html#connect_ca_config). After initialization, the CA can only be updated through the [Update CA Configuration API endpoint](/api/connect/ca.html#update-ca-configuration). 
If a CA is already initialized, any changes to the CA configuration in the diff --git a/website/source/docs/connect/ca/consul.html.md b/website/source/docs/connect/ca/consul.html.md index ea48850a9..e95677681 100644 --- a/website/source/docs/connect/ca/consul.html.md +++ b/website/source/docs/connect/ca/consul.html.md @@ -82,7 +82,7 @@ been generated (as seen above in the roots list). There are two ways to have the Consul CA use a custom private key and root certificate: either through the `ca_config` section of the [Agent configuration] -(docs/agent/options.html#connect_ca_config) (which can only be used during the cluster's +(/docs/agent/options.html#connect_ca_config) (which can only be used during the cluster's initial bootstrap) or through the [Update CA Configuration endpoint] (/api/connect/ca.html#update-ca-configuration). diff --git a/website/source/docs/connect/intentions.html.md b/website/source/docs/connect/intentions.html.md index 589b3329f..ed950e8f3 100644 --- a/website/source/docs/connect/intentions.html.md +++ b/website/source/docs/connect/intentions.html.md @@ -117,7 +117,7 @@ Consul supporting namespaces. ## Intention Management Permissions -Intention management can be protected by [ACLs](/docs/guides/acls.html). +Intention management can be protected by [ACLs](/docs/guides/acl.html). Permissions for intentions are _destination-oriented_, meaning the ACLs for managing intentions are looked up based on the destination value of the intention, not the source.
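To make the destination-oriented lookup concrete, here is a brief sketch; the `web` and `db` service names are placeholders used only for illustration:

```bash
# Deny connections from "web" to "db". With ACLs enabled, this operation
# is authorized against the *destination* ("db"), so the token needs
# write-level permissions for "db", not for "web".
$ consul intention create -deny web db

# Check whether a connection from "web" to "db" would be authorized.
$ consul intention check web db
```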