Merge remote-tracking branch 'origin/master' into service_metadata
commit 824b72cf90
@@ -8,6 +8,6 @@ branches:
  - master

script:
  - make test
  - GOTEST_FLAGS="-p 2 -parallel 2" make test

sudo: false
CHANGELOG.md
@@ -1,8 +1,19 @@
## (UNRELEASED)

## 1.0.6 (February 9, 2018)

BUG FIXES:

* agent: Fixed a panic when using the Azure provider for retry-join. [[GH-3875](https://github.com/hashicorp/consul/issues/3875)]
* agent: Fixed a panic when querying Consul's DNS interface over TCP. [[GH-3877](https://github.com/hashicorp/consul/issues/3877)]

## 1.0.5 (February 7, 2018)

NOTE ON SKIPPED RELEASE 1.0.4:

We found [[GH-3867](https://github.com/hashicorp/consul/issues/3867)] after cutting the 1.0.4 release and pushing the 1.0.4 release tag, so we decided to scuttle that release and push 1.0.5 instead with a fix for that issue.

## 1.0.4 (February 6, 2018)

SECURITY:

* dns: Updated DNS vendor library to pick up bug fix in the DNS server where an open idle connection blocks the accept loop. [[GH-3859](https://github.com/hashicorp/consul/issues/3859)]
@@ -20,6 +31,7 @@ BUG FIXES:

* agent: (Consul Enterprise) Fixed an issue where the snapshot agent's HTTP client config was being ignored in favor of the HTTP command-line flags.
* agent: Fixed an issue where health checks added to services with tags would cause extra periodic writes to the Consul servers, even if nothing had changed. This could cause extra churn on downstream applications like consul-template or Fabio. [[GH-3845](https://github.com/hashicorp/consul/issues/3845)]
* agent: Fixed several areas where reading from catalog, health, or agent HTTP endpoints could make unintended modifications to Consul's state in a way that would cause unnecessary anti-entropy syncs back to the Consul servers. This could cause extra churn on downstream applications like consul-template or Fabio. [[GH-3867](https://github.com/hashicorp/consul/issues/3867)]
* agent: Fixed an issue where Serf events for failed Consul servers weren't being proactively processed by the RPC router. This would prevent Consul from proactively choosing a new server, and would instead wait for a failed RPC request before choosing a new server. This exposed clients to a failed request, when often the proactive switching would avoid that. [[GH-3864](https://github.com/hashicorp/consul/issues/3864)]

## 1.0.3 (January 24, 2018)
@@ -1,7 +1,7 @@
SHELL = bash
GOTOOLS = \
	github.com/elazarl/go-bindata-assetfs/... \
	github.com/jteeuwen/go-bindata/... \
	github.com/hashicorp/go-bindata/... \
	github.com/magiconair/vendorfmt/cmd/vendorfmt \
	github.com/mitchellh/gox \
	golang.org/x/tools/cmd/cover \
@@ -0,0 +1,76 @@
# Consul Internals Guide

This guide is intended to help folks who want to understand more about how Consul works from a code perspective, or who are thinking about contributing to Consul. For a high level overview of Consul's design, please see the [Consul Architecture Guide](https://www.consul.io/docs/internals/architecture.html) as a starting point.

## Architecture Overview

Consul is designed around the concept of a [Consul Agent](https://www.consul.io/docs/agent/basics.html). The agent is deployed as a single Go binary and runs on every node in a cluster.

A small subset of agents, usually 3 to 7, run in server mode and participate in the [Raft Consensus Protocol](https://www.consul.io/docs/internals/consensus.html). The Consul servers hold a consistent view of the state of the cluster, including the [service catalog](https://www.consul.io/api/catalog.html) and the [health state of services and nodes](https://www.consul.io/api/health.html), as well as other items like Consul's [key/value store contents](https://www.consul.io/api/kv.html). An agent in server mode provides a superset of the client capabilities described below.

All the remaining agents in a cluster run in client mode. Applications on client nodes use their local agent in client mode to [register services](https://www.consul.io/api/agent.html) and to discover other services or interact with the key/value store. For discovery and key/value queries, the agent sends RPC requests internally to one of the Consul servers for the information. None of the key/value data, for example, is stored on the client agents; it is always fetched on the fly from a Consul server.

Both client and server mode agents participate in a [Gossip Protocol](https://www.consul.io/docs/internals/gossip.html) which provides two important mechanisms. First, it allows agents to learn about all the other agents in the cluster just by joining initially with a single existing member of the cluster. This allows clients to discover new Consul servers. Second, the gossip protocol provides a distributed failure detector, whereby the agents in the cluster randomly probe each other at regular intervals. Because of this failure detector, Consul can run health checks locally on each agent and just send edge-triggered updates when the state of a health check changes, confident that if the agent dies altogether then the cluster will detect that. This makes Consul's health checking design very scalable compared to centralized systems with a central polling type of design.

There are many other aspects of Consul that are well-covered in Consul's [Internals Guides](https://www.consul.io/docs/internals/index.html).

## Source Code Layout

### Shared Components

The components in this section are shared between Consul agents in client and server modes.
| Directory | Contents |
| --------- | -------- |
| [command/agent](https://github.com/hashicorp/consul/tree/master/command/agent) | This contains the actual CLI command implementation for the `consul agent` command, which mostly just invokes an agent object from the `agent` package. |
| [agent](https://github.com/hashicorp/consul/tree/master/agent) | This is where the agent object is defined, and the top level `agent` package has all of the functionality that's common to both client and server agents. This includes things like service registration, the HTTP and DNS endpoints, watches, and top-level glue for health checks. |
| [agent/config](https://github.com/hashicorp/consul/tree/master/agent/config) | This has all the user-facing [configuration](https://www.consul.io/docs/agent/options.html) processing code, as well as the internal configuration structure that's used by the agent. |
| [agent/checks](https://github.com/hashicorp/consul/tree/master/agent/checks) | This has implementations for the different [health check types](https://www.consul.io/docs/agent/checks.html). |
| [agent/ae](https://github.com/hashicorp/consul/tree/master/agent/ae), [agent/local](https://github.com/hashicorp/consul/tree/master/agent/local) | These are used together to power the agent's [Anti-Entropy Sync Back](https://www.consul.io/docs/internals/anti-entropy.html) process to the Consul servers. |
| [agent/router](https://github.com/hashicorp/consul/tree/master/agent/router), [agent/pool](https://github.com/hashicorp/consul/tree/master/agent/pool) | These are used for routing RPC queries to Consul servers and for connection pooling. |
| [agent/structs](https://github.com/hashicorp/consul/tree/master/agent/structs) | This has definitions of all the internal RPC protocol request and response structures. |
### Server Components

The components in this section are only used by Consul servers.

| Directory | Contents |
| --------- | -------- |
| [agent/consul](https://github.com/hashicorp/consul/tree/master/agent/consul) | This is where the Consul server object is defined, and the top-level `consul` package has all of the functionality that's used by server agents. This includes things like the internal RPC endpoints. |
| [agent/consul/fsm](https://github.com/hashicorp/consul/tree/master/agent/consul/fsm), [agent/consul/state](https://github.com/hashicorp/consul/tree/master/agent/consul/state) | These components make up Consul's finite state machine (updated by the Raft consensus algorithm), which is backed by the state store (based on immutable radix trees). All updates of Consul's consistent state are handled by the finite state machine, and all read queries to the Consul servers are serviced by the state store's data structures. |
| [agent/consul/autopilot](https://github.com/hashicorp/consul/tree/master/agent/consul/autopilot) | This contains a package of functions that provide Consul's [Autopilot](https://www.consul.io/docs/guides/autopilot.html) features. |
### Other Components

There are several other top-level packages used internally by Consul as well as externally by other applications.

| Directory | Contents |
| --------- | -------- |
| [acl](https://github.com/hashicorp/consul/tree/master/acl) | This supports the underlying policy engine for Consul's [ACL](https://www.consul.io/docs/guides/acl.html) system. |
| [api](https://github.com/hashicorp/consul/tree/master/api) | This `api` package provides an official Go API client for Consul, which is also used by Consul's [CLI](https://www.consul.io/docs/commands/index.html) commands to communicate with the local Consul agent. |
| [command](https://github.com/hashicorp/consul/tree/master/command) | This contains a sub-package for each of Consul's [CLI](https://www.consul.io/docs/commands/index.html) command implementations. |
| [snapshot](https://github.com/hashicorp/consul/tree/master/snapshot) | This has implementation details for Consul's [snapshot archives](https://www.consul.io/api/snapshot.html). |
| [watch](https://github.com/hashicorp/consul/tree/master/watch) | This has implementation details for Consul's [watches](https://www.consul.io/docs/agent/watches.html), used both internally to Consul and by the [watch CLI command](https://www.consul.io/docs/commands/watch.html). |
| [website](https://github.com/hashicorp/consul/tree/master/website) | This has the full source code for [consul.io](https://www.consul.io/). Pull requests can update the source code and Consul's documentation all together. |
## FAQ

This section addresses some frequently asked questions about Consul's architecture.

### How does eventually-consistent gossip relate to the Raft consensus protocol?

When you query Consul for information about a service, such as via the [DNS interface](https://www.consul.io/docs/agent/dns.html), the agent will always make an internal RPC request to a Consul server that will query the consistent state store. Even though an agent might learn that another agent is down via gossip, that won't be reflected in service discovery until the current Raft leader server perceives that through gossip and updates the catalog using Raft. You can see an example of where these layers are plumbed together here - https://github.com/hashicorp/consul/blob/v1.0.5/agent/consul/leader.go#L559-L602.
### Why does a blocking query sometimes return with identical results?

Consul's [blocking queries](https://www.consul.io/api/index.html#blocking-queries) make a best-effort attempt to wait for new information, but they may return the same results as the initial query under some circumstances. First, queries are limited to 10 minutes max, so if they time out they will return. Second, due to Consul's prefix-based internal immutable radix tree indexing, there may be modifications to higher-level nodes in the radix tree that cause spurious wakeups. In particular, waiting on things that do not exist is not very efficient, but not very expensive for Consul to serve, so we opted to keep the code complexity low and not try to optimize for that case. You can see the common handler that implements the blocking query logic here - https://github.com/hashicorp/consul/blob/v1.0.5/agent/consul/rpc.go#L361-L439. For more on the immutable radix tree implementation, see https://github.com/hashicorp/go-immutable-radix/ and https://github.com/hashicorp/go-memdb, and the general support for "watches".
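The loop below is a minimal sketch (not part of this change, and `redis` is just a placeholder service name) of how a consumer typically drives a blocking query with the official Go `api` client: re-issue the query with the last observed index and treat an unchanged index as one of the spurious wakeups described above.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	var lastIndex uint64
	for {
		// Block for up to 5 minutes waiting for the index behind this query
		// to move past lastIndex. The call may still return early with
		// identical results (a timeout or a radix-tree spurious wakeup).
		entries, meta, err := client.Health().Service("redis", "", false, &api.QueryOptions{
			WaitIndex: lastIndex,
			WaitTime:  5 * time.Minute,
		})
		if err != nil {
			log.Fatal(err)
		}
		if meta.LastIndex == lastIndex {
			continue // nothing actually changed; just re-arm the query
		}
		lastIndex = meta.LastIndex
		fmt.Printf("service view changed: %d entries at index %d\n", len(entries), lastIndex)
	}
}
```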
### Do the client agents store any key/value entries?

No. These are always fetched via an internal RPC request to a Consul server. The agent doesn't do any caching, and if you want to be able to fetch these values even if there's no cluster leader, then you can use a more relaxed [consistency mode](https://www.consul.io/api/index.html#consistency-modes). You can see an example where the `/v1/kv/<key>` HTTP endpoint on the agent makes an internal RPC call here - https://github.com/hashicorp/consul/blob/v1.0.5/agent/kvs_endpoint.go#L56-L90.
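As a hedged illustration of the same flow from the client side, here is a sketch of reading a key through the local agent with the Go `api` client (the key name is a placeholder). `AllowStale` is the relaxed consistency mode mentioned above, which lets any server answer from its own replica even without a leader.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// The agent forwards this as an internal RPC to a Consul server; nothing
	// is cached locally. Drop AllowStale for the default consistent read.
	pair, _, err := client.KV().Get("web/config/port", &api.QueryOptions{AllowStale: true})
	if err != nil {
		log.Fatal(err)
	}
	if pair == nil {
		fmt.Println("key not found")
		return
	}
	fmt.Printf("%s = %s\n", pair.Key, pair.Value)
}
```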
### I don't want to run a Consul agent on every node, can I just run servers with a load balancer in front?

We strongly recommend running the Consul agent on each node in a cluster. Even the key/value store benefits from having agents on each node. For example, when you lock a key it's done through a [session](https://www.consul.io/docs/internals/sessions.html), which has a lifetime that's by default tied to the health of the agent as determined by Consul's gossip-based distributed failure detector. If the agent dies, the session will be released automatically, allowing some other process to quickly see that and obtain the lock without having to wait for an open-ended TTL to expire. If you are using Consul's service discovery features, the local agent runs the health checks for each service registered on that node and only needs to send edge-triggered updates to the Consul servers (because gossip will determine if the agent itself dies). Most attempts to avoid running an agent on each node end up re-solving problems that Consul's design already solves when the agent is deployed as intended.
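A minimal sketch of that lock lifecycle with the Go `api` client follows (the key name is a placeholder). `LockKey` creates the session tied to the local agent's health, so losing the agent releases the lock as described above.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// LockKey builds a lock backed by a session that is, by default, tied to
	// this agent's health via the gossip failure detector.
	lock, err := client.LockKey("service/my-app/leader")
	if err != nil {
		log.Fatal(err)
	}

	lostCh, err := lock.Lock(nil) // blocks until the lock is acquired
	if err != nil {
		log.Fatal(err)
	}
	defer lock.Unlock()

	log.Println("holding the lock; doing leader-only work")
	<-lostCh // closed if the session is invalidated and the lock is lost
}
```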
For cases where you really cannot run an agent alongside a service, such as for monitoring an [external service](https://www.consul.io/docs/guides/external.html), there's a companion project called the [Consul External Service Monitor](https://github.com/hashicorp/consul-esm) that may help.
@@ -146,9 +146,11 @@ func (s *HTTPServer) AgentServices(resp http.ResponseWriter, req *http.Request)
	}

	// Use empty list instead of nil
	for _, s := range services {
	for id, s := range services {
		if s.Tags == nil {
			s.Tags = make([]string, 0)
			clone := *s
			clone.Tags = make([]string, 0)
			services[id] = &clone
		}
	}
@@ -170,10 +172,11 @@ func (s *HTTPServer) AgentChecks(resp http.ResponseWriter, req *http.Request) (i
	}

	// Use empty list instead of nil
	// checks needs to be a deep copy for this not to be racy
	for _, c := range checks {
	for id, c := range checks {
		if c.ServiceTags == nil {
			c.ServiceTags = make([]string, 0)
			clone := *c
			clone.ServiceTags = make([]string, 0)
			checks[id] = &clone
		}
	}
@@ -547,11 +547,26 @@ func TestAgent_RemoveServiceRemovesAllChecks(t *testing.T) {
	}
}

// TestAgent_ServiceWithTagChurn is designed to detect a class of issues where
// we would have unnecessary catalog churn for services with tags. See issues
// #3259, #3642, and #3845.
func TestAgent_ServiceWithTagChurn(t *testing.T) {
// TestAgent_IndexChurn is designed to detect a class of issues where
// we would have unnecessary catalog churn from anti-entropy. See issues
// #3259, #3642, #3845, and #3866.
func TestAgent_IndexChurn(t *testing.T) {
	t.Parallel()

	t.Run("no tags", func(t *testing.T) {
		verifyIndexChurn(t, nil)
	})

	t.Run("with tags", func(t *testing.T) {
		verifyIndexChurn(t, []string{"foo", "bar"})
	})
}

// verifyIndexChurn registers some things and runs anti-entropy a bunch of times
// in a row to make sure there are no index bumps.
func verifyIndexChurn(t *testing.T, tags []string) {
	t.Helper()

	a := NewTestAgent(t.Name(), "")
	defer a.Shutdown()
@@ -559,7 +574,7 @@ func TestAgent_ServiceWithTagChurn(t *testing.T) {
		ID:      "redis",
		Service: "redis",
		Port:    8000,
		Tags:    []string{"has", "tags"},
		Tags:    tags,
	}
	if err := a.AddService(svc, nil, true, ""); err != nil {
		t.Fatalf("err: %v", err)
@@ -567,6 +582,7 @@ func TestAgent_ServiceWithTagChurn(t *testing.T) {

	chk := &structs.HealthCheck{
		CheckID:   "redis-check",
		Name:      "Service-level check",
		ServiceID: "redis",
		Status:    api.HealthCritical,
	}
@@ -577,18 +593,34 @@ func TestAgent_ServiceWithTagChurn(t *testing.T) {
		t.Fatalf("err: %v", err)
	}

	chk = &structs.HealthCheck{
		CheckID: "node-check",
		Name:    "Node-level check",
		Status:  api.HealthCritical,
	}
	chkt = &structs.CheckType{
		TTL: time.Hour,
	}
	if err := a.AddCheck(chk, chkt, true, ""); err != nil {
		t.Fatalf("err: %v", err)
	}

	if err := a.sync.State.SyncFull(); err != nil {
		t.Fatalf("err: %v", err)
	}

	args := &structs.ServiceSpecificRequest{
		Datacenter:  "dc1",
		ServiceName: "redis",
	}
	var before structs.IndexedHealthChecks
	if err := a.RPC("Health.ServiceChecks", args, &before); err != nil {
	var before structs.IndexedCheckServiceNodes
	if err := a.RPC("Health.ServiceNodes", args, &before); err != nil {
		t.Fatalf("err: %v", err)
	}
	if got, want := len(before.HealthChecks), 1; got != want {
	if got, want := len(before.Nodes), 1; got != want {
		t.Fatalf("got %d want %d", got, want)
	}
	if got, want := len(before.Nodes[0].Checks), 3; /* incl. serfHealth */ got != want {
		t.Fatalf("got %d want %d", got, want)
	}
@@ -598,8 +630,8 @@ func TestAgent_ServiceWithTagChurn(t *testing.T) {
		}
	}

	var after structs.IndexedHealthChecks
	if err := a.RPC("Health.ServiceChecks", args, &after); err != nil {
	var after structs.IndexedCheckServiceNodes
	if err := a.RPC("Health.ServiceNodes", args, &after); err != nil {
		t.Fatalf("err: %v", err)
	}
	verify.Values(t, "", after, before)
@@ -201,9 +201,11 @@ func (s *HTTPServer) CatalogServiceNodes(resp http.ResponseWriter, req *http.Req
	if out.ServiceNodes == nil {
		out.ServiceNodes = make(structs.ServiceNodes, 0)
	}
	for _, s := range out.ServiceNodes {
	for i, s := range out.ServiceNodes {
		if s.ServiceTags == nil {
			s.ServiceTags = make([]string, 0)
			clone := *s
			clone.ServiceTags = make([]string, 0)
			out.ServiceNodes[i] = &clone
		}
	}
	metrics.IncrCounterWithLabels([]string{"client", "api", "success", "catalog_service_nodes"}, 1,
@@ -244,6 +246,15 @@ func (s *HTTPServer) CatalogNodeServices(resp http.ResponseWriter, req *http.Req
		s.agent.TranslateAddresses(args.Datacenter, out.NodeServices.Node)
	}

	// TODO: The NodeServices object in IndexedNodeServices is a pointer to
	// something that's created for each request by the state store way down
	// in https://github.com/hashicorp/consul/blob/v1.0.4/agent/consul/state/catalog.go#L953-L963.
	// Since this isn't a pointer to a real state store object, it's safe to
	// modify out.NodeServices.Services in the loop below without making a
	// copy here. Same for the Tags in each service entry, since that was
	// created by .ToNodeService() which made a copy. This is safe as-is but
	// this whole business is tricky and subtle. See #3867 for more context.

	// Use empty list instead of nil
	if out.NodeServices != nil {
		for _, s := range out.NodeServices.Services {
@@ -130,6 +130,38 @@ func TestEncodeKVasRFC1464(t *testing.T) {
	}
}

func TestDNS_Over_TCP(t *testing.T) {
	t.Parallel()
	a := NewTestAgent(t.Name(), "")
	defer a.Shutdown()

	// Register node
	args := &structs.RegisterRequest{
		Datacenter: "dc1",
		Node:       "Foo",
		Address:    "127.0.0.1",
	}

	var out struct{}
	if err := a.RPC("Catalog.Register", args, &out); err != nil {
		t.Fatalf("err: %v", err)
	}

	m := new(dns.Msg)
	m.SetQuestion("foo.node.dc1.consul.", dns.TypeANY)

	c := new(dns.Client)
	c.Net = "tcp"
	in, _, err := c.Exchange(m, a.DNSAddr())
	if err != nil {
		t.Fatalf("err: %v", err)
	}

	if len(in.Answer) != 1 {
		t.Fatalf("empty lookup: %#v", in)
	}
}

func TestDNS_NodeLookup(t *testing.T) {
	t.Parallel()
	a := NewTestAgent(t.Name(), "")
@@ -42,9 +42,11 @@ func (s *HTTPServer) HealthChecksInState(resp http.ResponseWriter, req *http.Req
	if out.HealthChecks == nil {
		out.HealthChecks = make(structs.HealthChecks, 0)
	}
	for _, c := range out.HealthChecks {
	for i, c := range out.HealthChecks {
		if c.ServiceTags == nil {
			c.ServiceTags = make([]string, 0)
			clone := *c
			clone.ServiceTags = make([]string, 0)
			out.HealthChecks[i] = &clone
		}
	}
	return out.HealthChecks, nil
@@ -80,9 +82,11 @@ func (s *HTTPServer) HealthNodeChecks(resp http.ResponseWriter, req *http.Reques
	if out.HealthChecks == nil {
		out.HealthChecks = make(structs.HealthChecks, 0)
	}
	for _, c := range out.HealthChecks {
	for i, c := range out.HealthChecks {
		if c.ServiceTags == nil {
			c.ServiceTags = make([]string, 0)
			clone := *c
			clone.ServiceTags = make([]string, 0)
			out.HealthChecks[i] = &clone
		}
	}
	return out.HealthChecks, nil
@@ -120,9 +124,11 @@ func (s *HTTPServer) HealthServiceChecks(resp http.ResponseWriter, req *http.Req
	if out.HealthChecks == nil {
		out.HealthChecks = make(structs.HealthChecks, 0)
	}
	for _, c := range out.HealthChecks {
	for i, c := range out.HealthChecks {
		if c.ServiceTags == nil {
			c.ServiceTags = make([]string, 0)
			clone := *c
			clone.ServiceTags = make([]string, 0)
			out.HealthChecks[i] = &clone
		}
	}
	return out.HealthChecks, nil
@@ -194,19 +200,20 @@ func (s *HTTPServer) HealthServiceNodes(resp http.ResponseWriter, req *http.Requ
		out.Nodes = make(structs.CheckServiceNodes, 0)
	}
	for i := range out.Nodes {
		// TODO (slackpad) It's lame that this isn't a slice of pointers
		// but it's not a well-scoped change to fix this. We should
		// change this at the next opportunity.
		if out.Nodes[i].Checks == nil {
			out.Nodes[i].Checks = make(structs.HealthChecks, 0)
		}
		for _, c := range out.Nodes[i].Checks {
		for j, c := range out.Nodes[i].Checks {
			if c.ServiceTags == nil {
				c.ServiceTags = make([]string, 0)
				clone := *c
				clone.ServiceTags = make([]string, 0)
				out.Nodes[i].Checks[j] = &clone
			}
		}
		if out.Nodes[i].Service != nil && out.Nodes[i].Service.Tags == nil {
			out.Nodes[i].Service.Tags = make([]string, 0)
			clone := *out.Nodes[i].Service
			clone.Tags = make([]string, 0)
			out.Nodes[i].Service = &clone
		}
	}
	return out.Nodes, nil
@@ -610,6 +610,8 @@ type IndexedServiceNodes struct {
}

type IndexedNodeServices struct {
	// TODO: This should not be a pointer, see comments in
	// agent/catalog_endpoint.go.
	NodeServices *NodeServices
	QueryMeta
}
@@ -98,6 +98,8 @@ type AgentServiceCheck struct {
	Status        string `json:",omitempty"`
	Notes         string `json:",omitempty"`
	TLSSkipVerify bool   `json:",omitempty"`
	GRPC          string `json:",omitempty"`
	GRPCUseTLS    bool   `json:",omitempty"`

	// In Consul 0.7 and later, checks that are associated with a service
	// may also contain this optional DeregisterCriticalServiceAfter field,
@@ -82,7 +82,7 @@ func decorate(s string) string {
}

func Run(t Failer, f func(r *R)) {
	run(TwoSeconds(), t, f)
	run(DefaultFailer(), t, f)
}

func RunWith(r Retryer, t Failer, f func(r *R)) {
@@ -133,6 +133,11 @@ func run(r Retryer, t Failer, f func(r *R)) {
	}
}

// DefaultFailer provides default retry.Run() behavior for unit tests.
func DefaultFailer() *Timer {
	return &Timer{Timeout: 7 * time.Second, Wait: 25 * time.Millisecond}
}

// TwoSeconds repeats an operation for two seconds and waits 25ms in between.
func TwoSeconds() *Timer {
	return &Timer{Timeout: 2 * time.Second, Wait: 25 * time.Millisecond}
@@ -22,16 +22,20 @@ func (p *Provider) Help() string {
    subscription_id:   The id of the subscription
    secret_access_key: The authentication credential

    Use these configuration parameters when using network interfaces:
    Use these configuration parameters when using tags:

    tag_name:  The name of the tag to filter on
    tag_value: The value of the tag to filter on

    Use these configuration parameters when using vm scale sets:
    Use these configuration parameters when using Virtual Machine Scale Sets:

    resource_group: The name of the resource group to filter on
    vm_scale_set:   The name of the virtual machine scale set to filter on

    When using tags the only permission needed is the 'ListAll' method for 'NetworkInterfaces'.
    When using vm scale sets the only Role Action needed is "Microsoft.Compute/virtualMachineScaleSets/*/read".
    When using tags the only permission needed is the 'ListAll' method for
    'NetworkInterfaces'. When using Virtual Machine Scale Sets the only Role
    Action needed is 'Microsoft.Compute/virtualMachineScaleSets/*/read'.

    It is recommended you make a dedicated key used only for auto-joining.
`
}
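For context on how these parameters are consumed, the sketch below shows roughly how a caller hands go-discover a single key=value config string. This is an illustration with placeholder values, assuming the library's documented `Discover.Addrs` entry point and its default provider set; a real configuration also needs the full set of credential parameters listed in the help text.

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/hashicorp/go-discover"
)

func main() {
	// Tag-based filtering, mirroring the help text above. Values are
	// placeholders; subscription_id, secret_access_key, etc. must be real.
	cfg := "provider=azure tag_name=consul tag_value=server " +
		"subscription_id=... secret_access_key=..."

	l := log.New(os.Stderr, "", log.LstdFlags)
	d := discover.Discover{}
	addrs, err := d.Addrs(cfg, l)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("join addresses:", addrs)
}
```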
@@ -111,7 +115,11 @@ func fetchAddrsWithTags(tagName string, tagValue string, vmnet network.Interface
			continue
		}
		tv := (*v.Tags)[tagName] // *string
		if tv == nil || *tv != tagValue {
		if tv == nil {
			l.Printf("[DEBUG] discover-azure: Interface %s did not have tag: %s", id, tagName)
			continue
		}
		if *tv != tagValue {
			l.Printf("[DEBUG] discover-azure: Interface %s tag value was: %s which did not match: %s", id, *tv, tagValue)
			continue
		}
@@ -495,6 +495,7 @@ func (srv *Server) serveTCP(l net.Listener) error {
			rw.Close()
			return
		}
		srv.inFlight.Add(1)
		srv.serve(rw.RemoteAddr(), handler, m, nil, nil, rw)
	}()
@@ -33,7 +33,7 @@
{"path":"github.com/hashicorp/go-discover","checksumSHA1":"yG6DzmNiB7kOdLcgvI1v9GacoSA=","revision":"4e49190abe2f3801a8653d7745a14bc445381615","revisionTime":"2018-01-20T00:19:59Z"},
{"path":"github.com/hashicorp/go-discover/provider/aliyun","checksumSHA1":"ZmU/47XUGUQpFP6E8T6Tl8QKszI=","revision":"c98e36ab72ce62b7d8fbc7f7e76f9a60e163cb45","revisionTime":"2017-10-30T10:26:55Z","tree":true},
{"path":"github.com/hashicorp/go-discover/provider/aws","checksumSHA1":"lyPRg8aZKgGiNkMILk/VKwOqMy4=","revision":"c98e36ab72ce62b7d8fbc7f7e76f9a60e163cb45","revisionTime":"2017-10-30T10:26:55Z","tree":true},
{"path":"github.com/hashicorp/go-discover/provider/azure","checksumSHA1":"+5VD2xEOEvTZhWn2Uk009wD6XvY=","revision":"4e49190abe2f3801a8653d7745a14bc445381615","revisionTime":"2018-01-20T00:19:59Z","tree":true},
{"path":"github.com/hashicorp/go-discover/provider/azure","checksumSHA1":"xgO+vNiKIhQlH8cPbkX9TCZXBKQ=","revision":"8f2af0ac44208783caab4dd65462ffb965229c60","revisionTime":"2018-02-08T17:16:50Z","tree":true},
{"path":"github.com/hashicorp/go-discover/provider/digitalocean","checksumSHA1":"qoy/euk2dwrONYMUsaAPznHHpxQ=","revision":"c98e36ab72ce62b7d8fbc7f7e76f9a60e163cb45","revisionTime":"2017-10-30T10:26:55Z","tree":true},
{"path":"github.com/hashicorp/go-discover/provider/gce","checksumSHA1":"tnKls5vtzpNQAj7b987N4i81HvY=","revision":"7642001b443a3723e2aba277054f16d1df172d97","revisionTime":"2018-01-03T21:14:29Z","tree":true},
{"path":"github.com/hashicorp/go-discover/provider/os","checksumSHA1":"LZwn9B00AjulYRCKapmJWFAamoo=","revision":"c98e36ab72ce62b7d8fbc7f7e76f9a60e163cb45","revisionTime":"2017-10-30T10:26:55Z","tree":true},
@@ -15,7 +15,7 @@ var (
	//
	// Version must conform to the format expected by github.com/hashicorp/go-version
	// for tests to work.
	Version = "1.0.5"
	Version = "1.0.7"

	// A pre-release marker for the version. If this is "" (empty string)
	// then it means that it is a final release. Otherwise, this is a pre-release
@@ -2,7 +2,7 @@ set :base_url, "https://www.consul.io/"

activate :hashicorp do |h|
  h.name = "consul"
  h.version = "1.0.4"
  h.version = "1.0.6"
  h.github_slug = "hashicorp/consul"
end
@@ -121,6 +121,15 @@ The table below shows this endpoint's support for
  container using the specified `Shell`. Note that `Shell` is currently only
  supported for Docker checks.

- `GRPC` `(string: "")` - Specifies a `gRPC` check's endpoint that supports the standard
  [gRPC health checking protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md).
  The state of the check will be updated at the given `Interval` by probing the configured
  endpoint.

- `GRPCUseTLS` `(bool: false)` - Specifies whether to use TLS for this `gRPC` health check.
  If TLS is enabled, then by default, a valid TLS certificate is expected. Certificate
  verification can be turned off by setting `TLSSkipVerify` to `true`.

- `HTTP` `(string: "")` - Specifies an `HTTP` check to perform a `GET` request
  against the value of `HTTP` (expected to be a URL) every `Interval`. If the
  response is any `2xx` code, the check is `passing`. If the response is `429
@@ -135,6 +144,10 @@ The table below shows this endpoint's support for
- `Header` `(map[string][]string: {})` - Specifies a set of headers that should
  be set for `HTTP` checks. Each header can have multiple values.

- `Timeout` `(duration: 10s)` - Specifies a timeout for outgoing connections in the
  case of a Script, HTTP, TCP, or gRPC check. Can be specified in the form of "10s"
  or "5m" (i.e., 10 seconds or 5 minutes, respectively).

- `TLSSkipVerify` `(bool: false)` - Specifies if the certificate for an HTTPS
  check should not be verified.
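To connect these new fields to the Go `api` client change earlier in this commit, here is a hedged sketch of registering a service with a gRPC check; the service name, port, and gRPC endpoint are placeholders, not part of the change.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// The agent probes the standard gRPC health-checking endpoint every
	// Interval; GRPCUseTLS plus TLSSkipVerify control certificate handling
	// as described above.
	reg := &api.AgentServiceRegistration{
		Name: "my-grpc-service",
		Port: 8502,
		Check: &api.AgentServiceCheck{
			GRPC:          "127.0.0.1:8502/my.package.MyService",
			GRPCUseTLS:    true,
			TLSSkipVerify: true,
			Interval:      "10s",
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
}
```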