Merge branch 'master' of github.com:hashicorp/consul into rpc-limiting

# Conflicts:
#	agent/agent.go
#	agent/consul/client.go
Matt Keeler 2018-06-11 16:11:36 -04:00
commit c5d9c2362f
789 changed files with 82761 additions and 8896 deletions

.github/ISSUE_TEMPLATE/bug_report.md

@ -0,0 +1,54 @@
---
name: Bug Report
about: You're experiencing an issue with Consul that is different than the documented behavior.
---
When filing a bug, please include the following headings if
possible. Any example text in this template can be deleted.
#### Overview of the Issue
A paragraph or two about the issue you're experiencing.
#### Reproduction Steps
Steps to reproduce this issue, e.g.:
1. Create a cluster with n client nodes and n server nodes
1. Run `curl ...`
1. View error
### Consul info for both Client and Server
<details>
<summary>Client info</summary>
```
output from client 'consul info' command here
```
</details>
<details>
<summary>Server info</summary>
```
output from server 'consul info' command here
```
</details>
### Operating system and Environment details
OS, Architecture, and any other information you can provide
about the environment.
### Log Fragments
Include appropriate Client or Server log fragments.
If the log is longer than a few dozen lines, please
include the URL to the [gist](https://gist.github.com/)
of the log instead of posting it in the issue.
Use `-log-level=TRACE` on the client and server to capture the maximum log detail.


@ -0,0 +1,15 @@
---
name: Feature Request
about: If you have something you think Consul could improve or add support for.
---
Please search the existing issues for relevant feature requests, and use the reaction feature (https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to add upvotes to pre-existing requests.
#### Feature Description
A written overview of the feature.
#### Use Case(s)
Any relevant use-cases that you see.

.github/ISSUE_TEMPLATE/question.md

@ -0,0 +1,11 @@
---
name: Question
about: If you have a question, please check out our other community resources instead of opening an issue.
---
Issues on GitHub are intended to be related to bugs or feature requests, so we recommend using our other community resources instead of asking here.
- [FAQ (Frequently asked questions)](https://www.consul.io/docs/faq.html)
- [Consul Guides](https://www.consul.io/docs/guides/index.html)
- Any other questions can be sent to the [consul mailing list](https://www.consul.io/community.html)

.github/ISSUE_TEMPLATE/ui_issues.md

@ -0,0 +1,19 @@
---
name: Web UI Feedback
about: You have usage feedback for either version of the web UI
---
**Old UI or New UI**
If you're using Consul 1.1.0 or later, you can switch to the new UI by changing
an environment variable. Let us know which UI you're using so that we can help.
**Describe the problem you're having**
A clear and concise description of what could be better. If you have screenshots
of a bug, place them here.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Share inspiration**
How have you seen this problem solved well in other UIs?

.gitignore

@ -5,6 +5,7 @@
*.swp
*.test
.DS_Store
.fseventsd
.vagrant/
/pkg
Thumbs.db


@ -1,4 +1,54 @@
## (UNRELEASED)
## UNRELEASED
FEATURES:
* dns: Enable PTR record lookups for services with IPs that have no registered node [[PR-4083](https://github.com/hashicorp/consul/pull/4083)]
IMPROVEMENTS:
* agent: A Consul user-agent string is now sent to providers when making retry-join requests [[GH-4013](https://github.com/hashicorp/consul/pull/4013)]
BUG FIXES:
* agent: Fixed an issue where watches were being duplicated on reload. [[GH-4179](https://github.com/hashicorp/consul/issues/4179)]
* agent: Fixed an issue where Agent watches on an HTTPS-only agent would fail to use TLS. [[GH-4076](https://github.com/hashicorp/consul/issues/4076)]
* agent: Fixed a bug that caused unnecessary and frequent logging of yamux keepalives [[GH-3040](https://github.com/hashicorp/consul/issues/3040)]
* dns: Re-enable full DNS compression [[GH-4071](https://github.com/hashicorp/consul/issues/4071)]
## 1.1.0 (May 11, 2018)
FEATURES:
* UI: The web UI has been completely redesigned and rebuilt and is in an opt-in beta period.
Setting the `CONSUL_UI_BETA` environment variable to `1` or `true` will replace the existing UI
with the new one. The existing UI will be deprecated and removed in a future release. [[GH-4086](https://github.com/hashicorp/consul/pull/4086)]
* api: Added support for Prometheus client format in metrics endpoint with `?format=prometheus` (see [docs](https://www.consul.io/api/agent.html#view-metrics)) [[GH-4014](https://github.com/hashicorp/consul/issues/4014)]
* agent: New Cloud Auto-join provider: Joyent Triton. [[GH-4108](https://github.com/hashicorp/consul/pull/4108)]
* agent: (Consul Enterprise) Implemented license management with license propagation within a datacenter.
BREAKING CHANGES:
* agent: The following previously deprecated fields and config options have been removed [[GH-4097](https://github.com/hashicorp/consul/pull/4097)]:
- `CheckID` has been removed from config file check definitions (use `id` instead).
- `script` has been removed from config file check definitions (use `args` instead).
- `enableTagOverride` is no longer valid in service definitions (use `enable_tag_override` instead).
- The [deprecated set of metric names](https://consul.io/docs/upgrade-specific.html#metric-names-updated) (beginning with `consul.consul.`) has been removed along with the `enable_deprecated_names` option from the metrics configuration.
IMPROVEMENTS:
* agent: Improve DNS performance on large clusters [[GH-4036](https://github.com/hashicorp/consul/issues/4036)]
* agent: `start_join`, `start_join_wan`, `retry_join`, `retry_join_wan` config params now all support go-sockaddr templates [[GH-4102](https://github.com/hashicorp/consul/pull/4102)]
* server: Added new configuration options `raft_snapshot_interval` and `raft_snapshot_threshold` to allow operators to configure how often servers take raft snapshots. The default values for these have been tuned for large and busy clusters with high write load. [[GH-4105](https://github.com/hashicorp/consul/pull/4105/)]
BUG FIXES:
* agent: Only call signal.Notify once during agent startup [[PR-4024](https://github.com/hashicorp/consul/pull/4024)]
* agent: Add support for the new Service Meta field in agent config [[GH-4045](https://github.com/hashicorp/consul/issues/4045)]
* api: Add support for the new Service Meta field in API client [[GH-4045](https://github.com/hashicorp/consul/issues/4045)]
* agent: Updated serf library for two bug fixes - allowing enough time for leave intents to propagate [[GH-510](https://github.com/hashicorp/serf/pull/510)] and preventing a deadlock [[GH-507](https://github.com/hashicorp/serf/pull/507)]
* agent: When node-level checks (e.g. maintenance mode) were deleted, watchers in between blocking calls may have missed the change in index. See [[GH-3970](https://github.com/hashicorp/consul/pull/3970)]
## 1.0.7 (April 13, 2018)
IMPROVEMENTS:
@ -13,7 +63,9 @@ IMPROVEMENTS:
* dns: Introduced a new config param to limit the number of A/AAAA records returned. [[GH-3940](https://github.com/hashicorp/consul/issues/3940)]
* dns: Upgrade vendored DNS library to pick up bugfixes and improvements. [[GH-3978](https://github.com/hashicorp/consul/issues/3978)]
* server: Updated yamux library to pick up a performance improvement. [[GH-3982](https://github.com/hashicorp/consul/issues/3982)]
* server: Add near=_ip support for prepared queries [[GH-3798](https://github.com/hashicorp/consul/issues/3798)]
* api: Add support for GZIP compression in HTTP responses. [[GH-3687](https://github.com/hashicorp/consul/issues/3687)]
* api: Add `IgnoreCheckIDs` to Prepared Query definition to allow temporarily bypassing faulty health checks [[GH-3727](https://github.com/hashicorp/consul/issues/3727)]
BUG FIXES:


@ -121,8 +121,7 @@ ui:
# also run as part of the release build script when it verifies that there are no
# changes to the UI assets that aren't checked in.
static-assets:
@go-bindata-assetfs -pkg agent -prefix pkg ./pkg/web_ui/...
@mv bindata_assetfs.go agent/
@go-bindata-assetfs -pkg agent -prefix pkg -o agent/bindata_assetfs.go ./pkg/web_ui/...
$(MAKE) format
tools:


@ -1,35 +0,0 @@
If you have a question, please direct it to the
[consul mailing list](https://www.consul.io/community.html) if it hasn't been
addressed in either the [FAQ](https://www.consul.io/docs/faq.html) or in one
of the [Consul Guides](https://www.consul.io/docs/guides/index.html).
When filing a bug, please include the following:
### Description of the Issue (and unexpected/desired result)
### Reproduction steps
### `consul version` for both Client and Server
Client: `[client version here]`
Server: `[server version here]`
### `consul info` for both Client and Server
Client:
```
[Client `consul info` here]
```
Server:
```
[Server `consul info` here]
```
### Operating system and Environment details
### Log Fragments or Link to [gist](https://gist.github.com/)
Include appropriate Client or Server log fragments. If the log is longer
than a few dozen lines, please include the URL to the
[gist](https://gist.github.com/).
TIP: Use `-log-level=TRACE` on the client and server to capture the maximum log detail.


@ -143,11 +143,9 @@ func (m *aclManager) lookupACL(a *Agent, id string) (acl.ACL, error) {
cached = raw.(*aclCacheEntry)
}
if cached != nil && time.Now().Before(cached.Expires) {
metrics.IncrCounter([]string{"consul", "acl", "cache_hit"}, 1)
metrics.IncrCounter([]string{"acl", "cache_hit"}, 1)
return cached.ACL, nil
}
metrics.IncrCounter([]string{"consul", "acl", "cache_miss"}, 1)
metrics.IncrCounter([]string{"acl", "cache_miss"}, 1)
// At this point we might have a stale cached ACL, or none at all, so
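
These renames drop the hard-coded "consul" prefix from each metric key; the prefix is applied once by the go-metrics configuration instead. A minimal, self-contained sketch of that pattern, assuming the armon/go-metrics library Consul uses (the in-memory sink and interval values are illustrative):

```go
package main

import (
	"fmt"
	"time"

	metrics "github.com/armon/go-metrics"
)

func main() {
	// The service name is configured once, so call sites emit bare keys
	// like "acl.cache_hit" that surface as "consul.acl.cache_hit".
	sink := metrics.NewInmemSink(10*time.Second, time.Minute)
	if _, err := metrics.NewGlobal(metrics.DefaultConfig("consul"), sink); err != nil {
		panic(err)
	}

	metrics.IncrCounter([]string{"acl", "cache_hit"}, 1)
	metrics.MeasureSince([]string{"acl", "fault"}, time.Now())
	fmt.Println("emitted metrics with the configured prefix")
}
```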


@ -74,6 +74,7 @@ type delegate interface {
Shutdown() error
Stats() map[string]map[string]string
ReloadConfig(config *consul.Config) error
enterpriseDelegate
}
// notifier is called after a successful JoinLAN.
@ -647,14 +648,19 @@ func (a *Agent) reloadWatches(cfg *config.RuntimeConfig) error {
// Determine the primary http(s) endpoint.
var netaddr net.Addr
https := false
if len(cfg.HTTPAddrs) > 0 {
netaddr = cfg.HTTPAddrs[0]
} else {
netaddr = cfg.HTTPSAddrs[0]
https = true
}
addr := netaddr.String()
if netaddr.Network() == "unix" {
addr = "unix://" + addr
https = false
} else if https {
addr = "https://" + addr
}
// Fire off a goroutine for each new watch plan.
@ -670,7 +676,19 @@ func (a *Agent) reloadWatches(cfg *config.RuntimeConfig) error {
wp.Handler = makeHTTPWatchHandler(a.LogOutput, httpConfig)
}
wp.LogOutput = a.LogOutput
if err := wp.Run(addr); err != nil {
config := api.DefaultConfig()
if https {
if a.config.CAPath != "" {
config.TLSConfig.CAPath = a.config.CAPath
}
if a.config.CAFile != "" {
config.TLSConfig.CAFile = a.config.CAFile
}
config.TLSConfig.Address = addr
}
if err := wp.RunWithConfig(addr, config); err != nil {
a.logger.Printf("[ERR] agent: Failed to run watch: %v", err)
}
}(wp)
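
The switch from wp.Run to wp.RunWithConfig lets reloaded watches carry the agent's TLS settings instead of defaulting to plain HTTP. A minimal sketch of driving a watch plan the same way from client code, assuming the RunWithConfig signature shown above; the key name, address, and CA path are hypothetical:

```go
package main

import (
	"log"
	"os"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/watch"
)

func main() {
	plan, err := watch.Parse(map[string]interface{}{
		"type": "key",
		"key":  "foo/bar", // hypothetical key
	})
	if err != nil {
		log.Fatal(err)
	}
	plan.Handler = func(idx uint64, raw interface{}) {
		log.Printf("index=%d value=%v", idx, raw)
	}
	plan.LogOutput = os.Stderr

	cfg := api.DefaultConfig()
	cfg.TLSConfig.CAFile = "/etc/consul.d/ca.pem" // hypothetical CA path
	if err := plan.RunWithConfig("https://127.0.0.1:8501", cfg); err != nil {
		log.Fatal(err)
	}
}
```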
@ -764,6 +782,12 @@ func (a *Agent) consulConfig() (*consul.Config, error) {
if a.config.RaftProtocol != 0 {
base.RaftConfig.ProtocolVersion = raft.ProtocolVersion(a.config.RaftProtocol)
}
if a.config.RaftSnapshotThreshold != 0 {
base.RaftConfig.SnapshotThreshold = uint64(a.config.RaftSnapshotThreshold)
}
if a.config.RaftSnapshotInterval != 0 {
base.RaftConfig.SnapshotInterval = a.config.RaftSnapshotInterval
}
if a.config.ACLMasterToken != "" {
base.ACLMasterToken = a.config.ACLMasterToken
}
@ -1845,7 +1869,6 @@ func (a *Agent) AddCheck(check *structs.HealthCheck, chkType *structs.CheckType,
CheckID: check.CheckID,
DockerContainerID: chkType.DockerContainerID,
Shell: chkType.Shell,
Script: chkType.Script,
ScriptArgs: chkType.ScriptArgs,
Interval: chkType.Interval,
Logger: a.logger,
@ -1876,7 +1899,6 @@ func (a *Agent) AddCheck(check *structs.HealthCheck, chkType *structs.CheckType,
monitor := &checks.CheckMonitor{
Notify: a.State,
CheckID: check.CheckID,
Script: chkType.Script,
ScriptArgs: chkType.ScriptArgs,
Interval: chkType.Interval,
Timeout: chkType.Timeout,


@ -19,6 +19,8 @@ import (
"github.com/hashicorp/logutils"
"github.com/hashicorp/serf/coordinate"
"github.com/hashicorp/serf/serf"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
type Self struct {
@ -75,6 +77,14 @@ func (s *HTTPServer) AgentSelf(resp http.ResponseWriter, req *http.Request) (int
}, nil
}
// enablePrometheusOutput reports whether the request asked for Prometheus-formatted metrics via the format query parameter, the same way Nomad does
func enablePrometheusOutput(req *http.Request) bool {
if format := req.URL.Query().Get("format"); format == "prometheus" {
return true
}
return false
}
func (s *HTTPServer) AgentMetrics(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
// Fetch the ACL token, if any, and enforce agent policy.
var token string
@ -86,7 +96,21 @@ func (s *HTTPServer) AgentMetrics(resp http.ResponseWriter, req *http.Request) (
if rule != nil && !rule.AgentRead(s.agent.config.NodeName) {
return nil, acl.ErrPermissionDenied
}
if enablePrometheusOutput(req) {
if s.agent.config.TelemetryPrometheusRetentionTime < 1 {
resp.WriteHeader(http.StatusUnsupportedMediaType)
fmt.Fprint(resp, "Prometheus is not enabled since its retention time is not positive")
return nil, nil
}
handlerOptions := promhttp.HandlerOpts{
ErrorLog: s.agent.logger,
ErrorHandling: promhttp.ContinueOnError,
}
handler := promhttp.HandlerFor(prometheus.DefaultGatherer, handlerOptions)
handler.ServeHTTP(resp, req)
return nil, nil
}
return s.agent.MemSink.DisplayMetrics(resp, req)
}
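
With a positive prometheus_retention_time configured, the handler above serves the Prometheus text format from the regular metrics endpoint. A minimal sketch of scraping it; the default local agent address is an assumption:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Requires an agent running with telemetry { prometheus_retention_time = "60s" }.
	resp, err := http.Get("http://127.0.0.1:8500/v1/agent/metrics?format=prometheus")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", body) // Prometheus exposition format
}
```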
@ -131,9 +155,18 @@ func (s *HTTPServer) AgentServices(resp http.ResponseWriter, req *http.Request)
// Use empty list instead of nil
for id, s := range services {
if s.Tags == nil {
if s.Tags == nil || s.Meta == nil {
clone := *s
clone.Tags = make([]string, 0)
if s.Tags == nil {
clone.Tags = make([]string, 0)
} else {
clone.Tags = s.Tags
}
if s.Meta == nil {
clone.Meta = make(map[string]string)
} else {
clone.Meta = s.Meta
}
services[id] = &clone
}
}
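
The clone-and-fill above exists because encoding/json renders nil slices and maps as null, while API consumers expect [] and {}. A quick standalone illustration of the difference:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	type svc struct {
		Tags []string
		Meta map[string]string
	}
	a, _ := json.Marshal(svc{})                                            // nil fields
	b, _ := json.Marshal(svc{Tags: []string{}, Meta: map[string]string{}}) // empty fields
	fmt.Println(string(a)) // {"Tags":null,"Meta":null}
	fmt.Println(string(b)) // {"Tags":[],"Meta":{}}
}
```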


@ -732,14 +732,6 @@ func TestAgent_RegisterCheck_Scripts(t *testing.T) {
name string
check map[string]interface{}
}{
{
"< Consul 1.0.0",
map[string]interface{}{
"Name": "test",
"Interval": "2s",
"Script": "true",
},
},
{
"== Consul 1.0.0",
map[string]interface{}{
@ -1220,6 +1212,7 @@ func TestAgent_RegisterService(t *testing.T) {
args := &structs.ServiceDefinition{
Name: "test",
Meta: map[string]string{"hello": "world"},
Tags: []string{"master"},
Port: 8000,
Check: structs.CheckType{
@ -1248,6 +1241,9 @@ func TestAgent_RegisterService(t *testing.T) {
if _, ok := a.State.Services()["test"]; !ok {
t.Fatalf("missing test service")
}
if val := a.State.Service("test").Meta["hello"]; val != "world" {
t.Fatalf("Missing meta: %v", a.State.Service("test").Meta)
}
// Ensure we have a check mapping
checks := a.State.Checks()
@ -1270,7 +1266,7 @@ func TestAgent_RegisterService_TranslateKeys(t *testing.T) {
a := NewTestAgent(t.Name(), "")
defer a.Shutdown()
json := `{"name":"test", "port":8000, "enable_tag_override": true}`
json := `{"name":"test", "port":8000, "enable_tag_override": true, "meta": {"some": "meta"}}`
req, _ := http.NewRequest("PUT", "/v1/agent/service/register", strings.NewReader(json))
obj, err := a.srv.AgentRegisterService(nil, req)
@ -1280,10 +1276,10 @@ func TestAgent_RegisterService_TranslateKeys(t *testing.T) {
if obj != nil {
t.Fatalf("bad: %v", obj)
}
svc := &structs.NodeService{
ID: "test",
Service: "test",
Meta: map[string]string{"some": "meta"},
Port: 8000,
EnableTagOverride: true,
}


@ -651,8 +651,8 @@ func TestAgent_AddCheck(t *testing.T) {
Status: api.HealthCritical,
}
chk := &structs.CheckType{
Script: "exit 0",
Interval: 15 * time.Second,
ScriptArgs: []string{"exit", "0"},
Interval: 15 * time.Second,
}
err := a.AddCheck(health, chk, false, "")
if err != nil {
@ -690,8 +690,8 @@ func TestAgent_AddCheck_StartPassing(t *testing.T) {
Status: api.HealthPassing,
}
chk := &structs.CheckType{
Script: "exit 0",
Interval: 15 * time.Second,
ScriptArgs: []string{"exit", "0"},
Interval: 15 * time.Second,
}
err := a.AddCheck(health, chk, false, "")
if err != nil {
@ -729,8 +729,8 @@ func TestAgent_AddCheck_MinInterval(t *testing.T) {
Status: api.HealthCritical,
}
chk := &structs.CheckType{
Script: "exit 0",
Interval: time.Microsecond,
ScriptArgs: []string{"exit", "0"},
Interval: time.Microsecond,
}
err := a.AddCheck(health, chk, false, "")
if err != nil {
@ -764,8 +764,8 @@ func TestAgent_AddCheck_MissingService(t *testing.T) {
ServiceID: "baz",
}
chk := &structs.CheckType{
Script: "exit 0",
Interval: time.Microsecond,
ScriptArgs: []string{"exit", "0"},
Interval: time.Microsecond,
}
err := a.AddCheck(health, chk, false, "")
if err == nil || err.Error() != `ServiceID "baz" does not exist` {
@ -829,8 +829,8 @@ func TestAgent_AddCheck_ExecDisable(t *testing.T) {
Status: api.HealthCritical,
}
chk := &structs.CheckType{
Script: "exit 0",
Interval: 15 * time.Second,
ScriptArgs: []string{"exit", "0"},
Interval: 15 * time.Second,
}
err := a.AddCheck(health, chk, false, "")
if err == nil || !strings.Contains(err.Error(), "Scripts are disabled on this agent") {
@ -904,8 +904,8 @@ func TestAgent_RemoveCheck(t *testing.T) {
Status: api.HealthCritical,
}
chk := &structs.CheckType{
Script: "exit 0",
Interval: 15 * time.Second,
ScriptArgs: []string{"exit", "0"},
Interval: 15 * time.Second,
}
err := a.AddCheck(health, chk, false, "")
if err != nil {
@ -1315,8 +1315,8 @@ func TestAgent_PersistCheck(t *testing.T) {
Status: api.HealthPassing,
}
chkType := &structs.CheckType{
Script: "/bin/true",
Interval: 10 * time.Second,
ScriptArgs: []string{"/bin/true"},
Interval: 10 * time.Second,
}
file := filepath.Join(a.Config.DataDir, checksDir, checkIDHash(check.CheckID))
@ -1473,7 +1473,7 @@ func TestAgent_PurgeCheckOnDuplicate(t *testing.T) {
id = "mem"
name = "memory check"
notes = "my cool notes"
script = "/bin/check-redis.py"
args = ["/bin/check-redis.py"]
interval = "30s"
}
`)
@ -2206,3 +2206,23 @@ func TestAgent_reloadWatches(t *testing.T) {
t.Fatalf("bad: %s", err)
}
}
func TestAgent_reloadWatchesHTTPS(t *testing.T) {
t.Parallel()
a := TestAgent{Name: t.Name(), UseTLS: true}
a.Start()
defer a.Shutdown()
// Normal watch with http addr set, should succeed
newConf := *a.config
newConf.Watches = []map[string]interface{}{
{
"type": "key",
"key": "asdf",
"args": []interface{}{"ls"},
},
}
if err := a.reloadWatches(&newConf); err != nil {
t.Fatalf("bad: %s", err)
}
}

File diff suppressed because one or more lines are too long


@ -511,14 +511,6 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) {
}
}
// Add a filter rule if needed for enabling the deprecated metric names
enableDeprecatedNames := b.boolVal(c.Telemetry.EnableDeprecatedNames)
if enableDeprecatedNames {
telemetryAllowedPrefixes = append(telemetryAllowedPrefixes, "consul.consul")
} else {
telemetryBlockedPrefixes = append(telemetryBlockedPrefixes, "consul.consul")
}
// raft performance scaling
performanceRaftMultiplier := b.intVal(c.Performance.RaftMultiplier)
if performanceRaftMultiplier < 1 || uint(performanceRaftMultiplier) > consul.MaxRaftMultiplier {
@ -626,6 +618,7 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) {
TelemetryDisableHostname: b.boolVal(c.Telemetry.DisableHostname),
TelemetryDogstatsdAddr: b.stringVal(c.Telemetry.DogstatsdAddr),
TelemetryDogstatsdTags: c.Telemetry.DogstatsdTags,
TelemetryPrometheusRetentionTime: b.durationVal("prometheus_retention_time", c.Telemetry.PrometheusRetentionTime),
TelemetryFilterDefault: b.boolVal(c.Telemetry.FilterDefault),
TelemetryAllowedPrefixes: telemetryAllowedPrefixes,
TelemetryBlockedPrefixes: telemetryBlockedPrefixes,
@ -680,15 +673,17 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) {
RPCProtocol: b.intVal(c.RPCProtocol),
RPCRateLimit: rate.Limit(b.float64Val(c.Limits.RPCRate)),
RaftProtocol: b.intVal(c.RaftProtocol),
RaftSnapshotThreshold: b.intVal(c.RaftSnapshotThreshold),
RaftSnapshotInterval: b.durationVal("raft_snapshot_interval", c.RaftSnapshotInterval),
ReconnectTimeoutLAN: b.durationVal("reconnect_timeout", c.ReconnectTimeoutLAN),
ReconnectTimeoutWAN: b.durationVal("reconnect_timeout_wan", c.ReconnectTimeoutWAN),
RejoinAfterLeave: b.boolVal(c.RejoinAfterLeave),
RetryJoinIntervalLAN: b.durationVal("retry_interval", c.RetryJoinIntervalLAN),
RetryJoinIntervalWAN: b.durationVal("retry_interval_wan", c.RetryJoinIntervalWAN),
RetryJoinLAN: c.RetryJoinLAN,
RetryJoinLAN: b.expandAllOptionalAddrs("retry_join", c.RetryJoinLAN),
RetryJoinMaxAttemptsLAN: b.intVal(c.RetryJoinMaxAttemptsLAN),
RetryJoinMaxAttemptsWAN: b.intVal(c.RetryJoinMaxAttemptsWAN),
RetryJoinWAN: c.RetryJoinWAN,
RetryJoinWAN: b.expandAllOptionalAddrs("retry_join_wan", c.RetryJoinWAN),
SegmentName: b.stringVal(c.SegmentName),
Segments: segments,
SerfAdvertiseAddrLAN: serfAdvertiseAddrLAN,
@ -703,8 +698,8 @@ func (b *Builder) Build() (rt RuntimeConfig, err error) {
Services: services,
SessionTTLMin: b.durationVal("session_ttl_min", c.SessionTTLMin),
SkipLeaveOnInt: skipLeaveOnInt,
StartJoinAddrsLAN: c.StartJoinAddrsLAN,
StartJoinAddrsWAN: c.StartJoinAddrsWAN,
StartJoinAddrsLAN: b.expandAllOptionalAddrs("start_join", c.StartJoinAddrsLAN),
StartJoinAddrsWAN: b.expandAllOptionalAddrs("start_join_wan", c.StartJoinAddrsWAN),
SyslogFacility: b.stringVal(c.SyslogFacility),
TLSCipherSuites: b.tlsCipherSuites("tls_cipher_suites", c.TLSCipherSuites),
TLSMinVersion: b.stringVal(c.TLSMinVersion),
@ -966,7 +961,6 @@ func (b *Builder) checkVal(v *CheckDefinition) *structs.CheckDefinition {
ServiceID: b.stringVal(v.ServiceID),
Token: b.stringVal(v.Token),
Status: b.stringVal(v.Status),
Script: b.stringVal(v.Script),
ScriptArgs: v.ScriptArgs,
HTTP: b.stringVal(v.HTTP),
Header: v.Header,
@ -997,11 +991,18 @@ func (b *Builder) serviceVal(v *ServiceDefinition) *structs.ServiceDefinition {
checks = append(checks, b.checkVal(v.Check).CheckType())
}
meta := make(map[string]string)
if err := structs.ValidateMetadata(v.Meta, false); err != nil {
b.err = multierror.Append(b.err, fmt.Errorf("invalid meta for service %s: %v", b.stringVal(v.Name), err))
} else {
meta = v.Meta
}
return &structs.ServiceDefinition{
ID: b.stringVal(v.ID),
Name: b.stringVal(v.Name),
Tags: v.Tags,
Address: b.stringVal(v.Address),
Meta: meta,
Port: b.intVal(v.Port),
Token: b.stringVal(v.Token),
EnableTagOverride: b.boolVal(v.EnableTagOverride),
@ -1124,6 +1125,43 @@ func (b *Builder) expandAddrs(name string, s *string) []net.Addr {
return addrs
}
// expandOptionalAddrs expands the go-sockaddr template in s and returns the
// result as a list of strings. If s does not contain a go-sockaddr template,
// the result list contains the input string as its single element and no
// error is set. In contrast to expandAddrs, expandOptionalAddrs does not
// validate that the expanded results are valid addresses. However, if
// expanding the go-sockaddr template fails, an error is set.
func (b *Builder) expandOptionalAddrs(name string, s *string) []string {
if s == nil || *s == "" {
return nil
}
x, err := template.Parse(*s)
if err != nil {
b.err = multierror.Append(b.err, fmt.Errorf("%s: error parsing %q: %s", name, s, err))
return nil
}
if x != *s {
// A template has been expanded, split the results from go-sockaddr
return strings.Fields(x)
} else {
// No template has been expanded, pass through the input
return []string{*s}
}
}
func (b *Builder) expandAllOptionalAddrs(name string, addrs []string) []string {
out := make([]string, 0, len(addrs))
for _, a := range addrs {
expanded := b.expandOptionalAddrs(name, &a)
if expanded != nil {
out = append(out, expanded...)
}
}
return out
}
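
For reference, a small sketch of the go-sockaddr expansion the two helpers above rely on, assuming the hashicorp/go-sockaddr/template package; the interface template is illustrative and its output depends on the host:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/hashicorp/go-sockaddr/template"
)

func main() {
	inputs := []string{
		"10.0.0.1", // a literal address passes through unchanged
		`{{ GetPrivateInterfaces | join "address" " " }}`, // expands to 0..n addresses
	}
	for _, s := range inputs {
		out, err := template.Parse(s)
		if err != nil {
			fmt.Println("parse error:", err)
			continue
		}
		fmt.Println(strings.Fields(out)) // space-separated results, as the builder splits them
	}
}
```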
// expandIPs expands the go-sockaddr template in s and returns a list of
// *net.IPAddr. If one of the expanded addresses is a unix socket
// address an error is set and nil is returned.


@ -94,17 +94,14 @@ func Parse(data string, format string) (c Config, err error) {
// CamelCase and snake_case. Since changing either format would break
// existing setups we have to support both and slowly transition to one of
// the formats. Also, there is at least one case where we use the "wrong"
// key and want to map that to the new key to support deprecation
// (`check.id` vs `service.check.CheckID`) See [GH-3179]. TranslateKeys
// maps potentially CamelCased values to the snake_case that is used in the
// config file parser. If both the CamelCase and snake_case values are set,
// the snake_case value is used and the other value is discarded.
// key and want to map that to the new key to support deprecation -
// see [GH-3179]. TranslateKeys maps potentially CamelCased values to the
// snake_case that is used in the config file parser. If both the CamelCase
// and snake_case values are set the snake_case value is used and the other
// value is discarded.
TranslateKeys(m, map[string]string{
"check_id": "id",
"checkid": "id",
"deregistercriticalserviceafter": "deregister_critical_service_after",
"dockercontainerid": "docker_container_id",
"enabletagoverride": "enable_tag_override",
"scriptargs": "args",
"serviceid": "service_id",
"tlsskipverify": "tls_skip_verify",
@ -197,6 +194,8 @@ type Config struct {
Ports Ports `json:"ports,omitempty" hcl:"ports" mapstructure:"ports"`
RPCProtocol *int `json:"protocol,omitempty" hcl:"protocol" mapstructure:"protocol"`
RaftProtocol *int `json:"raft_protocol,omitempty" hcl:"raft_protocol" mapstructure:"raft_protocol"`
RaftSnapshotThreshold *int `json:"raft_snapshot_threshold,omitempty" hcl:"raft_snapshot_threshold" mapstructure:"raft_snapshot_threshold"`
RaftSnapshotInterval *string `json:"raft_snapshot_interval,omitempty" hcl:"raft_snapshot_interval" mapstructure:"raft_snapshot_interval"`
ReconnectTimeoutLAN *string `json:"reconnect_timeout,omitempty" hcl:"reconnect_timeout" mapstructure:"reconnect_timeout"`
ReconnectTimeoutWAN *string `json:"reconnect_timeout_wan,omitempty" hcl:"reconnect_timeout_wan" mapstructure:"reconnect_timeout_wan"`
RejoinAfterLeave *bool `json:"rejoin_after_leave,omitempty" hcl:"rejoin_after_leave" mapstructure:"rejoin_after_leave"`
@ -319,6 +318,7 @@ type ServiceDefinition struct {
Name *string `json:"name,omitempty" hcl:"name" mapstructure:"name"`
Tags []string `json:"tags,omitempty" hcl:"tags" mapstructure:"tags"`
Address *string `json:"address,omitempty" hcl:"address" mapstructure:"address"`
Meta map[string]string `json:"meta,omitempty" hcl:"meta" mapstructure:"meta"`
Port *int `json:"port,omitempty" hcl:"port" mapstructure:"port"`
Check *CheckDefinition `json:"check,omitempty" hcl:"check" mapstructure:"check"`
Checks []CheckDefinition `json:"checks,omitempty" hcl:"checks" mapstructure:"checks"`
@ -333,7 +333,6 @@ type CheckDefinition struct {
ServiceID *string `json:"service_id,omitempty" hcl:"service_id" mapstructure:"service_id"`
Token *string `json:"token,omitempty" hcl:"token" mapstructure:"token"`
Status *string `json:"status,omitempty" hcl:"status" mapstructure:"status"`
Script *string `json:"script,omitempty" hcl:"script" mapstructure:"script"`
ScriptArgs []string `json:"args,omitempty" hcl:"args" mapstructure:"args"`
HTTP *string `json:"http,omitempty" hcl:"http" mapstructure:"http"`
Header map[string][]string `json:"header,omitempty" hcl:"header" mapstructure:"header"`
@ -394,9 +393,9 @@ type Telemetry struct {
FilterDefault *bool `json:"filter_default,omitempty" hcl:"filter_default" mapstructure:"filter_default"`
PrefixFilter []string `json:"prefix_filter,omitempty" hcl:"prefix_filter" mapstructure:"prefix_filter"`
MetricsPrefix *string `json:"metrics_prefix,omitempty" hcl:"metrics_prefix" mapstructure:"metrics_prefix"`
PrometheusRetentionTime *string `json:"prometheus_retention_time,omitempty" hcl:"prometheus_retention_time" mapstructure:"prometheus_retention_time"`
StatsdAddr *string `json:"statsd_address,omitempty" hcl:"statsd_address" mapstructure:"statsd_address"`
StatsiteAddr *string `json:"statsite_address,omitempty" hcl:"statsite_address" mapstructure:"statsite_address"`
EnableDeprecatedNames *bool `json:"enable_deprecated_names" hcl:"enable_deprecated_names" mapstructure:"enable_deprecated_names"`
}
type Ports struct {
@ -408,30 +407,6 @@ type Ports struct {
Server *int `json:"server,omitempty" hcl:"server" mapstructure:"server"`
}
type RetryJoinAzure struct {
ClientID *string `json:"client_id,omitempty" hcl:"client_id" mapstructure:"client_id"`
SecretAccessKey *string `json:"secret_access_key,omitempty" hcl:"secret_access_key" mapstructure:"secret_access_key"`
SubscriptionID *string `json:"subscription_id,omitempty" hcl:"subscription_id" mapstructure:"subscription_id"`
TagName *string `json:"tag_name,omitempty" hcl:"tag_name" mapstructure:"tag_name"`
TagValue *string `json:"tag_value,omitempty" hcl:"tag_value" mapstructure:"tag_value"`
TenantID *string `json:"tenant_id,omitempty" hcl:"tenant_id" mapstructure:"tenant_id"`
}
type RetryJoinEC2 struct {
AccessKeyID *string `json:"access_key_id,omitempty" hcl:"access_key_id" mapstructure:"access_key_id"`
Region *string `json:"region,omitempty" hcl:"region" mapstructure:"region"`
SecretAccessKey *string `json:"secret_access_key,omitempty" hcl:"secret_access_key" mapstructure:"secret_access_key"`
TagKey *string `json:"tag_key,omitempty" hcl:"tag_key" mapstructure:"tag_key"`
TagValue *string `json:"tag_value,omitempty" hcl:"tag_value" mapstructure:"tag_value"`
}
type RetryJoinGCE struct {
CredentialsFile *string `json:"credentials_file,omitempty" hcl:"credentials_file" mapstructure:"credentials_file"`
ProjectName *string `json:"project_name,omitempty" hcl:"project_name" mapstructure:"project_name"`
TagValue *string `json:"tag_value,omitempty" hcl:"tag_value" mapstructure:"tag_value"`
ZonePattern *string `json:"zone_pattern,omitempty" hcl:"zone_pattern" mapstructure:"zone_pattern"`
}
type UnixSocket struct {
Group *string `json:"group,omitempty" hcl:"group" mapstructure:"group"`
Mode *string `json:"mode,omitempty" hcl:"mode" mapstructure:"mode"`


@ -425,6 +425,14 @@ type RuntimeConfig struct {
// hcl: telemetry { dogstatsd_tags = []string }
TelemetryDogstatsdTags []string
// PrometheusRetentionTime is the retention time for Prometheus metrics if greater than 0.
// A value of 0 disables Prometheus support. For Prometheus it is considered good
// practice to use a large value here (such as a few days), and at minimum the
// interval between Prometheus scrapes.
//
// hcl: telemetry { prometheus_retention_time = "duration" }
TelemetryPrometheusRetentionTime time.Duration
// TelemetryFilterDefault is the default for whether to allow a metric that's not
// covered by the filter.
//
@ -891,6 +899,17 @@ type RuntimeConfig struct {
// hcl: raft_protocol = int
RaftProtocol int
// RaftSnapshotThreshold sets the minimum threshold of raft commits after which
// a snapshot is created. Defaults to 16384
//
// hcl: raft_snapshot_threshold = int
RaftSnapshotThreshold int
// RaftSnapshotInterval sets the interval to use when checking whether to create
// a new snapshot. Defaults to 30 seconds.
//
// hcl: raft_snapshot_interval = "duration"
RaftSnapshotInterval time.Duration
// ReconnectTimeoutLAN specifies the amount of time to wait to reconnect with
// another agent before deciding it's permanently gone. This can be used to
// control the time it takes to reap failed nodes from the cluster.


@ -1121,6 +1121,46 @@ func TestConfigFlagsAndEdgecases(t *testing.T) {
rt.DataDir = dataDir
},
},
{
desc: "start_join address template",
args: []string{`-data-dir=` + dataDir},
json: []string{`{ "start_join": ["{{ printf \"1.2.3.4 4.3.2.1\" }}"] }`},
hcl: []string{`start_join = ["{{ printf \"1.2.3.4 4.3.2.1\" }}"]`},
patch: func(rt *RuntimeConfig) {
rt.StartJoinAddrsLAN = []string{"1.2.3.4", "4.3.2.1"}
rt.DataDir = dataDir
},
},
{
desc: "start_join_wan address template",
args: []string{`-data-dir=` + dataDir},
json: []string{`{ "start_join_wan": ["{{ printf \"1.2.3.4 4.3.2.1\" }}"] }`},
hcl: []string{`start_join_wan = ["{{ printf \"1.2.3.4 4.3.2.1\" }}"]`},
patch: func(rt *RuntimeConfig) {
rt.StartJoinAddrsWAN = []string{"1.2.3.4", "4.3.2.1"}
rt.DataDir = dataDir
},
},
{
desc: "retry_join address template",
args: []string{`-data-dir=` + dataDir},
json: []string{`{ "retry_join": ["{{ printf \"1.2.3.4 4.3.2.1\" }}"] }`},
hcl: []string{`retry_join = ["{{ printf \"1.2.3.4 4.3.2.1\" }}"]`},
patch: func(rt *RuntimeConfig) {
rt.RetryJoinLAN = []string{"1.2.3.4", "4.3.2.1"}
rt.DataDir = dataDir
},
},
{
desc: "retry_join_wan address template",
args: []string{`-data-dir=` + dataDir},
json: []string{`{ "retry_join_wan": ["{{ printf \"1.2.3.4 4.3.2.1\" }}"] }`},
hcl: []string{`retry_join_wan = ["{{ printf \"1.2.3.4 4.3.2.1\" }}"]`},
patch: func(rt *RuntimeConfig) {
rt.RetryJoinWAN = []string{"1.2.3.4", "4.3.2.1"}
rt.DataDir = dataDir
},
},
// ------------------------------------------------------------
// precedence rules
@ -1811,28 +1851,10 @@ func TestConfigFlagsAndEdgecases(t *testing.T) {
patch: func(rt *RuntimeConfig) {
rt.DataDir = dataDir
rt.TelemetryAllowedPrefixes = []string{"foo"}
rt.TelemetryBlockedPrefixes = []string{"bar", "consul.consul"}
rt.TelemetryBlockedPrefixes = []string{"bar"}
},
warns: []string{`Filter rule must begin with either '+' or '-': "nix"`},
},
{
desc: "telemetry.enable_deprecated_names adds allow rule for whitelist",
args: []string{
`-data-dir=` + dataDir,
},
json: []string{`{
"telemetry": { "enable_deprecated_names": true, "filter_default": false }
}`},
hcl: []string{`
telemetry = { enable_deprecated_names = true filter_default = false }
`},
patch: func(rt *RuntimeConfig) {
rt.DataDir = dataDir
rt.TelemetryFilterDefault = false
rt.TelemetryAllowedPrefixes = []string{"consul.consul"}
rt.TelemetryBlockedPrefixes = []string{}
},
},
{
desc: "encrypt has invalid key",
args: []string{
@ -1883,17 +1905,17 @@ func TestConfigFlagsAndEdgecases(t *testing.T) {
`-data-dir=` + dataDir,
},
json: []string{
`{ "check": { "name": "a", "script": "/bin/true" } }`,
`{ "check": { "name": "b", "script": "/bin/false" } }`,
`{ "check": { "name": "a", "args": ["/bin/true"] } }`,
`{ "check": { "name": "b", "args": ["/bin/false"] } }`,
},
hcl: []string{
`check = { name = "a" script = "/bin/true" }`,
`check = { name = "b" script = "/bin/false" }`,
`check = { name = "a" args = ["/bin/true"] }`,
`check = { name = "b" args = ["/bin/false"] }`,
},
patch: func(rt *RuntimeConfig) {
rt.Checks = []*structs.CheckDefinition{
&structs.CheckDefinition{Name: "a", Script: "/bin/true"},
&structs.CheckDefinition{Name: "b", Script: "/bin/false"},
&structs.CheckDefinition{Name: "a", ScriptArgs: []string{"/bin/true"}},
&structs.CheckDefinition{Name: "b", ScriptArgs: []string{"/bin/false"}},
}
rt.DataDir = dataDir
},
@ -1923,20 +1945,59 @@ func TestConfigFlagsAndEdgecases(t *testing.T) {
},
json: []string{
`{ "service": { "name": "a", "port": 80 } }`,
`{ "service": { "name": "b", "port": 90 } }`,
`{ "service": { "name": "b", "port": 90, "meta": {"my": "value"} } }`,
},
hcl: []string{
`service = { name = "a" port = 80 }`,
`service = { name = "b" port = 90 }`,
`service = { name = "b" port = 90 meta={my="value"}}`,
},
patch: func(rt *RuntimeConfig) {
rt.Services = []*structs.ServiceDefinition{
&structs.ServiceDefinition{Name: "a", Port: 80},
&structs.ServiceDefinition{Name: "b", Port: 90},
&structs.ServiceDefinition{Name: "b", Port: 90, Meta: map[string]string{"my": "value"}},
}
rt.DataDir = dataDir
},
},
{
desc: "service with wrong meta: too long key",
args: []string{
`-data-dir=` + dataDir,
},
json: []string{
`{ "service": { "name": "a", "port": 80, "meta": { "` + randomString(520) + `": "metaValue" } } }`,
},
hcl: []string{
`service = { name = "a" port = 80, meta={` + randomString(520) + `="metaValue"} }`,
},
err: `Key is too long`,
},
{
desc: "service with wrong meta: too long value",
args: []string{
`-data-dir=` + dataDir,
},
json: []string{
`{ "service": { "name": "a", "port": 80, "meta": { "a": "` + randomString(520) + `" } } }`,
},
hcl: []string{
`service = { name = "a" port = 80, meta={a="` + randomString(520) + `"} }`,
},
err: `Value is too long`,
},
{
desc: "service with wrong meta: too many meta",
args: []string{
`-data-dir=` + dataDir,
},
json: []string{
`{ "service": { "name": "a", "port": 80, "meta": { ` + metaPairs(70, "json") + `} } }`,
},
hcl: []string{
`service = { name = "a" port = 80 meta={` + metaPairs(70, "hcl") + `} }`,
},
err: `invalid meta for service a: Node metadata cannot contain more than 64 key`,
},
{
desc: "translated keys",
args: []string{
@ -1947,9 +2008,9 @@ func TestConfigFlagsAndEdgecases(t *testing.T) {
"service": {
"name": "a",
"port": 80,
"EnableTagOverride": true,
"enable_tag_override": true,
"check": {
"CheckID": "x",
"id": "x",
"name": "y",
"DockerContainerID": "z",
"DeregisterCriticalServiceAfter": "10s",
@ -1962,9 +2023,9 @@ func TestConfigFlagsAndEdgecases(t *testing.T) {
`service = {
name = "a"
port = 80
EnableTagOverride = true
enable_tag_override = true
check = {
CheckID = "x"
id = "x"
name = "y"
DockerContainerID = "z"
DeregisterCriticalServiceAfter = "10s"
@ -2226,7 +2287,6 @@ func TestFullConfig(t *testing.T) {
"service_id": "L8G0QNmR",
"token": "oo4BCTgJ",
"status": "qLykAl5u",
"script": "dhGfIF8n",
"args": ["f3BemRjy", "e5zgpef7"],
"http": "29B93haH",
"header": {
@ -2251,7 +2311,6 @@ func TestFullConfig(t *testing.T) {
"service_id": "lSulPcyz",
"token": "toO59sh8",
"status": "9RlWsXMV",
"script": "8qbd8tWw",
"args": ["4BAJttck", "4D2NPtTQ"],
"http": "dohLcyQ2",
"header": {
@ -2275,7 +2334,6 @@ func TestFullConfig(t *testing.T) {
"service_id": "CmUUcRna",
"token": "a3nQzHuy",
"status": "irj26nf3",
"script": "FJsI1oXt",
"args": ["9s526ogY", "gSlOHj1w"],
"http": "yzhgsQ7Y",
"header": {
@ -2363,6 +2421,8 @@ func TestFullConfig(t *testing.T) {
},
"protocol": 30793,
"raft_protocol": 19016,
"raft_snapshot_threshold": 16384,
"raft_snapshot_interval": "30s",
"reconnect_timeout": "23739s",
"reconnect_timeout_wan": "26694s",
"recursors": [ "63.38.39.58", "92.49.18.18" ],
@ -2397,17 +2457,19 @@ func TestFullConfig(t *testing.T) {
"service": {
"id": "dLOXpSCI",
"name": "o1ynPkp0",
"meta": {
"mymeta": "data"
},
"tags": ["nkwshvM5", "NTDWn3ek"],
"address": "cOlSOhbp",
"token": "msy7iWER",
"port": 24237,
"enable_tag_override": true,
"check": {
"check_id": "RMi85Dv8",
"id": "RMi85Dv8",
"name": "iehanzuq",
"status": "rCvn53TH",
"notes": "fti5lfF3",
"script": "rtj34nfd",
"args": ["16WRUmwS", "QWk7j7ae"],
"http": "dl3Fgme3",
"header": {
@ -2430,7 +2492,6 @@ func TestFullConfig(t *testing.T) {
"name": "sgV4F7Pk",
"notes": "yP5nKbW0",
"status": "7oLMEyfu",
"script": "NlUQ3nTE",
"args": ["5wEZtZpv", "0Ihyk8cS"],
"http": "KyDjGY9H",
"header": {
@ -2452,7 +2513,6 @@ func TestFullConfig(t *testing.T) {
"name": "IEqrzrsd",
"notes": "SVqApqeM",
"status": "XXkVoZXt",
"script": "IXLZTM6E",
"args": ["wD05Bvao", "rLYB7kQC"],
"http": "kyICZsn8",
"header": {
@ -2481,11 +2541,10 @@ func TestFullConfig(t *testing.T) {
"port": 72219,
"enable_tag_override": true,
"check": {
"check_id": "qmfeO5if",
"id": "qmfeO5if",
"name": "atDGP7n5",
"status": "pDQKEhWL",
"notes": "Yt8EDLev",
"script": "MDu7wjlD",
"args": ["81EDZLPa", "bPY5X8xd"],
"http": "qzHYvmJO",
"header": {
@ -2517,7 +2576,6 @@ func TestFullConfig(t *testing.T) {
"name": "9OOS93ne",
"notes": "CQy86DH0",
"status": "P0SWDvrk",
"script": "6BhLJ7R9",
"args": ["EXvkYIuG", "BATOyt6h"],
"http": "u97ByEiW",
"header": {
@ -2539,7 +2597,6 @@ func TestFullConfig(t *testing.T) {
"name": "PQSaPWlT",
"notes": "jKChDOdl",
"status": "5qFz6OZn",
"script": "PbdxFZ3K",
"args": ["NMtYWlT9", "vj74JXsm"],
"http": "1LBDJhw4",
"header": {
@ -2587,8 +2644,8 @@ func TestFullConfig(t *testing.T) {
"dogstatsd_tags": [ "3N81zSUB","Xtj8AnXZ" ],
"filter_default": true,
"prefix_filter": [ "+oJotS8XJ","-cazlEhGn" ],
"enable_deprecated_names": true,
"metrics_prefix": "ftO6DySn",
"prometheus_retention_time": "15s",
"statsd_address": "drce87cy",
"statsite_address": "HpFwKB8R"
},
@ -2663,7 +2720,6 @@ func TestFullConfig(t *testing.T) {
service_id = "L8G0QNmR"
token = "oo4BCTgJ"
status = "qLykAl5u"
script = "dhGfIF8n"
args = ["f3BemRjy", "e5zgpef7"]
http = "29B93haH"
header = {
@ -2688,7 +2744,6 @@ func TestFullConfig(t *testing.T) {
service_id = "lSulPcyz"
token = "toO59sh8"
status = "9RlWsXMV"
script = "8qbd8tWw"
args = ["4BAJttck", "4D2NPtTQ"]
http = "dohLcyQ2"
header = {
@ -2712,7 +2767,6 @@ func TestFullConfig(t *testing.T) {
service_id = "CmUUcRna"
token = "a3nQzHuy"
status = "irj26nf3"
script = "FJsI1oXt"
args = ["9s526ogY", "gSlOHj1w"]
http = "yzhgsQ7Y"
header = {
@ -2800,6 +2854,8 @@ func TestFullConfig(t *testing.T) {
}
protocol = 30793
raft_protocol = 19016
raft_snapshot_threshold = 16384
raft_snapshot_interval = "30s"
reconnect_timeout = "23739s"
reconnect_timeout_wan = "26694s"
recursors = [ "63.38.39.58", "92.49.18.18" ]
@ -2834,17 +2890,19 @@ func TestFullConfig(t *testing.T) {
service = {
id = "dLOXpSCI"
name = "o1ynPkp0"
meta = {
mymeta = "data"
}
tags = ["nkwshvM5", "NTDWn3ek"]
address = "cOlSOhbp"
token = "msy7iWER"
port = 24237
enable_tag_override = true
check = {
check_id = "RMi85Dv8"
id = "RMi85Dv8"
name = "iehanzuq"
status = "rCvn53TH"
notes = "fti5lfF3"
script = "rtj34nfd"
args = ["16WRUmwS", "QWk7j7ae"]
http = "dl3Fgme3"
header = {
@ -2867,7 +2925,6 @@ func TestFullConfig(t *testing.T) {
name = "sgV4F7Pk"
notes = "yP5nKbW0"
status = "7oLMEyfu"
script = "NlUQ3nTE"
args = ["5wEZtZpv", "0Ihyk8cS"]
http = "KyDjGY9H"
header = {
@ -2889,7 +2946,6 @@ func TestFullConfig(t *testing.T) {
name = "IEqrzrsd"
notes = "SVqApqeM"
status = "XXkVoZXt"
script = "IXLZTM6E"
args = ["wD05Bvao", "rLYB7kQC"]
http = "kyICZsn8"
header = {
@ -2918,11 +2974,10 @@ func TestFullConfig(t *testing.T) {
port = 72219
enable_tag_override = true
check = {
check_id = "qmfeO5if"
id = "qmfeO5if"
name = "atDGP7n5"
status = "pDQKEhWL"
notes = "Yt8EDLev"
script = "MDu7wjlD"
args = ["81EDZLPa", "bPY5X8xd"]
http = "qzHYvmJO"
header = {
@ -2954,7 +3009,6 @@ func TestFullConfig(t *testing.T) {
name = "9OOS93ne"
notes = "CQy86DH0"
status = "P0SWDvrk"
script = "6BhLJ7R9"
args = ["EXvkYIuG", "BATOyt6h"]
http = "u97ByEiW"
header = {
@ -2976,7 +3030,6 @@ func TestFullConfig(t *testing.T) {
name = "PQSaPWlT"
notes = "jKChDOdl"
status = "5qFz6OZn"
script = "PbdxFZ3K"
args = ["NMtYWlT9", "vj74JXsm"]
http = "1LBDJhw4"
header = {
@ -3024,8 +3077,8 @@ func TestFullConfig(t *testing.T) {
dogstatsd_tags = [ "3N81zSUB","Xtj8AnXZ" ]
filter_default = true
prefix_filter = [ "+oJotS8XJ","-cazlEhGn" ]
enable_deprecated_names = true
metrics_prefix = "ftO6DySn"
prometheus_retention_time = "15s"
statsd_address = "drce87cy"
statsite_address = "HpFwKB8R"
}
@ -3239,7 +3292,6 @@ func TestFullConfig(t *testing.T) {
ServiceID: "lSulPcyz",
Token: "toO59sh8",
Status: "9RlWsXMV",
Script: "8qbd8tWw",
ScriptArgs: []string{"4BAJttck", "4D2NPtTQ"},
HTTP: "dohLcyQ2",
Header: map[string][]string{
@ -3263,7 +3315,6 @@ func TestFullConfig(t *testing.T) {
ServiceID: "CmUUcRna",
Token: "a3nQzHuy",
Status: "irj26nf3",
Script: "FJsI1oXt",
ScriptArgs: []string{"9s526ogY", "gSlOHj1w"},
HTTP: "yzhgsQ7Y",
Header: map[string][]string{
@ -3287,7 +3338,6 @@ func TestFullConfig(t *testing.T) {
ServiceID: "L8G0QNmR",
Token: "oo4BCTgJ",
Status: "qLykAl5u",
Script: "dhGfIF8n",
ScriptArgs: []string{"f3BemRjy", "e5zgpef7"},
HTTP: "29B93haH",
Header: map[string][]string{
@ -3363,6 +3413,8 @@ func TestFullConfig(t *testing.T) {
RPCRateLimit: 12029.43,
RPCMaxBurst: 44848,
RaftProtocol: 19016,
RaftSnapshotThreshold: 16384,
RaftSnapshotInterval: 30 * time.Second,
ReconnectTimeoutLAN: 23739 * time.Second,
ReconnectTimeoutWAN: 26694 * time.Second,
RejoinAfterLeave: true,
@ -3407,7 +3459,6 @@ func TestFullConfig(t *testing.T) {
Name: "atDGP7n5",
Status: "pDQKEhWL",
Notes: "Yt8EDLev",
Script: "MDu7wjlD",
ScriptArgs: []string{"81EDZLPa", "bPY5X8xd"},
HTTP: "qzHYvmJO",
Header: map[string][]string{
@ -3440,7 +3491,6 @@ func TestFullConfig(t *testing.T) {
Name: "9OOS93ne",
Notes: "CQy86DH0",
Status: "P0SWDvrk",
Script: "6BhLJ7R9",
ScriptArgs: []string{"EXvkYIuG", "BATOyt6h"},
HTTP: "u97ByEiW",
Header: map[string][]string{
@ -3462,7 +3512,6 @@ func TestFullConfig(t *testing.T) {
Name: "PQSaPWlT",
Notes: "jKChDOdl",
Status: "5qFz6OZn",
Script: "PbdxFZ3K",
ScriptArgs: []string{"NMtYWlT9", "vj74JXsm"},
HTTP: "1LBDJhw4",
Header: map[string][]string{
@ -3487,6 +3536,7 @@ func TestFullConfig(t *testing.T) {
Tags: []string{"nkwshvM5", "NTDWn3ek"},
Address: "cOlSOhbp",
Token: "msy7iWER",
Meta: map[string]string{"mymeta": "data"},
Port: 24237,
EnableTagOverride: true,
Checks: structs.CheckTypes{
@ -3495,7 +3545,6 @@ func TestFullConfig(t *testing.T) {
Name: "sgV4F7Pk",
Notes: "yP5nKbW0",
Status: "7oLMEyfu",
Script: "NlUQ3nTE",
ScriptArgs: []string{"5wEZtZpv", "0Ihyk8cS"},
HTTP: "KyDjGY9H",
Header: map[string][]string{
@ -3517,7 +3566,6 @@ func TestFullConfig(t *testing.T) {
Name: "IEqrzrsd",
Notes: "SVqApqeM",
Status: "XXkVoZXt",
Script: "IXLZTM6E",
ScriptArgs: []string{"wD05Bvao", "rLYB7kQC"},
HTTP: "kyICZsn8",
Header: map[string][]string{
@ -3539,7 +3587,6 @@ func TestFullConfig(t *testing.T) {
Name: "iehanzuq",
Status: "rCvn53TH",
Notes: "fti5lfF3",
Script: "rtj34nfd",
ScriptArgs: []string{"16WRUmwS", "QWk7j7ae"},
HTTP: "dl3Fgme3",
Header: map[string][]string{
@ -3585,9 +3632,10 @@ func TestFullConfig(t *testing.T) {
TelemetryDogstatsdAddr: "0wSndumK",
TelemetryDogstatsdTags: []string{"3N81zSUB", "Xtj8AnXZ"},
TelemetryFilterDefault: true,
TelemetryAllowedPrefixes: []string{"oJotS8XJ", "consul.consul"},
TelemetryAllowedPrefixes: []string{"oJotS8XJ"},
TelemetryBlockedPrefixes: []string{"cazlEhGn"},
TelemetryMetricsPrefix: "ftO6DySn",
TelemetryPrometheusRetentionTime: 15 * time.Second,
TelemetryStatsdAddr: "drce87cy",
TelemetryStatsiteAddr: "HpFwKB8R",
TLSCipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384},
@ -3958,7 +4006,6 @@ func TestSanitize(t *testing.T) {
"Method": "",
"Name": "zoo",
"Notes": "",
"Script": "",
"ScriptArgs": [],
"ServiceID": "",
"Shell": "",
@ -4048,6 +4095,8 @@ func TestSanitize(t *testing.T) {
"RPCProtocol": 0,
"RPCRateLimit": 0,
"RaftProtocol": 0,
"RaftSnapshotInterval": "0s",
"RaftSnapshotThreshold": 0,
"ReconnectTimeoutLAN": "0s",
"ReconnectTimeoutWAN": "0s",
"RejoinAfterLeave": false,
@ -4090,7 +4139,6 @@ func TestSanitize(t *testing.T) {
"Method": "",
"Name": "blurb",
"Notes": "",
"Script": "",
"ScriptArgs": [],
"Shell": "",
"Status": "",
@ -4140,6 +4188,7 @@ func TestSanitize(t *testing.T) {
"TelemetryDogstatsdTags": [],
"TelemetryFilterDefault": false,
"TelemetryMetricsPrefix": "",
"TelemetryPrometheusRetentionTime": "0s",
"TelemetryStatsdAddr": "",
"TelemetryStatsiteAddr": "",
"TranslateWANAddrs": false,


@ -41,7 +41,6 @@ type aclCacheEntry struct {
// assumes its running in the ACL datacenter, or in a non-ACL datacenter when
// using its replicated ACLs during an outage.
func (s *Server) aclLocalFault(id string) (string, string, error) {
defer metrics.MeasureSince([]string{"consul", "acl", "fault"}, time.Now())
defer metrics.MeasureSince([]string{"acl", "fault"}, time.Now())
// Query the state store.
@ -75,7 +74,6 @@ func (s *Server) resolveToken(id string) (acl.ACL, error) {
if len(authDC) == 0 {
return nil, nil
}
defer metrics.MeasureSince([]string{"consul", "acl", "resolveToken"}, time.Now())
defer metrics.MeasureSince([]string{"acl", "resolveToken"}, time.Now())
// Handle the anonymous token
@ -159,11 +157,9 @@ func (c *aclCache) lookupACL(id, authDC string) (acl.ACL, error) {
// Check for live cache.
if cached != nil && time.Now().Before(cached.Expires) {
metrics.IncrCounter([]string{"consul", "acl", "cache_hit"}, 1)
metrics.IncrCounter([]string{"acl", "cache_hit"}, 1)
return cached.ACL, nil
}
metrics.IncrCounter([]string{"consul", "acl", "cache_miss"}, 1)
metrics.IncrCounter([]string{"acl", "cache_miss"}, 1)
// Attempt to refresh the policy from the ACL datacenter via an RPC.
@ -226,7 +222,6 @@ func (c *aclCache) lookupACL(id, authDC string) (acl.ACL, error) {
// Fake up an ACL datacenter reply and inject it into the cache.
// Note we use the local TTL here, so this'll be used for that
// amount of time even once the ACL datacenter becomes available.
metrics.IncrCounter([]string{"consul", "acl", "replication_hit"}, 1)
metrics.IncrCounter([]string{"acl", "replication_hit"}, 1)
reply.ETag = makeACLETag(parent, policy)
reply.TTL = c.config.ACLTTL
@ -678,6 +673,7 @@ func vetRegisterWithACL(rule acl.ACL, subj *structs.RegisterRequest,
ID: subj.Service.ID,
Service: subj.Service.Service,
Tags: subj.Service.Tags,
Meta: subj.Service.Meta,
Address: subj.Service.Address,
Port: subj.Service.Port,
EnableTagOverride: subj.Service.EnableTagOverride,


@ -145,7 +145,6 @@ func (a *ACL) Apply(args *structs.ACLRequest, reply *string) error {
if done, err := a.srv.forward("ACL.Apply", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "acl", "apply"}, time.Now())
defer metrics.MeasureSince([]string{"acl", "apply"}, time.Now())
// Verify we are allowed to serve this request


@ -149,7 +149,6 @@ func (s *Server) fetchLocalACLs() (structs.ACLs, error) {
// datacenter. The lastIndex parameter is a hint about which remote index we
// have replicated to, so this is expected to block until something changes.
func (s *Server) fetchRemoteACLs(lastRemoteIndex uint64) (*structs.IndexedACLs, error) {
defer metrics.MeasureSince([]string{"consul", "leader", "fetchRemoteACLs"}, time.Now())
defer metrics.MeasureSince([]string{"leader", "fetchRemoteACLs"}, time.Now())
args := structs.DCSpecificRequest{
@ -170,7 +169,6 @@ func (s *Server) fetchRemoteACLs(lastRemoteIndex uint64) (*structs.IndexedACLs,
// UpdateLocalACLs is given a list of changes to apply in order to bring the
// local ACLs in-line with the remote ACLs from the ACL datacenter.
func (s *Server) updateLocalACLs(changes structs.ACLRequests) error {
defer metrics.MeasureSince([]string{"consul", "leader", "updateLocalACLs"}, time.Now())
defer metrics.MeasureSince([]string{"leader", "updateLocalACLs"}, time.Now())
minTimePerOp := time.Second / time.Duration(s.config.ACLReplicationApplyLimit)
@ -218,7 +216,6 @@ func (s *Server) replicateACLs(lastRemoteIndex uint64) (uint64, error) {
// Measure everything after the remote query, which can block for long
// periods of time. This metric is a good measure of how expensive the
// replication process is.
defer metrics.MeasureSince([]string{"consul", "leader", "replicateACLs"}, time.Now())
defer metrics.MeasureSince([]string{"leader", "replicateACLs"}, time.Now())
local, err := s.fetchLocalACLs()


@ -55,13 +55,10 @@ func (d *AutopilotDelegate) IsServer(m serf.Member) (*autopilot.ServerInfo, erro
// Heartbeat a metric for monitoring if we're the leader
func (d *AutopilotDelegate) NotifyHealth(health autopilot.OperatorHealthReply) {
if d.server.raft.State() == raft.Leader {
metrics.SetGauge([]string{"consul", "autopilot", "failure_tolerance"}, float32(health.FailureTolerance))
metrics.SetGauge([]string{"autopilot", "failure_tolerance"}, float32(health.FailureTolerance))
if health.Healthy {
metrics.SetGauge([]string{"consul", "autopilot", "healthy"}, 1)
metrics.SetGauge([]string{"autopilot", "healthy"}, 1)
} else {
metrics.SetGauge([]string{"consul", "autopilot", "healthy"}, 0)
metrics.SetGauge([]string{"autopilot", "healthy"}, 0)
}
}


@ -24,7 +24,6 @@ func (c *Catalog) Register(args *structs.RegisterRequest, reply *struct{}) error
if done, err := c.srv.forward("Catalog.Register", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "catalog", "register"}, time.Now())
defer metrics.MeasureSince([]string{"catalog", "register"}, time.Now())
// Verify the args.
@ -117,7 +116,6 @@ func (c *Catalog) Deregister(args *structs.DeregisterRequest, reply *struct{}) e
if done, err := c.srv.forward("Catalog.Deregister", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "catalog", "deregister"}, time.Now())
defer metrics.MeasureSince([]string{"catalog", "deregister"}, time.Now())
// Verify the args
@ -242,7 +240,7 @@ func (c *Catalog) ServiceNodes(args *structs.ServiceSpecificRequest, reply *stru
}
// Verify the arguments
if args.ServiceName == "" {
if args.ServiceName == "" && args.ServiceAddress == "" {
return fmt.Errorf("Must provide service name")
}
@ -258,6 +256,9 @@ func (c *Catalog) ServiceNodes(args *structs.ServiceSpecificRequest, reply *stru
} else {
index, services, err = state.ServiceNodes(ws, args.ServiceName)
}
if args.ServiceAddress != "" {
index, services, err = state.ServiceAddressNodes(ws, args.ServiceAddress)
}
if err != nil {
return err
}
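
The new ServiceAddress branch backs the PTR-record feature noted in the changelog: a reverse lookup can now find services registered with a bare IP and no node. A hypothetical in-package helper, not part of this diff, showing how the field drives the lookup:

```go
package consul

import "github.com/hashicorp/consul/agent/structs"

// lookupByAddress is illustrative only: it queries the catalog by service
// address rather than by service name.
func lookupByAddress(c *Catalog, dc, addr string) (structs.IndexedServiceNodes, error) {
	args := structs.ServiceSpecificRequest{
		Datacenter:     dc,
		ServiceAddress: addr, // e.g. "10.0.0.5", hypothetical
	}
	var out structs.IndexedServiceNodes
	err := c.ServiceNodes(&args, &out)
	return out, err
}
```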
@ -279,19 +280,13 @@ func (c *Catalog) ServiceNodes(args *structs.ServiceSpecificRequest, reply *stru
// Provide some metrics
if err == nil {
metrics.IncrCounterWithLabels([]string{"consul", "catalog", "service", "query"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}})
metrics.IncrCounterWithLabels([]string{"catalog", "service", "query"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}})
if args.ServiceTag != "" {
metrics.IncrCounterWithLabels([]string{"consul", "catalog", "service", "query-tag"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}, {Name: "tag", Value: args.ServiceTag}})
metrics.IncrCounterWithLabels([]string{"catalog", "service", "query-tag"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}, {Name: "tag", Value: args.ServiceTag}})
}
if len(reply.ServiceNodes) == 0 {
metrics.IncrCounterWithLabels([]string{"consul", "catalog", "service", "not-found"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}})
metrics.IncrCounterWithLabels([]string{"catalog", "service", "not-found"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}})
}


@ -73,6 +73,9 @@ type Client struct {
shutdown bool
shutdownCh chan struct{}
shutdownLock sync.Mutex
// embedded struct to hold all the enterprise specific data
EnterpriseClient
}
// NewClient is used to construct a new Consul client from the
@ -131,7 +134,12 @@ func NewClientLogger(config *Config, logger *log.Logger) (*Client, error) {
shutdownCh: make(chan struct{}),
}
c.rpcLimiter.Store(rate.NewLimiter(config.RPCRate, config.RPCMaxBurst))
c.rpcLimiter.Store(rate.NewLimiter(config.RPCRate, config.RPCMaxBurst))
if err := c.initEnterprise(); err != nil {
c.Shutdown()
return nil, err
}
// Initialize the LAN Serf
c.serf, err = c.setupSerf(config.SerfLANConfig,
@ -149,6 +157,11 @@ func NewClientLogger(config *Config, logger *log.Logger) (*Client, error) {
// handlers depend on the router and the router depends on Serf.
go c.lanEventHandler()
if err := c.startEnterprise(); err != nil {
c.Shutdown()
return nil, err
}
return c, nil
}
@ -251,10 +264,8 @@ TRY:
}
// Enforce the RPC limit.
metrics.IncrCounter([]string{"consul", "client", "rpc"}, 1)
metrics.IncrCounter([]string{"client", "rpc"}, 1)
if !c.rpcLimiter.Load().(*rate.Limiter).Allow() {
metrics.IncrCounter([]string{"consul", "client", "rpc", "exceeded"}, 1)
metrics.IncrCounter([]string{"client", "rpc", "exceeded"}, 1)
return structs.ErrRPCRateExceeded
}
@ -295,10 +306,8 @@ func (c *Client) SnapshotRPC(args *structs.SnapshotRequest, in io.Reader, out io
}
// Enforce the RPC limit.
metrics.IncrCounter([]string{"consul", "client", "rpc"}, 1)
metrics.IncrCounter([]string{"client", "rpc"}, 1)
if !c.rpcLimiter.Load().(*rate.Limiter).Allow() {
metrics.IncrCounter([]string{"consul", "client", "rpc", "exceeded"}, 1)
metrics.IncrCounter([]string{"client", "rpc", "exceeded"}, 1)
return structs.ErrRPCRateExceeded
}
@ -348,6 +357,17 @@ func (c *Client) Stats() map[string]map[string]string {
"serf_lan": c.serf.Stats(),
"runtime": runtimeStats(),
}
for outerKey, outerValue := range c.enterpriseStats() {
if _, ok := stats[outerKey]; ok {
for innerKey, innerValue := range outerValue {
stats[outerKey][innerKey] = innerValue
}
} else {
stats[outerKey] = outerValue
}
}
return stats
}


@ -135,6 +135,8 @@ func (c *Client) localEvent(event serf.UserEvent) {
c.config.UserEventHandler(event)
}
default:
c.logger.Printf("[WARN] consul: Unhandled local event: %v", event)
if !c.handleEnterpriseUserEvents(event) {
c.logger.Printf("[WARN] consul: Unhandled local event: %v", event)
}
}
}


@ -448,8 +448,11 @@ func DefaultConfig() *Config {
// Disable shutdown on removal
conf.RaftConfig.ShutdownOnRemove = false
// Check every 5 seconds to see if there are enough new entries for a snapshot
conf.RaftConfig.SnapshotInterval = 5 * time.Second
// Check every 30 seconds to see if there are enough new entries for a snapshot, can be overridden
conf.RaftConfig.SnapshotInterval = 30 * time.Second
// Snapshots are created every 16384 entries by default, can be overridden
conf.RaftConfig.SnapshotThreshold = 16384
return conf
}

View File

@ -0,0 +1,25 @@
// +build !ent
package consul
import (
"github.com/hashicorp/serf/serf"
)
type EnterpriseClient struct{}
func (c *Client) initEnterprise() error {
return nil
}
func (c *Client) startEnterprise() error {
return nil
}
func (c *Client) handleEnterpriseUserEvents(event serf.UserEvent) bool {
return false
}
func (c *Client) enterpriseStats() map[string]map[string]string {
return nil
}

View File

@ -0,0 +1,32 @@
// +build !ent
package consul
import (
"net"
"github.com/hashicorp/consul/agent/pool"
"github.com/hashicorp/serf/serf"
)
type EnterpriseServer struct{}
func (s *Server) initEnterprise() error {
return nil
}
func (s *Server) startEnterprise() error {
return nil
}
func (s *Server) handleEnterpriseUserEvents(event serf.UserEvent) bool {
return false
}
func (s *Server) handleEnterpriseRPCConn(rtype pool.RPCType, conn net.Conn, isTLS bool) bool {
return false
}
func (s *Server) enterpriseStats() map[string]map[string]string {
return nil
}
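
These no-op stubs compile only when the `ent` build tag is absent; an enterprise build ships a file with the inverse constraint providing real implementations. A sketch of the pattern with hypothetical file names (note the blank line that Go requires between the build constraint and the package clause):

```
// enterprise_stub_oss.go (hypothetical name)
// +build !ent

package consul

func (s *Server) initEnterprise() error { return nil } // no-op in OSS builds
```

```
// enterprise_impl_ent.go (hypothetical name)
// +build ent

package consul

func (s *Server) initEnterprise() error {
	// enterprise-only initialization would live here
	return nil
}
```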

View File

@ -23,7 +23,6 @@ func init() {
}
func (c *FSM) applyRegister(buf []byte, index uint64) interface{} {
defer metrics.MeasureSince([]string{"consul", "fsm", "register"}, time.Now())
defer metrics.MeasureSince([]string{"fsm", "register"}, time.Now())
var req structs.RegisterRequest
if err := structs.Decode(buf, &req); err != nil {
@ -39,7 +38,6 @@ func (c *FSM) applyRegister(buf []byte, index uint64) interface{} {
}
func (c *FSM) applyDeregister(buf []byte, index uint64) interface{} {
defer metrics.MeasureSince([]string{"consul", "fsm", "deregister"}, time.Now())
defer metrics.MeasureSince([]string{"fsm", "deregister"}, time.Now())
var req structs.DeregisterRequest
if err := structs.Decode(buf, &req); err != nil {
@ -73,8 +71,6 @@ func (c *FSM) applyKVSOperation(buf []byte, index uint64) interface{} {
if err := structs.Decode(buf, &req); err != nil {
panic(fmt.Errorf("failed to decode request: %v", err))
}
defer metrics.MeasureSinceWithLabels([]string{"consul", "fsm", "kvs"}, time.Now(),
[]metrics.Label{{Name: "op", Value: string(req.Op)}})
defer metrics.MeasureSinceWithLabels([]string{"fsm", "kvs"}, time.Now(),
[]metrics.Label{{Name: "op", Value: string(req.Op)}})
switch req.Op {
@ -120,8 +116,6 @@ func (c *FSM) applySessionOperation(buf []byte, index uint64) interface{} {
if err := structs.Decode(buf, &req); err != nil {
panic(fmt.Errorf("failed to decode request: %v", err))
}
defer metrics.MeasureSinceWithLabels([]string{"consul", "fsm", "session"}, time.Now(),
[]metrics.Label{{Name: "op", Value: string(req.Op)}})
defer metrics.MeasureSinceWithLabels([]string{"fsm", "session"}, time.Now(),
[]metrics.Label{{Name: "op", Value: string(req.Op)}})
switch req.Op {
@ -143,8 +137,6 @@ func (c *FSM) applyACLOperation(buf []byte, index uint64) interface{} {
if err := structs.Decode(buf, &req); err != nil {
panic(fmt.Errorf("failed to decode request: %v", err))
}
defer metrics.MeasureSinceWithLabels([]string{"consul", "fsm", "acl"}, time.Now(),
[]metrics.Label{{Name: "op", Value: string(req.Op)}})
defer metrics.MeasureSinceWithLabels([]string{"fsm", "acl"}, time.Now(),
[]metrics.Label{{Name: "op", Value: string(req.Op)}})
switch req.Op {
@ -177,8 +169,6 @@ func (c *FSM) applyTombstoneOperation(buf []byte, index uint64) interface{} {
if err := structs.Decode(buf, &req); err != nil {
panic(fmt.Errorf("failed to decode request: %v", err))
}
defer metrics.MeasureSinceWithLabels([]string{"consul", "fsm", "tombstone"}, time.Now(),
[]metrics.Label{{Name: "op", Value: string(req.Op)}})
defer metrics.MeasureSinceWithLabels([]string{"fsm", "tombstone"}, time.Now(),
[]metrics.Label{{Name: "op", Value: string(req.Op)}})
switch req.Op {
@ -199,7 +189,6 @@ func (c *FSM) applyCoordinateBatchUpdate(buf []byte, index uint64) interface{} {
if err := structs.Decode(buf, &updates); err != nil {
panic(fmt.Errorf("failed to decode batch updates: %v", err))
}
defer metrics.MeasureSince([]string{"consul", "fsm", "coordinate", "batch-update"}, time.Now())
defer metrics.MeasureSince([]string{"fsm", "coordinate", "batch-update"}, time.Now())
if err := c.state.CoordinateBatchUpdate(index, updates); err != nil {
return err
@ -215,8 +204,6 @@ func (c *FSM) applyPreparedQueryOperation(buf []byte, index uint64) interface{}
panic(fmt.Errorf("failed to decode request: %v", err))
}
defer metrics.MeasureSinceWithLabels([]string{"consul", "fsm", "prepared-query"}, time.Now(),
[]metrics.Label{{Name: "op", Value: string(req.Op)}})
defer metrics.MeasureSinceWithLabels([]string{"fsm", "prepared-query"}, time.Now(),
[]metrics.Label{{Name: "op", Value: string(req.Op)}})
switch req.Op {
@ -235,7 +222,6 @@ func (c *FSM) applyTxn(buf []byte, index uint64) interface{} {
if err := structs.Decode(buf, &req); err != nil {
panic(fmt.Errorf("failed to decode request: %v", err))
}
defer metrics.MeasureSince([]string{"consul", "fsm", "txn"}, time.Now())
defer metrics.MeasureSince([]string{"fsm", "txn"}, time.Now())
results, errors := c.state.TxnRW(index, req.Ops)
return structs.TxnResponse{
@ -249,7 +235,6 @@ func (c *FSM) applyAutopilotUpdate(buf []byte, index uint64) interface{} {
if err := structs.Decode(buf, &req); err != nil {
panic(fmt.Errorf("failed to decode request: %v", err))
}
defer metrics.MeasureSince([]string{"consul", "fsm", "autopilot"}, time.Now())
defer metrics.MeasureSince([]string{"fsm", "autopilot"}, time.Now())
if req.CAS {

View File

@ -57,7 +57,6 @@ func registerRestorer(msg structs.MessageType, fn restorer) {
// Persist saves the FSM snapshot out to the given sink.
func (s *snapshot) Persist(sink raft.SnapshotSink) error {
defer metrics.MeasureSince([]string{"consul", "fsm", "persist"}, time.Now())
defer metrics.MeasureSince([]string{"fsm", "persist"}, time.Now())
// Write the header

View File

@ -139,19 +139,13 @@ func (h *Health) ServiceNodes(args *structs.ServiceSpecificRequest, reply *struc
// Provide some metrics
if err == nil {
metrics.IncrCounterWithLabels([]string{"consul", "health", "service", "query"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}})
metrics.IncrCounterWithLabels([]string{"health", "service", "query"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}})
if args.ServiceTag != "" {
metrics.IncrCounterWithLabels([]string{"consul", "health", "service", "query-tag"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}, {Name: "tag", Value: args.ServiceTag}})
metrics.IncrCounterWithLabels([]string{"health", "service", "query-tag"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}, {Name: "tag", Value: args.ServiceTag}})
}
if len(reply.Nodes) == 0 {
metrics.IncrCounterWithLabels([]string{"consul", "health", "service", "not-found"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}})
metrics.IncrCounterWithLabels([]string{"health", "service", "not-found"}, 1,
[]metrics.Label{{Name: "service", Value: args.ServiceName}})
}

View File

@ -81,7 +81,6 @@ func (k *KVS) Apply(args *structs.KVSRequest, reply *bool) error {
if done, err := k.srv.forward("KVS.Apply", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "kvs", "apply"}, time.Now())
defer metrics.MeasureSince([]string{"kvs", "apply"}, time.Now())
// Perform the pre-apply checks.

View File

@ -116,7 +116,6 @@ RECONCILE:
s.logger.Printf("[ERR] consul: failed to wait for barrier: %v", err)
goto WAIT
}
metrics.MeasureSince([]string{"consul", "leader", "barrier"}, start)
metrics.MeasureSince([]string{"leader", "barrier"}, start)
// Check if we need to handle initial leadership actions
@ -183,7 +182,6 @@ WAIT:
// previously inflight transactions have been committed and that our
// state is up-to-date.
func (s *Server) establishLeadership() error {
defer metrics.MeasureSince([]string{"consul", "leader", "establish_leadership"}, time.Now())
// This will create the anonymous token and master token (if that is
// configured).
if err := s.initializeACL(); err != nil {
@ -219,7 +217,6 @@ func (s *Server) establishLeadership() error {
// revokeLeadership is invoked once we step down as leader.
// This is used to cleanup any state that may be specific to a leader.
func (s *Server) revokeLeadership() error {
defer metrics.MeasureSince([]string{"consul", "leader", "revoke_leadership"}, time.Now())
// Disable the tombstone GC, since it is only useful as a leader
s.tombstoneGC.SetEnabled(false)
@ -444,7 +441,6 @@ func (s *Server) reconcileMember(member serf.Member) error {
s.logger.Printf("[WARN] consul: skipping reconcile of node %v", member)
return nil
}
defer metrics.MeasureSince([]string{"consul", "leader", "reconcileMember"}, time.Now())
defer metrics.MeasureSince([]string{"leader", "reconcileMember"}, time.Now())
var err error
switch member.Status {
@ -805,7 +801,6 @@ func (s *Server) removeConsulServer(m serf.Member, port int) error {
// through Raft to ensure consistency. We do this outside the leader loop
// to avoid blocking.
func (s *Server) reapTombstones(index uint64) {
defer metrics.MeasureSince([]string{"consul", "leader", "reapTombstones"}, time.Now())
defer metrics.MeasureSince([]string{"leader", "reapTombstones"}, time.Now())
req := structs.TombstoneRequest{
Datacenter: s.config.Datacenter,

View File

@ -6,6 +6,7 @@ import (
"testing"
"github.com/hashicorp/consul/agent/structs"
"github.com/hashicorp/consul/types"
"github.com/mitchellh/copystructure"
)
@ -32,6 +33,15 @@ var (
"${agent.segment}",
},
},
IgnoreCheckIDs: []types.CheckID{
"${name.full}",
"${name.prefix}",
"${name.suffix}",
"${match(0)}",
"${match(1)}",
"${match(2)}",
"${agent.segment}",
},
Tags: []string{
"${name.full}",
"${name.prefix}",
@ -124,6 +134,7 @@ func TestTemplate_Compile(t *testing.T) {
query.Template.Type = structs.QueryTemplateTypeNamePrefixMatch
query.Template.Regexp = "^(hello)there$"
query.Service.Service = "${name.full}"
query.Service.IgnoreCheckIDs = []types.CheckID{"${match(1)}", "${agent.segment}"}
query.Service.Tags = []string{"${match(1)}", "${agent.segment}"}
backup, err := copystructure.Copy(query)
if err != nil {
@ -151,6 +162,10 @@ func TestTemplate_Compile(t *testing.T) {
},
Service: structs.ServiceQuery{
Service: "hellothere",
IgnoreCheckIDs: []types.CheckID{
"hello",
"segment-foo",
},
Tags: []string{
"hello",
"segment-foo",

View File

@ -32,7 +32,6 @@ func (p *PreparedQuery) Apply(args *structs.PreparedQueryRequest, reply *string)
if done, err := p.srv.forward("PreparedQuery.Apply", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "prepared-query", "apply"}, time.Now())
defer metrics.MeasureSince([]string{"prepared-query", "apply"}, time.Now())
// Validate the ID. We must create new IDs before applying to the Raft
@ -287,7 +286,6 @@ func (p *PreparedQuery) Explain(args *structs.PreparedQueryExecuteRequest,
if done, err := p.srv.forward("PreparedQuery.Explain", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "prepared-query", "explain"}, time.Now())
defer metrics.MeasureSince([]string{"prepared-query", "explain"}, time.Now())
// We have to do this ourselves since we are not doing a blocking RPC.
@ -335,7 +333,6 @@ func (p *PreparedQuery) Execute(args *structs.PreparedQueryExecuteRequest,
if done, err := p.srv.forward("PreparedQuery.Execute", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "prepared-query", "execute"}, time.Now())
defer metrics.MeasureSince([]string{"prepared-query", "execute"}, time.Now())
// We have to do this ourselves since we are not doing a blocking RPC.
@ -393,6 +390,31 @@ func (p *PreparedQuery) Execute(args *structs.PreparedQueryExecuteRequest,
// Respect the magic "_agent" flag.
if qs.Node == "_agent" {
qs.Node = args.Agent.Node
} else if qs.Node == "_ip" {
if args.Source.Ip != "" {
_, nodes, err := state.Nodes(nil)
if err != nil {
return err
}
for _, node := range nodes {
if args.Source.Ip == node.Address {
qs.Node = node.Node
break
}
}
} else {
p.srv.logger.Printf("[WARN] Prepared Query using near=_ip requires " +
"the source IP to be set but none was provided. No distance " +
"sorting will be done.")
}
// Either a source IP was given but we couldn't find the associated node,
// or no source IP was given. In both cases we should wipe the Node value.
if qs.Node == "_ip" {
qs.Node = ""
}
}
// Perform the distance sort
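
The `_ip` branch above resolves the caller's source IP to a catalog node with a linear scan and clears the sentinel when nothing matches, so the later distance sort is skipped. The resolution step in isolation (a sketch; the `node` type and names here are illustrative):

```
package example

type node struct{ Name, Address string }

// resolveNodeByIP mirrors the near=_ip scan: return the first node whose
// advertised address equals the source IP, or "" when none matches.
func resolveNodeByIP(nodes []node, sourceIP string) string {
	for _, n := range nodes {
		if n.Address == sourceIP {
			return n.Name
		}
	}
	return ""
}
```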
@ -446,7 +468,6 @@ func (p *PreparedQuery) ExecuteRemote(args *structs.PreparedQueryExecuteRemoteRe
if done, err := p.srv.forward("PreparedQuery.ExecuteRemote", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "prepared-query", "execute_remote"}, time.Now())
defer metrics.MeasureSince([]string{"prepared-query", "execute_remote"}, time.Now())
// We have to do this ourselves since we are not doing a blocking RPC.
@ -496,7 +517,8 @@ func (p *PreparedQuery) execute(query *structs.PreparedQuery,
}
// Filter out any unhealthy nodes.
nodes = nodes.Filter(query.Service.OnlyPassing)
nodes = nodes.FilterIgnore(query.Service.OnlyPassing,
query.Service.IgnoreCheckIDs)
// Apply the node metadata filters, if any.
if len(query.Service.NodeMeta) > 0 {

View File

@ -17,6 +17,7 @@ import (
"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/testrpc"
"github.com/hashicorp/consul/testutil/retry"
"github.com/hashicorp/consul/types"
"github.com/hashicorp/net-rpc-msgpackrpc"
"github.com/hashicorp/serf/coordinate"
)
@ -2076,6 +2077,41 @@ func TestPreparedQuery_Execute(t *testing.T) {
}
}
// Make the query ignore all our health checks (which have "failing" ID
// implicitly from their name).
query.Query.Service.IgnoreCheckIDs = []types.CheckID{"failing"}
if err := msgpackrpc.CallWithCodec(codec1, "PreparedQuery.Apply", &query, &query.Query.ID); err != nil {
t.Fatalf("err: %v", err)
}
// We should end up with 10 nodes again
{
req := structs.PreparedQueryExecuteRequest{
Datacenter: "dc1",
QueryIDOrName: query.Query.ID,
QueryOptions: structs.QueryOptions{Token: execToken},
}
var reply structs.PreparedQueryExecuteResponse
if err := msgpackrpc.CallWithCodec(codec1, "PreparedQuery.Execute", &req, &reply); err != nil {
t.Fatalf("err: %v", err)
}
if len(reply.Nodes) != 10 ||
reply.Datacenter != "dc1" ||
reply.Service != query.Query.Service.Service ||
!reflect.DeepEqual(reply.DNS, query.Query.DNS) ||
!reply.QueryMeta.KnownLeader {
t.Fatalf("bad: %v", reply)
}
}
// Undo that so all the following tests aren't broken!
query.Query.Service.IgnoreCheckIDs = nil
if err := msgpackrpc.CallWithCodec(codec1, "PreparedQuery.Apply", &query, &query.Query.ID); err != nil {
t.Fatalf("err: %v", err)
}
// Make the query more picky by adding a tag filter. This just proves we
// call into the tag filter, it is tested more thoroughly in a separate
// test.

View File

@ -59,7 +59,6 @@ func (s *Server) listen(listener net.Listener) {
}
go s.handleConn(conn, false)
metrics.IncrCounter([]string{"consul", "rpc", "accept_conn"}, 1)
metrics.IncrCounter([]string{"rpc", "accept_conn"}, 1)
}
}
@ -97,7 +96,6 @@ func (s *Server) handleConn(conn net.Conn, isTLS bool) {
s.handleConsulConn(conn)
case pool.RPCRaft:
metrics.IncrCounter([]string{"consul", "rpc", "raft_handoff"}, 1)
metrics.IncrCounter([]string{"rpc", "raft_handoff"}, 1)
s.raftLayer.Handoff(conn)
@ -117,9 +115,10 @@ func (s *Server) handleConn(conn net.Conn, isTLS bool) {
s.handleSnapshotConn(conn)
default:
s.logger.Printf("[ERR] consul.rpc: unrecognized RPC byte: %v %s", typ, logConn(conn))
conn.Close()
return
if !s.handleEnterpriseRPCConn(typ, conn, isTLS) {
s.logger.Printf("[ERR] consul.rpc: unrecognized RPC byte: %v %s", typ, logConn(conn))
conn.Close()
}
}
}
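
OSS builds always return false from `handleEnterpriseRPCConn` (see the stub earlier), so an unrecognized first byte is logged and the connection dropped; an enterprise build can claim extra RPC types before that happens. A purely hypothetical enterprise-side shape:

```
func (s *Server) handleEnterpriseRPCConn(rtype pool.RPCType, conn net.Conn, isTLS bool) bool {
	switch rtype {
	case rpcTypeEnterpriseOnly: // hypothetical extra RPC byte
		go s.handleEnterpriseOnlyConn(conn) // hypothetical handler
		return true
	default:
		return false
	}
}
```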
@ -156,12 +155,10 @@ func (s *Server) handleConsulConn(conn net.Conn) {
if err := s.rpcServer.ServeRequest(rpcCodec); err != nil {
if err != io.EOF && !strings.Contains(err.Error(), "closed") {
s.logger.Printf("[ERR] consul.rpc: RPC error: %v %s", err, logConn(conn))
metrics.IncrCounter([]string{"consul", "rpc", "request_error"}, 1)
metrics.IncrCounter([]string{"rpc", "request_error"}, 1)
}
return
}
metrics.IncrCounter([]string{"consul", "rpc", "request"}, 1)
metrics.IncrCounter([]string{"rpc", "request"}, 1)
}
}
@ -288,8 +285,6 @@ func (s *Server) forwardDC(method, dc string, args interface{}, reply interface{
return structs.ErrNoDCPath
}
metrics.IncrCounterWithLabels([]string{"consul", "rpc", "cross-dc"}, 1,
[]metrics.Label{{Name: "datacenter", Value: dc}})
metrics.IncrCounterWithLabels([]string{"rpc", "cross-dc"}, 1,
[]metrics.Label{{Name: "datacenter", Value: dc}})
if err := s.connPool.RPC(dc, server.Addr, server.Version, method, server.UseTLS, args, reply); err != nil {
@ -401,7 +396,6 @@ RUN_QUERY:
}
// Run the query.
metrics.IncrCounter([]string{"consul", "rpc", "query"}, 1)
metrics.IncrCounter([]string{"rpc", "query"}, 1)
// Operate on a consistent set of state. This makes sure that the
@ -452,7 +446,6 @@ func (s *Server) setQueryMeta(m *structs.QueryMeta) {
// consistentRead is used to ensure we do not perform a stale
// read. This is done by verifying leadership before the read.
func (s *Server) consistentRead() error {
defer metrics.MeasureSince([]string{"consul", "rpc", "consistentRead"}, time.Now())
defer metrics.MeasureSince([]string{"rpc", "consistentRead"}, time.Now())
future := s.raft.VerifyLeader()
if err := future.Error(); err != nil {

View File

@ -59,7 +59,6 @@ func (s *Server) floodSegments(config *Config) {
// all live nodes are registered, all failed nodes are marked as such, and all
// left nodes are de-registered.
func (s *Server) reconcile() (err error) {
defer metrics.MeasureSince([]string{"consul", "leader", "reconcile"}, time.Now())
defer metrics.MeasureSince([]string{"leader", "reconcile"}, time.Now())
members := s.serfLAN.Members()
knownMembers := make(map[string]struct{})

View File

@ -208,6 +208,9 @@ type Server struct {
shutdown bool
shutdownCh chan struct{}
shutdownLock sync.Mutex
// embedded struct to hold all the enterprise specific data
EnterpriseServer
}
func NewServer(config *Config) (*Server, error) {
@ -297,6 +300,12 @@ func NewServerLogger(config *Config, logger *log.Logger, tokens *token.Store) (*
shutdownCh: shutdownCh,
}
// Initialize enterprise specific server functionality
if err := s.initEnterprise(); err != nil {
s.Shutdown()
return nil, err
}
// Initialize the stats fetcher that autopilot will use.
s.statsFetcher = NewStatsFetcher(logger, s.connPool, s.config.Datacenter)
@ -338,6 +347,12 @@ func NewServerLogger(config *Config, logger *log.Logger, tokens *token.Store) (*
return nil, fmt.Errorf("Failed to start Raft: %v", err)
}
// Start enterprise specific functionality
if err := s.startEnterprise(); err != nil {
s.Shutdown()
return nil, err
}
// Serf and dynamic bind ports
//
// The LAN serf cluster announces the port of the WAN serf cluster
@ -1019,6 +1034,17 @@ func (s *Server) Stats() map[string]map[string]string {
if s.serfWAN != nil {
stats["serf_wan"] = s.serfWAN.Stats()
}
for outerKey, outerValue := range s.enterpriseStats() {
if _, ok := stats[outerKey]; ok {
for innerKey, innerValue := range outerValue {
stats[outerKey][innerKey] = innerValue
}
} else {
stats[outerKey] = outerValue
}
}
return stats
}

View File

@ -198,7 +198,9 @@ func (s *Server) localEvent(event serf.UserEvent) {
s.config.UserEventHandler(event)
}
default:
s.logger.Printf("[WARN] consul: Unhandled local event: %v", event)
if !s.handleEnterpriseUserEvents(event) {
s.logger.Printf("[WARN] consul: Unhandled local event: %v", event)
}
}
}

View File

@ -23,7 +23,6 @@ func (s *Session) Apply(args *structs.SessionRequest, reply *string) error {
if done, err := s.srv.forward("Session.Apply", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "session", "apply"}, time.Now())
defer metrics.MeasureSince([]string{"session", "apply"}, time.Now())
// Verify the args
@ -222,7 +221,6 @@ func (s *Session) Renew(args *structs.SessionSpecificRequest,
if done, err := s.srv.forward("Session.Renew", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "session", "renew"}, time.Now())
defer metrics.MeasureSince([]string{"session", "renew"}, time.Now())
// Get the session, from local state.

View File

@ -84,7 +84,6 @@ func (s *Server) createSessionTimer(id string, ttl time.Duration) {
// invalidateSession is invoked when a session TTL is reached and we
// need to invalidate the session.
func (s *Server) invalidateSession(id string) {
defer metrics.MeasureSince([]string{"consul", "session_ttl", "invalidate"}, time.Now())
defer metrics.MeasureSince([]string{"session_ttl", "invalidate"}, time.Now())
// Clear the session timer
@ -134,7 +133,6 @@ func (s *Server) sessionStats() {
for {
select {
case <-time.After(5 * time.Second):
metrics.SetGauge([]string{"consul", "session_ttl", "active"}, float32(s.sessionTimers.Len()))
metrics.SetGauge([]string{"session_ttl", "active"}, float32(s.sessionTimers.Len()))
case <-s.shutdownCh:

View File

@ -855,6 +855,36 @@ func serviceTagFilter(sn *structs.ServiceNode, tag string) bool {
return true
}
// ServiceAddressNodes returns the nodes associated with a given service address,
// filtering out services whose address does not match
func (s *Store) ServiceAddressNodes(ws memdb.WatchSet, address string) (uint64, structs.ServiceNodes, error) {
tx := s.db.Txn(false)
defer tx.Abort()
// List all the services.
services, err := tx.Get("services", "id")
if err != nil {
return 0, nil, fmt.Errorf("failed service lookup: %s", err)
}
ws.Add(services.WatchCh())
// Gather all the services and apply the address filter.
var results structs.ServiceNodes
for service := services.Next(); service != nil; service = services.Next() {
svc := service.(*structs.ServiceNode)
if svc.ServiceAddress == address {
results = append(results, svc)
}
}
// Fill in the node details.
results, err = s.parseServiceNodes(tx, ws, results)
if err != nil {
return 0, nil, fmt.Errorf("failed parsing service nodes: %s", err)
}
return 0, results, nil
}
// parseServiceNodes iterates over a services query and fills in the node details,
// returning a ServiceNodes slice.
func (s *Store) parseServiceNodes(tx *memdb.Txn, ws memdb.WatchSet, services structs.ServiceNodes) (structs.ServiceNodes, error) {
@ -1089,6 +1119,21 @@ func (s *Store) EnsureCheck(idx uint64, hc *structs.HealthCheck) error {
return nil
}
// updateAllServiceIndexesOfNode updates the Raft index of all the services associated with this node
func (s *Store) updateAllServiceIndexesOfNode(tx *memdb.Txn, idx uint64, nodeID string) error {
services, err := tx.Get("services", "node", nodeID)
if err != nil {
return fmt.Errorf("failed updating services for node %s: %s", nodeID, err)
}
for service := services.Next(); service != nil; service = services.Next() {
svc := service.(*structs.ServiceNode).ToNodeService()
if err := tx.Insert("index", &IndexEntry{serviceIndexName(svc.Service), idx}); err != nil {
return fmt.Errorf("failed updating index: %s", err)
}
}
return nil
}
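
Bumping a per-service `IndexEntry` here is what lets blocking queries on a single service wake up without watching the global `services` index. The key is derived by this package's `serviceIndexName` helper, which (as an assumption about its shape) prefixes the service name:

```
// Assumed shape of the helper used above.
func serviceIndexName(name string) string {
	return fmt.Sprintf("service.%s", name)
}
```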
// ensureCheckTransaction is used as the inner method to handle inserting
// a health check into the state store. It ensures safety against inserting
// checks with no matching node or service.
@ -1142,15 +1187,9 @@ func (s *Store) ensureCheckTxn(tx *memdb.Txn, idx uint64, hc *structs.HealthChec
}
} else {
// Update the status for all the services associated with this node
services, err := tx.Get("services", "node", hc.Node)
err = s.updateAllServiceIndexesOfNode(tx, idx, hc.Node)
if err != nil {
return fmt.Errorf("failed updating services for node %s: %s", hc.Node, err)
}
for service := services.Next(); service != nil; service = services.Next() {
svc := service.(*structs.ServiceNode).ToNodeService()
if err := tx.Insert("index", &IndexEntry{serviceIndexName(svc.Service), idx}); err != nil {
return fmt.Errorf("failed updating index: %s", err)
}
return err
}
}
@ -1393,19 +1432,20 @@ func (s *Store) deleteCheckTxn(tx *memdb.Txn, idx uint64, node string, checkID t
return nil
}
existing := hc.(*structs.HealthCheck)
if existing != nil && existing.ServiceID != "" {
service, err := tx.First("services", "id", node, existing.ServiceID)
if err != nil {
return fmt.Errorf("failed service lookup: %s", err)
}
if service == nil {
return ErrMissingService
}
// Updated index of service
svc := service.(*structs.ServiceNode)
if err = tx.Insert("index", &IndexEntry{serviceIndexName(svc.ServiceName), idx}); err != nil {
return fmt.Errorf("failed updating index: %s", err)
if existing != nil {
// If the check is linked to a service, update that service's index;
// otherwise update the indexes of all services on this node
if existing.ServiceID != "" {
if err = tx.Insert("index", &IndexEntry{serviceIndexName(existing.ServiceName), idx}); err != nil {
return fmt.Errorf("failed updating index: %s", err)
}
} else {
err = s.updateAllServiceIndexesOfNode(tx, idx, existing.Node)
if err != nil {
return fmt.Errorf("Failed to update services linked to deleted healthcheck: %s", err)
}
if err := tx.Insert("index", &IndexEntry{"services", idx}); err != nil {
return fmt.Errorf("failed updating index: %s", err)
}
}
}

View File

@ -2132,6 +2132,7 @@ func TestStateStore_DeleteCheck(t *testing.T) {
// Register a node and a node-level health check.
testRegisterNode(t, s, 1, "node1")
testRegisterCheck(t, s, 2, "node1", "", "check1", api.HealthPassing)
testRegisterService(t, s, 2, "node1", "service1")
// Make sure the check is there.
ws := memdb.NewWatchSet()
@ -2143,13 +2144,23 @@ func TestStateStore_DeleteCheck(t *testing.T) {
t.Fatalf("bad: %#v", checks)
}
ensureServiceVersion(t, s, ws, "service1", 2, 1)
// Delete the check.
if err := s.DeleteCheck(3, "node1", "check1"); err != nil {
t.Fatalf("err: %s", err)
}
if idx, check, err := s.NodeCheck("node1", "check1"); idx != 3 || err != nil || check != nil {
t.Fatalf("Node check should have been deleted idx=%d, node=%v, err=%s", idx, check, err)
}
if idx := s.maxIndex("checks"); idx != 3 {
t.Fatalf("bad index for checks: %d", idx)
}
if !watchFired(ws) {
t.Fatalf("bad")
}
// All services linked to this node should have their index updated
ensureServiceVersion(t, s, ws, "service1", 3, 1)
// Check is gone
ws = memdb.NewWatchSet()

View File

@ -46,7 +46,6 @@ func (t *Txn) Apply(args *structs.TxnRequest, reply *structs.TxnResponse) error
if done, err := t.srv.forward("Txn.Apply", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "txn", "apply"}, time.Now())
defer metrics.MeasureSince([]string{"txn", "apply"}, time.Now())
// Run the pre-checks before we send the transaction into Raft.
@ -90,7 +89,6 @@ func (t *Txn) Read(args *structs.TxnReadRequest, reply *structs.TxnReadResponse)
if done, err := t.srv.forward("Txn.Read", args, args, reply); done {
return err
}
defer metrics.MeasureSince([]string{"consul", "txn", "read"}, time.Now())
defer metrics.MeasureSince([]string{"txn", "read"}, time.Now())
// We have to do this ourselves since we are not doing a blocking RPC.

View File

@ -12,6 +12,7 @@ import (
"regexp"
"github.com/armon/go-metrics"
"github.com/coredns/coredns/plugin/pkg/dnsutil"
"github.com/hashicorp/consul/agent/config"
"github.com/hashicorp/consul/agent/consul"
"github.com/hashicorp/consul/agent/structs"
@ -158,8 +159,6 @@ START:
func (d *DNSServer) handlePtr(resp dns.ResponseWriter, req *dns.Msg) {
q := req.Question[0]
defer func(s time.Time) {
metrics.MeasureSinceWithLabels([]string{"consul", "dns", "ptr_query"}, s,
[]metrics.Label{{Name: "node", Value: d.agent.config.NodeName}})
metrics.MeasureSinceWithLabels([]string{"dns", "ptr_query"}, s,
[]metrics.Label{{Name: "node", Value: d.agent.config.NodeName}})
d.logger.Printf("[DEBUG] dns: request for %v (%v) from client %s (%s)",
@ -209,6 +208,34 @@ func (d *DNSServer) handlePtr(resp dns.ResponseWriter, req *dns.Msg) {
}
}
// Only look at the services if we didn't find a node
if len(m.Answer) == 0 {
// lookup the service address
serviceAddress := dnsutil.ExtractAddressFromReverse(qName)
sargs := structs.ServiceSpecificRequest{
Datacenter: datacenter,
QueryOptions: structs.QueryOptions{
Token: d.agent.tokens.UserToken(),
AllowStale: d.config.AllowStale,
},
ServiceAddress: serviceAddress,
}
var sout structs.IndexedServiceNodes
if err := d.agent.RPC("Catalog.ServiceNodes", &sargs, &sout); err == nil {
for _, n := range sout.ServiceNodes {
if n.ServiceAddress == serviceAddress {
ptr := &dns.PTR{
Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypePTR, Class: dns.ClassINET, Ttl: 0},
Ptr: fmt.Sprintf("%s.service.%s", n.ServiceName, d.domain),
}
m.Answer = append(m.Answer, ptr)
break
}
}
}
}
// nothing found locally, recurse
if len(m.Answer) == 0 {
d.handleRecurse(resp, req)
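
`dnsutil.ExtractAddressFromReverse` (from coredns, imported above) converts the reverse name back into a plain address before the catalog is searched, for example:

```
package main

import (
	"fmt"

	"github.com/coredns/coredns/plugin/pkg/dnsutil"
)

func main() {
	fmt.Println(dnsutil.ExtractAddressFromReverse("2.0.0.127.in-addr.arpa.")) // 127.0.0.2
	fmt.Println(dnsutil.ExtractAddressFromReverse("not-a-reverse-name."))     // "" (empty)
}
```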
@ -230,8 +257,6 @@ func (d *DNSServer) handlePtr(resp dns.ResponseWriter, req *dns.Msg) {
func (d *DNSServer) handleQuery(resp dns.ResponseWriter, req *dns.Msg) {
q := req.Question[0]
defer func(s time.Time) {
metrics.MeasureSinceWithLabels([]string{"consul", "dns", "domain_query"}, s,
[]metrics.Label{{Name: "node", Value: d.agent.config.NodeName}})
metrics.MeasureSinceWithLabels([]string{"dns", "domain_query"}, s,
[]metrics.Label{{Name: "node", Value: d.agent.config.NodeName}})
d.logger.Printf("[DEBUG] dns: request for name %v type %v class %v (took %v) from client %s (%s)",
@ -270,7 +295,7 @@ func (d *DNSServer) handleQuery(resp dns.ResponseWriter, req *dns.Msg) {
m.SetRcode(req, dns.RcodeNotImplemented)
default:
d.dispatch(network, req, m)
d.dispatch(network, resp.RemoteAddr(), req, m)
}
// Handle EDNS
@ -362,7 +387,7 @@ func (d *DNSServer) nameservers(edns bool) (ns []dns.RR, extra []dns.RR) {
}
// dispatch is used to parse a request and invoke the correct handler
func (d *DNSServer) dispatch(network string, req, resp *dns.Msg) {
func (d *DNSServer) dispatch(network string, remoteAddr net.Addr, req, resp *dns.Msg) {
// By default the query is in the default datacenter
datacenter := d.agent.config.Datacenter
@ -439,7 +464,7 @@ PARSE:
// Allow a "." in the query name, just join all the parts.
query := strings.Join(labels[:n-1], ".")
d.preparedQueryLookup(network, datacenter, query, req, resp)
d.preparedQueryLookup(network, datacenter, query, remoteAddr, req, resp)
case "addr":
if n != 2 {
@ -542,7 +567,6 @@ RPC:
d.logger.Printf("[WARN] dns: Query results too stale, re-requesting")
goto RPC
} else if out.LastContact > staleCounterThreshold {
metrics.IncrCounter([]string{"consul", "dns", "stale_queries"}, 1)
metrics.IncrCounter([]string{"dns", "stale_queries"}, 1)
}
}
@ -717,15 +741,39 @@ func syncExtra(index map[string]dns.RR, resp *dns.Msg) {
resp.Extra = extra
}
// dnsBinaryTruncate finds the optimal number of records using a fast binary
// search and returns it, so that the DNS answer stays below the maxSize parameter.
func dnsBinaryTruncate(resp *dns.Msg, maxSize int, index map[string]dns.RR, hasExtra bool) int {
originalAnswer := resp.Answer
startIndex := 0
endIndex := len(resp.Answer) + 1
for endIndex-startIndex > 1 {
median := startIndex + (endIndex-startIndex)/2
resp.Answer = originalAnswer[:median]
if hasExtra {
syncExtra(index, resp)
}
aLen := resp.Len()
if aLen <= maxSize {
if maxSize-aLen < 10 {
// Close enough to maxSize; adding more records would exceed it
return median
}
startIndex = median
} else {
endIndex = median
}
}
return startIndex
}
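
Call pattern, mirroring `TestBinarySearch` further below: index the Extra records, find the cut point, truncate the answer, then re-sync the extras (here `resp` is a `*dns.Msg` and `maxSize` an int assumed to be in scope; `indexRRs` and `syncExtra` are helpers in this file):

```
index := make(map[string]dns.RR, len(resp.Extra))
indexRRs(resp.Extra, index)
cut := dnsBinaryTruncate(resp, maxSize, index, true)
resp.Answer = resp.Answer[:cut]
syncExtra(index, resp)
```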
// trimTCPResponse limits the maximum size of messages to 64k, as that is
// the limit for DNS responses
func (d *DNSServer) trimTCPResponse(req, resp *dns.Msg) (trimmed bool) {
hasExtra := len(resp.Extra) > 0
// There is some overhead, 65535 does not work
maxSize := 65533 // 64k - 2 bytes
// To compute the size properly, we have to disable compression first
compressed := resp.Compress
resp.Compress = false
maxSize := 65523 // 64k - 12 bytes DNS raw overhead
// We avoid some function calls and allocations by only handling the
// extra data when necessary.
@ -733,12 +781,13 @@ func (d *DNSServer) trimTCPResponse(req, resp *dns.Msg) (trimmed bool) {
originalSize := resp.Len()
originalNumRecords := len(resp.Answer)
// Beyond 2500 records, performance gets bad
// Limit the number of records at once, anyway, it won't fit in 64k
// For SRV Records, the max is around 500 records, for A, less than 2k
truncateAt := 2048
// It is not possible to return more than 4k records even with compression.
// Since we are performing a binary search, this cap is not strictly needed,
// but it still improves performance a bit.
truncateAt := 4096
if req.Question[0].Qtype == dns.TypeSRV {
truncateAt = 640
// More than 1024 SRV records do not fit in 64k
truncateAt = 1024
}
if len(resp.Answer) > truncateAt {
resp.Answer = resp.Answer[:truncateAt]
@ -750,9 +799,15 @@ func (d *DNSServer) trimTCPResponse(req, resp *dns.Msg) (trimmed bool) {
truncated := false
// This enforces the given limit on 64k, the max limit for DNS messages
for len(resp.Answer) > 0 && resp.Len() > maxSize {
for len(resp.Answer) > 1 && resp.Len() > maxSize {
truncated = true
resp.Answer = resp.Answer[:len(resp.Answer)-1]
// More than 100 bytes over the limit: find the cut point with a binary search
if resp.Len()-maxSize > 100 {
bestIndex := dnsBinaryTruncate(resp, maxSize, index, hasExtra)
resp.Answer = resp.Answer[:bestIndex]
} else {
resp.Answer = resp.Answer[:len(resp.Answer)-1]
}
if hasExtra {
syncExtra(index, resp)
}
@ -762,8 +817,6 @@ func (d *DNSServer) trimTCPResponse(req, resp *dns.Msg) (trimmed bool) {
req.Question,
len(resp.Answer), originalNumRecords, resp.Len(), originalSize)
}
// Restore compression if any
resp.Compress = compressed
return truncated
}
@ -793,7 +846,10 @@ func trimUDPResponse(req, resp *dns.Msg, udpAnswerLimit int) (trimmed bool) {
// This cuts UDP responses to a useful but limited number of responses.
maxAnswers := lib.MinInt(maxUDPAnswerLimit, udpAnswerLimit)
compress := resp.Compress
if maxSize == defaultMaxUDPSize && numAnswers > maxAnswers {
// Disable compression for the Len computation; this only applies to
// non-eDNS requests (512 bytes)
resp.Compress = false
resp.Answer = resp.Answer[:maxAnswers]
if hasExtra {
syncExtra(index, resp)
@ -806,14 +862,22 @@ func trimUDPResponse(req, resp *dns.Msg, udpAnswerLimit int) (trimmed bool) {
// that will not exceed 512 bytes uncompressed, which is more conservative and
// will allow our responses to be compliant even if some downstream server
// uncompresses them.
compress := resp.Compress
resp.Compress = false
for len(resp.Answer) > 0 && resp.Len() > maxSize {
resp.Answer = resp.Answer[:len(resp.Answer)-1]
// Even when a single record is too big to fit, try to send it anyway
// (useful for 512-byte messages)
for len(resp.Answer) > 1 && resp.Len() > maxSize {
// More than 100 bytes over the limit: find the cut point with a binary search
if resp.Len()-maxSize > 100 {
bestIndex := dnsBinaryTruncate(resp, maxSize, index, hasExtra)
resp.Answer = resp.Answer[:bestIndex]
} else {
resp.Answer = resp.Answer[:len(resp.Answer)-1]
}
if hasExtra {
syncExtra(index, resp)
}
}
// For 512-byte non-eDNS responses, we compute the size uncompressed
// but send the result compressed
resp.Compress = compress
return len(resp.Answer) < numAnswers
@ -852,7 +916,6 @@ func (d *DNSServer) lookupServiceNodes(datacenter, service, tag string) (structs
}
if args.AllowStale && out.LastContact > staleCounterThreshold {
metrics.IncrCounter([]string{"consul", "dns", "stale_queries"}, 1)
metrics.IncrCounter([]string{"dns", "stale_queries"}, 1)
}
@ -917,8 +980,25 @@ func (d *DNSServer) serviceLookup(network, datacenter, service, tag string, req,
}
}
func ednsSubnetForRequest(req *dns.Msg) *dns.EDNS0_SUBNET {
// IsEdns0 returns the EDNS RR if present or nil otherwise
edns := req.IsEdns0()
if edns == nil {
return nil
}
for _, o := range edns.Option {
if subnet, ok := o.(*dns.EDNS0_SUBNET); ok {
return subnet
}
}
return nil
}
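
The client side attaches the subnet as an option on an OPT pseudo-record; the construction used by `TestDNS_PreparedQueryNearIPEDNS` further below, abridged:

```
m := new(dns.Msg)
m.SetQuestion("some.query.we.like.query.consul.", dns.TypeA)
o := new(dns.OPT)
o.Hdr.Name = "."
o.Hdr.Rrtype = dns.TypeOPT
e := new(dns.EDNS0_SUBNET)
e.Code = dns.EDNS0SUBNET
e.Family = 1 // IPv4
e.SourceNetmask = 32
e.Address = net.ParseIP("198.18.0.9").To4()
o.Option = append(o.Option, e)
m.Extra = append(m.Extra, o)
// ednsSubnetForRequest(m) now returns e.
```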
// preparedQueryLookup is used to handle a prepared query.
func (d *DNSServer) preparedQueryLookup(network, datacenter, query string, req, resp *dns.Msg) {
func (d *DNSServer) preparedQueryLookup(network, datacenter, query string, remoteAddr net.Addr, req, resp *dns.Msg) {
// Execute the prepared query.
args := structs.PreparedQueryExecuteRequest{
Datacenter: datacenter,
@ -939,6 +1019,21 @@ func (d *DNSServer) preparedQueryLookup(network, datacenter, query string, req,
},
}
subnet := ednsSubnetForRequest(req)
if subnet != nil {
args.Source.Ip = subnet.Address.String()
} else {
switch v := remoteAddr.(type) {
case *net.UDPAddr:
args.Source.Ip = v.IP.String()
case *net.TCPAddr:
args.Source.Ip = v.IP.String()
case *net.IPAddr:
args.Source.Ip = v.IP.String()
}
}
// TODO (slackpad) - What's a safe limit we can set here? It seems like
// with dup filtering done at this level we need to get everything to
// match the previous behavior. We can optimize by pushing more filtering
@ -971,7 +1066,6 @@ RPC:
d.logger.Printf("[WARN] dns: Query results too stale, re-requesting")
goto RPC
} else if out.LastContact > staleCounterThreshold {
metrics.IncrCounter([]string{"consul", "dns", "stale_queries"}, 1)
metrics.IncrCounter([]string{"dns", "stale_queries"}, 1)
}
}
@ -1194,7 +1288,7 @@ func (d *DNSServer) resolveCNAME(name string) []dns.RR {
resp := &dns.Msg{}
req.SetQuestion(name, dns.TypeANY)
d.dispatch("udp", req, resp)
d.dispatch("udp", nil, req, resp)
return resp.Answer
}

View File

@ -14,8 +14,10 @@ import (
"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/lib"
"github.com/hashicorp/consul/testutil/retry"
"github.com/hashicorp/serf/coordinate"
"github.com/miekg/dns"
"github.com/pascaldekloe/goe/verify"
"github.com/stretchr/testify/require"
)
const (
@ -669,6 +671,196 @@ func TestDNS_ReverseLookup_IPV6(t *testing.T) {
}
}
func TestDNS_ServiceReverseLookup(t *testing.T) {
t.Parallel()
a := NewTestAgent(t.Name(), "")
defer a.Shutdown()
// Register a node with a service.
{
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: "foo",
Address: "127.0.0.1",
Service: &structs.NodeService{
Service: "db",
Tags: []string{"master"},
Port: 12345,
Address: "127.0.0.2",
},
}
var out struct{}
if err := a.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
}
m := new(dns.Msg)
m.SetQuestion("2.0.0.127.in-addr.arpa.", dns.TypeANY)
c := new(dns.Client)
in, _, err := c.Exchange(m, a.DNSAddr())
if err != nil {
t.Fatalf("err: %v", err)
}
if len(in.Answer) != 1 {
t.Fatalf("Bad: %#v", in)
}
ptrRec, ok := in.Answer[0].(*dns.PTR)
if !ok {
t.Fatalf("Bad: %#v", in.Answer[0])
}
if ptrRec.Ptr != "db.service.consul." {
t.Fatalf("Bad: %#v", ptrRec)
}
}
func TestDNS_ServiceReverseLookup_IPV6(t *testing.T) {
t.Parallel()
a := NewTestAgent(t.Name(), "")
defer a.Shutdown()
// Register a node with a service.
{
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: "foo",
Address: "2001:db8::1",
Service: &structs.NodeService{
Service: "db",
Tags: []string{"master"},
Port: 12345,
Address: "2001:db8::ff00:42:8329",
},
}
var out struct{}
if err := a.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
}
m := new(dns.Msg)
m.SetQuestion("9.2.3.8.2.4.0.0.0.0.f.f.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.", dns.TypeANY)
c := new(dns.Client)
in, _, err := c.Exchange(m, a.DNSAddr())
if err != nil {
t.Fatalf("err: %v", err)
}
if len(in.Answer) != 1 {
t.Fatalf("Bad: %#v", in)
}
ptrRec, ok := in.Answer[0].(*dns.PTR)
if !ok {
t.Fatalf("Bad: %#v", in.Answer[0])
}
if ptrRec.Ptr != "db.service.consul." {
t.Fatalf("Bad: %#v", ptrRec)
}
}
func TestDNS_ServiceReverseLookup_CustomDomain(t *testing.T) {
t.Parallel()
a := NewTestAgent(t.Name(), `
domain = "custom"
`)
defer a.Shutdown()
// Register a node with a service.
{
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: "foo",
Address: "127.0.0.1",
Service: &structs.NodeService{
Service: "db",
Tags: []string{"master"},
Port: 12345,
Address: "127.0.0.2",
},
}
var out struct{}
if err := a.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
}
m := new(dns.Msg)
m.SetQuestion("2.0.0.127.in-addr.arpa.", dns.TypeANY)
c := new(dns.Client)
in, _, err := c.Exchange(m, a.DNSAddr())
if err != nil {
t.Fatalf("err: %v", err)
}
if len(in.Answer) != 1 {
t.Fatalf("Bad: %#v", in)
}
ptrRec, ok := in.Answer[0].(*dns.PTR)
if !ok {
t.Fatalf("Bad: %#v", in.Answer[0])
}
if ptrRec.Ptr != "db.service.custom." {
t.Fatalf("Bad: %#v", ptrRec)
}
}
func TestDNS_ServiceReverseLookupNodeAddress(t *testing.T) {
t.Parallel()
a := NewTestAgent(t.Name(), "")
defer a.Shutdown()
// Register a node with a service.
{
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: "foo",
Address: "127.0.0.1",
Service: &structs.NodeService{
Service: "db",
Tags: []string{"master"},
Port: 12345,
Address: "127.0.0.1",
},
}
var out struct{}
if err := a.RPC("Catalog.Register", args, &out); err != nil {
t.Fatalf("err: %v", err)
}
}
m := new(dns.Msg)
m.SetQuestion("1.0.0.127.in-addr.arpa.", dns.TypeANY)
c := new(dns.Client)
in, _, err := c.Exchange(m, a.DNSAddr())
if err != nil {
t.Fatalf("err: %v", err)
}
if len(in.Answer) != 1 {
t.Fatalf("Bad: %#v", in)
}
ptrRec, ok := in.Answer[0].(*dns.PTR)
if !ok {
t.Fatalf("Bad: %#v", in.Answer[0])
}
if ptrRec.Ptr != "foo.node.dc1.consul." {
t.Fatalf("Bad: %#v", ptrRec)
}
}
func TestDNS_ServiceLookup(t *testing.T) {
t.Parallel()
a := NewTestAgent(t.Name(), "")
@ -1835,6 +2027,247 @@ func TestDNS_ServiceLookup_TagPeriod(t *testing.T) {
}
}
func TestDNS_PreparedQueryNearIPEDNS(t *testing.T) {
ipCoord := lib.GenerateCoordinate(1 * time.Millisecond)
serviceNodes := []struct {
name string
address string
coord *coordinate.Coordinate
}{
{"foo1", "198.18.0.1", lib.GenerateCoordinate(1 * time.Millisecond)},
{"foo2", "198.18.0.2", lib.GenerateCoordinate(10 * time.Millisecond)},
{"foo3", "198.18.0.3", lib.GenerateCoordinate(30 * time.Millisecond)},
}
t.Parallel()
a := NewTestAgent(t.Name(), "")
defer a.Shutdown()
added := 0
// Register nodes with a service
for _, cfg := range serviceNodes {
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: cfg.name,
Address: cfg.address,
Service: &structs.NodeService{
Service: "db",
Port: 12345,
},
}
var out struct{}
err := a.RPC("Catalog.Register", args, &out)
require.NoError(t, err)
// Send coordinate updates
coordArgs := structs.CoordinateUpdateRequest{
Datacenter: "dc1",
Node: cfg.name,
Coord: cfg.coord,
}
err = a.RPC("Coordinate.Update", &coordArgs, &out)
require.NoError(t, err)
added += 1
}
fmt.Printf("Added %d service nodes\n", added)
// Register a node without a service
{
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: "bar",
Address: "198.18.0.9",
}
var out struct{}
err := a.RPC("Catalog.Register", args, &out)
require.NoError(t, err)
// Send coordinate updates for a few nodes.
coordArgs := structs.CoordinateUpdateRequest{
Datacenter: "dc1",
Node: "bar",
Coord: ipCoord,
}
err = a.RPC("Coordinate.Update", &coordArgs, &out)
require.NoError(t, err)
}
// Register a prepared query Near = _ip
{
args := &structs.PreparedQueryRequest{
Datacenter: "dc1",
Op: structs.PreparedQueryCreate,
Query: &structs.PreparedQuery{
Name: "some.query.we.like",
Service: structs.ServiceQuery{
Service: "db",
Near: "_ip",
},
},
}
var id string
err := a.RPC("PreparedQuery.Apply", args, &id)
require.NoError(t, err)
}
retry.Run(t, func(r *retry.R) {
m := new(dns.Msg)
m.SetQuestion("some.query.we.like.query.consul.", dns.TypeA)
m.SetEdns0(4096, false)
o := new(dns.OPT)
o.Hdr.Name = "."
o.Hdr.Rrtype = dns.TypeOPT
e := new(dns.EDNS0_SUBNET)
e.Code = dns.EDNS0SUBNET
e.Family = 1
e.SourceNetmask = 32
e.SourceScope = 0
e.Address = net.ParseIP("198.18.0.9").To4()
o.Option = append(o.Option, e)
m.Extra = append(m.Extra, o)
c := new(dns.Client)
in, _, err := c.Exchange(m, a.DNSAddr())
if err != nil {
r.Fatalf("Error with call to dns.Client.Exchange: %s", err)
}
if len(serviceNodes) != len(in.Answer) {
r.Fatalf("Expecting %d A RRs in response, Actual found was %d", len(serviceNodes), len(in.Answer))
}
for i, rr := range in.Answer {
if aRec, ok := rr.(*dns.A); ok {
if actual := aRec.A.String(); serviceNodes[i].address != actual {
r.Fatalf("Expecting A RR #%d = %s, Actual RR was %s", i, serviceNodes[i].address, actual)
}
} else {
r.Fatalf("DNS Answer contained a non-A RR")
}
}
})
}
func TestDNS_PreparedQueryNearIP(t *testing.T) {
ipCoord := lib.GenerateCoordinate(1 * time.Millisecond)
serviceNodes := []struct {
name string
address string
coord *coordinate.Coordinate
}{
{"foo1", "198.18.0.1", lib.GenerateCoordinate(1 * time.Millisecond)},
{"foo2", "198.18.0.2", lib.GenerateCoordinate(10 * time.Millisecond)},
{"foo3", "198.18.0.3", lib.GenerateCoordinate(30 * time.Millisecond)},
}
t.Parallel()
a := NewTestAgent(t.Name(), "")
defer a.Shutdown()
added := 0
// Register nodes with a service
for _, cfg := range serviceNodes {
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: cfg.name,
Address: cfg.address,
Service: &structs.NodeService{
Service: "db",
Port: 12345,
},
}
var out struct{}
err := a.RPC("Catalog.Register", args, &out)
require.NoError(t, err)
// Send coordinate updates
coordArgs := structs.CoordinateUpdateRequest{
Datacenter: "dc1",
Node: cfg.name,
Coord: cfg.coord,
}
err = a.RPC("Coordinate.Update", &coordArgs, &out)
require.NoError(t, err)
added += 1
}
fmt.Printf("Added %d service nodes\n", added)
// Register a node without a service
{
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: "bar",
Address: "198.18.0.9",
}
var out struct{}
err := a.RPC("Catalog.Register", args, &out)
require.NoError(t, err)
// Send coordinate updates for a few nodes.
coordArgs := structs.CoordinateUpdateRequest{
Datacenter: "dc1",
Node: "bar",
Coord: ipCoord,
}
err = a.RPC("Coordinate.Update", &coordArgs, &out)
require.NoError(t, err)
}
// Register a prepared query Near = _ip
{
args := &structs.PreparedQueryRequest{
Datacenter: "dc1",
Op: structs.PreparedQueryCreate,
Query: &structs.PreparedQuery{
Name: "some.query.we.like",
Service: structs.ServiceQuery{
Service: "db",
Near: "_ip",
},
},
}
var id string
err := a.RPC("PreparedQuery.Apply", args, &id)
require.NoError(t, err)
}
retry.Run(t, func(r *retry.R) {
m := new(dns.Msg)
m.SetQuestion("some.query.we.like.query.consul.", dns.TypeA)
c := new(dns.Client)
in, _, err := c.Exchange(m, a.DNSAddr())
if err != nil {
r.Fatalf("Error with call to dns.Client.Exchange: %s", err)
}
if len(serviceNodes) != len(in.Answer) {
r.Fatalf("Expecting %d A RRs in response, Actual found was %d", len(serviceNodes), len(in.Answer))
}
for i, rr := range in.Answer {
if aRec, ok := rr.(*dns.A); ok {
if actual := aRec.A.String(); serviceNodes[i].address != actual {
r.Fatalf("Expecting A RR #%d = %s, Actual RR was %s", i, serviceNodes[i].address, actual)
}
} else {
r.Fatalf("DNS Answer contained a non-A RR")
}
}
})
}
func TestDNS_ServiceLookup_PreparedQueryNamePeriod(t *testing.T) {
t.Parallel()
a := NewTestAgent(t.Name(), "")
@ -2740,6 +3173,46 @@ func TestDNS_ServiceLookup_Randomize(t *testing.T) {
}
}
func TestBinarySearch(t *testing.T) {
t.Parallel()
msgSrc := new(dns.Msg)
msgSrc.Compress = true
msgSrc.SetQuestion("redis.service.consul.", dns.TypeSRV)
for i := 0; i < 5000; i++ {
target := fmt.Sprintf("host-redis-%d-%d.test.acme.com.node.dc1.consul.", i/256, i%256)
msgSrc.Answer = append(msgSrc.Answer, &dns.SRV{Hdr: dns.RR_Header{Name: "redis.service.consul.", Class: 1, Rrtype: dns.TypeSRV, Ttl: 0x3c}, Port: 0x4c57, Target: target})
msgSrc.Extra = append(msgSrc.Extra, &dns.CNAME{Hdr: dns.RR_Header{Name: target, Class: 1, Rrtype: dns.TypeCNAME, Ttl: 0x3c}, Target: fmt.Sprintf("fx.168.%d.%d.", i/256, i%256)})
}
for _, compress := range []bool{true, false} {
for idx, maxSize := range []int{12, 256, 512, 8192, 65535} {
t.Run(fmt.Sprintf("binarySearch %d", maxSize), func(t *testing.T) {
msg := new(dns.Msg)
msgSrc.Compress = compress
msgSrc.SetQuestion("redis.service.consul.", dns.TypeSRV)
msg.Answer = msgSrc.Answer
msg.Extra = msgSrc.Extra
index := make(map[string]dns.RR, len(msg.Extra))
indexRRs(msg.Extra, index)
blen := dnsBinaryTruncate(msg, maxSize, index, true)
msg.Answer = msg.Answer[:blen]
syncExtra(index, msg)
predicted := msg.Len()
buf, err := msg.Pack()
if err != nil {
t.Error(err)
}
if predicted < len(buf) {
t.Fatalf("Bug in DNS library: %d != %d", predicted, len(buf))
}
if len(buf) > maxSize || (idx != 0 && len(buf) < 16) {
t.Fatalf("bad[%d]: %d > %d", idx, len(buf), maxSize)
}
})
}
}
}
func TestDNS_TCP_and_UDP_Truncate(t *testing.T) {
t.Parallel()
a := NewTestAgent(t.Name(), `
@ -2756,7 +3229,7 @@ func TestDNS_TCP_and_UDP_Truncate(t *testing.T) {
args := &structs.RegisterRequest{
Datacenter: "dc1",
Node: fmt.Sprintf("%s-%d.acme.com", service, i),
Address: fmt.Sprintf("127.%d.%d.%d", index, (i / 255), i%255),
Address: fmt.Sprintf("127.%d.%d.%d", 0, (i / 255), i%255),
Service: &structs.NodeService{
Service: service,
Port: 8000,
@ -2797,33 +3270,39 @@ func TestDNS_TCP_and_UDP_Truncate(t *testing.T) {
"tcp",
"udp",
}
for _, qType := range []uint16{dns.TypeANY, dns.TypeA, dns.TypeSRV} {
for _, question := range questions {
for _, protocol := range protocols {
for _, compress := range []bool{true, false} {
t.Run(fmt.Sprintf("lookup %s %s (qType:=%d) compressed=%v", question, protocol, qType, compress), func(t *testing.T) {
m := new(dns.Msg)
m.SetQuestion(question, dns.TypeANY)
if protocol == "udp" {
m.SetEdns0(8192, true)
}
c := new(dns.Client)
c.Net = protocol
m.Compress = compress
in, out, err := c.Exchange(m, a.DNSAddr())
if err != nil && err != dns.ErrTruncated {
t.Fatalf("err: %v", err)
}
for _, maxSize := range []uint16{8192, 65535} {
for _, qType := range []uint16{dns.TypeANY, dns.TypeA, dns.TypeSRV} {
for _, question := range questions {
for _, protocol := range protocols {
for _, compress := range []bool{true, false} {
t.Run(fmt.Sprintf("lookup %s %s (qType:=%d) compressed=%v", question, protocol, qType, compress), func(t *testing.T) {
m := new(dns.Msg)
m.SetQuestion(question, dns.TypeANY)
maxSz := maxSize
if protocol == "udp" {
maxSz = 8192
}
m.SetEdns0(uint16(maxSz), true)
c := new(dns.Client)
c.Net = protocol
m.Compress = compress
in, _, err := c.Exchange(m, a.DNSAddr())
if err != nil && err != dns.ErrTruncated {
t.Fatalf("err: %v", err)
}
// Check for the truncate bit
shouldBeTruncated := numServices > 4095
if shouldBeTruncated != in.Truncated || len(in.Answer) > 2000 || len(in.Answer) < 1 || in.Len() > 65535 {
// Check for the truncate bit
buf, err := m.Pack()
info := fmt.Sprintf("service %s question:=%s (%s) (%d total records) sz:= %d in %v",
service, question, protocol, numServices, len(in.Answer), out)
t.Fatalf("Should have truncated:=%v for %s", shouldBeTruncated, info)
}
})
service, question, protocol, numServices, len(in.Answer), in)
if err != nil {
t.Fatalf("Error while packing: %v ; info:=%s", err, info)
}
if len(buf) > int(maxSz) {
t.Fatalf("len(buf) := %d > maxSz=%d for %v", len(buf), maxSz, info)
}
})
}
}
}
}
@ -3068,17 +3547,17 @@ func testDNSServiceLookupResponseLimits(t *testing.T, answerLimit int, qType uin
case 0:
if (expectedService > 0 && len(in.Answer) != expectedService) ||
(expectedService < -1 && len(in.Answer) < lib.AbsInt(expectedService)) {
return false, fmt.Errorf("%d/%d answers received for type %v for %s", len(in.Answer), answerLimit, qType, question)
return false, fmt.Errorf("%d/%d answers received for type %v for %s, sz:=%d", len(in.Answer), answerLimit, qType, question, in.Len())
}
case 1:
if (expectedQuery > 0 && len(in.Answer) != expectedQuery) ||
(expectedQuery < -1 && len(in.Answer) < lib.AbsInt(expectedQuery)) {
return false, fmt.Errorf("%d/%d answers received for type %v for %s", len(in.Answer), answerLimit, qType, question)
return false, fmt.Errorf("%d/%d answers received for type %v for %s, sz:=%d", len(in.Answer), answerLimit, qType, question, in.Len())
}
case 2:
if (expectedQueryID > 0 && len(in.Answer) != expectedQueryID) ||
(expectedQueryID < -1 && len(in.Answer) < lib.AbsInt(expectedQueryID)) {
return false, fmt.Errorf("%d/%d answers received for type %v for %s", len(in.Answer), answerLimit, qType, question)
return false, fmt.Errorf("%d/%d answers received for type %v for %s, sz:=%d", len(in.Answer), answerLimit, qType, question, in.Len())
}
default:
panic("abort")
@ -3230,7 +3709,7 @@ func TestDNS_ServiceLookup_ARecordLimits(t *testing.T) {
t.Parallel()
err := checkDNSService(t, test.numNodesTotal, test.aRecordLimit, qType, test.expectedAResults, test.udpSize, test.udpAnswerLimit)
if err != nil {
t.Errorf("Expected lookup %s to pass: %v", test.name, err)
t.Fatalf("Expected lookup %s to pass: %v", test.name, err)
}
})
}
@ -3239,7 +3718,7 @@ func TestDNS_ServiceLookup_ARecordLimits(t *testing.T) {
t.Parallel()
err := checkDNSService(t, test.expectedSRVResults, test.aRecordLimit, dns.TypeSRV, test.numNodesTotal, test.udpSize, test.udpAnswerLimit)
if err != nil {
t.Errorf("Expected service SRV lookup %s to pass: %v", test.name, err)
t.Fatalf("Expected service SRV lookup %s to pass: %v", test.name, err)
}
})
}
@ -3285,27 +3764,27 @@ func TestDNS_ServiceLookup_AnswerLimits(t *testing.T) {
}
for _, test := range tests {
test := test // capture loop var
t.Run("A lookup", func(t *testing.T) {
t.Run(fmt.Sprintf("A lookup %v", test), func(t *testing.T) {
t.Parallel()
ok, err := testDNSServiceLookupResponseLimits(t, test.udpAnswerLimit, dns.TypeA, test.expectedAService, test.expectedAQuery, test.expectedAQueryID)
if !ok {
t.Errorf("Expected service A lookup %s to pass: %v", test.name, err)
t.Fatalf("Expected service A lookup %s to pass: %v", test.name, err)
}
})
t.Run("AAAA lookup", func(t *testing.T) {
t.Run(fmt.Sprintf("AAAA lookup %v", test), func(t *testing.T) {
t.Parallel()
ok, err := testDNSServiceLookupResponseLimits(t, test.udpAnswerLimit, dns.TypeAAAA, test.expectedAAAAService, test.expectedAAAAQuery, test.expectedAAAAQueryID)
if !ok {
t.Errorf("Expected service AAAA lookup %s to pass: %v", test.name, err)
t.Fatalf("Expected service AAAA lookup %s to pass: %v", test.name, err)
}
})
t.Run("ANY lookup", func(t *testing.T) {
t.Run(fmt.Sprintf("ANY lookup %v", test), func(t *testing.T) {
t.Parallel()
ok, err := testDNSServiceLookupResponseLimits(t, test.udpAnswerLimit, dns.TypeANY, test.expectedANYService, test.expectedANYQuery, test.expectedANYQueryID)
if !ok {
t.Errorf("Expected service ANY lookup %s to pass: %v", test.name, err)
t.Fatalf("Expected service ANY lookup %s to pass: %v", test.name, err)
}
})
}

View File

@ -0,0 +1,6 @@
// +build !ent
package agent
// enterpriseDelegate has no functions in OSS
type enterpriseDelegate interface{}

View File

@ -7,17 +7,18 @@ import (
"net/http"
"net/http/pprof"
"net/url"
"os"
"regexp"
"strconv"
"strings"
"time"
"github.com/NYTimes/gziphandler"
"github.com/armon/go-metrics"
"github.com/hashicorp/consul/acl"
"github.com/hashicorp/consul/agent/structs"
"github.com/hashicorp/go-cleanhttp"
"github.com/mitchellh/mapstructure"
"github.com/NYTimes/gziphandler"
)
// MethodNotAllowedError should be returned by a handler when the HTTP method is not allowed.
@ -30,6 +31,15 @@ func (e MethodNotAllowedError) Error() string {
return fmt.Sprintf("method %s not allowed", e.Method)
}
// BadRequestError should be returned by a handler when parameters or the payload are not valid
type BadRequestError struct {
Reason string
}
func (e BadRequestError) Error() string {
return fmt.Sprintf("Bad request: %s", e.Reason)
}
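
Endpoints report a malformed request by returning this error; `wrap` (further below) maps it to HTTP 400 with the reason as the response body. A hypothetical endpoint sketch:

```
func (s *HTTPServer) ExampleEndpoint(resp http.ResponseWriter, req *http.Request) (interface{}, error) {
	if req.URL.Query().Get("name") == "" {
		// wrap() turns this into a 400 with the reason in the body.
		return nil, BadRequestError{Reason: "missing query parameter 'name'"}
	}
	return map[string]string{"ok": "true"}, nil
}
```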
// HTTPServer provides an HTTP api for an agent.
type HTTPServer struct {
*http.Server
@ -41,6 +51,18 @@ type HTTPServer struct {
proto string
}
type redirectFS struct {
fs http.FileSystem
}
func (fs *redirectFS) Open(name string) (http.File, error) {
file, err := fs.fs.Open(name)
if err != nil {
file, err = fs.fs.Open("/index.html")
}
return file, err
}
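
`redirectFS` implements the usual single-page-app fallback: any path the UI filesystem cannot serve is answered with `/index.html`, so client-side routes survive a browser refresh. Wiring sketch (the handler registration below does this for the beta UI; the directory path is illustrative):

```
var uifs http.FileSystem = http.Dir("/path/to/ui") // or the bundled assetFS()
uifs = &redirectFS{fs: uifs}
mux.Handle("/ui/", http.StripPrefix("/ui/", http.FileServer(uifs)))
```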
// endpoint is a Consul-specific HTTP handler that takes the usual arguments in
// but returns a response object and error, both of which are handled in a
// common manner by Consul's HTTP server.
@ -110,7 +132,6 @@ func (s *HTTPServer) handler(enableDebug bool) http.Handler {
start := time.Now()
handler(resp, req)
key := append([]string{"http", req.Method}, parts...)
metrics.MeasureSince(append([]string{"consul"}, key...), start)
metrics.MeasureSince(key, start)
}
@ -135,11 +156,32 @@ func (s *HTTPServer) handler(enableDebug bool) http.Handler {
handleFuncMetrics("/debug/pprof/symbol", pprof.Symbol)
}
// Use the custom UI dir if provided.
if s.agent.config.UIDir != "" {
mux.Handle("/ui/", http.StripPrefix("/ui/", http.FileServer(http.Dir(s.agent.config.UIDir))))
} else if s.agent.config.EnableUI {
mux.Handle("/ui/", http.StripPrefix("/ui/", http.FileServer(assetFS())))
if s.IsUIEnabled() {
new_ui, err := strconv.ParseBool(os.Getenv("CONSUL_UI_BETA"))
if err != nil {
new_ui = false
}
var uifs http.FileSystem
// Use the custom UI dir if provided.
if s.agent.config.UIDir != "" {
uifs = http.Dir(s.agent.config.UIDir)
} else {
fs := assetFS()
if new_ui {
fs.Prefix += "/v2/"
} else {
fs.Prefix += "/v1/"
}
uifs = fs
}
if new_ui {
uifs = &redirectFS{fs: uifs}
}
mux.Handle("/ui/", http.StripPrefix("/ui/", http.FileServer(uifs)))
}
// Wrap the whole mux with a handler that bans URLs with non-printable
@ -216,6 +258,11 @@ func (s *HTTPServer) wrap(handler endpoint, methods []string) http.HandlerFunc {
return ok
}
isBadRequest := func(err error) bool {
_, ok := err.(BadRequestError)
return ok
}
addAllowHeader := func(methods []string) {
resp.Header().Add("Allow", strings.Join(methods, ","))
}
@ -236,6 +283,9 @@ func (s *HTTPServer) wrap(handler endpoint, methods []string) http.HandlerFunc {
addAllowHeader(err.(MethodNotAllowedError).Allow)
resp.WriteHeader(http.StatusMethodNotAllowed) // 405
fmt.Fprint(resp, err.Error())
case isBadRequest(err):
resp.WriteHeader(http.StatusBadRequest)
fmt.Fprint(resp, err.Error())
default:
resp.WriteHeader(http.StatusInternalServerError)
fmt.Fprint(resp, err.Error())
@ -498,11 +548,35 @@ func (s *HTTPServer) parseToken(req *http.Request, token *string) {
*token = s.agent.tokens.UserToken()
}
func sourceAddrFromRequest(req *http.Request) string {
xff := req.Header.Get("X-Forwarded-For")
forwardHosts := strings.Split(xff, ",")
if len(forwardHosts) > 0 {
forwardIp := net.ParseIP(strings.TrimSpace(forwardHosts[0]))
if forwardIp != nil {
return forwardIp.String()
}
}
host, _, err := net.SplitHostPort(req.RemoteAddr)
if err != nil {
return ""
}
ip := net.ParseIP(host)
if ip != nil {
return ip.String()
} else {
return ""
}
}
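The precedence here: the first X-Forwarded-For hop wins, then the connection's remote address, then an empty string when neither parses as an IP. An illustrative sketch (addresses made up):

// What sourceAddrFromRequest would return for hypothetical inputs.
req, _ := http.NewRequest("GET", "/v1/health/service/web", nil)
req.RemoteAddr = "10.0.0.5:41234"
req.Header.Set("X-Forwarded-For", "198.18.0.1, 198.19.0.1")
_ = sourceAddrFromRequest(req) // "198.18.0.1", the first (client) hop
req.Header.Del("X-Forwarded-For")
_ = sourceAddrFromRequest(req) // "10.0.0.5", falls back to RemoteAddr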
// parseSource is used to parse the ?near=<node> query parameter, used for
// sorting by RTT based on a source node. We set the source's DC to the target
// DC in the request, if given, or else the agent's DC.
func (s *HTTPServer) parseSource(req *http.Request, source *structs.QuerySource) {
s.parseDC(req, &source.Datacenter)
source.Ip = sourceAddrFromRequest(req)
if node := req.URL.Query().Get("near"); node != "" {
if node == "_agent" {
source.Node = s.agent.config.NodeName

View File

@ -60,6 +60,7 @@ func TestHTTPAPI_MethodNotAllowed_OSS(t *testing.T) {
if err != nil {
t.Fatal("client.Do failed: ", err)
}
defer resp.Body.Close()
allowed := method == "OPTIONS"
for _, allowedMethod := range allowedMethods {

View File

@ -10,6 +10,7 @@ import (
"testing"
"github.com/hashicorp/consul/agent/structs"
"github.com/hashicorp/consul/types"
)
// MockPreparedQuery is a fake endpoint that we inject into the Consul server
@ -87,9 +88,10 @@ func TestPreparedQuery_Create(t *testing.T) {
NearestN: 4,
Datacenters: []string{"dc1", "dc2"},
},
- OnlyPassing: true,
- Tags: []string{"foo", "bar"},
- NodeMeta: map[string]string{"somekey": "somevalue"},
+ IgnoreCheckIDs: []types.CheckID{"broken_check"},
+ OnlyPassing: true,
+ Tags: []string{"foo", "bar"},
+ NodeMeta: map[string]string{"somekey": "somevalue"},
},
DNS: structs.QueryDNSOptions{
TTL: "10s",
@ -122,9 +124,10 @@ func TestPreparedQuery_Create(t *testing.T) {
"NearestN": 4,
"Datacenters": []string{"dc1", "dc2"},
},
"OnlyPassing": true,
"Tags": []string{"foo", "bar"},
"NodeMeta": map[string]string{"somekey": "somevalue"},
"IgnoreCheckIDs": []string{"broken_check"},
"OnlyPassing": true,
"Tags": []string{"foo", "bar"},
"NodeMeta": map[string]string{"somekey": "somevalue"},
},
"DNS": map[string]interface{}{
"TTL": "10s",
@ -325,6 +328,138 @@ func TestPreparedQuery_Execute(t *testing.T) {
}
})
t.Run("", func(t *testing.T) {
a := NewTestAgent(t.Name(), "")
defer a.Shutdown()
m := MockPreparedQuery{
executeFn: func(args *structs.PreparedQueryExecuteRequest, reply *structs.PreparedQueryExecuteResponse) error {
expected := &structs.PreparedQueryExecuteRequest{
Datacenter: "dc1",
QueryIDOrName: "my-id",
Limit: 5,
Source: structs.QuerySource{
Datacenter: "dc1",
Node: "_ip",
Ip: "127.0.0.1",
},
Agent: structs.QuerySource{
Datacenter: a.Config.Datacenter,
Node: a.Config.NodeName,
},
QueryOptions: structs.QueryOptions{
Token: "my-token",
RequireConsistent: true,
},
}
if !reflect.DeepEqual(args, expected) {
t.Fatalf("bad: %v", args)
}
// Just set something so we can tell this is returned.
reply.Failovers = 99
return nil
},
}
if err := a.registerEndpoint("PreparedQuery", &m); err != nil {
t.Fatalf("err: %v", err)
}
body := bytes.NewBuffer(nil)
req, _ := http.NewRequest("GET", "/v1/query/my-id/execute?token=my-token&consistent=true&near=_ip&limit=5", body)
req.Header.Add("X-Forwarded-For", "127.0.0.1")
resp := httptest.NewRecorder()
obj, err := a.srv.PreparedQuerySpecific(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 200 {
t.Fatalf("bad code: %d", resp.Code)
}
r, ok := obj.(structs.PreparedQueryExecuteResponse)
if !ok {
t.Fatalf("unexpected: %T", obj)
}
if r.Failovers != 99 {
t.Fatalf("bad: %v", r)
}
})
t.Run("", func(t *testing.T) {
a := NewTestAgent(t.Name(), "")
defer a.Shutdown()
m := MockPreparedQuery{
executeFn: func(args *structs.PreparedQueryExecuteRequest, reply *structs.PreparedQueryExecuteResponse) error {
expected := &structs.PreparedQueryExecuteRequest{
Datacenter: "dc1",
QueryIDOrName: "my-id",
Limit: 5,
Source: structs.QuerySource{
Datacenter: "dc1",
Node: "_ip",
Ip: "198.18.0.1",
},
Agent: structs.QuerySource{
Datacenter: a.Config.Datacenter,
Node: a.Config.NodeName,
},
QueryOptions: structs.QueryOptions{
Token: "my-token",
RequireConsistent: true,
},
}
if !reflect.DeepEqual(args, expected) {
t.Fatalf("bad: %v", args)
}
// Just set something so we can tell this is returned.
reply.Failovers = 99
return nil
},
}
if err := a.registerEndpoint("PreparedQuery", &m); err != nil {
t.Fatalf("err: %v", err)
}
body := bytes.NewBuffer(nil)
req, _ := http.NewRequest("GET", "/v1/query/my-id/execute?token=my-token&consistent=true&near=_ip&limit=5", body)
req.Header.Add("X-Forwarded-For", "198.18.0.1")
resp := httptest.NewRecorder()
obj, err := a.srv.PreparedQuerySpecific(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 200 {
t.Fatalf("bad code: %d", resp.Code)
}
r, ok := obj.(structs.PreparedQueryExecuteResponse)
if !ok {
t.Fatalf("unexpected: %T", obj)
}
if r.Failovers != 99 {
t.Fatalf("bad: %v", r)
}
req, _ = http.NewRequest("GET", "/v1/query/my-id/execute?token=my-token&consistent=true&near=_ip&limit=5", body)
req.Header.Add("X-Forwarded-For", "198.18.0.1, 198.19.0.1")
resp = httptest.NewRecorder()
obj, err = a.srv.PreparedQuerySpecific(resp, req)
if err != nil {
t.Fatalf("err: %v", err)
}
if resp.Code != 200 {
t.Fatalf("bad code: %d", resp.Code)
}
r, ok = obj.(structs.PreparedQueryExecuteResponse)
if !ok {
t.Fatalf("unexpected: %T", obj)
}
if r.Failovers != 99 {
t.Fatalf("bad: %v", r)
}
})
// Ensure the proper params are set when no special args are passed
t.Run("", func(t *testing.T) {
a := NewTestAgent(t.Name(), "")

View File

@ -6,6 +6,7 @@ import (
"strings"
"time"
"github.com/hashicorp/consul/lib"
discover "github.com/hashicorp/go-discover"
)
@ -67,7 +68,11 @@ func (r *retryJoiner) retryJoin() error {
return nil
}
- disco := discover.Discover{}
+ disco, err := discover.New(discover.WithUserAgent(lib.UserAgent()))
+ if err != nil {
+ return err
+ }
r.logger.Printf("[INFO] agent: Retry join %s is supported for: %s", r.cluster, strings.Join(disco.Names(), " "))
r.logger.Printf("[INFO] agent: Joining %s cluster...", r.cluster)
attempt := 0
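For context, the constructor-based go-discover API adopted here is typically driven as in this hedged sketch (the provider string is illustrative, and it assumes go-discover's Addrs(config, logger) signature of this era):

d, err := discover.New(discover.WithUserAgent(lib.UserAgent()))
if err != nil {
	return err
}
// Each retry-join string names a provider plus its filters;
// logger is a *log.Logger and addrs a []string of discovered hosts.
addrs, err := d.Addrs("provider=aws tag_key=role tag_value=consul", logger)
if err != nil {
	return err
}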

View File

@ -8,9 +8,12 @@ import (
)
func TestGoDiscoverRegistration(t *testing.T) {
- d := discover.Discover{}
+ d, err := discover.New()
+ if err != nil {
+ t.Fatal(err)
+ }
got := d.Names()
want := []string{"aliyun", "aws", "azure", "digitalocean", "gce", "os", "scaleway", "softlayer"}
want := []string{"aliyun", "aws", "azure", "digitalocean", "gce", "os", "scaleway", "softlayer", "triton"}
if !reflect.DeepEqual(got, want) {
t.Fatalf("got go-discover providers %v want %v", got, want)
}

View File

@ -21,7 +21,6 @@ type CheckDefinition struct {
//
// ID (CheckID), Name, Status, Notes
//
- Script string
ScriptArgs []string
HTTP string
Header map[string][]string
@ -63,7 +62,6 @@ func (c *CheckDefinition) CheckType() *CheckType {
Status: c.Status,
Notes: c.Notes,
- Script: c.Script,
ScriptArgs: c.ScriptArgs,
HTTP: c.HTTP,
GRPC: c.GRPC,

View File

@ -83,7 +83,7 @@ func TestCheckDefinitionToCheckType(t *testing.T) {
ServiceID: "svcid",
Token: "tok",
Script: "/bin/foo",
ScriptArgs: []string{"/bin/foo"},
HTTP: "someurl",
TCP: "host:port",
Interval: 1 * time.Second,
@ -100,7 +100,7 @@ func TestCheckDefinitionToCheckType(t *testing.T) {
Status: "green",
Notes: "notes",
Script: "/bin/foo",
ScriptArgs: []string{"/bin/foo"},
HTTP: "someurl",
TCP: "host:port",
Interval: 1 * time.Second,

View File

@ -25,7 +25,6 @@ type CheckType struct {
// fields copied to CheckDefinition
// Update CheckDefinition when adding fields here
- Script string
ScriptArgs []string
HTTP string
Header map[string][]string
@ -70,7 +69,7 @@ func (c *CheckType) Empty() bool {
// IsScript checks if this is a check that execs some kind of script.
func (c *CheckType) IsScript() bool {
return c.Script != "" || len(c.ScriptArgs) > 0
return len(c.ScriptArgs) > 0
}
// IsTTL checks if this is a TTL type

View File

@ -1,5 +1,7 @@
package structs
import "github.com/hashicorp/consul/types"
// QueryDatacenterOptions sets options about how we fail over if there are no
// healthy nodes in the local datacenter.
type QueryDatacenterOptions struct {
@ -34,6 +36,12 @@ type ServiceQuery struct {
// discarded)
OnlyPassing bool
// IgnoreCheckIDs is an optional list of health check IDs to ignore when
// considering which nodes are healthy. It is useful as an emergency measure
// to temporarily override some health check that is producing false negatives
// for example.
IgnoreCheckIDs []types.CheckID
// Near allows the query to always prefer the node nearest the given
// node. If the node does not exist, results are returned in their
// normal randomly-shuffled order. Supplying the magic "_agent" value

View File

@ -14,8 +14,8 @@ func TestAgentStructs_CheckTypes(t *testing.T) {
// Singular Check field works
svc.Check = CheckType{
Script: "/foo/bar",
Interval: 10 * time.Second,
ScriptArgs: []string{"/foo/bar"},
Interval: 10 * time.Second,
}
// Returns HTTP checks
@ -26,8 +26,8 @@ func TestAgentStructs_CheckTypes(t *testing.T) {
// Returns Script checks
svc.Checks = append(svc.Checks, &CheckType{
Script: "/foo/bar",
Interval: 10 * time.Second,
ScriptArgs: []string{"/foo/bar"},
Interval: 10 * time.Second,
})
// Returns TTL checks

View File

@ -258,6 +258,7 @@ type QuerySource struct {
Datacenter string
Segment string
Node string
Ip string
}
// DCSpecificRequest is used to query about a specific DC
@ -278,6 +279,7 @@ type ServiceSpecificRequest struct {
NodeMetaFilters map[string]string
ServiceName string
ServiceTag string
ServiceAddress string
TagFilter bool // Controls tag filtering
Source QuerySource
QueryOptions
@ -580,16 +582,33 @@ func (nodes CheckServiceNodes) Shuffle() {
// check if that option is selected). Note that this returns the filtered
// results AND modifies the receiver for performance.
func (nodes CheckServiceNodes) Filter(onlyPassing bool) CheckServiceNodes {
return nodes.FilterIgnore(onlyPassing, nil)
}
// FilterIgnore removes nodes that are failing health checks just like Filter.
// It also ignores the status of any check with an ID present in ignoreCheckIDs
// as if that check didn't exist. Note that this returns the filtered results
// AND modifies the receiver for performance.
func (nodes CheckServiceNodes) FilterIgnore(onlyPassing bool,
ignoreCheckIDs []types.CheckID) CheckServiceNodes {
n := len(nodes)
OUTER:
for i := 0; i < n; i++ {
node := nodes[i]
INNER:
for _, check := range node.Checks {
for _, ignore := range ignoreCheckIDs {
if check.CheckID == ignore {
// Skip this _check_ but keep looking at other checks for this node.
continue INNER
}
}
if check.Status == api.HealthCritical ||
(onlyPassing && check.Status != api.HealthPassing) {
nodes[i], nodes[n-1] = nodes[n-1], CheckServiceNode{}
n--
i--
// Skip this _node_ now we've swapped it off the end of the list.
continue OUTER
}
}
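FilterIgnore keeps the onlyPassing semantics while treating the listed check IDs as if they did not exist; a hedged usage sketch (the check ID is illustrative):

// Keep only passing nodes, but judge health as though the noisy
// check were not registered at all.
healthy := nodes.FilterIgnore(true, []types.CheckID{"noisy-disk-check"})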

View File

@ -441,6 +441,20 @@ func TestStructs_CheckServiceNodes_Filter(t *testing.T) {
},
},
},
CheckServiceNode{
Node: &Node{
Node: "node4",
Address: "127.0.0.4",
},
Checks: HealthChecks{
// This check has a different ID to the others to ensure it is not
// ignored by accident
&HealthCheck{
CheckID: "failing2",
Status: api.HealthCritical,
},
},
},
}
// Test the case where warnings are allowed.
@ -473,6 +487,26 @@ func TestStructs_CheckServiceNodes_Filter(t *testing.T) {
t.Fatalf("bad: %v", filtered)
}
}
// Allow failing checks to be ignored (note that the test checks have empty
// CheckID which is valid).
{
twiddle := make(CheckServiceNodes, len(nodes))
if n := copy(twiddle, nodes); n != len(nodes) {
t.Fatalf("bad: %d", n)
}
filtered := twiddle.FilterIgnore(true, []types.CheckID{""})
expected := CheckServiceNodes{
nodes[0],
nodes[1],
nodes[2], // Node 3's critical check should be ignored.
// Node 4 should still be failing since it's got a critical check with a
// non-ignored ID.
}
if !reflect.DeepEqual(filtered, expected) {
t.Fatalf("bad: %v", filtered)
}
}
}
func TestStructs_DirEntry_Clone(t *testing.T) {

View File

@ -13,6 +13,7 @@ import (
// ServiceSummary is used to summarize a service
type ServiceSummary struct {
Name string
Tags []string
Nodes []string
ChecksPassing int
ChecksWarning int
@ -147,6 +148,7 @@ func summarizeServices(dump structs.NodeDump) []*ServiceSummary {
nodeServices := make([]*ServiceSummary, len(node.Services))
for idx, service := range node.Services {
sum := getService(service.Service)
sum.Tags = service.Tags
sum.Nodes = append(sum.Nodes, node.Node)
nodeServices[idx] = sum
}

View File

@ -156,9 +156,11 @@ func TestSummarizeServices(t *testing.T) {
Services: []*structs.NodeService{
&structs.NodeService{
Service: "api",
Tags: []string{"tag1", "tag2"},
},
&structs.NodeService{
Service: "web",
Tags: []string{},
},
},
Checks: []*structs.HealthCheck{
@ -182,6 +184,7 @@ func TestSummarizeServices(t *testing.T) {
Services: []*structs.NodeService{
&structs.NodeService{
Service: "web",
Tags: []string{},
},
},
Checks: []*structs.HealthCheck{
@ -197,6 +200,7 @@ func TestSummarizeServices(t *testing.T) {
Services: []*structs.NodeService{
&structs.NodeService{
Service: "cache",
Tags: []string{},
},
},
},
@ -209,6 +213,7 @@ func TestSummarizeServices(t *testing.T) {
expectAPI := &ServiceSummary{
Name: "api",
Tags: []string{"tag1", "tag2"},
Nodes: []string{"foo"},
ChecksPassing: 1,
ChecksWarning: 1,
@ -220,6 +225,7 @@ func TestSummarizeServices(t *testing.T) {
expectCache := &ServiceSummary{
Name: "cache",
Tags: []string{},
Nodes: []string{"zip"},
ChecksPassing: 0,
ChecksWarning: 0,
@ -231,6 +237,7 @@ func TestSummarizeServices(t *testing.T) {
expectWeb := &ServiceSummary{
Name: "web",
Tags: []string{},
Nodes: []string{"bar", "foo"},
ChecksPassing: 2,
ChecksWarning: 0,

View File

@ -23,6 +23,7 @@ type AgentService struct {
ID string
Service string
Tags []string
Meta map[string]string
Port int
Address string
EnableTagOverride bool
@ -85,7 +86,6 @@ type AgentServiceCheck struct {
CheckID string `json:",omitempty"`
Name string `json:",omitempty"`
Args []string `json:"ScriptArgs,omitempty"`
- Script string `json:",omitempty"` // Deprecated, use Args.
DockerContainerID string `json:",omitempty"`
Shell string `json:",omitempty"` // Only supported for Docker.
Interval string `json:",omitempty"`
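With the deprecated Script field gone, script checks go through Args, which still serializes as ScriptArgs on the wire. A minimal hedged sketch (the command is illustrative):

check := &api.AgentServiceCheck{
	Name:     "disk-usage",
	Args:     []string{"/usr/local/bin/check_disk.sh", "--warn", "80"},
	Interval: "30s",
}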

View File

@ -73,7 +73,7 @@ func TestAPI_AgentReload(t *testing.T) {
agent := c.Agent()
// Update the config file with a service definition
config := `{"service":{"name":"redis", "port":1234}}`
config := `{"service":{"name":"redis", "port":1234, "Meta": {"some": "meta"}}}`
err = ioutil.WriteFile(configFile.Name(), []byte(config), 0644)
if err != nil {
t.Fatalf("err: %v", err)
@ -95,6 +95,9 @@ func TestAPI_AgentReload(t *testing.T) {
if service.Port != 1234 {
t.Fatalf("bad: %v", service.Port)
}
if service.Meta["some"] != "meta" {
t.Fatalf("Missing metadata some:=meta in %v", service)
}
}
func TestAPI_AgentMembersOpts(t *testing.T) {
@ -691,7 +694,7 @@ func TestAPI_AgentChecks_Docker(t *testing.T) {
ServiceID: "redis",
AgentServiceCheck: AgentServiceCheck{
DockerContainerID: "f972c95ebf0e",
Script: "/bin/true",
Args: []string{"/bin/true"},
Shell: "/bin/bash",
Interval: "10s",
},

View File

@ -34,6 +34,12 @@ type ServiceQuery struct {
// local datacenter.
Failover QueryDatacenterOptions
// IgnoreCheckIDs is an optional list of health check IDs to ignore when
// considering which nodes are healthy. It is useful as an emergency measure
// to temporarily override some health check that is producing false negatives
// for example.
IgnoreCheckIDs []string
// If OnlyPassing is true then we will only include nodes with passing
// health checks (critical AND warning checks will cause a node to be
// discarded)

View File

@ -116,6 +116,53 @@ func TestAPI_PreparedQuery(t *testing.T) {
t.Fatalf("bad datacenter: %v", results)
}
// Add new node with failing health check.
reg2 := reg
reg2.Node = "failingnode"
reg2.Check = &AgentCheck{
Node: "failingnode",
ServiceID: "redis1",
ServiceName: "redis",
Name: "failingcheck",
Status: "critical",
}
retry.Run(t, func(r *retry.R) {
if _, err := catalog.Register(reg2, nil); err != nil {
r.Fatal(err)
}
if _, _, err := catalog.Node("failingnode", nil); err != nil {
r.Fatal(err)
}
})
// Execute by ID. Should return only healthy node.
results, _, err = query.Execute(def.ID, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
if len(results.Nodes) != 1 || results.Nodes[0].Node.Node != "foobar" {
t.Fatalf("bad: %v", results)
}
if wan, ok := results.Nodes[0].Node.TaggedAddresses["wan"]; !ok || wan != "127.0.0.1" {
t.Fatalf("bad: %v", results)
}
// Update PQ with ignore rule for the failing check
def.Service.IgnoreCheckIDs = []string{"failingcheck"}
_, err = query.Update(def, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
// Execute by ID. Should return BOTH nodes ignoring the failing check.
results, _, err = query.Execute(def.ID, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
if len(results.Nodes) != 2 {
t.Fatalf("got %d nodes, want 2", len(results.Nodes))
}
// Delete it.
_, err = query.Delete(def.ID, nil)
if err != nil {

View File

@ -15,6 +15,7 @@ import (
"github.com/armon/go-metrics"
"github.com/armon/go-metrics/circonus"
"github.com/armon/go-metrics/datadog"
"github.com/armon/go-metrics/prometheus"
"github.com/hashicorp/consul/agent"
"github.com/hashicorp/consul/agent/config"
"github.com/hashicorp/consul/command/flags"
@ -60,7 +61,6 @@ type cmd struct {
versionPrerelease string
versionHuman string
shutdownCh <-chan struct{}
- args []string
flagArgs config.Flags
logFilter *logutils.LevelFilter
logOutput io.Writer
@ -84,14 +84,6 @@ func (c *cmd) Run(args []string) int {
// readConfig is responsible for setup of our configuration using
// the command line and any file configs
func (c *cmd) readConfig() *config.RuntimeConfig {
- if err := c.flags.Parse(c.args); err != nil {
- if !strings.Contains(err.Error(), "help requested") {
- c.UI.Error(fmt.Sprintf("error parsing flags: %v", err))
- }
- return nil
- }
- c.flagArgs.Args = c.flags.Args()
b, err := config.NewBuilder(c.flagArgs)
if err != nil {
c.UI.Error(err.Error())
@ -208,6 +200,20 @@ func dogstatdSink(config *config.RuntimeConfig, hostname string) (metrics.Metric
return sink, nil
}
func prometheusSink(config *config.RuntimeConfig, hostname string) (metrics.MetricSink, error) {
if config.TelemetryPrometheusRetentionTime.Nanoseconds() < 1 {
return nil, nil
}
prometheusOpts := prometheus.PrometheusOpts{
Expiration: config.TelemetryPrometheusRetentionTime,
}
sink, err := prometheus.NewPrometheusSinkFrom(prometheusOpts)
if err != nil {
return nil, err
}
return sink, nil
}
func circonusSink(config *config.RuntimeConfig, hostname string) (metrics.MetricSink, error) {
if config.TelemetryCirconusAPIToken == "" && config.TelemetryCirconusSubmissionURL == "" {
return nil, nil
@ -284,6 +290,9 @@ func startupTelemetry(conf *config.RuntimeConfig) (*metrics.InmemSink, error) {
if err := addSink("circonus", circonusSink); err != nil {
return nil, err
}
if err := addSink("prometheus", prometheusSink); err != nil {
return nil, err
}
if len(sinks) > 0 {
sinks = append(sinks, memSink)
@ -297,7 +306,13 @@ func startupTelemetry(conf *config.RuntimeConfig) (*metrics.InmemSink, error) {
func (c *cmd) run(args []string) int {
// Parse our configs
- c.args = args
+ if err := c.flags.Parse(args); err != nil {
+ if !strings.Contains(err.Error(), "help requested") {
+ c.UI.Error(fmt.Sprintf("error parsing flags: %v", err))
+ }
+ return 1
+ }
+ c.flagArgs.Args = c.flags.Args()
config := c.readConfig()
if config == nil {
return 1
@ -385,7 +400,6 @@ func (c *cmd) run(args []string) int {
// wait for signal
signalCh := make(chan os.Signal, 10)
- signal.Notify(signalCh, os.Interrupt, syscall.SIGTERM, syscall.SIGHUP)
+ signal.Notify(signalCh, os.Interrupt, syscall.SIGTERM, syscall.SIGHUP, syscall.SIGPIPE)
for {
@ -489,7 +503,7 @@ func (c *cmd) handleReload(agent *agent.Agent, cfg *config.RuntimeConfig) (*conf
"Failed to reload configs: %v", err))
}
- return cfg, errs
+ return newCfg, errs
}
func (c *cmd) Synopsis() string {

View File

@ -0,0 +1,42 @@
package helpers
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
)
func LoadDataSource(data string, testStdin io.Reader) (string, error) {
var stdin io.Reader = os.Stdin
if testStdin != nil {
stdin = testStdin
}
// Handle empty quoted shell parameters
if len(data) == 0 {
return "", nil
}
switch data[0] {
case '@':
data, err := ioutil.ReadFile(data[1:])
if err != nil {
return "", fmt.Errorf("Failed to read file: %s", err)
} else {
return string(data), nil
}
case '-':
if len(data) > 1 {
return data, nil
}
var b bytes.Buffer
if _, err := io.Copy(&b, stdin); err != nil {
return "", fmt.Errorf("Failed to read stdin: %s", err)
}
return b.String(), nil
default:
return data, nil
}
}
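The helper keeps the existing kv put conventions: a leading @ loads a file, a bare - reads stdin, and anything else is taken literally. Illustrative calls (the file name is made up):

v, err := helpers.LoadDataSource("hello", nil)     // "hello", the literal value
v, err = helpers.LoadDataSource("@data.json", nil) // contents of data.json
v, err = helpers.LoadDataSource("-", nil)          // whatever os.Stdin provides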

View File

@ -84,6 +84,7 @@ func (c *cmd) Run(args []string) int {
// Specifying a ModifyIndex for a non-CAS operation is not possible.
if c.modifyIndex != 0 && !c.cas {
c.UI.Error("Cannot specify -modify-index without -cas!")
return 1
}
// It is not valid to use a CAS and recurse in the same call

View File

@ -1,16 +1,14 @@
package put
import (
"bytes"
"encoding/base64"
"flag"
"fmt"
"io"
"io/ioutil"
"os"
"github.com/hashicorp/consul/api"
"github.com/hashicorp/consul/command/flags"
"github.com/hashicorp/consul/command/helpers"
"github.com/mitchellh/cli"
)
@ -173,11 +171,6 @@ func (c *cmd) Run(args []string) int {
}
func (c *cmd) dataFromArgs(args []string) (string, string, error) {
- var stdin io.Reader = os.Stdin
- if c.testStdin != nil {
- stdin = c.testStdin
- }
switch len(args) {
case 0:
return "", "", fmt.Errorf("Missing KEY argument")
@ -189,30 +182,11 @@ func (c *cmd) dataFromArgs(args []string) (string, string, error) {
}
key := args[0]
- data := args[1]
+ data, err := helpers.LoadDataSource(args[1], c.testStdin)
- // Handle empty quoted shell parameters
- if len(data) == 0 {
- return key, "", nil
- }
- switch data[0] {
- case '@':
- data, err := ioutil.ReadFile(data[1:])
- if err != nil {
- return "", "", fmt.Errorf("Failed to read file: %s", err)
- }
- return key, string(data), nil
- case '-':
- if len(data) > 1 {
- return key, data, nil
- }
- var b bytes.Buffer
- if _, err := io.Copy(&b, stdin); err != nil {
- return "", "", fmt.Errorf("Failed to read stdin: %s", err)
- }
- return key, b.String(), nil
- default:
+ if err != nil {
+ return "", "", err
+ } else {
return key, data, nil
}
}

View File

@ -7,9 +7,9 @@ import (
"strings"
"testing"
require "github.com/stretchr/testify/require"
"github.com/hashicorp/consul/testutil"
"github.com/mitchellh/cli"
require "github.com/stretchr/testify/require"
)
func TestValidateCommand_noTabs(t *testing.T) {
@ -72,7 +72,6 @@ func TestValidateCommand_SucceedWithMinimalHCLConfigFormat(t *testing.T) {
err := ioutil.WriteFile(fp, []byte("bind_addr = \"10.0.0.1\"\ndata_dir = \""+td+"\""), 0644)
require.Nilf(t, err, "err: %s", err)
cmd := New(cli.NewMockUi())
args := []string{"--config-format", "hcl", fp}

View File

@ -5,7 +5,7 @@ $script = <<SCRIPT
echo "Installing dependencies ..."
sudo apt-get update
- sudo apt-get install -y unzip curl jq
+ sudo apt-get install -y unzip curl jq dnsutils
echo "Determining Consul version to install ..."
CHECKPOINT_URL="https://checkpoint-api.hashicorp.com/v1/check"

View File

@ -2,6 +2,7 @@ package lib
import (
"github.com/hashicorp/serf/serf"
"time"
)
// SerfDefaultConfig returns a Consul-flavored Serf default configuration,
@ -16,5 +17,11 @@ func SerfDefaultConfig() *serf.Config {
// cluster size.
base.MinQueueDepth = 4096
// This gives leaves some time to propagate through the cluster before
// we shut down. The value was chosen to be reasonably short, but to
// allow a leave to get to over 99.99% of the cluster with 100k nodes
// (using https://www.serf.io/docs/internals/simulator.html).
base.LeavePropagateDelay = 3 * time.Second
return base
}

29
lib/useragent.go Normal file
View File

@ -0,0 +1,29 @@
package lib
import (
"fmt"
"runtime"
"github.com/hashicorp/consul/version"
)
var (
// projectURL is the project URL.
projectURL = "https://www.consul.io/"
// rt is the runtime - variable for tests.
rt = runtime.Version()
// versionFunc is the func that returns the current version. This is a
// function to take into account the different build processes and distinguish
// between enterprise and oss builds.
versionFunc = func() string {
return version.GetHumanVersion()
}
)
// UserAgent returns the consistent user-agent string for Consul.
func UserAgent() string {
return fmt.Sprintf("Consul/%s (+%s; %s)",
versionFunc(), projectURL, rt)
}

18
lib/useragent_test.go Normal file
View File

@ -0,0 +1,18 @@
package lib
import (
"testing"
)
func TestUserAgent(t *testing.T) {
projectURL = "https://consul-test.com"
rt = "go5.0"
versionFunc = func() string { return "1.2.3" }
act := UserAgent()
exp := "Consul/1.2.3 (+https://consul-test.com; go5.0)"
if exp != act {
t.Errorf("expected %q to be %q", act, exp)
}
}

View File

@ -1,6 +1,6 @@
FROM ubuntu:bionic
- ENV GOVERSION 1.10
+ ENV GOVERSION 1.10.1
RUN apt-get update -y && \
apt-get install --no-install-recommends -y -q \
@ -11,8 +11,12 @@ RUN apt-get update -y && \
ruby \
ruby-dev \
zip \
zlib1g-dev && \
gem install bundler
zlib1g-dev \
nodejs \
npm && \
gem install bundler && \
npm install --global yarn && \
npm install --global ember-cli
RUN mkdir /goroot && \
mkdir /gopath && \

View File

@ -12,26 +12,31 @@ cd $DIR
# Make sure build tools are available.
make tools
- # Build the standalone version of the web assets for the sanity check.
- pushd ui
- bundle
- make dist
- popd
+ # # Build the standalone version of the web assets for the sanity check.
+ # pushd ui
+ # bundle
+ # make dist
+ # popd
- # Fixup the timestamps to match what's checked in. This will allow us to cleanly
- # verify that the checked-in content is up to date without spurious diffs of the
- # file mod times.
- pushd pkg
- cat ../agent/bindata_assetfs.go | ../scripts/fixup_times.sh
- popd
+ # pushd ui-v2
+ # yarn install
+ # make dist
+ # popd
- # Regenerate the built-in web assets. If there are any diffs after doing this
- # then we know something is up.
- make static-assets
- if ! git diff --quiet agent/bindata_assetfs.go; then
- echo "Checked-in web assets are out of date, build aborted"
- exit 1
- fi
+ # # Fixup the timestamps to match what's checked in. This will allow us to cleanly
+ # # verify that the checked-in content is up to date without spurious diffs of the
+ # # file mod times.
+ # pushd pkg
+ # cat ../agent/bindata_assetfs.go | ../scripts/fixup_times.sh
+ # popd
+ # # Regenerate the built-in web assets. If there are any diffs after doing this
+ # # then we know something is up.
+ # make static-assets
+ # if ! git diff --quiet agent/bindata_assetfs.go; then
+ # echo "Checked-in web assets are out of date, build aborted"
+ # exit 1
+ # fi
# Now we are ready to do a clean build of everything. We no longer distribute the
# web UI so it's ok that gets blown away as part of this.

View File

@ -13,11 +13,18 @@ cd $DIR
make tools
# Build the web assets.
echo "Building the V1 UI"
pushd ui
bundle
make dist
popd
echo "Building the V2 UI"
pushd ui-v2
yarn install
make dist
popd
# Make the static assets using the container version of the builder
make static-assets

View File

@ -4,3 +4,4 @@ V 150526223338Z 0C unknown /CN=*.testco.internal/ST=California/C=US/emailAddres
V 160526220537Z 0D unknown /CN=test.internal/ST=CA/C=US/emailAddress=test@internal.com/O=HashiCorp Test Cert/OU=Dev
V 170604185910Z 0E unknown /CN=testco.internal/ST=California/C=US/emailAddress=test@testco.com/O=Hashicorp Test Cert/OU=Beta
V 180606021919Z 0F unknown /CN=testco.internal/ST=California/C=US/emailAddress=james@hashicorp.com/O=End Point/OU=Testing
V 21180418091009Z 10 unknown /CN=testco.internal/ST=California/C=US/emailAddress=james@hashicorp.com/O=End Point/OU=Testing

View File

@ -12,7 +12,7 @@ certificate = root.cer
database = certindex
private_key = privkey.pem
serial = serialfile
- default_days = 365
+ default_days = 36500
default_md = sha1
policy = myca_policy
x509_extensions = myca_extensions

View File

@ -1 +1 @@
- 10
+ 11

View File

@ -1,23 +1,23 @@
-----BEGIN CERTIFICATE-----
- MIIDyTCCArGgAwIBAgIBGDANBgkqhkiG9w0BAQUFADCBmTELMAkGA1UEBhMCVVMx
+ MIIDyzCCArOgAwIBAgIBGjANBgkqhkiG9w0BAQUFADCBmTELMAkGA1UEBhMCVVMx
EzARBgNVBAgTCkNhbGlmb3JuaWExFDASBgNVBAcTC0xvcyBBbmdlbGVzMRkwFwYD
VQQKExBIYWhpQ29ycCBUZXN0IENBMQ0wCwYDVQQLEwRUZXN0MREwDwYDVQQDEwhD
- ZXJ0QXV0aDEiMCAGCSqGSIb3DQEJARYTamFtZXNAaGFzaGljb3JwLmNvbTAeFw0x
- NzA1MTIwNjE1NDhaFw0xODA1MTIwNjE1NDhaMHwxDjAMBgNVBAMMBUFsaWNlMRMw
- EQYDVQQIDApDYWxpZm9ybmlhMQswCQYDVQQGEwJVUzEiMCAGCSqGSIb3DQEJARYT
- amFtZXNAaGFzaGljb3JwLmNvbTESMBAGA1UECgwJRW5kIFBvaW50MRAwDgYDVQQL
- DAdUZXN0aW5nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvDVBeQvC
- 7hVijs6jjWzgD2c2whjBvDDqAzUOs2Xk1JfD2TT+eVPP47cuK4ufB3EbfYwrgq/J
- v1NRvstEDXp1Croof8E3oBWbR3sXVK8BAiwwRVLuY95ApvqL8VkxXTSkp7hcE+dK
- AGJH3+m4T+0nxgasHhWYJO2wsJy9/dVL3cQN+tXvgzo7PZCX83dmiTDh2M9A+Uo8
- lSWb5YAhf8n2b3C4K+qLNhtGRpO33GfQ5XNkuRMTLbwu7eUP0IjYdrt8LmUpgrWB
- WyGaQDUutisi5kv47yuK1d4o6q1LVHjvaFRtKKpiFQVZJCJMmdB9SI6QiOqFuyVC
- EM4MgzRtkRW4UwIDAQABozgwNjAJBgNVHRMEAjAAMAsGA1UdDwQEAwIF4DAcBgNV
- HREEFTATghFzZXJ2ZXIuZGMxLmNvbnN1bDANBgkqhkiG9w0BAQUFAAOCAQEAJsFF
- Bj7rz0z/GDGtaAth5SIsRG9dmA9Lkyfpbjz351jp8kmFCma/rVBBP3yQ/O4FcvbY
- DImmqEnotS/o5CcJI1jtjFcqW+QllFCWhY1ovkkEmyAFE9g5gLGKXxPX/uuvdVly
- LgLNu7wgE2SHcUpDGPRt/xfon7syb/ZWUDU43xNkGFmgZyb9xTIWkmxYDuXYSPUY
- Zb5YS2GWrX28HOrkkm4DY/Gf71w7eM4FKe/q/8hXQqkDooHPYE1aFZX2mgLhnr+Y
- cdLsR5AlF7PgE4WmOHWVm0WyH1c8Df39AlgFXPyZEOl6Zt00m6PWbqHnkojcaaNj
- jq7ykWgpRkdSgQSa+Q==
+ ZXJ0QXV0aDEiMCAGCSqGSIb3DQEJARYTamFtZXNAaGFzaGljb3JwLmNvbTAgFw0x
+ ODA1MTIwOTA0MzJaGA8yMTE4MDQxODA5MDQzMlowfDEOMAwGA1UEAwwFQWxpY2Ux
+ EzARBgNVBAgMCkNhbGlmb3JuaWExCzAJBgNVBAYTAlVTMSIwIAYJKoZIhvcNAQkB
+ FhNqYW1lc0BoYXNoaWNvcnAuY29tMRIwEAYDVQQKDAlFbmQgUG9pbnQxEDAOBgNV
+ BAsMB1Rlc3RpbmcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDjzkhi
+ 7DQSMX6CBeIJtX3K508fTlvNxs9gYKMGIybyTrWSc5gT76QA7ntnETpcParyoF7K
+ N7LJnmTZr9uYOxJ9ZkYHzeAoBVbYjvm2jgMt8lTHwqept0ASIYhhe1RBhkIJH9eN
+ hoY6LgYefelj/leTYu55TUGfPD0kRNs4bG5XCl8TFbACOxKKdcY3uZQTaOXYl/Uv
+ Nl2Pp9h3v72/WL680Y9kGnmU9wcvBU5RewOTZKtdGe6y3hRmYz16nKxo733KH5Px
+ RDy2GyJ9mKC7QiyL8TYc7BRSp9FePeAXx5RQOYTL6Z5pgirwOnZkiWyaKBud9T5t
+ FxeT9QJdd1NsAURdAgMBAAGjODA2MAkGA1UdEwQCMAAwCwYDVR0PBAQDAgXgMBwG
+ A1UdEQQVMBOCEXNlcnZlci5kYzEuY29uc3VsMA0GCSqGSIb3DQEBBQUAA4IBAQBN
+ xFFMhWl2UtZYrQ5f3GrqTRncoe/oDqXxuAiiBRDo3Gz/XDkz9aFwwK2z7rjaYVrQ
+ 8ZksrA4T/Zr5nGCXCpFjVMzw3eFRWqWbGRFi/nfcifvk5EW7uobT84SOYQ5jrv6y
+ 3kmsd6f2pnYKgWEX7J94XVIE/BeVSHZMHephrK6KC3Gdy66xNk6othKymY6veNxn
+ 70qQbw0yRrud6svdPNmD6GCauz2i3blb7xW1FZMrJqtN0Mw5W2QHMyS1MQFeSeaC
+ TDv/Os3tocLFtdsoLAECLAqYAL9wAvvm8eNNOWPnFpy644lE2uLupWB8z5m0GbGp
+ utZXHATEkmGoFKC+dNml
-----END CERTIFICATE-----

View File

@ -1,27 +1,28 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAvDVBeQvC7hVijs6jjWzgD2c2whjBvDDqAzUOs2Xk1JfD2TT+
eVPP47cuK4ufB3EbfYwrgq/Jv1NRvstEDXp1Croof8E3oBWbR3sXVK8BAiwwRVLu
Y95ApvqL8VkxXTSkp7hcE+dKAGJH3+m4T+0nxgasHhWYJO2wsJy9/dVL3cQN+tXv
gzo7PZCX83dmiTDh2M9A+Uo8lSWb5YAhf8n2b3C4K+qLNhtGRpO33GfQ5XNkuRMT
Lbwu7eUP0IjYdrt8LmUpgrWBWyGaQDUutisi5kv47yuK1d4o6q1LVHjvaFRtKKpi
FQVZJCJMmdB9SI6QiOqFuyVCEM4MgzRtkRW4UwIDAQABAoIBAQCPfFqKGjlmoc8d
6NQwAg1gMORCXfV1sCT4hP7MLqainYGmmwxXG1qm1QTSFgQL/GNk9/REEhjRUIhF
2VnsnKuWng46N+hcl5xmhqVm3nT6Xw3+DBfK86p+ow0F12YXFQdjBt7MHc0BNext
/RWTec6U3olh9jykCsJmI1mFp5PLYVg4OjJz6v0GdnkBv6CCuwv0Talz8vEX4js9
AZvPAlCFR2rqmdAwpeHV/dk4x4ls04Hv8g1L40EhToBetfKOeT4PWA3Svg/71uGg
6QqQi642quWXZQL255fP9crMYfj12AUbFe4+ET6/3wZkpFeNgifAbLY1lOfEhjD3
ey7OjNhhAoGBAPUyoEpu8KDYgHM2b68yykmEjHXgpNGizsOrAV5kFbk7lTRYqoju
jKDROzOOH79OU+emCz9WlVatCrSXZoVMEfCIVtSsuL19z492sk2eX3qGz0fRN6Os
/z8K9QZ1fIAw+8W1QnwRRqZhRd7SiFLKrGY7huoq9KL8hqHx2Jw9KSlPAoGBAMR/
5ERVShxS+ADfFtRoUMEd9kcFEleQiq4tWosyAxB27+f+Q8lAIKNjmVNE1F1RC/Fi
h6BeQBbM6n9j5Q8+r8st7OYBpEu260teAKiE8GlB8s1D2s4oDvB5usvWnsl53gTU
WdpP5/M2Ioze5fl/8dZXshGPb6gHMXqwSNPr+fe9AoGAG9yn2C1pDG3tkqnx4O+d
iuMT7uUa9XNRmWxaGHa4/TZnCu60WiD5O+DqoD4bH2rwH9d/WbAmAhZhrAm0LZtq
QnHLpBkIWQftyPiM5EMFyG9/KEL+1ot26Zv+IcDB5/Mo+NtS9bQk2g0dmmdD9Fxx
YKCNARjmeYrGZaqMmZxdjAMCgYEAhXE8uVMaYyXNGfpmbJTy0lLgntZI4IJeS26t
YH30Ksg6n9fCfPc5svu+chf6B+00KRb6d+PJrjI2xZA3TCUMCPUFPiW7R1fPbn1G
AStWgISyuMbt3rbBfnmMa0UyzCwgpDL5WhKNuFL5H6V3k/pZZ3Bikx5Pe1J3PZRd
wN0uAhkCgYEAm6PvhIWY8dtOIJZOKB+GcuSfFKJrvfWPL7Lb6gfq45ojsl7EoMiW
h/uetqAYaAr4kQCZApSfrtwc5GeB8o93Xl6gFqM3Qo32wCITMaSrNRMuEtq1G9AX
Q+rGXgPFsMMA8nV3QOMftBNWan5O74w24+4LDu4WQex7EAaiH8J+jkk=
-----END RSA PRIVATE KEY-----
-----BEGIN PRIVATE KEY-----
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDjzkhi7DQSMX6C
BeIJtX3K508fTlvNxs9gYKMGIybyTrWSc5gT76QA7ntnETpcParyoF7KN7LJnmTZ
r9uYOxJ9ZkYHzeAoBVbYjvm2jgMt8lTHwqept0ASIYhhe1RBhkIJH9eNhoY6LgYe
felj/leTYu55TUGfPD0kRNs4bG5XCl8TFbACOxKKdcY3uZQTaOXYl/UvNl2Pp9h3
v72/WL680Y9kGnmU9wcvBU5RewOTZKtdGe6y3hRmYz16nKxo733KH5PxRDy2GyJ9
mKC7QiyL8TYc7BRSp9FePeAXx5RQOYTL6Z5pgirwOnZkiWyaKBud9T5tFxeT9QJd
d1NsAURdAgMBAAECggEALevYZbCNspktklJTRXfDetJz/bCTCwEnrprsYgFWCYRa
T8JjhqlJGzL3x0gOxqdbvXscgJEHxmLam5M6pg5KZOLn/QzAQfEJl7ACoI0yEOIH
uxj/KVQaY01FK7lru6WvzB0SG6JhjnrWmvDwykpsJvbLccJkFxBSluwWcOJSv9Kj
CQMExsy9s2aVyUcA19aob8tQunBpAZfqIAO/wQxGUbxo7Bk6/o+/jYSoedzm0viY
M7xskskE0CMglC4AkbpWBLAR/aKlgtFiniYm3wp4k7Nbf0WMkESfCfvQtqsBgp0W
vuL2QbVouzxiGtj9XyGA3WqsJDVFL4CD5Aoap+kmgQKBgQDyQYmyOlifQvB95ZWO
GVQ0W4bOqzxOHuQYswIPj2yRjeoD7dYcCQD8i3nKzxrSljIezal49dio3+yBJwY6
jomzrq7HPtmKMt4eZN1l9Tljiz9+5cxyKc2/qGJoEBkBccBlZXAFVJ99wSfcKQQw
zT4NbVHuXK5lZol6Wjvk/fVXIQKBgQDwut+wKCmsYgsVIJ17I1iHlcJUspPkWB4c
+iES1uGM49Iem2lMNSdRKzlkB5c6+JjIbmhLvh0+PH/7/vkVIrelbLCi4qe3E6m8
gTOVq8pHohzLJJQAEWG6JlkjxBj+Orgc5qos4eO71yJProGk+xMZARz5n0EKmkpP
Zju/T/7RvQKBgQDyOBMsT+hCPRTmXEIflTW7L/Rm+ZFPbtWT2I/r7PSZyDI+gXQ+
Dcadu/sni9H+0swEPo//cJiTqWj4bYNt0wzdyn/Ymf+6jUfHTgSMKBecbyMqhyvW
zfN5eSwDbm0CI7FB8J2Dxuu9Of7Xw278OIqdtDtiP+rjWhWFb2lJeZ7v4QKBgQCt
XRdMyI/CelUa4QMos/rEoiByWKzTLHZ7TdNVuvRyP3uJ2UhKvpjTBrrtA95wdKmq
5oAr0/1BXdaZxzTgeMEi3BSVKX+5A+sgOzfIGRCy59euoGgJaHsl0QovDMEnDWic
P63cZs1X8IXgNn9dLgfB4SBZ0MvJc/YCGlD65QRRTQKBgFxqEn90iOZr4AZKYoIR
0qQM0MA8W8Vi1EoKU7O/onuZrBA1rMfOGMjdtGmnTozVDbi/VKR6sjd4IpsIDH9L
WMn7Jm8Y5KYIEs9/DVv+/jPoPa/fQ680h8+QmRrz8P95Ap3xd17s+10qbUtrQdzI
w4xzB0gF0vOT/dCAmN66h/rv
-----END PRIVATE KEY-----

View File

@ -1 +1,3 @@
V 180512061548Z 18 unknown /CN=Alice/ST=California/C=US/emailAddress=james@hashicorp.com/O=End Point/OU=Testing
V 190512090339Z 19 unknown /CN=Alice/ST=California/C=US/emailAddress=james@hashicorp.com/O=End Point/OU=Testing
V 21180418090432Z 1A unknown /CN=Alice/ST=California/C=US/emailAddress=james@hashicorp.com/O=End Point/OU=Testing

View File

@ -12,7 +12,7 @@ certificate = CertAuth.crt
database = certindex
private_key = privkey.pem
serial = serialfile
- default_days = 365
+ default_days = 36500
default_md = sha1
policy = myca_policy
x509_extensions = myca_extensions

View File

@ -1 +1 @@
- 19
+ 1B

View File

@ -1,25 +1,25 @@
-----BEGIN CERTIFICATE-----
- MIIERDCCAyygAwIBAgIBDzANBgkqhkiG9w0BAQUFADCBmDELMAkGA1UEBhMCVVMx
+ MIIERjCCAy6gAwIBAgIBEDANBgkqhkiG9w0BAQUFADCBmDELMAkGA1UEBhMCVVMx
CzAJBgNVBAgTAkNBMRYwFAYDVQQHEw1TYW4gRnJhbmNpc2NvMRwwGgYDVQQKExNI
YXNoaUNvcnAgVGVzdCBDZXJ0MQwwCgYDVQQLEwNEZXYxFjAUBgNVBAMTDXRlc3Qu
- aW50ZXJuYWwxIDAeBgkqhkiG9w0BCQEWEXRlc3RAaW50ZXJuYWwuY29tMB4XDTE3
- MDYwNjAyMTkxOVoXDTE4MDYwNjAyMTkxOVowgYYxGDAWBgNVBAMMD3Rlc3Rjby5p
- bnRlcm5hbDETMBEGA1UECAwKQ2FsaWZvcm5pYTELMAkGA1UEBhMCVVMxIjAgBgkq
- hkiG9w0BCQEWE2phbWVzQGhhc2hpY29ycC5jb20xEjAQBgNVBAoMCUVuZCBQb2lu
- dDEQMA4GA1UECwwHVGVzdGluZzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
- ggEBAMLdWSrOOYKs0RLs3cTZuzbo5eWvPeYaHsBIjaTJz5xajr8eG7lXl6StYoOy
- zLuydwkoECEcer3QFBC+WIElkNrvd6CMH+8EpPmyo6qTWCPrWjomL17kX8GUi9j5
- i340bnIzdHCBBVe/zLgucf59RrlU93D65zRtrviMZE9cI8Y7Wr/P26xNSPr23sUT
- XAzEHfqdNgQw+S3fPqCiZgREApOc3aAQnDE2IIXgR1YF8R3ftd8WWoafCQUyNWiS
- LLYiIcaxLg9cWjSEopQ3RU4oZ+/UA6k5OCt72AdBlkIm+rGqb2J+Aw23rDIEqCwI
- Om7nl7ATQMj+3JHHPGmYJM99Qo8CAwEAAaOBqDCBpTAJBgNVHRMEAjAAMB0GA1Ud
- DgQWBBSRrFY/9Pq3UXhxx6M+LkRwhGtXHTAfBgNVHSMEGDAWgBSj+es5+q9t57ZW
- SVUogWXJARu4lTALBgNVHQ8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsG
- AQUFBwMCMCwGA1UdHwQlMCMwIaAfoB2GG2h0dHA6Ly9wYXRoLnRvLmNybC9teWNh
- LmNybDANBgkqhkiG9w0BAQUFAAOCAQEAEsEyEEsYx9qHlbHiJs+kBfI7VPepaEon
- fd8UGI+uVN+vUFN509lWxdn8Zqxgni1ehiPO/7Qm+AVSA5KXMzJsO1qjBCPQ0QXu
- 06k1MJU6LoUIqWGkE37nKi+9n4fjH1sUePehDJfFiDmhgF6q3AC+o7p4/zbvBIzc
- uwXnE5f0/vKVlI44STVN8qlM/ZWE5UH0xAboqgWF4LcK3hmQ6Vm24lBaoXCZFUzK
- xGxFE1xK7tBskJJA8NXCbDCutveU8e6BHbE9qyOtKB00GzE1PXxgPZGFKJSBgaDv
- n3Z0x1CipCRXY4BHd5A2FuPU6xLXWS9KzVeujW/0yrqss/hpu5zUnw==
+ aW50ZXJuYWwxIDAeBgkqhkiG9w0BCQEWEXRlc3RAaW50ZXJuYWwuY29tMCAXDTE4
+ MDUxMjA5MTAwOVoYDzIxMTgwNDE4MDkxMDA5WjCBhjEYMBYGA1UEAwwPdGVzdGNv
+ LmludGVybmFsMRMwEQYDVQQIDApDYWxpZm9ybmlhMQswCQYDVQQGEwJVUzEiMCAG
+ CSqGSIb3DQEJARYTamFtZXNAaGFzaGljb3JwLmNvbTESMBAGA1UECgwJRW5kIFBv
+ aW50MRAwDgYDVQQLDAdUZXN0aW5nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
+ CgKCAQEA0X9Ft3q7EbTgyt4W0BwGtZ/kdDw+k2VEXs9GXRh7BG0sjWIu4szAbkau
+ igKwAdCcAHfZe4fRNTtzlUb7RnYSLB9SJZEbvwM07mfesR1ZpxtIKsCFZ8DjJ6Wo
+ eAvc+2JTIcWZLXuDIIIMZ6plvPbHN8RRnC5H4fw9Z8L+qmyyn0o7+4SClkhf2AZa
+ 6WmoZCMbrSLMQdhx1MZTO86GeUJpIG0l3XJLb7wnfn5WDG/GZB8TGAycRD1EP5mx
+ wzgNqJLvL3TgL0d9NIwC0rpQC4qeP6pzngdr0KV0vgFyYoSBLHiU77+HL1C8QFN4
+ fWGoBjEfkVPjHKOk323OgJKWizB34wIDAQABo4GoMIGlMAkGA1UdEwQCMAAwHQYD
+ VR0OBBYEFHJwH4f2QlFTTll+bnNiZZBo1oheMB8GA1UdIwQYMBaAFKP56zn6r23n
+ tlZJVSiBZckBG7iVMAsGA1UdDwQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYI
+ KwYBBQUHAwIwLAYDVR0fBCUwIzAhoB+gHYYbaHR0cDovL3BhdGgudG8uY3JsL215
+ Y2EuY3JsMA0GCSqGSIb3DQEBBQUAA4IBAQA0ICTh1Dli9siCA5heDl51YCjoCVGa
+ B7OfoJStOW3BjesingD6kpQUPdbjr0qFzvSsn7IVd8v9IGr/hknBy9gjroPmwoct
+ gTgTuZpRm727AQiA6KSANnOz+dwb4r0ckdDqIrUTmk4lV7Pdk0lPONtGxfa8c3gY
+ QjaML7GK9QRU56RmYar+5VV2wI24lqz6cwpwTCa0gpZTRRKorpBONjSpZY4myGT4
+ rWRkGTu59XX0POvQxg4i2CL5Lu6WE43APoFRJBCYIQoTqOi7KwlaYqJZG7pa8LU0
+ mjDUjW3cNxthYLk2q3cZ4+Or5hbUZGBFhD716+FnChZ/531lgrGWLLMN
-----END CERTIFICATE-----

View File

@ -1,28 +1,28 @@
-----BEGIN PRIVATE KEY-----
- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDC3VkqzjmCrNES
- 7N3E2bs26OXlrz3mGh7ASI2kyc+cWo6/Hhu5V5ekrWKDssy7sncJKBAhHHq90BQQ
- vliBJZDa73egjB/vBKT5sqOqk1gj61o6Ji9e5F/BlIvY+Yt+NG5yM3RwgQVXv8y4
- LnH+fUa5VPdw+uc0ba74jGRPXCPGO1q/z9usTUj69t7FE1wMxB36nTYEMPkt3z6g
- omYERAKTnN2gEJwxNiCF4EdWBfEd37XfFlqGnwkFMjVokiy2IiHGsS4PXFo0hKKU
- N0VOKGfv1AOpOTgre9gHQZZCJvqxqm9ifgMNt6wyBKgsCDpu55ewE0DI/tyRxzxp
- mCTPfUKPAgMBAAECggEAbxZiy81O6dj9Q423C46YdMAmt17EqdXALBvwa74E1fym
- HfvbEDkIIQAbBjs7DdG6nISzVTz4GBd0KOtqZw10W+tiRis71TXPmu2k8gwXljqI
- cFfub2k/0YqOgv4X8LWRNRdyTOSwmAqmeWU45TyjwenXOhg/EBtrQRQ/5yH+3vVc
- Xtk3LSTNQW63cxn2TEO04a3yt95LNSsAUSLCc5JRzUqfeqD6m/XyBBrK8BYQ1s1K
- anTyum1KGEV1jkW3aWlAahCP4fv/zrlD/wHzAkRUBxb8nLe4v68fyxfheVJ2I1fu
- IdccU1DY6yuMrxNyttJiIFrgSSiDJCH3RaXJYlSjoQKBgQD/ZOp9MdLqYOSPOIrM
- sYsyvIUbwolgGWpxw51XPCOVx6wIQ2zGGjiBitpvtLI/LXBgJQEkFqiFJJSBEIOy
- jyPypPV27ALbYBvGQucpAoDhsq+3d0INgzHyVxcqhSS9EQXHpAfO2/KHI+JZ8pCk
- 0w1KnnlgaECwmCOnDcTw8hcZvwKBgQDDU60+XgA/0h1grGuNbeosikNLlfLeDLSD
- p6tHKIQne9uWNiYXqFp8ugvGoYvSI1FEOnswzypY8gQ4KnRGCp7tiqi6ijFrS9/3
- ErS7pUpxo39yIxgfMqyAhBMsvTG3LXkkMVlAtdCnlH/yZwvRCrDqOQAcm3kpntdv
- 5O11oVHrMQKBgGwh6jZ/tfGOfLc3FW19bpZYw3LxdwC9QhhQ3nlk+RwdonUNNyzZ
- RTtz8vCA7Udakc3jXQxOm6NjzYyn1VrwyCOgPF3Rp5QCqT/Ua9MtQCxPX56qW8kk
- 1yzoOuLB5MA4SN4yUSwAbDtTsi6rSRrAUUxatMFg4qLih5XfepcZqTY5AoGBAK1e
- AuHdW0lKPIsG3rt8OKJp4gsfv545FqvYUVzqaEoHVELCMxNCeXZFR5h44HqWoFX3
- tkn/Rq4FuZsEi4lzedaLAPH5IJ4EjXhmIQaAUlAE92SeI5XlS1kSYVaPYqYcdW6b
- YoXeGqHzW5ESx1k2rQpnp2K82FEJzFxjjCShF+2xAoGARNmBum668bJP0qHk0vG/
- YM1CWesXUNVtcJ9fVawEJKKe9F8z2VQgMXKrDd4jVHlwKyK3WQv0B0syhTfdo4RI
- 2B70IrHOVKuY7EaPiCwmb2BZvpiiYxlG3cu2nv2kN0q52p+gi26GTJqcEDfOUYn3
- bv6p74gSLiLtHWn4NWPdBWA=
+ MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDRf0W3ersRtODK
+ 3hbQHAa1n+R0PD6TZURez0ZdGHsEbSyNYi7izMBuRq6KArAB0JwAd9l7h9E1O3OV
+ RvtGdhIsH1IlkRu/AzTuZ96xHVmnG0gqwIVnwOMnpah4C9z7YlMhxZkte4Mgggxn
+ qmW89sc3xFGcLkfh/D1nwv6qbLKfSjv7hIKWSF/YBlrpaahkIxutIsxB2HHUxlM7
+ zoZ5QmkgbSXdcktvvCd+flYMb8ZkHxMYDJxEPUQ/mbHDOA2oku8vdOAvR300jALS
+ ulALip4/qnOeB2vQpXS+AXJihIEseJTvv4cvULxAU3h9YagGMR+RU+Mco6Tfbc6A
+ kpaLMHfjAgMBAAECggEAJeSNaaiLWaKL3mXZXn8TP5rSKawT7XktqrB3G7On3J8a
+ peASdvdt/wRN4aymxU1ESlljPxLL5oMAXwndvVrx4oUvyJe8mworcsva3dJfOviW
+ TxVPi/q5m5w9IqmSqO2Z98vT7wQeLa0YLVAG4u0ID7A0yrkcS2XifXgptA3BKUpi
+ QwukeaVLFJQDIUnokyvNLKryQh6wRd3+qKlKLJCxKVHRBIXafYo+gYarKI9Npjex
+ 3jbf2cTpIEBTOc8vKsUGfJIJg0E6y6LGgCL2I7YUOh3WCJEKag64ufpSvwGcpmi8
+ /u2H1YWJn0HzCeWfy+8q9iamLlkc+DcbxV/T5pPqgQKBgQDxCZUmQC3/NBiT11Hr
+ PT8k8TAW2BbvwIsBa/PhnkRUGHyUZAw/dqoQZzy42g4xa2Rl8ZOCVOEFB726RzOo
+ KzOIqVUxZFrt6upyU6UB1ypETz0l3dmRwh0pA/7Ko5kxSE0Jy4CJl7d706uVGCTf
+ 5/6KRL2aMxVgCZH9tomCfWJ+wwKBgQDegHiiwUSPgbJwGMPc1OdTSOy6Zn7702T9
+ GRDgEzXDRJqFrOh3GkUDRUYXXGWuP9ZydD8Bpah2OE2SzPNQf9SYzu84KLivUUkP
+ jE/IHx8Avjx+Sj3EvUNuONfWD/Ch043nqpsEQ6WJZuumf3DVu6fJk49o+4n241U6
+ pI2mmKDQYQKBgBhYCmtJkhuzTEQqPAjRL75waZX1DyP5w1BKceA4ltgTfQmTrTT/
+ rB9p/dUBmOte2E3/fxFrtypF5OCablouus6zo3oQk6pxzmnrjr/H1mn9wsQ/SskQ
+ 3NcWozYeHcu/bKBvoDTFUO+9qhetz5OZn7ihRrD7Nc50SP1h4TN/rGH3AoGBAIvE
+ iAM1BKxg/IYOCHsgAm/+zzYITJxEHpwesssPRiZzYd220BCBH/j9+xmRoQ3kbAFZ
+ pHqUZU5d79zXgcB/jDyxQPQ2IE2A8jQiH7vGUONWnQl3+XUsrr7+VhbRzIbbLbjp
+ Ipd7JvE5Ba6BP5ADYVLurpdz6yZ7h35e/9w25E4BAoGAN6OGNF3wKP9gGMKgxpOu
+ SemLp6v8WGOTuqbqkhfsbLCd4IR6apYh5AWn2aiIq0cJvkUfgb8/yGAbP/fqsMXd
+ IvVqiOGKoMHfB4bb6grJk3CdpgHcaOtNowFRDKzXNuXH7f7xNNxSABIdXk6aSmkI
+ NEBFopxmFg7bQdfXMaciFBE=
-----END PRIVATE KEY-----

8
ui-v2/.dev.eslintrc.js Normal file
View File

@ -0,0 +1,8 @@
module.exports = {
extends: ['./.eslintrc.js'],
rules: {
'no-console': 'warn',
'no-unused-vars': ['error', { args: 'none' }],
'ember/routes-segments-snake-case': 'warn',
},
};

22
ui-v2/.editorconfig Normal file
View File

@ -0,0 +1,22 @@
# EditorConfig helps developers define and maintain consistent
# coding styles between different editors and IDEs
# editorconfig.org
root = true
[*]
end_of_line = lf
charset = utf-8
trim_trailing_whitespace = true
[*.{js,scss}]
insert_final_newline = true
indent_style = space
indent_size = 2
[*.hbs]
insert_final_newline = false
[*.{diff,md}]
trim_trailing_whitespace = false

9
ui-v2/.ember-cli Normal file
View File

@ -0,0 +1,9 @@
{
/**
Ember CLI sends analytics information by default. The data is completely
anonymous, but there are times when you might want to disable this behavior.
Setting `disableAnalytics` to true will prevent any data from being sent.
*/
"disableAnalytics": false
}

41
ui-v2/.eslintrc.js Normal file
View File

@ -0,0 +1,41 @@
module.exports = {
root: true,
parserOptions: {
ecmaVersion: 2017,
sourceType: 'module',
ecmaFeatures: {
experimentalObjectRestSpread: true
}
},
plugins: ['ember'],
extends: ['eslint:recommended', 'plugin:ember/recommended'],
env: {
browser: true,
},
rules: {
'no-unused-vars': ['error', { args: 'none' }]
},
overrides: [
// node files
{
files: ['testem.js', 'ember-cli-build.js', 'config/**/*.js'],
parserOptions: {
sourceType: 'script',
ecmaVersion: 2015,
},
env: {
browser: false,
node: true,
},
},
// test files
{
files: ['tests/**/*.js'],
excludedFiles: ['tests/dummy/**/*.js'],
env: {
embertest: true,
},
},
],
};

10
ui-v2/.gitignore vendored Normal file
View File

@ -0,0 +1,10 @@
/dist
/tmp
/node_modules
/coverage/*
/npm-debug.log*
/yarn-error.log
/testem.log

1
ui-v2/.nvmrc Normal file
View File

@ -0,0 +1 @@
8

3
ui-v2/.prettierrc Normal file
View File

@ -0,0 +1,3 @@
printWidth: 100
singleQuote: true
trailingComma: es5

3
ui-v2/.watchmanconfig Normal file
View File

@ -0,0 +1,3 @@
{
"ignore_dirs": ["tmp", "dist"]
}

15
ui-v2/GNUmakefile Normal file
View File

@ -0,0 +1,15 @@
ROOT:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
server:
yarn run start
dist:
yarn run build
mv dist ../pkg/web_ui/v2
lint:
yarn run lint:js
format:
yarn run format:js
.PHONY: server dist lint format

49
ui-v2/README.md Normal file
View File

@ -0,0 +1,49 @@
# consul-ui
## Prerequisites
You will need the following things properly installed on your computer.
* [Git](https://git-scm.com/)
* [Node.js](https://nodejs.org/) (with npm)
* [yarn](https://yarnpkg.com)
* [Ember CLI](https://ember-cli.com/)
* [Google Chrome](https://google.com/chrome/)
## Installation
* `git clone <repository-url>` this repository
* `cd ui-v2`
* `yarn install`
## Running / Development
* `yarn run start`
* Visit your app at [http://localhost:4200](http://localhost:4200).
* Visit your tests at [http://localhost:4200/tests](http://localhost:4200/tests).
### Code Generators
Make use of the many generators for code; try `ember help generate` for more details
### Running Tests
* `ember test`
* `ember test --server`
### Building
* `ember build` (development)
* `ember build --environment production` (production)
### Deploying
## Further Reading / Useful Links
* [ember.js](https://emberjs.com/)
* [ember-cli](https://ember-cli.com/)
* Development Browser Extensions
* [ember inspector for chrome](https://chrome.google.com/webstore/detail/ember-inspector/bmdblncegkenkacieihfhpjfppoconhi)
* [ember inspector for firefox](https://addons.mozilla.org/en-US/firefox/addon/ember-inspector/)

162
ui-v2/app/adapters/acl.js Normal file
View File

@ -0,0 +1,162 @@
import Adapter, {
REQUEST_CREATE,
REQUEST_UPDATE,
REQUEST_DELETE,
DATACENTER_KEY as API_DATACENTER_KEY,
} from './application';
import EmberError from '@ember/error';
import { PRIMARY_KEY, SLUG_KEY } from 'consul-ui/models/acl';
import { FOREIGN_KEY as DATACENTER_KEY } from 'consul-ui/models/dc';
import { PUT as HTTP_PUT } from 'consul-ui/utils/http/method';
import { OK as HTTP_OK, UNAUTHORIZED as HTTP_UNAUTHORIZED } from 'consul-ui/utils/http/status';
import makeAttrable from 'consul-ui/utils/makeAttrable';
const REQUEST_CLONE = 'cloneRecord';
export default Adapter.extend({
urlForQuery: function(query, modelName) {
// https://www.consul.io/api/acl.html#list-acls
return this.appendURL('acl/list', [], this.cleanQuery(query));
},
urlForQueryRecord: function(query, modelName) {
// https://www.consul.io/api/acl.html#read-acl-token
return this.appendURL('acl/info', [query.id], this.cleanQuery(query));
},
urlForCreateRecord: function(modelName, snapshot) {
// https://www.consul.io/api/acl.html#create-acl-token
return this.appendURL('acl/create', [], {
[API_DATACENTER_KEY]: snapshot.attr(DATACENTER_KEY),
});
},
urlForUpdateRecord: function(id, modelName, snapshot) {
// the id is in the payload, don't add it in here
// https://www.consul.io/api/acl.html#update-acl-token
return this.appendURL('acl/update', [], {
[API_DATACENTER_KEY]: snapshot.attr(DATACENTER_KEY),
});
},
urlForDeleteRecord: function(id, modelName, snapshot) {
// https://www.consul.io/api/acl.html#delete-acl-token
return this.appendURL('acl/destroy', [snapshot.attr(SLUG_KEY)], {
[API_DATACENTER_KEY]: snapshot.attr(DATACENTER_KEY),
});
},
urlForCloneRecord: function(modelName, snapshot) {
// https://www.consul.io/api/acl.html#clone-acl-token
return this.appendURL('acl/clone', [snapshot.attr(SLUG_KEY)], {
[API_DATACENTER_KEY]: snapshot.attr(DATACENTER_KEY),
});
},
urlForRequest: function({ type, snapshot, requestType }) {
switch (requestType) {
case 'cloneRecord':
return this.urlForCloneRecord(type.modelName, snapshot);
}
return this._super(...arguments);
},
clone: function(store, modelClass, id, snapshot) {
const params = {
store: store,
type: modelClass,
id: id,
snapshot: snapshot,
requestType: 'cloneRecord',
};
// _requestFor is private... but these methods aren't, until they disappear..
const request = {
method: this.methodForRequest(params),
url: this.urlForRequest(params),
headers: this.headersForRequest(params),
data: this.dataForRequest(params),
};
// TODO: private..
return this._makeRequest(request);
},
dataForRequest: function(params) {
const data = this._super(...arguments);
switch (params.requestType) {
case REQUEST_UPDATE:
case REQUEST_CREATE:
return data.acl;
}
return data;
},
methodForRequest: function(params) {
switch (params.requestType) {
case REQUEST_DELETE:
case REQUEST_CREATE:
case REQUEST_CLONE:
return HTTP_PUT;
}
return this._super(...arguments);
},
isCreateRecord: function(url) {
return (
url.pathname ===
this.parseURL(this.urlForCreateRecord('acl', makeAttrable({ [DATACENTER_KEY]: '' }))).pathname
);
},
isCloneRecord: function(url) {
return (
url.pathname ===
this.parseURL(
this.urlForCloneRecord(
'acl',
makeAttrable({ [SLUG_KEY]: this.slugFromURL(url), [DATACENTER_KEY]: '' })
)
).pathname
);
},
isUpdateRecord: function(url) {
return (
url.pathname ===
this.parseURL(this.urlForUpdateRecord(null, 'acl', makeAttrable({ [DATACENTER_KEY]: '' })))
.pathname
);
},
handleResponse: function(status, headers, payload, requestData) {
let response = payload;
if (status === HTTP_OK) {
const url = this.parseURL(requestData.url);
switch (true) {
case response === true:
response = {
[PRIMARY_KEY]: this.uidForURL(url),
};
break;
case this.isQueryRecord(url):
response = {
...response[0],
...{
[PRIMARY_KEY]: this.uidForURL(url),
},
};
break;
case this.isUpdateRecord(url):
case this.isCreateRecord(url):
case this.isCloneRecord(url):
response = {
...response,
...{
[PRIMARY_KEY]: this.uidForURL(url, response[SLUG_KEY]),
},
};
break;
default:
response = response.map((item, i, arr) => {
return {
...item,
...{
[PRIMARY_KEY]: this.uidForURL(url, item[SLUG_KEY]),
},
};
});
}
} else if (status === HTTP_UNAUTHORIZED) {
const e = new EmberError();
e.code = status;
e.message = payload;
throw e;
}
return this._super(status, headers, response, requestData);
},
});

View File

@ -0,0 +1,82 @@
import Adapter from 'ember-data/adapters/rest';
import { inject as service } from '@ember/service';
import URL from 'url';
import createURL from 'consul-ui/utils/createURL';
export const REQUEST_CREATE = 'createRecord';
export const REQUEST_READ = 'queryRecord';
export const REQUEST_UPDATE = 'updateRecord';
export const REQUEST_DELETE = 'deleteRecord';
// export const REQUEST_READ_MULTIPLE = 'query';
export const DATACENTER_KEY = 'dc';
export default Adapter.extend({
namespace: 'v1',
repo: service('settings'),
headersForRequest: function(params) {
return {
...this.get('repo').findHeaders(),
...this._super(...arguments),
};
},
cleanQuery: function(_query) {
delete _query.id;
const query = { ..._query };
delete _query[DATACENTER_KEY];
return query;
},
isQueryRecord: function(url) {
// this works ONLY if ALL APIs using it
// follow the 'last part of the URL is the id' rule
const pathname = url.pathname
.split('/') // unslashify
// remove the last
.slice(0, -1)
// add an empty string to ensure a trailing slash
.concat([''])
// slashify
.join('/');
// compare with empty id against empty id
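// e.g. '/v1/acl/info/abcd' reduces to '/v1/acl/info/', which equals
// the pathname of urlForQueryRecord({ id: '' })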
return pathname === this.parseURL(this.urlForQueryRecord({ id: '' })).pathname;
},
getHost: function() {
return this.host || `${location.protocol}//${location.host}`;
},
slugFromURL: function(url) {
// follow the 'last part of the url is the id' rule
return decodeURIComponent(url.pathname.split('/').pop());
},
parseURL: function(str) {
return new URL(str, this.getHost());
},
uidForURL: function(url, _slug = '') {
const dc = url.searchParams.get(DATACENTER_KEY) || '';
const slug = _slug === '' ? this.slugFromURL(url) : _slug;
if (dc.length < 1) {
throw new Error('Unable to create unique id, missing datacenter');
}
if (slug.length < 1) {
throw new Error('Unable to create unique id, missing slug');
}
// TODO: we could use a URL here? They are unique AND useful
// but probably slower to create?
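// e.g. dc 'dc1' plus slug 'abcd' stringifies to '["dc1","abcd"]'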
return JSON.stringify([dc, slug]);
},
// appendURL in turn calls createURL
// createURL ensures that all `parts` are URL encoded
// and all `query` values are URL encoded
// `this.buildURL()` with no arguments will give us `${host}/${namespace}`
// `path` is the user configurable 'urlsafe' string to append on `buildURL`
// `parts` is an array of possibly non-urlsafe parts to be encoded and
// appended onto the url
// `query` will populate the query string. Again the values of which will be
// url encoded
appendURL: function(path, parts = [], query = {}) {
return createURL([this.buildURL(), path], parts, query);
},
});
