Merge branch 'oss' into clean-stale-leases
This commit is contained in: commit 5909d81b7b

@ -7,7 +7,7 @@ services:
- docker
go:
- 1.8
- 1.8.1
matrix:
allow_failures:
78  CHANGELOG.md

@ -1,4 +1,67 @@
## 0.7.0 (Unreleased)
## 0.7.1 (Unreleased)

DEPRECATIONS/CHANGES:

* LDAP Auth Backend: Group membership queries will now run as the `binddn`
  user when `binddn`/`bindpass` are configured, rather than as the
  authenticating user as was the case previously.

FEATURES:

* **AWS IAM Authentication**: IAM principals can get Vault tokens
  automatically, opening AWS-based authentication to users, ECS containers,
  Lambda instances, and more. Signed client identity information retrieved
  using the AWS API `sts:GetCallerIdentity` is validated against the AWS STS
  service before issuing a Vault token. This backend is unified with the
  `aws-ec2` authentication backend, and allows additional EC2-related
  restrictions to be applied during the IAM authentication; the previous EC2
  behavior is also still available. [GH-2441] (A login sketch follows this
  list.)
* **MSSQL Physical Backend**: You can now use Microsoft SQL Server as your
  Vault physical data store [GH-2546]
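To make the new IAM flow concrete: the client signs an ordinary `sts:GetCallerIdentity` request and hands its components to Vault, which replays them against STS to verify the caller. Below is a minimal Go sketch of such a login, mirroring the CLI handler added later in this commit; it assumes the backend is mounted at the default `aws` path and uses the placeholder role name `dev-role-iam`.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
	"github.com/hashicorp/vault/api"
)

func main() {
	// Build and sign a plain sts:GetCallerIdentity request using whatever
	// credentials the SDK's default provider chain finds.
	sess, err := session.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	stsRequest, _ := sts.New(sess).GetCallerIdentityRequest(&sts.GetCallerIdentityInput{})
	// If the backend requires an X-Vault-AWS-IAM-Server-ID header, add it
	// here before signing so it is covered by the signature.
	if err := stsRequest.Sign(); err != nil {
		log.Fatal(err)
	}

	// Extract the signed request's components, as the new CLI handler does.
	headersJSON, err := json.Marshal(stsRequest.HTTPRequest.Header)
	if err != nil {
		log.Fatal(err)
	}
	body, err := ioutil.ReadAll(stsRequest.HTTPRequest.Body)
	if err != nil {
		log.Fatal(err)
	}

	// Hand them to Vault; "dev-role-iam" is a placeholder role name.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	secret, err := client.Logical().Write("auth/aws/login", map[string]interface{}{
		"iam_http_request_method": stsRequest.HTTPRequest.Method,
		"iam_request_url":         base64.StdEncoding.EncodeToString([]byte(stsRequest.HTTPRequest.URL.String())),
		"iam_request_headers":     base64.StdEncoding.EncodeToString(headersJSON),
		"iam_request_body":        base64.StdEncoding.EncodeToString(body),
		"role":                    "dev-role-iam",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Vault token:", secret.Auth.ClientToken)
}
```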
IMPROVEMENTS:

* auth/ldap: Use the binding credentials to search group membership rather
  than the user credentials [GH-2534]
* cli/revoke: Add `-self` option to allow revoking the currently active token
  [GH-2596]
* secret/pki: Add `no_store` option that allows certificates to be issued
  without being stored. This removes the ability to look up and/or add to a
  CRL but helps with scaling to very large numbers of certificates. [GH-2565]
* secret/pki: If used with a role parameter, the `sign-verbatim/<role>`
  endpoint honors the values of `generate_lease`, `no_store`, `ttl` and
  `max_ttl` from the given role [GH-2593]
* storage/etcd3: Add `discovery_srv` option to query for SRV records to find
  servers [GH-2521]
* storage/s3: Support `max_parallel` option to limit concurrent outstanding
  requests [GH-2466]
* storage/s3: Use pooled transport for http client [GH-2481]
* storage/swift: Allow domain values for V3 authentication [GH-2554]

BUG FIXES:

* api: Respect a configured path in Vault's address [GH-2588]
* auth/aws-ec2: New bounds added as criteria to allow role creation [GH-2600]
* auth/ldap: Don't lowercase groups attached to users [GH-2613]
* secret/mssql: Update mssql driver to support queries with colons [GH-2610]
* secret/pki: Don't lowercase O/OU values in certs [GH-2555]
* secret/pki: Don't attempt to validate IP SANs if none are provided [GH-2574]
* secret/ssh: Don't automatically lowercase principals in issued SSH certs
  [GH-2591]
* storage/consul: Properly handle state events rather than timing out
  [GH-2548]
* storage/etcd3: Ensure locks are released if client is improperly shut down
  [GH-2526]
## 0.7.0 (March 21st, 2017)

SECURITY:

* Common name not being validated when `exclude_cn_from_sans` option used in
  `pki` backend: When using a role in the `pki` backend that specified the
  `exclude_cn_from_sans` option, the common name would not then be properly
  validated against the role's constraints. This has been fixed. We recommend
  that any users of this feature upgrade to 0.7 as soon as feasible.

DEPRECATIONS/CHANGES:
@ -56,6 +119,10 @@ FEATURES:

IMPROVEMENTS:

* api/request: Passing username and password information in API request
  [GH-2469]
* audit: Logging the token's use count with authentication response and
  logging the remaining uses of the client token with request [GH-2437]
* auth/approle: Support for restricting the number of uses on the tokens
  issued [GH-2435]
* auth/aws-ec2: AWS EC2 auth backend now supports constraints for VPC ID,

@ -66,16 +133,23 @@ IMPROVEMENTS:
* audit: Support adding a configurable prefix (such as `@cee`) before each
  line [GH-2359]
* core: Canonicalize list operations to use a trailing slash [GH-2390]
* core: Add option to disable caching on a per-mount level [GH-2455]
* core: Add ability to require valid client certs in listener config [GH-2457]
* physical/dynamodb: Implement a session timeout to avoid having to use
  recovery mode in the case of an unclean shutdown, which makes HA much safer
  [GH-2141]
* secret/pki: O (Organization) values can now be set to role-defined values
  for issued/signed certificates [GH-2369]
* secret/pki: Certificates issued/signed from PKI backend does not generate
* secret/pki: Certificates issued/signed from PKI backend do not generate
  leases by default [GH-2403]
* secret/pki: When using DER format, still return the private key type
  [GH-2405]
* secret/pki: Add an intermediate to the CA chain even if it lacks an
  authority key ID [GH-2465]
* secret/pki: Add role option to use CSR SANs [GH-2489]
* secret/ssh: SSH backend as CA to sign user and host certificates [GH-2208]
* secret/ssh: Support reading of SSH CA public key from `config/ca` endpoint
  and also return it when CA key pair is generated [GH-2483]

BUG FIXES:
2  Makefile

@ -22,7 +22,7 @@ dev-dynamic: generate

# test runs the unit tests and vets the code
test: generate
	CGO_ENABLED=0 VAULT_TOKEN= VAULT_ACC= go test -tags='$(BUILD_TAGS)' $(TEST) $(TESTARGS) -timeout=10m -parallel=4
	CGO_ENABLED=0 VAULT_TOKEN= VAULT_ACC= go test -tags='$(BUILD_TAGS)' $(TEST) $(TESTARGS) -timeout=20m -parallel=4

testcompile: generate
	@for pkg in $(TEST) ; do \
@ -9,7 +9,7 @@ Vault [![Build Status](https://travis-ci.org/hashicorp/vault.svg)](https://travi
- Announcement list: [Google Groups](https://groups.google.com/group/hashicorp-announce)
- Discussion list: [Google Groups](https://groups.google.com/group/vault-tool)

![Vault](https://raw.githubusercontent.com/hashicorp/vault/master/website/source/assets/images/logo-big.png?token=AAAFE8XmW6YF5TNuk3cosDGBK-sUGPEjks5VSAa2wA%3D%3D)
<img width="300" alt="Vault Logo" src="https://cloud.githubusercontent.com/assets/416727/24112835/03b57de4-0d58-11e7-81f5-9056cac5b427.png">

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log.
@ -10,6 +10,7 @@ import (
	"strings"
	"sync"
	"time"
	"path"

	"golang.org/x/net/http2"

@ -329,13 +330,14 @@ func (c *Client) ClearToken() {
// NewRequest creates a new raw request object to query the Vault server
// configured for this client. This is an advanced method and generally
// doesn't need to be called externally.
func (c *Client) NewRequest(method, path string) *Request {
func (c *Client) NewRequest(method, requestPath string) *Request {
	req := &Request{
		Method: method,
		URL: &url.URL{
			User:   c.addr.User,
			Scheme: c.addr.Scheme,
			Host:   c.addr.Host,
			Path:   path,
			Path:   path.Join(c.addr.Path, requestPath),
		},
		ClientToken: c.token,
		Params:      make(map[string][]string),

@ -343,12 +345,12 @@ func (c *Client) NewRequest(method, path string) *Request {

	var lookupPath string
	switch {
	case strings.HasPrefix(path, "/v1/"):
		lookupPath = strings.TrimPrefix(path, "/v1/")
	case strings.HasPrefix(path, "v1/"):
		lookupPath = strings.TrimPrefix(path, "v1/")
	case strings.HasPrefix(requestPath, "/v1/"):
		lookupPath = strings.TrimPrefix(requestPath, "/v1/")
	case strings.HasPrefix(requestPath, "v1/"):
		lookupPath = strings.TrimPrefix(requestPath, "v1/")
	default:
		lookupPath = path
		lookupPath = requestPath
	}
	if c.wrappingLookupFunc != nil {
		req.WrapTTL = c.wrappingLookupFunc(method, lookupPath)
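This is the change behind the 0.7.1 bug fix "api: Respect a configured path in Vault's address [GH-2588]": whatever path component is present on the client's configured address is now joined in front of each request path. A tiny sketch of the resulting behavior, using a hypothetical reverse-proxy prefix:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

func main() {
	// Hypothetical Vault address that sits behind a reverse-proxy prefix.
	addr, err := url.Parse("https://vault.example.com/proxied/vault")
	if err != nil {
		panic(err)
	}

	// Previously the request path was used verbatim and the prefix was lost;
	// with path.Join the configured prefix is preserved on every request.
	requestPath := "/v1/secret/foo"
	fmt.Println(path.Join(addr.Path, requestPath))
	// Output: /proxied/vault/v1/secret/foo
}
```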
@ -55,6 +55,7 @@ func (r *Request) ToHTTP() (*http.Request, error) {
		return nil, err
	}

	req.URL.User = r.URL.User
	req.URL.Scheme = r.URL.Scheme
	req.URL.Host = r.URL.Host
	req.Host = r.URL.Host
@ -129,6 +129,7 @@ type MountInput struct {
type MountConfigInput struct {
	DefaultLeaseTTL string `json:"default_lease_ttl" structs:"default_lease_ttl" mapstructure:"default_lease_ttl"`
	MaxLeaseTTL     string `json:"max_lease_ttl" structs:"max_lease_ttl" mapstructure:"max_lease_ttl"`
	ForceNoCache    bool   `json:"force_no_cache" structs:"force_no_cache" mapstructure:"force_no_cache"`
}

type MountOutput struct {

@ -139,6 +140,7 @@ type MountOutput struct {
}

type MountConfigOutput struct {
	DefaultLeaseTTL int `json:"default_lease_ttl" structs:"default_lease_ttl" mapstructure:"default_lease_ttl"`
	MaxLeaseTTL     int `json:"max_lease_ttl" structs:"max_lease_ttl" mapstructure:"max_lease_ttl"`
	DefaultLeaseTTL int  `json:"default_lease_ttl" structs:"default_lease_ttl" mapstructure:"default_lease_ttl"`
	MaxLeaseTTL     int  `json:"max_lease_ttl" structs:"max_lease_ttl" mapstructure:"max_lease_ttl"`
	ForceNoCache    bool `json:"force_no_cache" structs:"force_no_cache" mapstructure:"force_no_cache"`
}
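The new `ForceNoCache` field corresponds to the 0.7.0 improvement "core: Add option to disable caching on a per-mount level [GH-2455]". As a rough sketch (assuming the `Sys().TuneMount` helper and a hypothetical `secret/` mount), a mount can be tuned like this:

```go
package main

import (
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Disable caching for a hypothetical "secret/" mount. ForceNoCache is the
	// new field added above; the two TTL fields were already present.
	err = client.Sys().TuneMount("secret/", api.MountConfigInput{
		DefaultLeaseTTL: "768h",
		MaxLeaseTTL:     "768h",
		ForceNoCache:    true,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```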
@ -102,9 +102,10 @@ func (f *AuditFormatter) FormatRequest(
		Error: errString,

		Auth: AuditAuth{
			DisplayName: auth.DisplayName,
			Policies:    auth.Policies,
			Metadata:    auth.Metadata,
			DisplayName:   auth.DisplayName,
			Policies:      auth.Policies,
			Metadata:      auth.Metadata,
			RemainingUses: req.ClientTokenRemainingUses,
		},

		Request: AuditRequest{

@ -255,6 +256,7 @@ func (f *AuditFormatter) FormatResponse(
			DisplayName: resp.Auth.DisplayName,
			Policies:    resp.Auth.Policies,
			Metadata:    resp.Auth.Metadata,
			NumUses:     resp.Auth.NumUses,
		}
	}

@ -362,11 +364,13 @@ type AuditResponse struct {
}

type AuditAuth struct {
	ClientToken string            `json:"client_token"`
	Accessor    string            `json:"accessor"`
	DisplayName string            `json:"display_name"`
	Policies    []string          `json:"policies"`
	Metadata    map[string]string `json:"metadata"`
	ClientToken   string            `json:"client_token"`
	Accessor      string            `json:"accessor"`
	DisplayName   string            `json:"display_name"`
	Policies      []string          `json:"policies"`
	Metadata      map[string]string `json:"metadata"`
	NumUses       int               `json:"num_uses,omitempty"`
	RemainingUses int               `json:"remaining_uses,omitempty"`
}

type AuditSecret struct {
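Because the two new counters use `omitempty`, they only appear in audit entries for tokens that actually have a use limit. A small self-contained sketch (a local copy of just the relevant fields, for illustration only) showing the serialized output:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local copy of just the relevant AuditAuth fields, for illustration only.
type auditAuth struct {
	DisplayName   string `json:"display_name"`
	NumUses       int    `json:"num_uses,omitempty"`
	RemainingUses int    `json:"remaining_uses,omitempty"`
}

func main() {
	limited, _ := json.Marshal(auditAuth{DisplayName: "approle-login", NumUses: 5, RemainingUses: 3})
	unlimited, _ := json.Marshal(auditAuth{DisplayName: "root-token"})
	fmt.Println(string(limited))   // {"display_name":"approle-login","num_uses":5,"remaining_uses":3}
	fmt.Println(string(unlimited)) // {"display_name":"root-token"}
}
```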
@ -1895,7 +1895,7 @@ func (b *backend) handleRoleSecretIDCommon(req *logical.Request, data *framework
|
|||
}
|
||||
|
||||
// Parse the CIDR blocks into a slice
|
||||
secretIDCIDRs := strutil.ParseDedupAndSortStrings(cidrList, ",")
|
||||
secretIDCIDRs := strutil.ParseDedupLowercaseAndSortStrings(cidrList, ",")
|
||||
|
||||
// Ensure that the CIDRs on the secret ID are a subset of that of role's
|
||||
if err := verifyCIDRRoleSecretIDSubset(secretIDCIDRs, role.BoundCIDRList); err != nil {
|
||||
|
@ -2086,7 +2086,7 @@ or the 'role/<role_name>/custom-secret-id' endpoints, and if those SecretIDs
|
|||
are used to perform the login operation, then the value of 'token-max-ttl'
|
||||
defines the maximum lifetime of the tokens issued, after which the tokens
|
||||
cannot be renewed. A reauthentication is required after this duration.
|
||||
This value will be croleed by the backend mount's maximum TTL value.`,
|
||||
This value will be capped by the backend mount's maximum TTL value.`,
|
||||
},
|
||||
"role-id": {
|
||||
"Returns the 'role_id' of the role.",
|
||||
|
|
|
@ -31,7 +31,7 @@ type secretIDStorageEntry struct {
|
|||
// operation
|
||||
SecretIDNumUses int `json:"secret_id_num_uses" structs:"secret_id_num_uses" mapstructure:"secret_id_num_uses"`
|
||||
|
||||
// Duration after which this SecretID should expire. This is croleed by
|
||||
// Duration after which this SecretID should expire. This is capped by
|
||||
// the backend mount's max TTL value.
|
||||
SecretIDTTL time.Duration `json:"secret_id_ttl" structs:"secret_id_ttl" mapstructure:"secret_id_ttl"`
|
||||
|
||||
|
@ -273,7 +273,7 @@ func (b *backend) validateBindSecretID(req *logical.Request, roleName, secretID,
|
|||
func verifyCIDRRoleSecretIDSubset(secretIDCIDRs []string, roleBoundCIDRList string) error {
|
||||
if len(secretIDCIDRs) != 0 {
|
||||
// Parse the CIDRs on role as a slice
|
||||
roleCIDRs := strutil.ParseDedupAndSortStrings(roleBoundCIDRList, ",")
|
||||
roleCIDRs := strutil.ParseDedupLowercaseAndSortStrings(roleBoundCIDRList, ",")
|
||||
|
||||
// If there are no CIDR blocks on the role, then the subset
|
||||
// requirement would be satisfied
|
||||
|
|
|
@ -1,165 +0,0 @@
|
|||
package awsec2
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/hashicorp/vault/logical"
|
||||
)
|
||||
|
||||
func TestAwsEc2_RoleCrud(t *testing.T) {
|
||||
config := logical.TestBackendConfig()
|
||||
storage := &logical.InmemStorage{}
|
||||
config.StorageView = storage
|
||||
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
_, err = b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
roleData := map[string]interface{}{
|
||||
"bound_ami_id": "testamiid",
|
||||
"bound_account_id": "testaccountid",
|
||||
"bound_region": "testregion",
|
||||
"bound_iam_role_arn": "testiamrolearn",
|
||||
"bound_iam_instance_profile_arn": "testiaminstanceprofilearn",
|
||||
"bound_subnet_id": "testsubnetid",
|
||||
"bound_vpc_id": "testvpcid",
|
||||
"role_tag": "testtag",
|
||||
"allow_instance_migration": true,
|
||||
"ttl": "10m",
|
||||
"max_ttl": "20m",
|
||||
"policies": "testpolicy1,testpolicy2",
|
||||
"disallow_reauthentication": true,
|
||||
"hmac_key": "testhmackey",
|
||||
"period": "1m",
|
||||
}
|
||||
|
||||
roleReq := &logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Storage: storage,
|
||||
Path: "role/testrole",
|
||||
Data: roleData,
|
||||
}
|
||||
|
||||
resp, err := b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
roleReq.Operation = logical.ReadOperation
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
expected := map[string]interface{}{
|
||||
"bound_ami_id": "testamiid",
|
||||
"bound_account_id": "testaccountid",
|
||||
"bound_region": "testregion",
|
||||
"bound_iam_role_arn": "testiamrolearn",
|
||||
"bound_iam_instance_profile_arn": "testiaminstanceprofilearn",
|
||||
"bound_subnet_id": "testsubnetid",
|
||||
"bound_vpc_id": "testvpcid",
|
||||
"role_tag": "testtag",
|
||||
"allow_instance_migration": true,
|
||||
"ttl": time.Duration(600),
|
||||
"max_ttl": time.Duration(1200),
|
||||
"policies": []string{"default", "testpolicy1", "testpolicy2"},
|
||||
"disallow_reauthentication": true,
|
||||
"period": time.Duration(60),
|
||||
}
|
||||
|
||||
if !reflect.DeepEqual(expected, resp.Data) {
|
||||
t.Fatalf("bad: role data: expected: %#v\n actual: %#v", expected, resp.Data)
|
||||
}
|
||||
|
||||
roleData["bound_vpc_id"] = "newvpcid"
|
||||
roleReq.Operation = logical.UpdateOperation
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
roleReq.Operation = logical.ReadOperation
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
expected["bound_vpc_id"] = "newvpcid"
|
||||
|
||||
if !reflect.DeepEqual(expected, resp.Data) {
|
||||
t.Fatalf("bad: role data: expected: %#v\n actual: %#v", expected, resp.Data)
|
||||
}
|
||||
|
||||
roleReq.Operation = logical.DeleteOperation
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
if resp != nil {
|
||||
t.Fatalf("failed to delete role entry")
|
||||
}
|
||||
}
|
||||
|
||||
func TestAwsEc2_RoleDurationSeconds(t *testing.T) {
|
||||
config := logical.TestBackendConfig()
|
||||
storage := &logical.InmemStorage{}
|
||||
config.StorageView = storage
|
||||
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
_, err = b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
roleData := map[string]interface{}{
|
||||
"bound_iam_instance_profile_arn": "testarn",
|
||||
"ttl": "10s",
|
||||
"max_ttl": "20s",
|
||||
"period": "30s",
|
||||
}
|
||||
|
||||
roleReq := &logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Storage: storage,
|
||||
Path: "role/testrole",
|
||||
Data: roleData,
|
||||
}
|
||||
|
||||
resp, err := b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
roleReq.Operation = logical.ReadOperation
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
if int64(resp.Data["ttl"].(time.Duration)) != 10 {
|
||||
t.Fatalf("bad: period; expected: 10, actual: %d", resp.Data["ttl"])
|
||||
}
|
||||
if int64(resp.Data["max_ttl"].(time.Duration)) != 20 {
|
||||
t.Fatalf("bad: period; expected: 20, actual: %d", resp.Data["max_ttl"])
|
||||
}
|
||||
if int64(resp.Data["period"].(time.Duration)) != 30 {
|
||||
t.Fatalf("bad: period; expected: 30, actual: %d", resp.Data["period"])
|
||||
}
|
||||
}
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"sync"
|
|
@ -1,12 +1,17 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
"os"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/aws/aws-sdk-go/aws/session"
|
||||
"github.com/aws/aws-sdk-go/service/sts"
|
||||
"github.com/hashicorp/vault/helper/policyutil"
|
||||
"github.com/hashicorp/vault/logical"
|
||||
logicaltest "github.com/hashicorp/vault/logical/testing"
|
||||
|
@ -29,11 +34,12 @@ func TestBackend_CreateParseVerifyRoleTag(t *testing.T) {
|
|||
|
||||
// create a role entry
|
||||
data := map[string]interface{}{
|
||||
"auth_type": "ec2",
|
||||
"policies": "p,q,r,s",
|
||||
"bound_ami_id": "abcd-123",
|
||||
}
|
||||
resp, err := b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/abcd-123",
|
||||
Storage: storage,
|
||||
Data: data,
|
||||
|
@ -100,7 +106,7 @@ func TestBackend_CreateParseVerifyRoleTag(t *testing.T) {
|
|||
|
||||
// register a different role
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/ami-6789",
|
||||
Storage: storage,
|
||||
Data: data,
|
||||
|
@ -683,132 +689,6 @@ vSeDCOUMYQR7R9LINYwouHIziqQYMAkGByqGSM44BAMDLwAwLAIUWXBlk40xTwSw
|
|||
}
|
||||
}
|
||||
|
||||
func TestBackend_pathRole(t *testing.T) {
|
||||
config := logical.TestBackendConfig()
|
||||
storage := &logical.InmemStorage{}
|
||||
config.StorageView = storage
|
||||
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
_, err = b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
data := map[string]interface{}{
|
||||
"policies": "p,q,r,s",
|
||||
"max_ttl": "2h",
|
||||
"bound_ami_id": "ami-abcd123",
|
||||
}
|
||||
resp, err := b.HandleRequest(&logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to create role")
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ReadOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp == nil || resp.IsError() {
|
||||
t.Fatal("failed to read the role entry")
|
||||
}
|
||||
if !policyutil.EquivalentPolicies(strings.Split(data["policies"].(string), ","), resp.Data["policies"].([]string)) {
|
||||
t.Fatalf("bad: policies: expected: %#v\ngot: %#v\n", data, resp.Data)
|
||||
}
|
||||
|
||||
data["allow_instance_migration"] = true
|
||||
data["disallow_reauthentication"] = true
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to create role")
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ReadOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if !resp.Data["allow_instance_migration"].(bool) || !resp.Data["disallow_reauthentication"].(bool) {
|
||||
t.Fatal("bad: expected:true got:false\n")
|
||||
}
|
||||
|
||||
// add another entry, to test listing of role entries
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "role/ami-abcd456",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to create role")
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ListOperation,
|
||||
Path: "roles",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp == nil || resp.Data == nil || resp.IsError() {
|
||||
t.Fatalf("failed to list the role entries")
|
||||
}
|
||||
keys := resp.Data["keys"].([]string)
|
||||
if len(keys) != 2 {
|
||||
t.Fatalf("bad: keys: %#v\n", keys)
|
||||
}
|
||||
|
||||
_, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.DeleteOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ReadOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp != nil {
|
||||
t.Fatalf("bad: response: expected:nil actual:%#v\n", resp)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func TestBackend_parseAndVerifyRoleTagValue(t *testing.T) {
|
||||
// create a backend
|
||||
config := logical.TestBackendConfig()
|
||||
|
@ -825,6 +705,7 @@ func TestBackend_parseAndVerifyRoleTagValue(t *testing.T) {
|
|||
|
||||
// create a role
|
||||
data := map[string]interface{}{
|
||||
"auth_type": "ec2",
|
||||
"policies": "p,q,r,s",
|
||||
"max_ttl": "120s",
|
||||
"role_tag": "VaultRole",
|
||||
|
@ -901,6 +782,7 @@ func TestBackend_PathRoleTag(t *testing.T) {
|
|||
}
|
||||
|
||||
data := map[string]interface{}{
|
||||
"auth_type": "ec2",
|
||||
"policies": "p,q,r,s",
|
||||
"max_ttl": "120s",
|
||||
"role_tag": "VaultRole",
|
||||
|
@ -966,6 +848,7 @@ func TestBackend_PathBlacklistRoleTag(t *testing.T) {
|
|||
|
||||
// create a role entry
|
||||
data := map[string]interface{}{
|
||||
"auth_type": "ec2",
|
||||
"policies": "p,q,r,s",
|
||||
"role_tag": "VaultRole",
|
||||
"bound_ami_id": "abcd-123",
|
||||
|
@ -1068,7 +951,7 @@ func TestBackend_PathBlacklistRoleTag(t *testing.T) {
|
|||
// needs to be set:
|
||||
// TEST_AWS_SECRET_KEY
|
||||
// TEST_AWS_ACCESS_KEY
|
||||
func TestBackendAcc_LoginAndWhitelistIdentity(t *testing.T) {
|
||||
func TestBackendAcc_LoginWithInstanceIdentityDocAndWhitelistIdentity(t *testing.T) {
|
||||
// This test case should be run only when certain env vars are set and
|
||||
// executed as an acceptance test.
|
||||
if os.Getenv(logicaltest.TestEnvVar) == "" {
|
||||
|
@ -1156,6 +1039,7 @@ func TestBackendAcc_LoginAndWhitelistIdentity(t *testing.T) {
|
|||
|
||||
// Place the wrong AMI ID in the role data.
|
||||
data := map[string]interface{}{
|
||||
"auth_type": "ec2",
|
||||
"policies": "root",
|
||||
"max_ttl": "120s",
|
||||
"bound_ami_id": "wrong_ami_id",
|
||||
|
@ -1164,7 +1048,7 @@ func TestBackendAcc_LoginAndWhitelistIdentity(t *testing.T) {
|
|||
}
|
||||
|
||||
roleReq := &logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/" + roleName,
|
||||
Storage: storage,
|
||||
Data: data,
|
||||
|
@ -1183,6 +1067,7 @@ func TestBackendAcc_LoginAndWhitelistIdentity(t *testing.T) {
|
|||
}
|
||||
|
||||
// Place the correct AMI ID, but make the AccountID wrong
|
||||
roleReq.Operation = logical.UpdateOperation
|
||||
data["bound_ami_id"] = amiID
|
||||
data["bound_account_id"] = "wrong-account-id"
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
|
@ -1210,7 +1095,7 @@ func TestBackendAcc_LoginAndWhitelistIdentity(t *testing.T) {
|
|||
t.Fatalf("bad: expected error response: resp:%#v\nerr:%v", resp, err)
|
||||
}
|
||||
|
||||
// Place the correct IAM Role ARN
|
||||
// place the correct IAM role ARN
|
||||
data["bound_iam_role_arn"] = iamARN
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
|
@ -1280,7 +1165,6 @@ func TestBackend_pathStsConfig(t *testing.T) {
|
|||
config := logical.TestBackendConfig()
|
||||
storage := &logical.InmemStorage{}
|
||||
config.StorageView = storage
|
||||
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
|
@ -1289,7 +1173,6 @@ func TestBackend_pathStsConfig(t *testing.T) {
|
|||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
stsReq := &logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Storage: storage,
|
||||
|
@ -1389,3 +1272,233 @@ func TestBackend_pathStsConfig(t *testing.T) {
|
|||
t.Fatalf("no entries should be present")
|
||||
}
|
||||
}
|
||||
|
||||
func buildCallerIdentityLoginData(request *http.Request, roleName string) (map[string]interface{}, error) {
|
||||
headersJson, err := json.Marshal(request.Header)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
requestBody, err := ioutil.ReadAll(request.Body)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return map[string]interface{}{
|
||||
"iam_http_request_method": request.Method,
|
||||
"iam_request_url": base64.StdEncoding.EncodeToString([]byte(request.URL.String())),
|
||||
"iam_request_headers": base64.StdEncoding.EncodeToString(headersJson),
|
||||
"iam_request_body": base64.StdEncoding.EncodeToString(requestBody),
|
||||
"request_role": roleName,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// This is an acceptance test.
|
||||
// If the test is NOT being run on an AWS EC2 instance in an instance profile,
|
||||
// it requires the following environment variables to be set:
|
||||
// TEST_AWS_ACCESS_KEY_ID
|
||||
// TEST_AWS_SECRET_ACCESS_KEY
|
||||
// TEST_AWS_SECURITY_TOKEN or TEST_AWS_SESSION_TOKEN (optional, if you are using short-lived creds)
|
||||
// These are intentionally NOT the "standard" variables to prevent accidentally
|
||||
// using prod creds in acceptance tests
|
||||
func TestBackendAcc_LoginWithCallerIdentity(t *testing.T) {
|
||||
// This test case should be run only when certain env vars are set and
|
||||
// executed as an acceptance test.
|
||||
if os.Getenv(logicaltest.TestEnvVar) == "" {
|
||||
t.Skip(fmt.Sprintf("Acceptance tests skipped unless env '%s' set", logicaltest.TestEnvVar))
|
||||
return
|
||||
}
|
||||
|
||||
storage := &logical.InmemStorage{}
|
||||
config := logical.TestBackendConfig()
|
||||
config.StorageView = storage
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
_, err = b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Override the default AWS env vars (if set) with our test creds
|
||||
// so that the credential provider chain will pick them up
|
||||
// NOTE that I'm not bothering to override the shared config file location,
|
||||
// so if creds are specified there, they will be used before IAM
|
||||
// instance profile creds
|
||||
// This doesn't provide perfect leakage protection (e.g., it will still
|
||||
// potentially pick up credentials from the ~/.config files), but probably
|
||||
// good enough rather than having to muck around in the low-level details
|
||||
for _, envvar := range []string{
|
||||
"AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_SECURITY_TOKEN", "AWS_SESSION_TOKEN"} {
|
||||
// restore existing environment variables (in case future tests need them)
|
||||
defer os.Setenv(envvar, os.Getenv(envvar))
|
||||
os.Setenv(envvar, os.Getenv("TEST_"+envvar))
|
||||
}
|
||||
awsSession, err := session.NewSession()
|
||||
if err != nil {
|
||||
fmt.Println("failed to create session,", err)
|
||||
return
|
||||
}
|
||||
|
||||
stsService := sts.New(awsSession)
|
||||
stsInputParams := &sts.GetCallerIdentityInput{}
|
||||
|
||||
testIdentity, err := stsService.GetCallerIdentity(stsInputParams)
|
||||
if err != nil {
|
||||
t.Fatalf("Received error retrieving identity: %s", err)
|
||||
}
|
||||
testIdentityArn, _, _, err := parseIamArn(*testIdentity.Arn)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Test setup largely done
|
||||
// At this point, we're going to:
|
||||
// 1. Configure the client to require our test header value
|
||||
// 2. Configure two different roles:
|
||||
// a. One bound to our test user
|
||||
// b. One bound to a garbage ARN
|
||||
// 3. Pass in a request that doesn't have the signed header, ensure
|
||||
// we're not allowed to login
|
||||
// 4. Pass in a request that has a validly signed header, but the wrong
|
||||
// value, ensure it doesn't allow login
|
||||
// 5. Pass in a request that has a validly signed request, ensure
|
||||
// it allows us to login to our role
|
||||
// 6. Pass in a request that has a validly signed request, asking for
|
||||
// the other role, ensure it fails
|
||||
const testVaultHeaderValue = "VaultAcceptanceTesting"
|
||||
const testValidRoleName = "valid-role"
|
||||
const testInvalidRoleName = "invalid-role"
|
||||
|
||||
clientConfigData := map[string]interface{}{
|
||||
"iam_server_id_header_value": testVaultHeaderValue,
|
||||
}
|
||||
clientRequest := &logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "config/client",
|
||||
Storage: storage,
|
||||
Data: clientConfigData,
|
||||
}
|
||||
_, err = b.HandleRequest(clientRequest)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// configuring the valid role we'll be able to login to
|
||||
roleData := map[string]interface{}{
|
||||
"bound_iam_principal_arn": testIdentityArn,
|
||||
"policies": "root",
|
||||
"auth_type": iamAuthType,
|
||||
}
|
||||
roleRequest := &logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/" + testValidRoleName,
|
||||
Storage: storage,
|
||||
Data: roleData,
|
||||
}
|
||||
resp, err := b.HandleRequest(roleRequest)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("bad: failed to create role: resp:%#v\nerr:%v", resp, err)
|
||||
}
|
||||
|
||||
// configuring a valid role we won't be able to login to
|
||||
roleDataEc2 := map[string]interface{}{
|
||||
"auth_type": "ec2",
|
||||
"policies": "root",
|
||||
"bound_ami_id": "ami-1234567",
|
||||
}
|
||||
roleRequestEc2 := &logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/ec2only",
|
||||
Storage: storage,
|
||||
Data: roleDataEc2,
|
||||
}
|
||||
resp, err = b.HandleRequest(roleRequestEc2)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("bad: failed to create role; resp:%#v\nerr:%v", resp, err)
|
||||
}
|
||||
|
||||
// now we're creating the invalid role we won't be able to login to
|
||||
roleData["bound_iam_principal_arn"] = "arn:aws:iam::123456789012:role/FakeRole"
|
||||
roleRequest.Path = "role/" + testInvalidRoleName
|
||||
resp, err = b.HandleRequest(roleRequest)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("bad: didn't fail to create role: resp:%#v\nerr:%v", resp, err)
|
||||
}
|
||||
|
||||
// now, create the request without the signed header
|
||||
stsRequestNoHeader, _ := stsService.GetCallerIdentityRequest(stsInputParams)
|
||||
stsRequestNoHeader.Sign()
|
||||
loginData, err := buildCallerIdentityLoginData(stsRequestNoHeader.HTTPRequest, testValidRoleName)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
loginRequest := &logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "login",
|
||||
Storage: storage,
|
||||
Data: loginData,
|
||||
}
|
||||
resp, err = b.HandleRequest(loginRequest)
|
||||
if err != nil || resp == nil || !resp.IsError() {
|
||||
t.Errorf("bad: expected failed login due to missing header: resp:%#v\nerr:%v", resp, err)
|
||||
}
|
||||
|
||||
// create the request with the invalid header value
|
||||
|
||||
// Not reusing stsRequestNoHeader because the process of signing the request
|
||||
// and reading the body modifies the underlying request, so it's just cleaner
|
||||
// to get new requests.
|
||||
stsRequestInvalidHeader, _ := stsService.GetCallerIdentityRequest(stsInputParams)
|
||||
stsRequestInvalidHeader.HTTPRequest.Header.Add(iamServerIdHeader, "InvalidValue")
|
||||
stsRequestInvalidHeader.Sign()
|
||||
loginData, err = buildCallerIdentityLoginData(stsRequestInvalidHeader.HTTPRequest, testValidRoleName)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
loginRequest = &logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "login",
|
||||
Storage: storage,
|
||||
Data: loginData,
|
||||
}
|
||||
resp, err = b.HandleRequest(loginRequest)
|
||||
if err != nil || resp == nil || !resp.IsError() {
|
||||
t.Errorf("bad: expected failed login due to invalid header: resp:%#v\nerr:%v", resp, err)
|
||||
}
|
||||
|
||||
// Now, valid request against invalid role
|
||||
stsRequestValid, _ := stsService.GetCallerIdentityRequest(stsInputParams)
|
||||
stsRequestValid.HTTPRequest.Header.Add(iamServerIdHeader, testVaultHeaderValue)
|
||||
stsRequestValid.Sign()
|
||||
loginData, err = buildCallerIdentityLoginData(stsRequestValid.HTTPRequest, testInvalidRoleName)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
loginRequest = &logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "login",
|
||||
Storage: storage,
|
||||
Data: loginData,
|
||||
}
|
||||
resp, err = b.HandleRequest(loginRequest)
|
||||
if err != nil || resp == nil || !resp.IsError() {
|
||||
t.Errorf("bad: expected failed login due to invalid role: resp:%#v\nerr:%v", resp, err)
|
||||
}
|
||||
|
||||
loginData["role"] = "ec2only"
|
||||
resp, err = b.HandleRequest(loginRequest)
|
||||
if err != nil || resp == nil || !resp.IsError() {
|
||||
t.Errorf("bad: expected failed login due to bad auth type: resp:%#v\nerr:%v", resp, err)
|
||||
}
|
||||
|
||||
// finally, the happy path tests :)
|
||||
|
||||
loginData["role"] = testValidRoleName
|
||||
resp, err = b.HandleRequest(loginRequest)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp == nil || resp.Auth == nil || resp.IsError() {
|
||||
t.Errorf("bad: expected valid login: resp:%#v", resp)
|
||||
}
|
||||
}
|
|
@ -0,0 +1,129 @@
|
|||
package awsauth
|
||||
|
||||
import (
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"strings"
|
||||
|
||||
"github.com/aws/aws-sdk-go/aws"
|
||||
"github.com/aws/aws-sdk-go/aws/session"
|
||||
"github.com/aws/aws-sdk-go/service/sts"
|
||||
"github.com/hashicorp/vault/api"
|
||||
"github.com/hashicorp/vault/helper/awsutil"
|
||||
)
|
||||
|
||||
type CLIHandler struct{}
|
||||
|
||||
func (h *CLIHandler) Auth(c *api.Client, m map[string]string) (string, error) {
|
||||
mount, ok := m["mount"]
|
||||
if !ok {
|
||||
mount = "aws"
|
||||
}
|
||||
|
||||
role, ok := m["role"]
|
||||
if !ok {
|
||||
role = ""
|
||||
}
|
||||
|
||||
headerValue, ok := m["header_value"]
|
||||
if !ok {
|
||||
headerValue = ""
|
||||
}
|
||||
|
||||
// Grab any supplied credentials off the command line
|
||||
// Ensure we're able to fall back to the SDK default credential providers
|
||||
credConfig := &awsutil.CredentialsConfig{
|
||||
AccessKey: m["aws_access_key_id"],
|
||||
SecretKey: m["aws_secret_access_key"],
|
||||
SessionToken: m["aws_security_token"],
|
||||
}
|
||||
creds, err := credConfig.GenerateCredentialChain()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
if creds == nil {
|
||||
return "", fmt.Errorf("could not compile valid credential providers from static config, environemnt, shared, or instance metadata")
|
||||
}
|
||||
|
||||
// Use the credentials we've found to construct an STS session
|
||||
stsSession, err := session.NewSessionWithOptions(session.Options{
|
||||
Config: aws.Config{Credentials: creds},
|
||||
})
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
var params *sts.GetCallerIdentityInput
|
||||
svc := sts.New(stsSession)
|
||||
stsRequest, _ := svc.GetCallerIdentityRequest(params)
|
||||
|
||||
// Inject the required auth header value, if supplied, and then sign the request including that header
|
||||
if headerValue != "" {
|
||||
stsRequest.HTTPRequest.Header.Add(iamServerIdHeader, headerValue)
|
||||
}
|
||||
stsRequest.Sign()
|
||||
|
||||
// Now extract out the relevant parts of the request
|
||||
headersJson, err := json.Marshal(stsRequest.HTTPRequest.Header)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
requestBody, err := ioutil.ReadAll(stsRequest.HTTPRequest.Body)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
method := stsRequest.HTTPRequest.Method
|
||||
targetUrl := base64.StdEncoding.EncodeToString([]byte(stsRequest.HTTPRequest.URL.String()))
|
||||
headers := base64.StdEncoding.EncodeToString(headersJson)
|
||||
body := base64.StdEncoding.EncodeToString(requestBody)
|
||||
|
||||
// And pass them on to the Vault server
|
||||
path := fmt.Sprintf("auth/%s/login", mount)
|
||||
secret, err := c.Logical().Write(path, map[string]interface{}{
|
||||
"iam_http_request_method": method,
|
||||
"iam_request_url": targetUrl,
|
||||
"iam_request_headers": headers,
|
||||
"iam_request_body": body,
|
||||
"role": role,
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
if secret == nil {
|
||||
return "", fmt.Errorf("empty response from credential provider")
|
||||
}
|
||||
|
||||
return secret.Auth.ClientToken, nil
|
||||
}
|
||||
|
||||
func (h *CLIHandler) Help() string {
|
||||
help := `
|
||||
The AWS credential provider allows you to authenticate with
|
||||
AWS IAM credentials. To use it, you specify valid AWS IAM credentials
|
||||
in one of a number of ways. They can be specified explicitly on the
|
||||
command line (which in general you should not do), via the standard AWS
|
||||
environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and
|
||||
AWS_SECURITY_TOKEN), via the ~/.aws/credentials file, or via an EC2
|
||||
instance profile (in that order).
|
||||
|
||||
Example: vault auth -method=aws
|
||||
|
||||
If you need to explicitly pass in credentials, you would do it like this:
|
||||
Example: vault auth -method=aws aws_access_key_id=<access key> aws_secret_access_key=<secret key> aws_security_token=<token>
|
||||
|
||||
Key/Value Pairs:
|
||||
|
||||
mount=aws The mountpoint for the AWS credential provider.
|
||||
Defaults to "aws"
|
||||
aws_access_key_id=<access key> Explicitly specified AWS access key
|
||||
aws_secret_access_key=<secret key> Explicitly specified AWS secret key
|
||||
aws_security_token=<token> Security token for temporary credentials
|
||||
header_value The Value of the X-Vault-AWS-IAM-Server-ID header.
|
||||
role The name of the role you're requesting a token for
|
||||
`
|
||||
|
||||
return strings.TrimSpace(help)
|
||||
}
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
@ -13,14 +13,14 @@ import (
|
|||
"github.com/hashicorp/vault/logical"
|
||||
)
|
||||
|
||||
// getClientConfig creates a aws-sdk-go config, which is used to create client
|
||||
// getRawClientConfig creates a aws-sdk-go config, which is used to create client
|
||||
// that can interact with AWS API. This builds credentials in the following
|
||||
// order of preference:
|
||||
//
|
||||
// * Static credentials from 'config/client'
|
||||
// * Environment variables
|
||||
// * Instance metadata role
|
||||
func (b *backend) getClientConfig(s logical.Storage, region string) (*aws.Config, error) {
|
||||
func (b *backend) getRawClientConfig(s logical.Storage, region, clientType string) (*aws.Config, error) {
|
||||
credsConfig := &awsutil.CredentialsConfig{
|
||||
Region: region,
|
||||
}
|
||||
|
@ -34,8 +34,13 @@ func (b *backend) getClientConfig(s logical.Storage, region string) (*aws.Config
|
|||
endpoint := aws.String("")
|
||||
if config != nil {
|
||||
// Override the default endpoint with the configured endpoint.
|
||||
if config.Endpoint != "" {
|
||||
switch {
|
||||
case clientType == "ec2" && config.Endpoint != "":
|
||||
endpoint = aws.String(config.Endpoint)
|
||||
case clientType == "iam" && config.IAMEndpoint != "":
|
||||
endpoint = aws.String(config.IAMEndpoint)
|
||||
case clientType == "sts" && config.STSEndpoint != "":
|
||||
endpoint = aws.String(config.STSEndpoint)
|
||||
}
|
||||
|
||||
credsConfig.AccessKey = config.AccessKey
|
||||
|
@ -61,25 +66,35 @@ func (b *backend) getClientConfig(s logical.Storage, region string) (*aws.Config
|
|||
}, nil
|
||||
}
|
||||
|
||||
// getStsClientConfig returns an aws-sdk-go config, with assumed credentials
|
||||
// It uses getClientConfig to obtain config for the runtime environemnt, which is
|
||||
// then used to obtain a set of assumed credentials. The credentials will expire
|
||||
// after 15 minutes but will auto-refresh.
|
||||
func (b *backend) getStsClientConfig(s logical.Storage, region string, stsRole string) (*aws.Config, error) {
|
||||
config, err := b.getClientConfig(s, region)
|
||||
// getClientConfig returns an aws-sdk-go config, with optionally assumed credentials
|
||||
// It uses getRawClientConfig to obtain config for the runtime environment, and if
|
||||
// stsRole is a non-empty string, it will use AssumeRole to obtain a set of assumed
|
||||
// credentials. The credentials will expire after 15 minutes but will auto-refresh.
|
||||
func (b *backend) getClientConfig(s logical.Storage, region, stsRole, clientType string) (*aws.Config, error) {
|
||||
|
||||
config, err := b.getRawClientConfig(s, region, clientType)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if config == nil {
|
||||
return nil, fmt.Errorf("could not compile valid credentials through the default provider chain")
|
||||
}
|
||||
assumedCredentials := stscreds.NewCredentials(session.New(config), stsRole)
|
||||
// Test that we actually have permissions to assume the role
|
||||
if _, err = assumedCredentials.Get(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
config.Credentials = assumedCredentials
|
||||
if stsRole != "" {
|
||||
assumeRoleConfig, err := b.getRawClientConfig(s, region, "sts")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if assumeRoleConfig == nil {
|
||||
return nil, fmt.Errorf("could not configure STS client")
|
||||
}
|
||||
assumedCredentials := stscreds.NewCredentials(session.New(assumeRoleConfig), stsRole)
|
||||
// Test that we actually have permissions to assume the role
|
||||
if _, err = assumedCredentials.Get(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
config.Credentials = assumedCredentials
|
||||
}
|
||||
|
||||
return config, nil
|
||||
}
|
||||
|
@ -128,12 +143,7 @@ func (b *backend) clientEC2(s logical.Storage, region string, stsRole string) (*
|
|||
// Create an AWS config object using a chain of providers
|
||||
var awsConfig *aws.Config
|
||||
var err error
|
||||
// The empty stsRole signifies the master account
|
||||
if stsRole == "" {
|
||||
awsConfig, err = b.getClientConfig(s, region)
|
||||
} else {
|
||||
awsConfig, err = b.getStsClientConfig(s, region, stsRole)
|
||||
}
|
||||
awsConfig, err = b.getClientConfig(s, region, stsRole, "ec2")
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
|
@ -179,12 +189,7 @@ func (b *backend) clientIAM(s logical.Storage, region string, stsRole string) (*
|
|||
// Create an AWS config object using a chain of providers
|
||||
var awsConfig *aws.Config
|
||||
var err error
|
||||
// The empty stsRole signifies the master account
|
||||
if stsRole == "" {
|
||||
awsConfig, err = b.getClientConfig(s, region)
|
||||
} else {
|
||||
awsConfig, err = b.getStsClientConfig(s, region, stsRole)
|
||||
}
|
||||
awsConfig, err = b.getClientConfig(s, region, stsRole, "iam")
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"crypto/x509"
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"github.com/fatih/structs"
|
||||
|
@ -27,6 +27,24 @@ func pathConfigClient(b *backend) *framework.Path {
|
|||
Default: "",
|
||||
Description: "URL to override the default generated endpoint for making AWS EC2 API calls.",
|
||||
},
|
||||
|
||||
"iam_endpoint": &framework.FieldSchema{
|
||||
Type: framework.TypeString,
|
||||
Default: "",
|
||||
Description: "URL to override the default generated endpoint for making AWS IAM API calls.",
|
||||
},
|
||||
|
||||
"sts_endpoint": &framework.FieldSchema{
|
||||
Type: framework.TypeString,
|
||||
Default: "",
|
||||
Description: "URL to override the default generated endpoint for making AWS STS API calls.",
|
||||
},
|
||||
|
||||
"iam_server_id_header_value": &framework.FieldSchema{
|
||||
Type: framework.TypeString,
|
||||
Default: "",
|
||||
Description: "Value to require in the X-Vault-AWS-IAM-Server-ID request header",
|
||||
},
|
||||
},
|
||||
|
||||
ExistenceCheck: b.pathConfigClientExistenceCheck,
|
||||
|
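The three new parameters extend the existing `config/client` endpoint. A minimal sketch of setting them through the Go API client; the endpoint URLs and header value below are placeholders, and the backend is assumed to be mounted at `aws`:

```go
package main

import (
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// All values are placeholders; the custom endpoints are only needed when
	// pointing the backend at non-default AWS API endpoints.
	_, err = client.Logical().Write("auth/aws/config/client", map[string]interface{}{
		"iam_endpoint":               "https://iam.example.internal",
		"sts_endpoint":               "https://sts.example.internal",
		"iam_server_id_header_value": "vault.example.com",
	})
	if err != nil {
		log.Fatal(err)
	}
}
```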
@ -162,6 +180,41 @@ func (b *backend) pathConfigClientCreateUpdate(
|
|||
configEntry.Endpoint = data.Get("endpoint").(string)
|
||||
}
|
||||
|
||||
iamEndpointStr, ok := data.GetOk("iam_endpoint")
|
||||
if ok {
|
||||
if configEntry.IAMEndpoint != iamEndpointStr.(string) {
|
||||
changedCreds = true
|
||||
configEntry.IAMEndpoint = iamEndpointStr.(string)
|
||||
}
|
||||
} else if req.Operation == logical.CreateOperation {
|
||||
configEntry.IAMEndpoint = data.Get("iam_endpoint").(string)
|
||||
}
|
||||
|
||||
stsEndpointStr, ok := data.GetOk("sts_endpoint")
|
||||
if ok {
|
||||
if configEntry.STSEndpoint != stsEndpointStr.(string) {
|
||||
// We don't directly cache STS clients as they are never directly used.
|
||||
// However, they are potentially indirectly used as credential providers
|
||||
// for the EC2 and IAM clients, and thus we would be indirectly caching
|
||||
// them there. So, if we change the STS endpoint, we should flush those
|
||||
// cached clients.
|
||||
changedCreds = true
|
||||
configEntry.STSEndpoint = stsEndpointStr.(string)
|
||||
}
|
||||
} else if req.Operation == logical.CreateOperation {
|
||||
configEntry.STSEndpoint = data.Get("sts_endpoint").(string)
|
||||
}
|
||||
|
||||
headerValStr, ok := data.GetOk("iam_server_id_header_value")
|
||||
if ok {
|
||||
if configEntry.IAMServerIdHeaderValue != headerValStr.(string) {
|
||||
// NOT setting changedCreds here, since this isn't really cached
|
||||
configEntry.IAMServerIdHeaderValue = headerValStr.(string)
|
||||
}
|
||||
} else if req.Operation == logical.CreateOperation {
|
||||
configEntry.IAMServerIdHeaderValue = data.Get("iam_server_id_header_value").(string)
|
||||
}
|
||||
|
||||
// Since this endpoint supports both create operation and update operation,
|
||||
// the error checks for access_key and secret_key not being set are not present.
|
||||
// This allows calling this endpoint multiple times to provide the values.
|
||||
|
@ -172,8 +225,10 @@ func (b *backend) pathConfigClientCreateUpdate(
|
|||
return nil, err
|
||||
}
|
||||
|
||||
if err := req.Storage.Put(entry); err != nil {
|
||||
return nil, err
|
||||
if changedCreds || req.Operation == logical.CreateOperation {
|
||||
if err := req.Storage.Put(entry); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
if changedCreds {
|
||||
|
@ -187,9 +242,12 @@ func (b *backend) pathConfigClientCreateUpdate(
|
|||
// Struct to hold 'aws_access_key' and 'aws_secret_key' that are required to
|
||||
// interact with the AWS EC2 API.
|
||||
type clientConfig struct {
|
||||
AccessKey string `json:"access_key" structs:"access_key" mapstructure:"access_key"`
|
||||
SecretKey string `json:"secret_key" structs:"secret_key" mapstructure:"secret_key"`
|
||||
Endpoint string `json:"endpoint" structs:"endpoint" mapstructure:"endpoint"`
|
||||
AccessKey string `json:"access_key" structs:"access_key" mapstructure:"access_key"`
|
||||
SecretKey string `json:"secret_key" structs:"secret_key" mapstructure:"secret_key"`
|
||||
Endpoint string `json:"endpoint" structs:"endpoint" mapstructure:"endpoint"`
|
||||
IAMEndpoint string `json:"iam_endpoint" structs:"iam_endpoint" mapstructure:"iam_endpoint"`
|
||||
STSEndpoint string `json:"sts_endpoint" structs:"sts_endpoint" mapstructure:"sts_endpoint"`
|
||||
IAMServerIdHeaderValue string `json:"iam_server_id_header_value" structs:"iam_server_id_header_value" mapstructure:"iam_server_id_header_value"`
|
||||
}
|
||||
|
||||
const pathConfigClientHelpSyn = `
|
|
@ -0,0 +1,76 @@
|
|||
package awsauth
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/hashicorp/vault/logical"
|
||||
)
|
||||
|
||||
func TestBackend_pathConfigClient(t *testing.T) {
|
||||
config := logical.TestBackendConfig()
|
||||
storage := &logical.InmemStorage{}
|
||||
config.StorageView = storage
|
||||
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
_, err = b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// make sure we start with empty roles, which gives us confidence that the read later
|
||||
// actually is the two roles we created
|
||||
resp, err := b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ReadOperation,
|
||||
Path: "config/client",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
// at this point, resp == nil is valid as no client config exists
|
||||
// if resp != nil, then resp.Data must have EndPoint and IAMServerIdHeaderValue as nil
|
||||
if resp != nil {
|
||||
if resp.IsError() {
|
||||
t.Fatalf("failed to read client config entry")
|
||||
} else if resp.Data["endpoint"] != nil || resp.Data["iam_server_id_header_value"] != nil {
|
||||
t.Fatalf("returned endpoint or iam_server_id_header_value non-nil")
|
||||
}
|
||||
}
|
||||
|
||||
data := map[string]interface{}{
|
||||
"sts_endpoint": "https://my-custom-sts-endpoint.example.com",
|
||||
"iam_server_id_header_value": "vault_server_identification_314159",
|
||||
}
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "config/client",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatal("failed to create the client config entry")
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ReadOperation,
|
||||
Path: "config/client",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp == nil || resp.IsError() {
|
||||
t.Fatal("failed to read the client config entry")
|
||||
}
|
||||
if resp.Data["iam_server_id_header_value"] != data["iam_server_id_header_value"] {
|
||||
t.Fatalf("expected iam_server_id_header_value: '%#v'; returned iam_server_id_header_value: '%#v'",
|
||||
data["iam_server_id_header_value"], resp.Data["iam_server_id_header_value"])
|
||||
}
|
||||
}
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"fmt"
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"fmt"
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"fmt"
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"time"
|
File diff suppressed because it is too large
|
@ -0,0 +1,140 @@
|
|||
package awsauth
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
"net/url"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestBackend_pathLogin_getCallerIdentityResponse(t *testing.T) {
|
||||
responseFromUser := `<GetCallerIdentityResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
|
||||
<GetCallerIdentityResult>
|
||||
<Arn>arn:aws:iam::123456789012:user/MyUserName</Arn>
|
||||
<UserId>ASOMETHINGSOMETHINGSOMETHING</UserId>
|
||||
<Account>123456789012</Account>
|
||||
</GetCallerIdentityResult>
|
||||
<ResponseMetadata>
|
||||
<RequestId>7f4fc40c-853a-11e6-8848-8d035d01eb87</RequestId>
|
||||
</ResponseMetadata>
|
||||
</GetCallerIdentityResponse>`
|
||||
expectedUserArn := "arn:aws:iam::123456789012:user/MyUserName"
|
||||
|
||||
responseFromAssumedRole := `<GetCallerIdentityResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
|
||||
<GetCallerIdentityResult>
|
||||
<Arn>arn:aws:sts::123456789012:assumed-role/RoleName/RoleSessionName</Arn>
|
||||
<UserId>ASOMETHINGSOMETHINGELSE:RoleSessionName</UserId>
|
||||
<Account>123456789012</Account>
|
||||
</GetCallerIdentityResult>
|
||||
<ResponseMetadata>
|
||||
<RequestId>7f4fc40c-853a-11e6-8848-8d035d01eb87</RequestId>
|
||||
</ResponseMetadata>
|
||||
</GetCallerIdentityResponse>`
|
||||
expectedRoleArn := "arn:aws:sts::123456789012:assumed-role/RoleName/RoleSessionName"
|
||||
|
||||
parsedUserResponse, err := parseGetCallerIdentityResponse(responseFromUser)
|
||||
if parsed_arn := parsedUserResponse.GetCallerIdentityResult[0].Arn; parsed_arn != expectedUserArn {
|
||||
t.Errorf("expected to parse arn %#v, got %#v", expectedUserArn, parsed_arn)
|
||||
}
|
||||
|
||||
parsedRoleResponse, err := parseGetCallerIdentityResponse(responseFromAssumedRole)
|
||||
if parsed_arn := parsedRoleResponse.GetCallerIdentityResult[0].Arn; parsed_arn != expectedRoleArn {
|
||||
t.Errorf("expected to parn arn %#v; got %#v", expectedRoleArn, parsed_arn)
|
||||
}
|
||||
|
||||
_, err = parseGetCallerIdentityResponse("SomeRandomGibberish")
|
||||
if err == nil {
|
||||
t.Errorf("expected to NOT parse random giberish, but didn't get an error")
|
||||
}
|
||||
}
|
||||
|
||||
func TestBackend_pathLogin_parseIamArn(t *testing.T) {
|
||||
userArn := "arn:aws:iam::123456789012:user/MyUserName"
|
||||
assumedRoleArn := "arn:aws:sts::123456789012:assumed-role/RoleName/RoleSessionName"
|
||||
baseRoleArn := "arn:aws:iam::123456789012:role/RoleName"
|
||||
|
||||
xformedUser, principalFriendlyName, sessionName, err := parseIamArn(userArn)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if xformedUser != userArn {
|
||||
t.Fatalf("expected to transform ARN %#v into %#v but got %#v instead", userArn, userArn, xformedUser)
|
||||
}
|
||||
if principalFriendlyName != "MyUserName" {
|
||||
t.Fatalf("expected to extract MyUserName from ARN %#v but got %#v instead", userArn, principalFriendlyName)
|
||||
}
|
||||
if sessionName != "" {
|
||||
t.Fatalf("expected to extract no session name from ARN %#v but got %#v instead", userArn, sessionName)
|
||||
}
|
||||
|
||||
xformedRole, principalFriendlyName, sessionName, err := parseIamArn(assumedRoleArn)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if xformedRole != baseRoleArn {
|
||||
t.Fatalf("expected to transform ARN %#v into %#v but got %#v instead", assumedRoleArn, baseRoleArn, xformedRole)
|
||||
}
|
||||
if principalFriendlyName != "RoleName" {
|
||||
t.Fatalf("expected to extract principal name of RoleName from ARN %#v but got %#v instead", assumedRoleArn, sessionName)
|
||||
}
|
||||
if sessionName != "RoleSessionName" {
|
||||
t.Fatalf("expected to extract role session name of RoleSessionName from ARN %#v but got %#v instead", assumedRoleArn, sessionName)
|
||||
}
|
||||
}
|
||||
|
||||
func TestBackend_validateVaultHeaderValue(t *testing.T) {
|
||||
const canaryHeaderValue = "Vault-Server"
|
||||
requestUrl, err := url.Parse("https://sts.amazonaws.com/")
|
||||
if err != nil {
|
||||
t.Fatalf("error parsing test URL: %v", err)
|
||||
}
|
||||
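// The header fixtures below each vary one aspect of the signed request: the
// server ID header can be missing, carry the wrong value, be present but left
// out of the signed headers, be signed correctly, or arrive with the
// Authorization header split across two values.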
postHeadersMissing := http.Header{
|
||||
"Host": []string{"Foo"},
|
||||
"Authorization": []string{"AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20150830/us-east-1/iam/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-vault-aws-iam-server-id, Signature=5d672d79c15b13162d9279b0855cfba6789a8edb4c82c400e06b5924a6f2b5d7"},
|
||||
}
|
||||
postHeadersInvalid := http.Header{
|
||||
"Host": []string{"Foo"},
|
||||
iamServerIdHeader: []string{"InvalidValue"},
|
||||
"Authorization": []string{"AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20150830/us-east-1/iam/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-vault-aws-iam-server-id, Signature=5d672d79c15b13162d9279b0855cfba6789a8edb4c82c400e06b5924a6f2b5d7"},
|
||||
}
|
||||
postHeadersUnsigned := http.Header{
|
||||
"Host": []string{"Foo"},
|
||||
iamServerIdHeader: []string{canaryHeaderValue},
|
||||
"Authorization": []string{"AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20150830/us-east-1/iam/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=5d672d79c15b13162d9279b0855cfba6789a8edb4c82c400e06b5924a6f2b5d7"},
|
||||
}
|
||||
postHeadersValid := http.Header{
|
||||
"Host": []string{"Foo"},
|
||||
iamServerIdHeader: []string{canaryHeaderValue},
|
||||
"Authorization": []string{"AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20150830/us-east-1/iam/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-vault-aws-iam-server-id, Signature=5d672d79c15b13162d9279b0855cfba6789a8edb4c82c400e06b5924a6f2b5d7"},
|
||||
}
|
||||
|
||||
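// The Authorization header may be presented as multiple values rather than a
// single string; validation still needs to locate the SignedHeaders component.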
postHeadersSplit := http.Header{
|
||||
"Host": []string{"Foo"},
|
||||
iamServerIdHeader: []string{canaryHeaderValue},
|
||||
"Authorization": []string{"AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20150830/us-east-1/iam/aws4_request", "SignedHeaders=content-type;host;x-amz-date;x-vault-aws-iam-server-id, Signature=5d672d79c15b13162d9279b0855cfba6789a8edb4c82c400e06b5924a6f2b5d7"},
|
||||
}
|
||||
|
||||
err = validateVaultHeaderValue(postHeadersMissing, requestUrl, canaryHeaderValue)
|
||||
if err == nil {
|
||||
t.Error("validated POST request with missing Vault header")
|
||||
}
|
||||
|
||||
err = validateVaultHeaderValue(postHeadersInvalid, requestUrl, canaryHeaderValue)
|
||||
if err == nil {
|
||||
t.Error("validated POST request with invalid Vault header value")
|
||||
}
|
||||
|
||||
err = validateVaultHeaderValue(postHeadersUnsigned, requestUrl, canaryHeaderValue)
|
||||
if err == nil {
|
||||
t.Error("validated POST request with unsigned Vault header")
|
||||
}
|
||||
|
||||
err = validateVaultHeaderValue(postHeadersValid, requestUrl, canaryHeaderValue)
|
||||
if err != nil {
|
||||
t.Errorf("did NOT validate valid POST request: %v", err)
|
||||
}
|
||||
|
||||
err = validateVaultHeaderValue(postHeadersSplit, requestUrl, canaryHeaderValue)
|
||||
if err != nil {
|
||||
t.Errorf("did NOT validate valid POST request with split Authorization header: %v", err)
|
||||
}
|
||||
}
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
@ -20,6 +20,11 @@ func pathRole(b *backend) *framework.Path {
|
|||
Type: framework.TypeString,
|
||||
Description: "Name of the role.",
|
||||
},
|
||||
"auth_type": {
|
||||
Type: framework.TypeString,
|
||||
Description: `The auth_type permitted to authenticate to this role. Must be one of
|
||||
iam or ec2 and cannot be changed after role creation.`,
|
||||
},
|
||||
"bound_ami_id": {
|
||||
Type: framework.TypeString,
|
||||
Description: `If set, defines a constraint on the EC2 instances that they should be
|
||||
|
@ -29,10 +34,17 @@ using the AMI ID specified by this parameter.`,
|
|||
Type: framework.TypeString,
|
||||
Description: `If set, defines a constraint on the EC2 instances that the account ID
|
||||
in its identity document to match the one specified by this parameter.`,
|
||||
},
|
||||
"bound_iam_principal_arn": {
|
||||
Type: framework.TypeString,
|
||||
Description: `ARN of the IAM principal to bind to this role. Only applicable when
|
||||
auth_type is iam.`,
|
||||
},
|
||||
"bound_region": {
|
||||
Type: framework.TypeString,
|
||||
Description: `If set, defines a constraint on the EC2 instances that the region in its identity document to match the one specified by this parameter.`,
|
||||
Type: framework.TypeString,
|
||||
Description: `If set, defines a constraint on the EC2 instances that the region in
|
||||
its identity document to match the one specified by this parameter. Only applicable when
|
||||
auth_type is ec2.`,
|
||||
},
|
||||
"bound_iam_role_arn": {
|
||||
Type: framework.TypeString,
|
||||
|
@ -41,14 +53,34 @@ that it must match the IAM role ARN specified by this parameter.
|
|||
The value is prefix-matched (as though it were a glob ending in
|
||||
'*'). The configured IAM user or EC2 instance role must be allowed
|
||||
to execute the 'iam:GetInstanceProfile' action if this is
|
||||
specified.`,
|
||||
specified. This is only checked when auth_type is
|
||||
ec2.`,
|
||||
},
|
||||
"bound_iam_instance_profile_arn": {
|
||||
Type: framework.TypeString,
|
||||
Description: `If set, defines a constraint on the EC2 instances to be associated
|
||||
with an IAM instance profile ARN which has a prefix that matches
|
||||
the value specified by this parameter. The value is prefix-matched
|
||||
(as though it were a glob ending in '*').`,
|
||||
(as though it were a glob ending in '*'). This is only checked when
|
||||
auth_type is ec2.`,
|
||||
},
|
||||
"inferred_entity_type": {
|
||||
Type: framework.TypeString,
|
||||
Description: `When auth_type is iam, the
|
||||
AWS entity type to infer from the authenticated principal. The only supported
|
||||
value is ec2_instance, which will extract the EC2 instance ID from the
|
||||
authenticated role and apply the following restrictions specific to EC2
|
||||
instances: bound_ami_id, bound_account_id, bound_iam_role_arn,
|
||||
bound_iam_instance_profile_arn, bound_vpc_id, bound_subnet_id. The configured
|
||||
EC2 client must be able to find the inferred instance ID in the results, and the
|
||||
instance must be running. If unable to determine the EC2 instance ID or unable
|
||||
to find the EC2 instance ID among running instances, then authentication will
|
||||
fail.`,
|
||||
},
|
||||
"inferred_aws_region": {
|
||||
Type: framework.TypeString,
|
||||
Description: `When auth_type is iam and
|
||||
inferred_entity_type is set, the region to assume the inferred entity exists in.`,
|
||||
},
|
||||
"bound_vpc_id": {
|
||||
Type: framework.TypeString,
|
||||
|
@ -63,9 +95,13 @@ If set, defines a constraint on the EC2 instance to be associated with the
|
|||
subnet ID that matches the value specified by this parameter.`,
|
||||
},
|
||||
"role_tag": {
|
||||
Type: framework.TypeString,
|
||||
Default: "",
|
||||
Description: "If set, enables the role tags for this role. The value set for this field should be the 'key' of the tag on the EC2 instance. The 'value' of the tag should be generated using 'role/<role>/tag' endpoint. Defaults to an empty string, meaning that role tags are disabled.",
|
||||
Type: framework.TypeString,
|
||||
Default: "",
|
||||
Description: `If set, enables the role tags for this role. The value set for this
|
||||
field should be the 'key' of the tag on the EC2 instance. The 'value'
|
||||
of the tag should be generated using 'role/<role>/tag' endpoint.
|
||||
Defaults to an empty string, meaning that role tags are disabled. This
|
||||
is only allowed if auth_type is ec2.`,
|
||||
},
|
||||
"period": &framework.FieldSchema{
|
||||
Type: framework.TypeDurationSecond,
|
||||
|
@ -90,9 +126,14 @@ to 0, in which case the value will fallback to the system/mount defaults.`,
|
|||
Description: "Policies to be set on tokens issued using this role.",
|
||||
},
|
||||
"allow_instance_migration": {
|
||||
Type: framework.TypeBool,
|
||||
Default: false,
|
||||
Description: "If set, allows migration of the underlying instance where the client resides. This keys off of pendingTime in the metadata document, so essentially, this disables the client nonce check whenever the instance is migrated to a new host and pendingTime is newer than the previously-remembered time. Use with caution.",
|
||||
Type: framework.TypeBool,
|
||||
Default: false,
|
||||
Description: `If set, allows migration of the underlying instance where the client
|
||||
resides. This keys off of pendingTime in the metadata document, so
|
||||
essentially, this disables the client nonce check whenever the
|
||||
instance is migrated to a new host and pendingTime is newer than the
|
||||
previously-remembered time. Use with caution. This is only checked when
|
||||
auth_type is ec2.`,
|
||||
},
|
||||
"disallow_reauthentication": {
|
||||
Type: framework.TypeBool,
|
||||
|
@ -159,9 +200,44 @@ func (b *backend) lockedAWSRole(s logical.Storage, roleName string) (*awsRoleEnt
|
|||
}
|
||||
|
||||
b.roleMutex.RLock()
|
||||
defer b.roleMutex.RUnlock()
|
||||
|
||||
return b.nonLockedAWSRole(s, roleName)
|
||||
roleEntry, err := b.nonLockedAWSRole(s, roleName)
|
||||
// we manually unlock rather than defer the unlock because we might need to grab
|
||||
// a read/write lock in the upgrade path
|
||||
b.roleMutex.RUnlock()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if roleEntry == nil {
|
||||
return nil, nil
|
||||
}
|
||||
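// Check, without holding any lock, whether the stored entry needs upgrading;
// the write lock below is only taken when a persisted change is required.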
needUpgrade, err := upgradeRoleEntry(roleEntry)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("error upgrading roleEntry: %v", err)
|
||||
}
|
||||
if needUpgrade {
|
||||
b.roleMutex.Lock()
|
||||
defer b.roleMutex.Unlock()
|
||||
// Now that we have a R/W lock, we need to re-read the role entry in case it was
|
||||
// written to between releasing the read lock and acquiring the write lock
|
||||
roleEntry, err = b.nonLockedAWSRole(s, roleName)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// somebody deleted the role, so no use in putting it back
|
||||
if roleEntry == nil {
|
||||
return nil, nil
|
||||
}
|
||||
// now re-check to see if we need to upgrade
|
||||
if needUpgrade, err = upgradeRoleEntry(roleEntry); err != nil {
|
||||
return nil, fmt.Errorf("error upgrading roleEntry: %v", err)
|
||||
}
|
||||
if needUpgrade {
|
||||
if err = b.nonLockedSetAWSRole(s, roleName, roleEntry); err != nil {
|
||||
return nil, fmt.Errorf("error saving upgraded roleEntry: %v", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
return roleEntry, nil
|
||||
}
|
||||
|
||||
// lockedSetAWSRole creates or updates a role in the storage. This method
|
||||
|
@ -206,9 +282,41 @@ func (b *backend) nonLockedSetAWSRole(s logical.Storage, roleName string,
|
|||
return nil
|
||||
}
|
||||
|
||||
// If needed, updates the role entry and returns a bool indicating if it was updated
|
||||
// (and thus needs to be persisted)
|
||||
func upgradeRoleEntry(roleEntry *awsRoleEntry) (bool, error) {
|
||||
if roleEntry == nil {
|
||||
return false, fmt.Errorf("received nil roleEntry")
|
||||
}
|
||||
var upgraded bool
|
||||
// Check if the value held by role ARN field is actually an instance profile ARN
|
||||
if roleEntry.BoundIamRoleARN != "" && strings.Contains(roleEntry.BoundIamRoleARN, ":instance-profile/") {
|
||||
// If yes, move it to the correct field
|
||||
roleEntry.BoundIamInstanceProfileARN = roleEntry.BoundIamRoleARN
|
||||
|
||||
// Reset the old field
|
||||
roleEntry.BoundIamRoleARN = ""
|
||||
|
||||
upgraded = true
|
||||
}
|
||||
|
||||
// Check if there was no pre-existing AuthType set (from older versions)
|
||||
if roleEntry.AuthType == "" {
|
||||
// then default to the original behavior of ec2
|
||||
roleEntry.AuthType = ec2AuthType
|
||||
upgraded = true
|
||||
}
|
||||
|
||||
return upgraded, nil
|
||||
|
||||
}
|
||||
|
||||
// nonLockedAWSRole returns the properties set on the given role. This method
|
||||
// does not acquire the read lock before reading the role from the storage. If
|
||||
// locking is desired, use lockedAWSRole instead.
|
||||
// This method also does NOT check to see if a role upgrade is required. It is
|
||||
// the responsibility of the caller to check if a role upgrade is required and,
|
||||
// if so, to upgrade the role
|
||||
func (b *backend) nonLockedAWSRole(s logical.Storage, roleName string) (*awsRoleEntry, error) {
|
||||
if roleName == "" {
|
||||
return nil, fmt.Errorf("missing role name")
|
||||
|
@ -227,20 +335,6 @@ func (b *backend) nonLockedAWSRole(s logical.Storage, roleName string) (*awsRole
|
|||
return nil, err
|
||||
}
|
||||
|
||||
// Check if the value held by role ARN field is actually an instance profile ARN
|
||||
if result.BoundIamRoleARN != "" && strings.Contains(result.BoundIamRoleARN, ":instance-profile/") {
|
||||
// If yes, move it to the correct field
|
||||
result.BoundIamInstanceProfileARN = result.BoundIamRoleARN
|
||||
|
||||
// Reset the old field
|
||||
result.BoundIamRoleARN = ""
|
||||
|
||||
// Save the update
|
||||
if err = b.nonLockedSetAWSRole(s, roleName, &result); err != nil {
|
||||
return nil, fmt.Errorf("failed to move instance profile ARN to bound_iam_instance_profile_arn field")
|
||||
}
|
||||
}
|
||||
|
||||
return &result, nil
|
||||
}
|
||||
|
||||
|
@ -316,6 +410,17 @@ func (b *backend) pathRoleCreateUpdate(
|
|||
}
|
||||
if roleEntry == nil {
|
||||
roleEntry = &awsRoleEntry{}
|
||||
} else {
|
||||
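// An existing entry may predate the iam auth type or still carry an instance
// profile ARN in the role ARN field; upgrade and persist it before applying
// the requested changes.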
needUpdate, err := upgradeRoleEntry(roleEntry)
|
||||
if err != nil {
|
||||
return logical.ErrorResponse(fmt.Sprintf("failed to update roleEntry: %v", err)), nil
|
||||
}
|
||||
if needUpdate {
|
||||
err = b.nonLockedSetAWSRole(req.Storage, roleName, roleEntry)
|
||||
if err != nil {
|
||||
return logical.ErrorResponse(fmt.Sprintf("failed to save upgraded roleEntry: %v", err)), nil
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Fetch and set the bound parameters. There can't be default values
|
||||
|
@ -348,14 +453,120 @@ func (b *backend) pathRoleCreateUpdate(
|
|||
roleEntry.BoundIamInstanceProfileARN = boundIamInstanceProfileARNRaw.(string)
|
||||
}
|
||||
|
||||
// Ensure that at least one bound is set on the role
|
||||
switch {
|
||||
case roleEntry.BoundAccountID != "":
|
||||
case roleEntry.BoundAmiID != "":
|
||||
case roleEntry.BoundIamInstanceProfileARN != "":
|
||||
case roleEntry.BoundIamRoleARN != "":
|
||||
default:
|
||||
if boundIamPrincipalARNRaw, ok := data.GetOk("bound_iam_principal_arn"); ok {
|
||||
roleEntry.BoundIamPrincipalARN = boundIamPrincipalARNRaw.(string)
|
||||
}
|
||||
|
||||
if inferRoleTypeRaw, ok := data.GetOk("inferred_entity_type"); ok {
|
||||
roleEntry.InferredEntityType = inferRoleTypeRaw.(string)
|
||||
}
|
||||
|
||||
if inferredAWSRegionRaw, ok := data.GetOk("inferred_aws_region"); ok {
|
||||
roleEntry.InferredAWSRegion = inferredAWSRegionRaw.(string)
|
||||
}
|
||||
|
||||
// auth_type is a special case as it's immutable and can't be changed once a role is created
|
||||
if authTypeRaw, ok := data.GetOk("auth_type"); ok {
|
||||
// roleEntry.AuthType should only be "" when it's a new role; existing roles without an
|
||||
// auth_type should have already been upgraded to have one before we get here
|
||||
if roleEntry.AuthType == "" {
|
||||
switch authTypeRaw.(string) {
|
||||
case ec2AuthType, iamAuthType:
|
||||
roleEntry.AuthType = authTypeRaw.(string)
|
||||
default:
|
||||
return logical.ErrorResponse(fmt.Sprintf("unrecognized auth_type: %v", authTypeRaw.(string))), nil
|
||||
}
|
||||
} else if authTypeRaw.(string) != roleEntry.AuthType {
|
||||
return logical.ErrorResponse("changing auth_type on a role is not allowed"), nil
|
||||
}
|
||||
} else if req.Operation == logical.CreateOperation {
|
||||
switch req.MountType {
|
||||
// maintain backwards compatibility for old aws-ec2 auth types
|
||||
case "aws-ec2":
|
||||
roleEntry.AuthType = ec2AuthType
|
||||
// but default to iamAuth for new mounts going forward
|
||||
case "aws":
|
||||
roleEntry.AuthType = iamAuthType
|
||||
default:
|
||||
roleEntry.AuthType = iamAuthType
|
||||
}
|
||||
}
|
||||
|
||||
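// EC2-specific bind parameters are accepted either when the role uses the ec2
// auth type directly or, as set below, when iam auth infers an ec2_instance
// entity.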
allowEc2Binds := roleEntry.AuthType == ec2AuthType
|
||||
|
||||
if roleEntry.InferredEntityType != "" {
|
||||
switch {
|
||||
case roleEntry.AuthType != iamAuthType:
|
||||
return logical.ErrorResponse("specified inferred_entity_type but didn't allow iam auth_type"), nil
|
||||
case roleEntry.InferredEntityType != ec2EntityType:
|
||||
return logical.ErrorResponse(fmt.Sprintf("specified invalid inferred_entity_type: %s", roleEntry.InferredEntityType)), nil
|
||||
case roleEntry.InferredAWSRegion == "":
|
||||
return logical.ErrorResponse("specified inferred_entity_type but not inferred_aws_region"), nil
|
||||
}
|
||||
allowEc2Binds = true
|
||||
} else if roleEntry.InferredAWSRegion != "" {
|
||||
return logical.ErrorResponse("specified inferred_aws_region but not inferred_entity_type"), nil
|
||||
}
|
||||
|
||||
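// Count the bind constraints that were supplied, rejecting any that are not
// valid for the role's auth type; at least one bind must remain at the end.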
numBinds := 0
|
||||
|
||||
if roleEntry.BoundAccountID != "" {
|
||||
if !allowEc2Binds {
|
||||
return logical.ErrorResponse(fmt.Sprintf("specified bound_account_id but not allowing ec2 auth_type or inferring %s", ec2EntityType)), nil
|
||||
}
|
||||
numBinds++
|
||||
}
|
||||
|
||||
if roleEntry.BoundRegion != "" {
|
||||
if roleEntry.AuthType != ec2AuthType {
|
||||
return logical.ErrorResponse("specified bound_region but not allowing ec2 auth_type"), nil
|
||||
}
|
||||
numBinds++
|
||||
}
|
||||
|
||||
if roleEntry.BoundAmiID != "" {
|
||||
if !allowEc2Binds {
|
||||
return logical.ErrorResponse(fmt.Sprintf("specified bound_ami_id but not allowing ec2 auth_type or inferring %s", ec2EntityType)), nil
|
||||
}
|
||||
numBinds++
|
||||
}
|
||||
|
||||
if roleEntry.BoundIamInstanceProfileARN != "" {
|
||||
if !allowEc2Binds {
|
||||
return logical.ErrorResponse(fmt.Sprintf("specified bound_iam_instance_profile_arn but not allowing ec2 auth_type or inferring %s", ec2EntityType)), nil
|
||||
}
|
||||
numBinds++
|
||||
}
|
||||
|
||||
if roleEntry.BoundIamRoleARN != "" {
|
||||
if !allowEc2Binds {
|
||||
return logical.ErrorResponse(fmt.Sprintf("specified bound_iam_role_arn but not allowing ec2 auth_type or inferring %s", ec2EntityType)), nil
|
||||
}
|
||||
numBinds++
|
||||
}
|
||||
|
||||
if roleEntry.BoundIamPrincipalARN != "" {
|
||||
if roleEntry.AuthType != iamAuthType {
|
||||
return logical.ErrorResponse("specified bound_iam_principal_arn but not allowing iam auth_type"), nil
|
||||
}
|
||||
numBinds++
|
||||
}
|
||||
|
||||
if roleEntry.BoundVpcID != "" {
|
||||
if !allowEc2Binds {
|
||||
return logical.ErrorResponse(fmt.Sprintf("specified bound_vpc_id but not allowing ec2 auth_type or inferring %s", ec2EntityType)), nil
|
||||
}
|
||||
numBinds++
|
||||
}
|
||||
|
||||
if roleEntry.BoundSubnetID != "" {
|
||||
if !allowEc2Binds {
|
||||
return logical.ErrorResponse(fmt.Sprintf("specified bound_subnet_id but not allowing ec2 auth_type or inferring %s", ec2EntityType)), nil
|
||||
}
|
||||
numBinds++
|
||||
}
|
||||
|
||||
if numBinds == 0 {
|
||||
return logical.ErrorResponse("at least be one bound parameter should be specified on the role"), nil
|
||||
}
|
||||
|
||||
|
@ -368,15 +579,21 @@ func (b *backend) pathRoleCreateUpdate(
|
|||
|
||||
disallowReauthenticationBool, ok := data.GetOk("disallow_reauthentication")
|
||||
if ok {
|
||||
if roleEntry.AuthType != ec2AuthType {
|
||||
return logical.ErrorResponse("specified disallow_reauthentication when not using ec2 auth type"), nil
|
||||
}
|
||||
roleEntry.DisallowReauthentication = disallowReauthenticationBool.(bool)
|
||||
} else if req.Operation == logical.CreateOperation {
|
||||
} else if req.Operation == logical.CreateOperation && roleEntry.AuthType == ec2AuthType {
|
||||
roleEntry.DisallowReauthentication = data.Get("disallow_reauthentication").(bool)
|
||||
}
|
||||
|
||||
allowInstanceMigrationBool, ok := data.GetOk("allow_instance_migration")
|
||||
if ok {
|
||||
if roleEntry.AuthType != ec2AuthType {
|
||||
return logical.ErrorResponse("specified allow_instance_migration when not using ec2 auth type"), nil
|
||||
}
|
||||
roleEntry.AllowInstanceMigration = allowInstanceMigrationBool.(bool)
|
||||
} else if req.Operation == logical.CreateOperation {
|
||||
} else if req.Operation == logical.CreateOperation && roleEntry.AuthType == ec2AuthType {
|
||||
roleEntry.AllowInstanceMigration = data.Get("allow_instance_migration").(bool)
|
||||
}
|
||||
|
||||
|
@ -428,13 +645,16 @@ func (b *backend) pathRoleCreateUpdate(
|
|||
|
||||
roleTagStr, ok := data.GetOk("role_tag")
|
||||
if ok {
|
||||
if roleEntry.AuthType != ec2AuthType {
|
||||
return logical.ErrorResponse("tried to enable role_tag when not using ec2 auth method"), nil
|
||||
}
|
||||
roleEntry.RoleTag = roleTagStr.(string)
|
||||
// There is a limit of 127 characters on the tag key for AWS EC2 instances.
|
||||
// Complying to that requirement, do not allow the value of 'key' to be more than that.
|
||||
if len(roleEntry.RoleTag) > 127 {
|
||||
return logical.ErrorResponse("length of role tag exceeds the EC2 key limit of 127 characters"), nil
|
||||
}
|
||||
} else if req.Operation == logical.CreateOperation {
|
||||
} else if req.Operation == logical.CreateOperation && roleEntry.AuthType == ec2AuthType {
|
||||
roleEntry.RoleTag = data.Get("role_tag").(string)
|
||||
}
|
||||
|
||||
|
@ -458,13 +678,17 @@ func (b *backend) pathRoleCreateUpdate(
|
|||
|
||||
// Struct to hold the information associated with an AMI ID in Vault.
|
||||
type awsRoleEntry struct {
|
||||
AuthType string `json:"auth_type" structs:"auth_type" mapstructure:"auth_type"`
|
||||
BoundAmiID string `json:"bound_ami_id" structs:"bound_ami_id" mapstructure:"bound_ami_id"`
|
||||
BoundAccountID string `json:"bound_account_id" structs:"bound_account_id" mapstructure:"bound_account_id"`
|
||||
BoundRegion string `json:"bound_region" structs:"bound_region" mapstructure:"bound_region"`
|
||||
BoundIamPrincipalARN string `json:"bound_iam_principal_arn" structs:"bound_iam_principal_arn" mapstructure:"bound_iam_principal_arn"`
|
||||
BoundIamRoleARN string `json:"bound_iam_role_arn" structs:"bound_iam_role_arn" mapstructure:"bound_iam_role_arn"`
|
||||
BoundIamInstanceProfileARN string `json:"bound_iam_instance_profile_arn" structs:"bound_iam_instance_profile_arn" mapstructure:"bound_iam_instance_profile_arn"`
|
||||
BoundRegion string `json:"bound_region" structs:"bound_region" mapstructure:"bound_region"`
|
||||
BoundSubnetID string `json:"bound_subnet_id" structs:"bound_subnet_id" mapstructure:"bound_subnet_id"`
|
||||
BoundVpcID string `json:"bound_vpc_id" structs:"bound_vpc_id" mapstructure:"bound_vpc_id"`
|
||||
InferredEntityType string `json:"inferred_entity_type" structs:"inferred_entity_type" mapstructure:"inferred_entity_type"`
|
||||
InferredAWSRegion string `json:"inferred_aws_region" structs:"inferred_aws_region" mapstructure:"inferred_aws_region"`
|
||||
RoleTag string `json:"role_tag" structs:"role_tag" mapstructure:"role_tag"`
|
||||
AllowInstanceMigration bool `json:"allow_instance_migration" structs:"allow_instance_migration" mapstructure:"allow_instance_migration"`
|
||||
TTL time.Duration `json:"ttl" structs:"ttl" mapstructure:"ttl"`
|
||||
|
@ -492,6 +716,7 @@ endpoint 'role/<role>/tag'. This tag then needs to be applied on the
|
|||
instance before it attempts a login. The policies on the tag should be a
|
||||
subset of policies that are associated to the role. In order to enable
|
||||
login using tags, 'role_tag' option should be set while creating a role.
|
||||
This only applies when authenticating EC2 instances.
|
||||
|
||||
Also, a 'max_ttl' can be configured in this endpoint that determines the maximum
|
||||
duration for which a login can be renewed. Note that the 'max_ttl' has an upper
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"crypto/hmac"
|
|
@ -0,0 +1,555 @@
|
|||
package awsauth
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/hashicorp/vault/helper/policyutil"
|
||||
"github.com/hashicorp/vault/logical"
|
||||
)
|
||||
|
||||
func TestBackend_pathRoleEc2(t *testing.T) {
|
||||
config := logical.TestBackendConfig()
|
||||
storage := &logical.InmemStorage{}
|
||||
config.StorageView = storage
|
||||
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
_, err = b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
data := map[string]interface{}{
|
||||
"auth_type": "ec2",
|
||||
"policies": "p,q,r,s",
|
||||
"max_ttl": "2h",
|
||||
"bound_ami_id": "ami-abcd123",
|
||||
}
|
||||
resp, err := b.HandleRequest(&logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to create role")
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ReadOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp == nil || resp.IsError() {
|
||||
t.Fatal("failed to read the role entry")
|
||||
}
|
||||
if !policyutil.EquivalentPolicies(strings.Split(data["policies"].(string), ","), resp.Data["policies"].([]string)) {
|
||||
t.Fatalf("bad: policies: expected: %#v\ngot: %#v\n", data, resp.Data)
|
||||
}
|
||||
|
||||
data["allow_instance_migration"] = true
|
||||
data["disallow_reauthentication"] = true
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to create role: %s", resp.Data["error"])
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ReadOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if !resp.Data["allow_instance_migration"].(bool) || !resp.Data["disallow_reauthentication"].(bool) {
|
||||
t.Fatal("bad: expected:true got:false\n")
|
||||
}
|
||||
|
||||
// add another entry, to test listing of role entries
|
||||
data["bound_ami_id"] = "ami-abcd456"
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/ami-abcd456",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to create role: %s", resp.Data["error"])
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ListOperation,
|
||||
Path: "roles",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp == nil || resp.Data == nil || resp.IsError() {
|
||||
t.Fatalf("failed to list the role entries")
|
||||
}
|
||||
keys := resp.Data["keys"].([]string)
|
||||
if len(keys) != 2 {
|
||||
t.Fatalf("bad: keys: %#v\n", keys)
|
||||
}
|
||||
|
||||
_, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.DeleteOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ReadOperation,
|
||||
Path: "role/ami-abcd123",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp != nil {
|
||||
t.Fatalf("bad: response: expected:nil actual:%#v\n", resp)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func TestBackend_pathIam(t *testing.T) {
|
||||
config := logical.TestBackendConfig()
|
||||
storage := &logical.InmemStorage{}
|
||||
config.StorageView = storage
|
||||
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
_, err = b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// make sure we start with empty roles, which gives us confidence that the read later
|
||||
// actually is the two roles we created
|
||||
resp, err := b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ListOperation,
|
||||
Path: "roles",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp == nil || resp.Data == nil || resp.IsError() {
|
||||
t.Fatalf("failed to list role entries")
|
||||
}
|
||||
if resp.Data["keys"] != nil {
|
||||
t.Fatalf("Received roles when expected none")
|
||||
}
|
||||
|
||||
data := map[string]interface{}{
|
||||
"auth_type": iamAuthType,
|
||||
"policies": "p,q,r,s",
|
||||
"max_ttl": "2h",
|
||||
"bound_iam_principal_arn": "n:aws:iam::123456789012:user/MyUserName",
|
||||
}
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/MyRoleName",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to create the role entry; resp: %#v", resp)
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ReadOperation,
|
||||
Path: "role/MyRoleName",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp == nil || resp.IsError() {
|
||||
t.Fatal("failed to read the role entry")
|
||||
}
|
||||
if !policyutil.EquivalentPolicies(strings.Split(data["policies"].(string), ","), resp.Data["policies"].([]string)) {
|
||||
t.Fatalf("bad: policies: expected %#v\ngot: %#v\n", data, resp.Data)
|
||||
}
|
||||
|
||||
data["inferred_entity_type"] = "invalid"
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/ShouldNeverExist",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
if resp == nil || !resp.IsError() {
|
||||
t.Fatalf("Created role with invalid inferred_entity_type")
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
data["inferred_entity_type"] = ec2EntityType
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/ShouldNeverExist",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
if resp == nil || !resp.IsError() {
|
||||
t.Fatalf("Created role without necessary inferred_aws_region")
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
delete(data, "bound_iam_principal_arn")
|
||||
data["inferred_aws_region"] = "us-east-1"
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/ShouldNeverExist",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
if resp == nil || !resp.IsError() {
|
||||
t.Fatalf("Created role without anything bound")
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// generate a second role, ensure we're able to list both
|
||||
data["bound_ami_id"] = "ami-abcd123"
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Path: "role/MyOtherRoleName",
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to create additional role: %s")
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ListOperation,
|
||||
Path: "roles",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp == nil || resp.Data == nil || resp.IsError() {
|
||||
t.Fatalf("failed to list role entries")
|
||||
}
|
||||
keys := resp.Data["keys"].([]string)
|
||||
if len(keys) != 2 {
|
||||
t.Fatalf("bad: keys %#v\n", keys)
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.DeleteOperation,
|
||||
Path: "role/MyOtherRoleName",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ReadOperation,
|
||||
Path: "role/MyOtherRoleName",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp != nil {
|
||||
t.Fatalf("bad: response: expected: nil actual:%3v\n", resp)
|
||||
}
|
||||
}
|
||||
|
||||
func TestBackend_pathRoleMixedTypes(t *testing.T) {
|
||||
config := logical.TestBackendConfig()
|
||||
storage := &logical.InmemStorage{}
|
||||
config.StorageView = storage
|
||||
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
_, err = b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
data := map[string]interface{}{
|
||||
"policies": "p,q,r,s",
|
||||
"bound_ami_id": "ami-abc1234",
|
||||
"auth_type": "ec2,invalid",
|
||||
}
|
||||
|
||||
submitRequest := func(roleName string, op logical.Operation) (*logical.Response, error) {
|
||||
return b.HandleRequest(&logical.Request{
|
||||
Operation: op,
|
||||
Path: "role/" + roleName,
|
||||
Data: data,
|
||||
Storage: storage,
|
||||
})
|
||||
}
|
||||
|
||||
resp, err := submitRequest("shouldNeverExist", logical.CreateOperation)
|
||||
if resp == nil || !resp.IsError() {
|
||||
t.Fatalf("created role with invalid auth_type; resp: %#v", resp)
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
data["auth_type"] = "ec2,,iam"
|
||||
resp, err = submitRequest("shouldNeverExist", logical.CreateOperation)
|
||||
if resp == nil || !resp.IsError() {
|
||||
t.Fatalf("created role mixed auth types")
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
data["auth_type"] = ec2AuthType
|
||||
resp, err = submitRequest("ec2_to_iam", logical.CreateOperation)
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to create valid role; resp: %#v", resp)
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
data["auth_type"] = iamAuthType
|
||||
delete(data, "bound_ami_id")
|
||||
data["bound_iam_principal_arn"] = "arn:aws:iam::123456789012:role/MyRole"
|
||||
resp, err = submitRequest("ec2_to_iam", logical.UpdateOperation)
|
||||
if resp == nil || !resp.IsError() {
|
||||
t.Fatalf("changed auth type on the role")
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
data["inferred_entity_type"] = ec2EntityType
|
||||
data["inferred_aws_region"] = "us-east-1"
|
||||
resp, err = submitRequest("multipleTypesInferred", logical.CreateOperation)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp.IsError() {
|
||||
t.Fatalf("didn't allow creation of roles with only inferred bindings")
|
||||
}
|
||||
}
|
||||
|
||||
func TestAwsEc2_RoleCrud(t *testing.T) {
|
||||
var err error
|
||||
var resp *logical.Response
|
||||
config := logical.TestBackendConfig()
|
||||
storage := &logical.InmemStorage{}
|
||||
config.StorageView = storage
|
||||
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
_, err = b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
role1Data := map[string]interface{}{
|
||||
"auth_type": "ec2",
|
||||
"bound_vpc_id": "testvpcid",
|
||||
"allow_instance_migration": true,
|
||||
"policies": "testpolicy1,testpolicy2",
|
||||
}
|
||||
roleReq := &logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Storage: storage,
|
||||
Path: "role/role1",
|
||||
Data: role1Data,
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
roleData := map[string]interface{}{
|
||||
"auth_type": "ec2",
|
||||
"bound_ami_id": "testamiid",
|
||||
"bound_account_id": "testaccountid",
|
||||
"bound_region": "testregion",
|
||||
"bound_iam_role_arn": "testiamrolearn",
|
||||
"bound_iam_instance_profile_arn": "testiaminstanceprofilearn",
|
||||
"bound_subnet_id": "testsubnetid",
|
||||
"bound_vpc_id": "testvpcid",
|
||||
"role_tag": "testtag",
|
||||
"allow_instance_migration": true,
|
||||
"ttl": "10m",
|
||||
"max_ttl": "20m",
|
||||
"policies": "testpolicy1,testpolicy2",
|
||||
"disallow_reauthentication": true,
|
||||
"hmac_key": "testhmackey",
|
||||
"period": "1m",
|
||||
}
|
||||
|
||||
roleReq.Path = "role/testrole"
|
||||
roleReq.Data = roleData
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
roleReq.Operation = logical.ReadOperation
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
expected := map[string]interface{}{
|
||||
"auth_type": ec2AuthType,
|
||||
"bound_ami_id": "testamiid",
|
||||
"bound_account_id": "testaccountid",
|
||||
"bound_region": "testregion",
|
||||
"bound_iam_principal_arn": "",
|
||||
"bound_iam_role_arn": "testiamrolearn",
|
||||
"bound_iam_instance_profile_arn": "testiaminstanceprofilearn",
|
||||
"bound_subnet_id": "testsubnetid",
|
||||
"bound_vpc_id": "testvpcid",
|
||||
"inferred_entity_type": "",
|
||||
"inferred_aws_region": "",
|
||||
"role_tag": "testtag",
|
||||
"allow_instance_migration": true,
|
||||
"ttl": time.Duration(600),
|
||||
"max_ttl": time.Duration(1200),
|
||||
"policies": []string{"default", "testpolicy1", "testpolicy2"},
|
||||
"disallow_reauthentication": true,
|
||||
"period": time.Duration(60),
|
||||
}
|
||||
|
||||
if !reflect.DeepEqual(expected, resp.Data) {
|
||||
t.Fatalf("bad: role data: expected: %#v\n actual: %#v", expected, resp.Data)
|
||||
}
|
||||
|
||||
roleData["bound_vpc_id"] = "newvpcid"
|
||||
roleReq.Operation = logical.UpdateOperation
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
roleReq.Operation = logical.ReadOperation
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
expected["bound_vpc_id"] = "newvpcid"
|
||||
|
||||
if !reflect.DeepEqual(expected, resp.Data) {
|
||||
t.Fatalf("bad: role data: expected: %#v\n actual: %#v", expected, resp.Data)
|
||||
}
|
||||
|
||||
roleReq.Operation = logical.DeleteOperation
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
if resp != nil {
|
||||
t.Fatalf("failed to delete role entry")
|
||||
}
|
||||
}
|
||||
|
||||
func TestAwsEc2_RoleDurationSeconds(t *testing.T) {
|
||||
config := logical.TestBackendConfig()
|
||||
storage := &logical.InmemStorage{}
|
||||
config.StorageView = storage
|
||||
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
_, err = b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
roleData := map[string]interface{}{
|
||||
"auth_type": "ec2",
|
||||
"bound_iam_instance_profile_arn": "testarn",
|
||||
"ttl": "10s",
|
||||
"max_ttl": "20s",
|
||||
"period": "30s",
|
||||
}
|
||||
|
||||
roleReq := &logical.Request{
|
||||
Operation: logical.CreateOperation,
|
||||
Storage: storage,
|
||||
Path: "role/testrole",
|
||||
Data: roleData,
|
||||
}
|
||||
|
||||
resp, err := b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
roleReq.Operation = logical.ReadOperation
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("resp: %#v, err: %v", resp, err)
|
||||
}
|
||||
|
||||
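// The role read endpoint reports ttl, max_ttl, and period as time.Duration
// values whose integer value is the number of seconds, so the raw int64
// values are compared directly below.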
if int64(resp.Data["ttl"].(time.Duration)) != 10 {
|
||||
t.Fatalf("bad: period; expected: 10, actual: %d", resp.Data["ttl"])
|
||||
}
|
||||
if int64(resp.Data["max_ttl"].(time.Duration)) != 20 {
|
||||
t.Fatalf("bad: period; expected: 20, actual: %d", resp.Data["max_ttl"])
|
||||
}
|
||||
if int64(resp.Data["period"].(time.Duration)) != 30 {
|
||||
t.Fatalf("bad: period; expected: 30, actual: %d", resp.Data["period"])
|
||||
}
|
||||
}
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"encoding/base64"
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"fmt"
|
|
@ -1,4 +1,4 @@
|
|||
package awsec2
|
||||
package awsauth
|
||||
|
||||
import (
|
||||
"fmt"
|
|
@ -104,13 +104,13 @@ func (b *backend) Login(req *logical.Request, username string, password string)
|
|||
// Clean connection
|
||||
defer c.Close()
|
||||
|
||||
bindDN, err := b.getBindDN(cfg, c, username)
|
||||
userBindDN, err := b.getUserBindDN(cfg, c, username)
|
||||
if err != nil {
|
||||
return nil, logical.ErrorResponse(err.Error()), nil
|
||||
}
|
||||
|
||||
if b.Logger().IsDebug() {
|
||||
b.Logger().Debug("auth/ldap: BindDN fetched", "username", username, "binddn", bindDN)
|
||||
b.Logger().Debug("auth/ldap: User BindDN fetched", "username", username, "binddn", userBindDN)
|
||||
}
|
||||
|
||||
if cfg.DenyNullBind && len(password) == 0 {
|
||||
|
@ -118,11 +118,22 @@ func (b *backend) Login(req *logical.Request, username string, password string)
|
|||
}
|
||||
|
||||
// Try to bind as the login user. This is where the actual authentication takes place.
|
||||
if err = c.Bind(bindDN, password); err != nil {
|
||||
if err = c.Bind(userBindDN, password); err != nil {
|
||||
return nil, logical.ErrorResponse(fmt.Sprintf("LDAP bind failed: %v", err)), nil
|
||||
}
|
||||
|
||||
userDN, err := b.getUserDN(cfg, c, bindDN)
|
||||
// We re-bind to the BindDN if it's defined because we assume
|
||||
// the BindDN should be the one to search, not the user logging in.
|
||||
if cfg.BindDN != "" && cfg.BindPassword != "" {
|
||||
if err := c.Bind(cfg.BindDN, cfg.BindPassword); err != nil {
|
||||
return nil, logical.ErrorResponse(fmt.Sprintf("Encountered an error while attempting to re-bind with the BindDN User: %s", err.Error())), nil
|
||||
}
|
||||
if b.Logger().IsDebug() {
|
||||
b.Logger().Debug("auth/ldap: Re-Bound to original BindDN")
|
||||
}
|
||||
}
|
||||
|
||||
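// From here on, user DN and group membership lookups run on this connection,
// which is bound as the service BindDN when one is configured and as the
// authenticating user otherwise.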
userDN, err := b.getUserDN(cfg, c, userBindDN)
|
||||
if err != nil {
|
||||
return nil, logical.ErrorResponse(err.Error()), nil
|
||||
}
|
||||
|
@ -165,11 +176,11 @@ func (b *backend) Login(req *logical.Request, username string, password string)
|
|||
policies = append(policies, group.Policies...)
|
||||
}
|
||||
}
|
||||
if user !=nil && user.Policies != nil {
|
||||
if user != nil && user.Policies != nil {
|
||||
policies = append(policies, user.Policies...)
|
||||
}
|
||||
// Policies from each group may overlap
|
||||
policies = strutil.RemoveDuplicates(policies)
|
||||
policies = strutil.RemoveDuplicates(policies, true)
|
||||
|
||||
if len(policies) == 0 {
|
||||
errStr := "user is not a member of any authorized group"
|
||||
|
@ -218,7 +229,7 @@ func (b *backend) getCN(dn string) string {
|
|||
* 2. If upndomain is set, the user dn is constructed as 'username@upndomain'. See https://msdn.microsoft.com/en-us/library/cc223499.aspx
|
||||
*
|
||||
*/
|
||||
func (b *backend) getBindDN(cfg *ConfigEntry, c *ldap.Conn, username string) (string, error) {
|
||||
func (b *backend) getUserBindDN(cfg *ConfigEntry, c *ldap.Conn, username string) (string, error) {
|
||||
bindDN := ""
|
||||
if cfg.DiscoverDN || (cfg.BindDN != "" && cfg.BindPassword != "") {
|
||||
if err := c.Bind(cfg.BindDN, cfg.BindPassword); err != nil {
|
||||
|
|
|
@ -101,7 +101,7 @@ func (b *backend) pathUserRead(
|
|||
func (b *backend) pathUserWrite(
|
||||
req *logical.Request, d *framework.FieldData) (*logical.Response, error) {
|
||||
name := d.Get("name").(string)
|
||||
groups := strutil.ParseDedupAndSortStrings(d.Get("groups").(string), ",")
|
||||
groups := strutil.RemoveDuplicates(strutil.ParseStringSlice(d.Get("groups").(string), ","), false)
|
||||
policies := policyutil.ParsePolicies(d.Get("policies").(string))
|
||||
for i, g := range groups {
|
||||
groups[i] = strings.TrimSpace(g)
|
||||
|
|
|
@ -76,7 +76,7 @@ func createSession(cfg *sessionConfig, s logical.Storage) (*gocql.Session, error
|
|||
}
|
||||
|
||||
clusterConfig.SslOpts = &gocql.SslOptions{
|
||||
Config: *tlsConfig,
|
||||
Config: tlsConfig,
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -78,7 +78,7 @@ func (b *backend) DB(s logical.Storage) (*sql.DB, error) {
|
|||
}
|
||||
connString := connConfig.ConnectionString
|
||||
|
||||
db, err := sql.Open("mssql", connString)
|
||||
db, err := sql.Open("sqlserver", connString)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
|
|
@ -102,7 +102,7 @@ func (b *backend) secretCredsRevoke(
|
|||
// we need to drop the database users before we can drop the login and the role
|
||||
// This isn't done in a transaction because even if we fail along the way,
|
||||
// we want to remove as much access as possible
|
||||
stmt, err := db.Prepare(fmt.Sprintf("EXEC sp_msloginmappings '%s';", username))
|
||||
stmt, err := db.Prepare(fmt.Sprintf("EXEC master.dbo.sp_msloginmappings '%s';", username))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
|
|
@ -225,7 +225,7 @@ func TestBackend_RSARoles_CSR(t *testing.T) {
|
|||
|
||||
stepCount = len(testCase.Steps)
|
||||
|
||||
testCase.Steps = append(testCase.Steps, generateRoleSteps(t, false)...)
|
||||
testCase.Steps = append(testCase.Steps, generateRoleSteps(t, true)...)
|
||||
if len(os.Getenv("VAULT_VERBOSE_PKITESTS")) > 0 {
|
||||
for i, v := range testCase.Steps {
|
||||
fmt.Printf("Step %d:\n%+v\n\n", i+stepCount, v)
|
||||
|
@ -1471,7 +1471,7 @@ func generateRoleSteps(t *testing.T, useCSRs bool) []logicaltest.TestStep {
|
|||
}
|
||||
cert := parsedCertBundle.Certificate
|
||||
|
||||
expected := strutil.ParseDedupAndSortStrings(role.OU, ",")
|
||||
expected := strutil.ParseDedupLowercaseAndSortStrings(role.OU, ",")
|
||||
if !reflect.DeepEqual(cert.Subject.OrganizationalUnit, expected) {
|
||||
return fmt.Errorf("Error: returned certificate has OU of %s but %s was specified in the role.", cert.Subject.OrganizationalUnit, expected)
|
||||
}
|
||||
|
@ -1492,7 +1492,7 @@ func generateRoleSteps(t *testing.T, useCSRs bool) []logicaltest.TestStep {
|
|||
}
|
||||
cert := parsedCertBundle.Certificate
|
||||
|
||||
expected := strutil.ParseDedupAndSortStrings(role.Organization, ",")
|
||||
expected := strutil.ParseDedupLowercaseAndSortStrings(role.Organization, ",")
|
||||
if !reflect.DeepEqual(cert.Subject.Organization, expected) {
|
||||
return fmt.Errorf("Error: returned certificate has Organization of %s but %s was specified in the role.", cert.Subject.Organization, expected)
|
||||
}
|
||||
|
@ -1787,6 +1787,12 @@ func generateRoleSteps(t *testing.T, useCSRs bool) []logicaltest.TestStep {
|
|||
}
|
||||
// IP SAN tests
|
||||
{
|
||||
roleVals.UseCSRSANs = true
|
||||
roleVals.AllowIPSANs = false
|
||||
issueTestStep.ErrorOk = false
|
||||
addTests(nil)
|
||||
|
||||
roleVals.UseCSRSANs = false
|
||||
issueVals.IPSANs = "127.0.0.1,::1"
|
||||
issueTestStep.ErrorOk = true
|
||||
addTests(nil)
|
||||
|
@ -1978,6 +1984,172 @@ func TestBackend_PathFetchCertList(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestBackend_SignVerbatim(t *testing.T) {
|
||||
// create the backend
|
||||
config := logical.TestBackendConfig()
|
||||
storage := &logical.InmemStorage{}
|
||||
config.StorageView = storage
|
||||
|
||||
b := Backend()
|
||||
_, err := b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// generate root
|
||||
rootData := map[string]interface{}{
|
||||
"common_name": "test.com",
|
||||
"ttl": "172800",
|
||||
}
|
||||
|
||||
resp, err := b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "root/generate/internal",
|
||||
Storage: storage,
|
||||
Data: rootData,
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to generate root, %#v", *resp)
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// create a CSR and key
|
||||
key, err := rsa.GenerateKey(rand.Reader, 2048)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
csrReq := &x509.CertificateRequest{
|
||||
Subject: pkix.Name{
|
||||
CommonName: "foo.bar.com",
|
||||
},
|
||||
}
|
||||
csr, err := x509.CreateCertificateRequest(rand.Reader, csrReq, key)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(csr) == 0 {
|
||||
t.Fatal("generated csr is empty")
|
||||
}
|
||||
pemCSR := pem.EncodeToMemory(&pem.Block{
|
||||
Type: "CERTIFICATE REQUEST",
|
||||
Bytes: csr,
|
||||
})
|
||||
if len(pemCSR) == 0 {
|
||||
t.Fatal("pem csr is empty")
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "sign-verbatim",
|
||||
Storage: storage,
|
||||
Data: map[string]interface{}{
|
||||
"csr": string(pemCSR),
|
||||
},
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to sign-verbatim basic CSR: %#v", *resp)
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp.Secret != nil {
|
||||
t.Fatal("secret is not nil")
|
||||
}
|
||||
|
||||
// create a role entry; we use this to check that sign-verbatim when used with a role is still honoring TTLs
|
||||
roleData := map[string]interface{}{
|
||||
"ttl": "4h",
|
||||
"max_ttl": "8h",
|
||||
}
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "roles/test",
|
||||
Storage: storage,
|
||||
Data: roleData,
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to create a role, %#v", *resp)
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "sign-verbatim/test",
|
||||
Storage: storage,
|
||||
Data: map[string]interface{}{
|
||||
"csr": string(pemCSR),
|
||||
"ttl": "5h",
|
||||
},
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to sign-verbatim ttl'd CSR: %#v", *resp)
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp.Secret != nil {
|
||||
t.Fatal("got a lease when we should not have")
|
||||
}
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "sign-verbatim/test",
|
||||
Storage: storage,
|
||||
Data: map[string]interface{}{
|
||||
"csr": string(pemCSR),
|
||||
"ttl": "12h",
|
||||
},
|
||||
})
|
||||
if resp != nil && !resp.IsError() {
|
||||
t.Fatalf("sign-verbatim signed too-large-ttl'd CSR: %#v", *resp)
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// now check that if we set generate-lease it takes it from the role and the TTLs match
|
||||
roleData = map[string]interface{}{
|
||||
"ttl": "4h",
|
||||
"max_ttl": "8h",
|
||||
"generate_lease": true,
|
||||
}
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "roles/test",
|
||||
Storage: storage,
|
||||
Data: roleData,
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to create a role, %#v", *resp)
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "sign-verbatim/test",
|
||||
Storage: storage,
|
||||
Data: map[string]interface{}{
|
||||
"csr": string(pemCSR),
|
||||
"ttl": "5h",
|
||||
},
|
||||
})
|
||||
if resp != nil && resp.IsError() {
|
||||
t.Fatalf("failed to sign-verbatim role-leased CSR: %#v", *resp)
|
||||
}
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if resp.Secret == nil {
|
||||
t.Fatalf("secret is nil, response is %#v", *resp)
|
||||
}
|
||||
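// The assertion below only requires the lease TTL to land within 5h of the
// requested 5h value rather than matching it exactly.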
if math.Abs(float64(resp.Secret.TTL-(5*time.Hour))) > float64(5*time.Hour) {
|
||||
t.Fatalf("ttl not default; wanted %v, got %v", b.System().DefaultLeaseTTL(), resp.Secret.TTL)
|
||||
}
|
||||
}
|
||||
|
||||
const (
|
||||
rsaCAKey string = `-----BEGIN RSA PRIVATE KEY-----
|
||||
MIIEogIBAAKCAQEAmPQlK7xD5p+E8iLQ8XlVmll5uU2NKMxKY3UF5tbh+0vkc+Fy
|
||||
|
|
|
@ -18,6 +18,7 @@ import (
|
|||
|
||||
"github.com/hashicorp/vault/helper/certutil"
|
||||
"github.com/hashicorp/vault/helper/errutil"
|
||||
"github.com/hashicorp/vault/helper/parseutil"
|
||||
"github.com/hashicorp/vault/helper/strutil"
|
||||
"github.com/hashicorp/vault/logical"
|
||||
"github.com/hashicorp/vault/logical/framework"
|
||||
|
@ -66,8 +67,10 @@ func (b *caInfoBundle) GetCAChain() []*certutil.CertBlock {
|
|||
chain := []*certutil.CertBlock{}
|
||||
|
||||
// Include issuing CA in Chain, not including Root Authority
|
||||
if len(b.Certificate.AuthorityKeyId) > 0 &&
|
||||
!bytes.Equal(b.Certificate.AuthorityKeyId, b.Certificate.SubjectKeyId) {
|
||||
if (len(b.Certificate.AuthorityKeyId) > 0 &&
|
||||
!bytes.Equal(b.Certificate.AuthorityKeyId, b.Certificate.SubjectKeyId)) ||
|
||||
(len(b.Certificate.AuthorityKeyId) == 0 &&
|
||||
!bytes.Equal(b.Certificate.RawIssuer, b.Certificate.RawSubject)) {
|
||||
|
||||
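// The certificate is not self-issued: either its authority key ID differs
// from its subject key ID, or no authority key ID is present and its issuer
// differs from its subject, so include it in the chain.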
chain = append(chain, &certutil.CertBlock{
|
||||
Certificate: b.Certificate,
|
||||
|
@ -215,7 +218,7 @@ func fetchCertBySerial(req *logical.Request, prefix, serial string) (*logical.St
|
|||
// Given a set of requested names for a certificate, verifies that all of them
|
||||
// match the various toggles set in the role for controlling issuance.
|
||||
// If one does not pass, it is returned in the string argument.
|
||||
func validateNames(req *logical.Request, names []string, role *roleEntry) (string, error) {
|
||||
func validateNames(req *logical.Request, names []string, role *roleEntry) string {
|
||||
for _, name := range names {
|
||||
sanitizedName := name
|
||||
emailDomain := name
|
||||
|
@ -231,7 +234,7 @@ func validateNames(req *logical.Request, names []string, role *roleEntry) (strin
|
|||
if strings.Contains(name, "@") {
|
||||
splitEmail := strings.Split(name, "@")
|
||||
if len(splitEmail) != 2 {
|
||||
return name, nil
|
||||
return name
|
||||
}
|
||||
sanitizedName = splitEmail[1]
|
||||
emailDomain = splitEmail[1]
|
||||
|
@ -248,7 +251,7 @@ func validateNames(req *logical.Request, names []string, role *roleEntry) (strin
|
|||
|
||||
// Email addresses using wildcard domain names do not make sense
|
||||
if isEmail && isWildcard {
|
||||
return name, nil
|
||||
return name
|
||||
}
|
||||
|
||||
// AllowAnyName is checked after this because EnforceHostnames still
|
||||
|
@ -257,7 +260,7 @@ func validateNames(req *logical.Request, names []string, role *roleEntry) (strin
|
|||
// wildcard prefix.
|
||||
if role.EnforceHostnames {
|
||||
if !hostnameRegex.MatchString(sanitizedName) {
|
||||
return name, nil
|
||||
return name
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -366,10 +369,10 @@ func validateNames(req *logical.Request, names []string, role *roleEntry) (strin
|
|||
}
|
||||
|
||||
//panic(fmt.Sprintf("\nName is %s\nRole is\n%#v\n", name, role))
|
||||
return name, nil
|
||||
return name
|
||||
}
|
||||
|
||||
return "", nil
|
||||
return ""
|
||||
}
|
||||
|
||||
func generateCert(b *backend,
@@ -558,13 +561,13 @@ func generateCreationBundle(b *backend,
var err error
var ok bool

// Get the common name
// Read in names -- CN, DNS and email addresses
var cn string
dnsNames := []string{}
emailAddresses := []string{}
{
if csr != nil {
if role.UseCSRCommonName {
cn = csr.Subject.CommonName
}
if csr != nil && role.UseCSRCommonName {
cn = csr.Subject.CommonName
}
if cn == "" {
cn = data.Get("common_name").(string)
@@ -572,28 +575,12 @@ func generateCreationBundle(b *backend,
return nil, errutil.UserError{Err: `the common_name field is required, or must be provided in a CSR with "use_csr_common_name" set to true`}
}
}
}

// Set OU (organizationalUnit) values if specified in the role
ou := []string{}
{
if role.OU != "" {
ou = strutil.ParseDedupAndSortStrings(role.OU, ",")
if csr != nil && role.UseCSRSANs {
dnsNames = csr.DNSNames
emailAddresses = csr.EmailAddresses
}
}

// Set O (organization) values if specified in the role
organization := []string{}
{
if role.Organization != "" {
organization = strutil.ParseDedupAndSortStrings(role.Organization, ",")
}
}

// Read in alternate names -- DNS and email addresses
dnsNames := []string{}
emailAddresses := []string{}
{
if !data.Get("exclude_cn_from_sans").(bool) {
if strings.Contains(cn, "@") {
// Note: emails are not disallowed if the role's email protection
@@ -606,11 +593,12 @@ func generateCreationBundle(b *backend,
dnsNames = append(dnsNames, cn)
}
}
cnAltInt, ok := data.GetOk("alt_names")
if ok {
cnAlt := cnAltInt.(string)
if len(cnAlt) != 0 {
for _, v := range strings.Split(cnAlt, ",") {

if csr == nil || !role.UseCSRSANs {
cnAltRaw, ok := data.GetOk("alt_names")
if ok {
cnAlt := strutil.ParseDedupLowercaseAndSortStrings(cnAltRaw.(string), ",")
for _, v := range cnAlt {
if strings.Contains(v, "@") {
emailAddresses = append(emailAddresses, v)
} else {
@@ -620,23 +608,25 @@ func generateCreationBundle(b *backend,
}
}

// Check for bad email and/or DNS names
badName, err := validateNames(req, dnsNames, role)
// Check the CN. This ensures that the CN is checked even if it's
// excluded from SANs.
badName := validateNames(req, []string{cn}, role)
if len(badName) != 0 {
return nil, errutil.UserError{Err: fmt.Sprintf(
"name %s not allowed by this role", badName)}
} else if err != nil {
return nil, errutil.InternalError{Err: fmt.Sprintf(
"error validating name %s: %s", badName, err)}
"common name %s not allowed by this role", badName)}
}

badName, err = validateNames(req, emailAddresses, role)
// Check for bad email and/or DNS names
badName = validateNames(req, dnsNames, role)
if len(badName) != 0 {
return nil, errutil.UserError{Err: fmt.Sprintf(
"email %s not allowed by this role", badName)}
} else if err != nil {
return nil, errutil.InternalError{Err: fmt.Sprintf(
"error validating name %s: %s", badName, err)}
"subject alternate name %s not allowed by this role", badName)}
}

badName = validateNames(req, emailAddresses, role)
if len(badName) != 0 {
return nil, errutil.UserError{Err: fmt.Sprintf(
"email address %s not allowed by this role", badName)}
}
}
@@ -644,26 +634,52 @@ func generateCreationBundle(b *backend,
ipAddresses := []net.IP{}
var ipAltInt interface{}
{
ipAltInt, ok = data.GetOk("ip_sans")
if ok {
ipAlt := ipAltInt.(string)
if len(ipAlt) != 0 {
if csr != nil && role.UseCSRSANs {
if len(csr.IPAddresses) > 0 {
if !role.AllowIPSANs {
return nil, errutil.UserError{Err: fmt.Sprintf(
"IP Subject Alternative Names are not allowed in this role, but was provided %s", ipAlt)}
"IP Subject Alternative Names are not allowed in this role, but was provided some via CSR")}
}
for _, v := range strings.Split(ipAlt, ",") {
parsedIP := net.ParseIP(v)
if parsedIP == nil {
ipAddresses = csr.IPAddresses
}
} else {
ipAltInt, ok = data.GetOk("ip_sans")
if ok {
ipAlt := ipAltInt.(string)
if len(ipAlt) != 0 {
if !role.AllowIPSANs {
return nil, errutil.UserError{Err: fmt.Sprintf(
"the value '%s' is not a valid IP address", v)}
"IP Subject Alternative Names are not allowed in this role, but was provided %s", ipAlt)}
}
for _, v := range strings.Split(ipAlt, ",") {
parsedIP := net.ParseIP(v)
if parsedIP == nil {
return nil, errutil.UserError{Err: fmt.Sprintf(
"the value '%s' is not a valid IP address", v)}
}
ipAddresses = append(ipAddresses, parsedIP)
}
ipAddresses = append(ipAddresses, parsedIP)
}
}
}
}

// Set OU (organizationalUnit) values if specified in the role
ou := []string{}
{
if role.OU != "" {
ou = strutil.RemoveDuplicates(strutil.ParseStringSlice(role.OU, ","), false)
}
}

// Set O (organization) values if specified in the role
organization := []string{}
{
if role.Organization != "" {
organization = strutil.RemoveDuplicates(strutil.ParseStringSlice(role.Organization, ","), false)
}
}

// Get the TTL and very it against the max allowed
var ttlField string
var ttl time.Duration
@@ -680,7 +696,7 @@ func generateCreationBundle(b *backend,
if len(ttlField) == 0 {
ttl = b.System().DefaultLeaseTTL()
} else {
ttl, err = time.ParseDuration(ttlField)
ttl, err = parseutil.ParseDurationSecond(ttlField)
if err != nil {
return nil, errutil.UserError{Err: fmt.Sprintf(
"invalid requested ttl: %s", err)}
@@ -690,7 +706,7 @@ func generateCreationBundle(b *backend,
if len(role.MaxTTL) == 0 {
maxTTL = b.System().MaxLeaseTTL()
} else {
maxTTL, err = time.ParseDuration(role.MaxTTL)
maxTTL, err = parseutil.ParseDurationSecond(role.MaxTTL)
if err != nil {
return nil, errutil.UserError{Err: fmt.Sprintf(
"invalid ttl: %s", err)}

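// Editor's sketch (not part of the diff): switching from time.ParseDuration to
// parseutil.ParseDurationSecond appears to be what lets ttl/max_ttl accept
// bare integers (interpreted as seconds) as well as Go duration strings.
// Assuming that helper behaves as its name suggests:
func ttlParsingSketch() {
	for _, raw := range []string{"90", "90s", "1h30m"} {
		if d, err := parseutil.ParseDurationSecond(raw); err == nil {
			fmt.Printf("%q -> %s\n", raw, d) // e.g. "90" -> 1m30s
		}
	}
}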
@@ -116,18 +116,46 @@ func (b *backend) pathSign(
func (b *backend) pathSignVerbatim(
req *logical.Request, data *framework.FieldData) (*logical.Response, error) {

roleName := data.Get("role").(string)

// Get the role if one was specified
role, err := b.getRole(req.Storage, roleName)
if err != nil {
return nil, err
}

ttl := b.System().DefaultLeaseTTL()
role := &roleEntry{
maxTTL := b.System().MaxLeaseTTL()

entry := &roleEntry{
TTL: ttl.String(),
MaxTTL: maxTTL.String(),
AllowLocalhost: true,
AllowAnyName: true,
AllowIPSANs: true,
EnforceHostnames: false,
KeyType: "any",
UseCSRCommonName: true,
UseCSRSANs: true,
GenerateLease: new(bool),
}

return b.pathIssueSignCert(req, data, role, true, true)
if role != nil {
if role.TTL != "" {
entry.TTL = role.TTL
}
if role.MaxTTL != "" {
entry.MaxTTL = role.MaxTTL
}
entry.NoStore = role.NoStore
}

*entry.GenerateLease = false
if role != nil && role.GenerateLease != nil {
*entry.GenerateLease = *role.GenerateLease
}

return b.pathIssueSignCert(req, data, entry, true, true)
}

func (b *backend) pathIssueSignCert(
@@ -239,12 +267,14 @@ func (b *backend) pathIssueSignCert(
resp.Secret.TTL = parsedBundle.Certificate.NotAfter.Sub(time.Now())
}

err = req.Storage.Put(&logical.StorageEntry{
Key: "certs/" + cb.SerialNumber,
Value: parsedBundle.CertificateBytes,
})
if err != nil {
return nil, fmt.Errorf("Unable to store certificate locally: %v", err)
if !role.NoStore {
err = req.Storage.Put(&logical.StorageEntry{
Key: "certs/" + cb.SerialNumber,
Value: parsedBundle.CertificateBytes,
})
if err != nil {
return nil, fmt.Errorf("Unable to store certificate locally: %v", err)
}
}

return resp, nil

@@ -169,6 +169,14 @@ does *not* include any requested Subject Alternative
Names. Defaults to true.`,
},

"use_csr_sans": &framework.FieldSchema{
Type: framework.TypeBool,
Default: true,
Description: `If set, when used with a signing profile,
the SANs in the CSR will be used. This does *not*
include the Common Name (cn). Defaults to true.`,
},

"ou": &framework.FieldSchema{
Type: framework.TypeString,
Default: "",
@@ -196,6 +204,17 @@ to the CRL. When large number of certificates are generated with long
lifetimes, it is recommended that lease generation be disabled, as large amount of
leases adversely affect the startup time of Vault.`,
},
"no_store": &framework.FieldSchema{
Type: framework.TypeBool,
Default: false,
Description: `
If set, certificates issued/signed against this role will not be stored in the
in the storage backend. This can improve performance when issuing large numbers
of certificates. However, certificates issued in this way cannot be enumerated
or revoked, so this option is recommended only for certificates that are
non-sensitive, or extremely short-lived. This option implies a value of "false"
for "generate_lease".`,
},
},

Callbacks: map[logical.Operation]framework.OperationFunc{
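// Editor's sketch (not part of the diff): a role with no_store=true issues
// certificates without writing them under certs/<serial>, which is why the
// description above says they cannot be enumerated or revoked and why
// generate_lease is forced to false. A hypothetical role request in the style
// of this backend's tests:
var noStoreRoleRequestSketch = &logical.Request{
	Operation: logical.UpdateOperation,
	Path:      "roles/short-lived-clients", // hypothetical role name
	Data: map[string]interface{}{
		"allowed_domains":  "myvault.com",
		"allow_subdomains": true,
		"ttl":              "10m",
		"no_store":         true,
	},
}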
@@ -371,13 +390,20 @@ func (b *backend) pathRoleCreate(
KeyType: data.Get("key_type").(string),
KeyBits: data.Get("key_bits").(int),
UseCSRCommonName: data.Get("use_csr_common_name").(bool),
UseCSRSANs: data.Get("use_csr_sans").(bool),
KeyUsage: data.Get("key_usage").(string),
OU: data.Get("ou").(string),
Organization: data.Get("organization").(string),
GenerateLease: new(bool),
NoStore: data.Get("no_store").(bool),
}

*entry.GenerateLease = data.Get("generate_lease").(bool)
// no_store implies generate_lease := false
if entry.NoStore {
*entry.GenerateLease = false
} else {
*entry.GenerateLease = data.Get("generate_lease").(bool)
}

if entry.KeyType == "rsa" && entry.KeyBits < 2048 {
return logical.ErrorResponse("RSA keys < 2048 bits are unsafe and not supported"), nil
@@ -487,6 +513,7 @@ type roleEntry struct {
CodeSigningFlag bool `json:"code_signing_flag" structs:"code_signing_flag" mapstructure:"code_signing_flag"`
EmailProtectionFlag bool `json:"email_protection_flag" structs:"email_protection_flag" mapstructure:"email_protection_flag"`
UseCSRCommonName bool `json:"use_csr_common_name" structs:"use_csr_common_name" mapstructure:"use_csr_common_name"`
UseCSRSANs bool `json:"use_csr_sans" structs:"use_csr_sans" mapstructure:"use_csr_sans"`
KeyType string `json:"key_type" structs:"key_type" mapstructure:"key_type"`
KeyBits int `json:"key_bits" structs:"key_bits" mapstructure:"key_bits"`
MaxPathLength *int `json:",omitempty" structs:"max_path_length,omitempty" mapstructure:"max_path_length"`
@@ -494,6 +521,7 @@ type roleEntry struct {
OU string `json:"ou" structs:"ou" mapstructure:"ou"`
Organization string `json:"organization" structs:"organization" mapstructure:"organization"`
GenerateLease *bool `json:"generate_lease,omitempty" structs:"generate_lease,omitempty"`
NoStore bool `json:"no_store" structs:"no_store" mapstructure:"no_store"`
}

const pathListRolesHelpSyn = `List the existing roles in this backend`

@ -124,6 +124,114 @@ func TestPki_RoleGenerateLease(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestPki_RoleNoStore(t *testing.T) {
|
||||
var resp *logical.Response
|
||||
var err error
|
||||
b, storage := createBackendWithStorage(t)
|
||||
|
||||
roleData := map[string]interface{}{
|
||||
"allowed_domains": "myvault.com",
|
||||
"ttl": "5h",
|
||||
}
|
||||
|
||||
roleReq := &logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "roles/testrole",
|
||||
Storage: storage,
|
||||
Data: roleData,
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("bad: err: %v resp: %#v", err, resp)
|
||||
}
|
||||
|
||||
roleReq.Operation = logical.ReadOperation
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("bad: err: %v resp: %#v", err, resp)
|
||||
}
|
||||
|
||||
// By default, no_store should be `false`
|
||||
noStore := resp.Data["no_store"].(bool)
|
||||
if noStore {
|
||||
t.Fatalf("no_store should not be set by default")
|
||||
}
|
||||
|
||||
// Make sure that setting no_store to `true` works properly
|
||||
roleReq.Operation = logical.UpdateOperation
|
||||
roleReq.Path = "roles/testrole_nostore"
|
||||
roleReq.Data["no_store"] = true
|
||||
roleReq.Data["allowed_domain"] = "myvault.com"
|
||||
roleReq.Data["allow_subdomains"] = true
|
||||
roleReq.Data["ttl"] = "5h"
|
||||
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("bad: err: %v resp: %#v", err, resp)
|
||||
}
|
||||
|
||||
roleReq.Operation = logical.ReadOperation
|
||||
resp, err = b.HandleRequest(roleReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("bad: err: %v resp: %#v", err, resp)
|
||||
}
|
||||
|
||||
noStore = resp.Data["no_store"].(bool)
|
||||
if !noStore {
|
||||
t.Fatalf("no_store should have been set to true")
|
||||
}
|
||||
|
||||
// issue a certificate and test that it's not stored
|
||||
caData := map[string]interface{}{
|
||||
"common_name": "myvault.com",
|
||||
"ttl": "5h",
|
||||
"ip_sans": "127.0.0.1",
|
||||
}
|
||||
caReq := &logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "root/generate/internal",
|
||||
Storage: storage,
|
||||
Data: caData,
|
||||
}
|
||||
resp, err = b.HandleRequest(caReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("bad: err: %v resp: %#v", err, resp)
|
||||
}
|
||||
|
||||
issueData := map[string]interface{}{
|
||||
"common_name": "cert.myvault.com",
|
||||
"format": "pem",
|
||||
"ip_sans": "127.0.0.1",
|
||||
"ttl": "1h",
|
||||
}
|
||||
issueReq := &logical.Request{
|
||||
Operation: logical.UpdateOperation,
|
||||
Path: "issue/testrole_nostore",
|
||||
Storage: storage,
|
||||
Data: issueData,
|
||||
}
|
||||
|
||||
resp, err = b.HandleRequest(issueReq)
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("bad: err: %v resp: %#v", err, resp)
|
||||
}
|
||||
|
||||
// list certs
|
||||
resp, err = b.HandleRequest(&logical.Request{
|
||||
Operation: logical.ListOperation,
|
||||
Path: "certs",
|
||||
Storage: storage,
|
||||
})
|
||||
if err != nil || (resp != nil && resp.IsError()) {
|
||||
t.Fatalf("bad: err: %v resp: %#v", err, resp)
|
||||
}
|
||||
if len(resp.Data["keys"].([]string)) != 1 {
|
||||
t.Fatalf("Only the CA certificate should be stored: %#v", resp)
|
||||
}
|
||||
}
|
||||
|
||||
func TestPki_CertsLease(t *testing.T) {
|
||||
var resp *logical.Response
|
||||
var err error
|
||||
|
|
|
@ -587,7 +587,7 @@ func TestBackend_ValidPrincipalsValidatedForHostCertificates(t *testing.T) {
|
|||
},
|
||||
}),
|
||||
|
||||
signCertificateStep("testing", "root", ssh.HostCert, []string{"dummy.example.org", "second.example.com"}, map[string]string{
|
||||
signCertificateStep("testing", "vault-root-22608f5ef173aabf700797cb95c5641e792698ec6380e8e1eb55523e39aa5e51", ssh.HostCert, []string{"dummy.example.org", "second.example.com"}, map[string]string{
|
||||
"option": "value",
|
||||
}, map[string]string{
|
||||
"extension": "extended",
|
||||
|
@ -632,7 +632,7 @@ func TestBackend_OptionsOverrideDefaults(t *testing.T) {
|
|||
},
|
||||
}),
|
||||
|
||||
signCertificateStep("testing", "root", ssh.UserCert, []string{"tuber"}, map[string]string{
|
||||
signCertificateStep("testing", "vault-root-22608f5ef173aabf700797cb95c5641e792698ec6380e8e1eb55523e39aa5e51", ssh.UserCert, []string{"tuber"}, map[string]string{
|
||||
"secondary": "value",
|
||||
}, map[string]string{
|
||||
"additional": "value",
|
||||
|
@ -709,7 +709,7 @@ func validateSSHCertificate(cert *ssh.Certificate, keyId string, certType int, v
|
|||
ttl time.Duration) error {
|
||||
|
||||
if cert.KeyId != keyId {
|
||||
return fmt.Errorf("Incorrect KeyId: %v", cert.KeyId)
|
||||
return fmt.Errorf("Incorrect KeyId: %v, wanted %v", cert.KeyId, keyId)
|
||||
}
|
||||
|
||||
if cert.CertType != uint32(certType) {
|
||||
|
|
|
@@ -7,11 +7,25 @@ import (
"encoding/pem"
"fmt"

multierror "github.com/hashicorp/go-multierror"
"github.com/hashicorp/vault/logical"
"github.com/hashicorp/vault/logical/framework"
"golang.org/x/crypto/ssh"
)

const (
caPublicKey = "ca_public_key"
caPrivateKey = "ca_private_key"
caPublicKeyStoragePath = "config/ca_public_key"
caPublicKeyStoragePathDeprecated = "public_key"
caPrivateKeyStoragePath = "config/ca_private_key"
caPrivateKeyStoragePathDeprecated = "config/ca_bundle"
)

type keyStorageEntry struct {
Key string `json:"key" structs:"key" mapstructure:"key"`
}

func pathConfigCA(b *backend) *framework.Path {
return &framework.Path{
Pattern: "config/ca",
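// Editor's sketch (not part of the diff): per the constants above and the
// caKey helper later in this file, CA material is now kept as JSON-encoded
// keyStorageEntry values at "config/ca_public_key" and "config/ca_private_key",
// and caKey transparently migrates anything still stored at the deprecated
// "public_key" / "config/ca_bundle" paths. Callers therefore only read through
// caKey:
func configuredCAPublicKeySketch(storage logical.Storage) (string, error) {
	entry, err := caKey(storage, caPublicKey)
	if err != nil {
		return "", err
	}
	if entry == nil || entry.Key == "" {
		return "", nil // not configured yet
	}
	return entry.Key, nil
}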
@ -34,27 +48,102 @@ func pathConfigCA(b *backend) *framework.Path {
|
|||
Callbacks: map[logical.Operation]framework.OperationFunc{
|
||||
logical.UpdateOperation: b.pathConfigCAUpdate,
|
||||
logical.DeleteOperation: b.pathConfigCADelete,
|
||||
logical.ReadOperation: b.pathConfigCARead,
|
||||
},
|
||||
|
||||
HelpSynopsis: `Set the SSH private key used for signing certificates.`,
|
||||
HelpDescription: `This sets the CA information used for certificates generated by this
|
||||
by this mount. The fields must be in the standard private and public SSH format.
|
||||
|
||||
For security reasons, the private key cannot be retrieved later.`,
|
||||
For security reasons, the private key cannot be retrieved later.
|
||||
|
||||
Read operations will return the public key, if already stored/generated.`,
|
||||
}
|
||||
}
|
||||
|
||||
func (b *backend) pathConfigCARead(
|
||||
req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
|
||||
publicKeyEntry, err := caKey(req.Storage, caPublicKey)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to read CA public key: %v", err)
|
||||
}
|
||||
|
||||
if publicKeyEntry == nil {
|
||||
return logical.ErrorResponse("keys haven't been configured yet"), nil
|
||||
}
|
||||
|
||||
response := &logical.Response{
|
||||
Data: map[string]interface{}{
|
||||
"public_key": publicKeyEntry.Key,
|
||||
},
|
||||
}
|
||||
|
||||
return response, nil
|
||||
}
|
||||
|
||||
func (b *backend) pathConfigCADelete(
|
||||
req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
|
||||
if err := req.Storage.Delete("config/ca_bundle"); err != nil {
|
||||
if err := req.Storage.Delete(caPrivateKeyStoragePath); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := req.Storage.Delete("config/ca_public_key"); err != nil {
|
||||
if err := req.Storage.Delete(caPublicKeyStoragePath); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func caKey(storage logical.Storage, keyType string) (*keyStorageEntry, error) {
|
||||
var path, deprecatedPath string
|
||||
switch keyType {
|
||||
case caPrivateKey:
|
||||
path = caPrivateKeyStoragePath
|
||||
deprecatedPath = caPrivateKeyStoragePathDeprecated
|
||||
case caPublicKey:
|
||||
path = caPublicKeyStoragePath
|
||||
deprecatedPath = caPublicKeyStoragePathDeprecated
|
||||
default:
|
||||
return nil, fmt.Errorf("unrecognized key type %q", keyType)
|
||||
}
|
||||
|
||||
entry, err := storage.Get(path)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to read CA key of type %q: %v", keyType, err)
|
||||
}
|
||||
|
||||
if entry == nil {
|
||||
// If the entry is not found, look at an older path. If found, upgrade
|
||||
// it.
|
||||
entry, err = storage.Get(deprecatedPath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if entry != nil {
|
||||
entry, err = logical.StorageEntryJSON(path, keyStorageEntry{
|
||||
Key: string(entry.Value),
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := storage.Put(entry); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err = storage.Delete(deprecatedPath); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
}
|
||||
if entry == nil {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
var keyEntry keyStorageEntry
|
||||
if err := entry.DecodeJSON(&keyEntry); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &keyEntry, nil
|
||||
}
|
||||
|
||||
func (b *backend) pathConfigCAUpdate(req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
|
||||
var err error
|
||||
publicKey := data.Get("public_key").(string)
|
||||
|
@ -112,39 +201,68 @@ func (b *backend) pathConfigCAUpdate(req *logical.Request, data *framework.Field
|
|||
return nil, fmt.Errorf("failed to generate or parse the keys")
|
||||
}
|
||||
|
||||
publicKeyEntry, err := req.Storage.Get("config/ca_public_key")
|
||||
publicKeyEntry, err := caKey(req.Storage, caPublicKey)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed while reading ca_public_key: %v", err)
|
||||
return nil, fmt.Errorf("failed to read CA public key: %v", err)
|
||||
}
|
||||
|
||||
privateKeyEntry, err := req.Storage.Get("config/ca_bundle")
|
||||
privateKeyEntry, err := caKey(req.Storage, caPrivateKey)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed while reading ca_bundle: %v", err)
|
||||
return nil, fmt.Errorf("failed to read CA private key: %v", err)
|
||||
}
|
||||
|
||||
if publicKeyEntry != nil || privateKeyEntry != nil {
|
||||
if (publicKeyEntry != nil && publicKeyEntry.Key != "") || (privateKeyEntry != nil && privateKeyEntry.Key != "") {
|
||||
return nil, fmt.Errorf("keys are already configured; delete them before reconfiguring")
|
||||
}
|
||||
|
||||
err = req.Storage.Put(&logical.StorageEntry{
|
||||
Key: "config/ca_public_key",
|
||||
Value: []byte(publicKey),
|
||||
entry, err := logical.StorageEntryJSON(caPublicKeyStoragePath, &keyStorageEntry{
|
||||
Key: publicKey,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
bundle := signingBundle{
|
||||
Certificate: privateKey,
|
||||
}
|
||||
|
||||
entry, err := logical.StorageEntryJSON("config/ca_bundle", bundle)
|
||||
// Save the public key
|
||||
err = req.Storage.Put(entry)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
entry, err = logical.StorageEntryJSON(caPrivateKeyStoragePath, &keyStorageEntry{
|
||||
Key: privateKey,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Save the private key
|
||||
err = req.Storage.Put(entry)
|
||||
return nil, err
|
||||
if err != nil {
|
||||
var mErr *multierror.Error
|
||||
|
||||
mErr = multierror.Append(mErr, fmt.Errorf("failed to store CA private key: %v", err))
|
||||
|
||||
// If storing private key fails, the corresponding public key should be
|
||||
// removed
|
||||
if delErr := req.Storage.Delete(caPublicKeyStoragePath); delErr != nil {
|
||||
mErr = multierror.Append(mErr, fmt.Errorf("failed to cleanup CA public key: %v", delErr))
|
||||
return nil, mErr
|
||||
}
|
||||
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if generateSigningKey {
|
||||
response := &logical.Response{
|
||||
Data: map[string]interface{}{
|
||||
"public_key": publicKey,
|
||||
},
|
||||
}
|
||||
|
||||
return response, nil
|
||||
}
|
||||
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func generateSSHKeyPair() (string, string, error) {
|
||||
|
|
|
@ -6,6 +6,91 @@ import (
|
|||
"github.com/hashicorp/vault/logical"
|
||||
)
|
||||
|
||||
func TestSSH_ConfigCAStorageUpgrade(t *testing.T) {
|
||||
var err error
|
||||
|
||||
config := logical.TestBackendConfig()
|
||||
config.StorageView = &logical.InmemStorage{}
|
||||
|
||||
b, err := Backend(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
_, err = b.Setup(config)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Store at an older path
|
||||
err = config.StorageView.Put(&logical.StorageEntry{
|
||||
Key: caPrivateKeyStoragePathDeprecated,
|
||||
Value: []byte(privateKey),
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Reading it should return the key as well as upgrade the storage path
|
||||
privateKeyEntry, err := caKey(config.StorageView, caPrivateKey)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if privateKeyEntry == nil || privateKeyEntry.Key == "" {
|
||||
t.Fatalf("failed to read the stored private key")
|
||||
}
|
||||
|
||||
entry, err := config.StorageView.Get(caPrivateKeyStoragePathDeprecated)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if entry != nil {
|
||||
t.Fatalf("bad: expected a nil entry after upgrade")
|
||||
}
|
||||
|
||||
entry, err = config.StorageView.Get(caPrivateKeyStoragePath)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if entry == nil {
|
||||
t.Fatalf("bad: expected a non-nil entry after upgrade")
|
||||
}
|
||||
|
||||
// Store at an older path
|
||||
err = config.StorageView.Put(&logical.StorageEntry{
|
||||
Key: caPublicKeyStoragePathDeprecated,
|
||||
Value: []byte(publicKey),
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Reading it should return the key as well as upgrade the storage path
|
||||
publicKeyEntry, err := caKey(config.StorageView, caPublicKey)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if publicKeyEntry == nil || publicKeyEntry.Key == "" {
|
||||
t.Fatalf("failed to read the stored public key")
|
||||
}
|
||||
|
||||
entry, err = config.StorageView.Get(caPublicKeyStoragePathDeprecated)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if entry != nil {
|
||||
t.Fatalf("bad: expected a nil entry after upgrade")
|
||||
}
|
||||
|
||||
entry, err = config.StorageView.Get(caPublicKeyStoragePath)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if entry == nil {
|
||||
t.Fatalf("bad: expected a non-nil entry after upgrade")
|
||||
}
|
||||
}
|
||||
|
||||
func TestSSH_ConfigCAUpdateDelete(t *testing.T) {
|
||||
var resp *logical.Response
|
||||
var err error
|
||||
|
|
|
@ -19,19 +19,18 @@ func pathFetchPublicKey(b *backend) *framework.Path {
|
|||
}
|
||||
|
||||
func (b *backend) pathFetchPublicKey(req *logical.Request, data *framework.FieldData) (*logical.Response, error) {
|
||||
entry, err := req.Storage.Get("config/ca_public_key")
|
||||
publicKeyEntry, err := caKey(req.Storage, caPublicKey)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if entry == nil {
|
||||
if publicKeyEntry == nil || publicKeyEntry.Key == "" {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
response := &logical.Response{
|
||||
Data: map[string]interface{}{
|
||||
logical.HTTPContentType: "text/plain",
|
||||
logical.HTTPRawBody: entry.Value,
|
||||
logical.HTTPRawBody: []byte(publicKeyEntry.Key),
|
||||
logical.HTTPStatusCode: 200,
|
||||
},
|
||||
}
|
||||
|
|
|
@@ -44,6 +44,7 @@ type sshRole struct {
AllowHostCertificates bool `mapstructure:"allow_host_certificates" json:"allow_host_certificates"`
AllowBareDomains bool `mapstructure:"allow_bare_domains" json:"allow_bare_domains"`
AllowSubdomains bool `mapstructure:"allow_subdomains" json:"allow_subdomains"`
AllowUserKeyIDs bool `mapstructure:"allow_user_key_ids" json:"allow_user_key_ids"`
}

func pathListRoles(b *backend) *framework.Path {
@@ -142,12 +143,17 @@ func pathRoles(b *backend) *framework.Path {
"allowed_users": &framework.FieldSchema{
Type: framework.TypeString,
Description: `
[Optional for all types]
If this option is not specified, client can request for a credential for
any valid user at the remote host, including the admin user. If only certain
usernames are to be allowed, then this list enforces it. If this field is
set, then credentials can only be created for default_user and usernames
present in this list.
[Optional for all types] [Works differently for CA type]
If this option is not specified, or is '*', client can request a
credential for any valid user at the remote host, including the
admin user. If only certain usernames are to be allowed, then
this list enforces it. If this field is set, then credentials
can only be created for default_user and usernames present in
this list. Setting this option will enable all the users with
access this role to fetch credentials for all other usernames
in this list. Use with caution. N.B.: with the CA type, an empty
list means that no users are allowed; explicitly specify '*' to
allow any user.
`,
},
"allowed_domains": &framework.FieldSchema{
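// Editor's sketch (not part of the diff): per the description above, a CA-type
// role with an empty allowed_users list now permits no principals at all, and
// '*' must be given explicitly to allow any principal. Two hypothetical role
// payloads illustrating the difference (field names taken from this file;
// "ca" is assumed to be the key_type value for CA roles):
var (
	caRoleAnyPrincipalSketch = map[string]interface{}{
		"key_type":                "ca",
		"allow_user_certificates": true,
		"allowed_users":           "*", // any requested principal is allowed
	}
	caRoleNamedPrincipalsSketch = map[string]interface{}{
		"key_type":                "ca",
		"allow_user_certificates": true,
		"allowed_users":           "alice,bob", // only these may be requested
	}
)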
@@ -251,6 +257,15 @@ func pathRoles(b *backend) *framework.Path {
If set, host certificates that are requested are allowed to use subdomains of those listed in "allowed_domains".
`,
},
"allow_user_key_ids": &framework.FieldSchema{
Type: framework.TypeBool,
Description: `
[Not applicable for Dynamic type] [Not applicable for OTP type] [Optional for CA type]
If true, users can override the key ID for a signed certificate with the "key_id" field.
When false, the key ID will always be the token display name.
The key ID is logged by the SSH server and can be useful for auditing.
`,
},
},

Callbacks: map[logical.Operation]framework.OperationFunc{
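// Editor's sketch (not part of the diff): when a role sets
// allow_user_key_ids=true, the caller may supply its own key_id with a signing
// request; otherwise the backend derives one (see calculateKeyId further
// below). A hypothetical sign request overriding the key ID:
var signWithCustomKeyIDSketch = map[string]interface{}{
	"public_key": "ssh-rsa AAAA... user@example.com", // hypothetical key material
	"cert_type":  "user",
	"key_id":     "build-agent-07", // rejected unless allow_user_key_ids is true
}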
@ -407,7 +422,6 @@ func (b *backend) pathRoleWrite(req *logical.Request, d *framework.FieldData) (*
|
|||
}
|
||||
|
||||
func (b *backend) createCARole(allowedUsers, defaultUser string, data *framework.FieldData) (*sshRole, *logical.Response) {
|
||||
|
||||
role := &sshRole{
|
||||
MaxTTL: data.Get("max_ttl").(string),
|
||||
TTL: data.Get("ttl").(string),
|
||||
|
@ -420,9 +434,14 @@ func (b *backend) createCARole(allowedUsers, defaultUser string, data *framework
|
|||
DefaultUser: defaultUser,
|
||||
AllowBareDomains: data.Get("allow_bare_domains").(bool),
|
||||
AllowSubdomains: data.Get("allow_subdomains").(bool),
|
||||
AllowUserKeyIDs: data.Get("allow_user_key_ids").(bool),
|
||||
KeyType: KeyTypeCA,
|
||||
}
|
||||
|
||||
if !role.AllowUserCertificates && !role.AllowHostCertificates {
|
||||
return nil, logical.ErrorResponse("Either 'allow_user_certificates' or 'allow_host_certificates' must be set to 'true'")
|
||||
}
|
||||
|
||||
defaultCriticalOptions := convertMapToStringValue(data.Get("default_critical_options").(map[string]interface{}))
|
||||
defaultExtensions := convertMapToStringValue(data.Get("default_extensions").(map[string]interface{}))
|
||||
|
||||
|
@ -533,6 +552,7 @@ func (b *backend) pathRoleRead(req *logical.Request, d *framework.FieldData) (*l
|
|||
"allow_host_certificates": role.AllowHostCertificates,
|
||||
"allow_bare_domains": role.AllowBareDomains,
|
||||
"allow_subdomains": role.AllowSubdomains,
|
||||
"allow_user_key_ids": role.AllowUserKeyIDs,
|
||||
"key_type": role.KeyType,
|
||||
"default_critical_options": role.DefaultCriticalOptions,
|
||||
"default_extensions": role.DefaultExtensions,
|
||||
|
|
|
@ -2,6 +2,8 @@ package ssh
|
|||
|
||||
import (
|
||||
"crypto/rand"
|
||||
"crypto/sha256"
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
"fmt"
|
||||
"strconv"
|
||||
|
@ -10,27 +12,22 @@ import (
|
|||
|
||||
"github.com/hashicorp/vault/helper/certutil"
|
||||
"github.com/hashicorp/vault/helper/parseutil"
|
||||
"github.com/hashicorp/vault/helper/errutil"
|
||||
"github.com/hashicorp/vault/helper/strutil"
|
||||
"github.com/hashicorp/vault/logical"
|
||||
"github.com/hashicorp/vault/logical/framework"
|
||||
"golang.org/x/crypto/ssh"
|
||||
)
|
||||
|
||||
type signingBundle struct {
|
||||
Certificate string `json:"certificate" structs:"certificate" mapstructure:"certificate"`
|
||||
}
|
||||
|
||||
type creationBundle struct {
|
||||
KeyId string
|
||||
ValidPrincipals []string
|
||||
PublicKey ssh.PublicKey
|
||||
CertificateType uint32
|
||||
TTL time.Duration
|
||||
SigningBundle signingBundle
|
||||
Signer ssh.Signer
|
||||
Role *sshRole
|
||||
criticalOptions map[string]string
|
||||
extensions map[string]string
|
||||
CriticalOptions map[string]string
|
||||
Extensions map[string]string
|
||||
}
|
||||
|
||||
func pathSign(b *backend) *framework.Path {
|
||||
|
@ -109,16 +106,16 @@ func (b *backend) pathSignCertificate(req *logical.Request, data *framework.Fiel
|
|||
|
||||
userPublicKey, err := parsePublicSSHKey(publicKey)
|
||||
if err != nil {
|
||||
return logical.ErrorResponse(fmt.Sprintf("unable to decode \"public_key\" as SSH key: %s", err)), nil
|
||||
}
|
||||
|
||||
keyId := data.Get("key_id").(string)
|
||||
if keyId == "" {
|
||||
keyId = req.DisplayName
|
||||
return logical.ErrorResponse(fmt.Sprintf("failed to parse public_key as SSH key: %s", err)), nil
|
||||
}
|
||||
|
||||
// Note that these various functions always return "user errors" so we pass
|
||||
// them as 4xx values
|
||||
keyId, err := b.calculateKeyId(data, req, role, userPublicKey)
|
||||
if err != nil {
|
||||
return logical.ErrorResponse(err.Error()), nil
|
||||
}
|
||||
|
||||
certificateType, err := b.calculateCertificateType(data, role)
|
||||
if err != nil {
|
||||
return logical.ErrorResponse(err.Error()), nil
|
||||
|
@ -152,32 +149,32 @@ func (b *backend) pathSignCertificate(req *logical.Request, data *framework.Fiel
|
|||
return logical.ErrorResponse(err.Error()), nil
|
||||
}
|
||||
|
||||
storedBundle, err := req.Storage.Get("config/ca_bundle")
|
||||
privateKeyEntry, err := caKey(req.Storage, caPrivateKey)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("unable to fetch local CA certificate/key: %v", err)
|
||||
return nil, fmt.Errorf("failed to read CA private key: %v", err)
|
||||
}
|
||||
if storedBundle == nil {
|
||||
return logical.ErrorResponse("backend must be configured with a CA certificate/key"), nil
|
||||
if privateKeyEntry == nil || privateKeyEntry.Key == "" {
|
||||
return nil, fmt.Errorf("failed to read CA private key")
|
||||
}
|
||||
|
||||
var bundle signingBundle
|
||||
if err := storedBundle.DecodeJSON(&bundle); err != nil {
|
||||
return nil, fmt.Errorf("unable to decode local CA certificate/key: %v", err)
|
||||
signer, err := ssh.ParsePrivateKey([]byte(privateKeyEntry.Key))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to parse stored CA private key: %v", err)
|
||||
}
|
||||
|
||||
signingBundle := creationBundle{
|
||||
cBundle := creationBundle{
|
||||
KeyId: keyId,
|
||||
PublicKey: userPublicKey,
|
||||
SigningBundle: bundle,
|
||||
Signer: signer,
|
||||
ValidPrincipals: parsedPrincipals,
|
||||
TTL: ttl,
|
||||
CertificateType: certificateType,
|
||||
Role: role,
|
||||
criticalOptions: criticalOptions,
|
||||
extensions: extensions,
|
||||
CriticalOptions: criticalOptions,
|
||||
Extensions: extensions,
|
||||
}
|
||||
|
||||
certificate, err := signingBundle.sign()
|
||||
certificate, err := cBundle.sign()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -198,34 +195,37 @@ func (b *backend) pathSignCertificate(req *logical.Request, data *framework.Fiel
|
|||
}
|
||||
|
||||
func (b *backend) calculateValidPrincipals(data *framework.FieldData, defaultPrincipal, principalsAllowedByRole string, validatePrincipal func([]string, string) bool) ([]string, error) {
|
||||
if principalsAllowedByRole == "" {
|
||||
return nil, fmt.Errorf(`"role is not configured to allow any principles`)
|
||||
validPrincipals := ""
|
||||
validPrincipalsRaw, ok := data.GetOk("valid_principals")
|
||||
if ok {
|
||||
validPrincipals = validPrincipalsRaw.(string)
|
||||
} else {
|
||||
validPrincipals = defaultPrincipal
|
||||
}
|
||||
|
||||
validPrincipals := data.Get("valid_principals").(string)
|
||||
if validPrincipals == "" {
|
||||
if defaultPrincipal != "" {
|
||||
return []string{defaultPrincipal}, nil
|
||||
parsedPrincipals := strutil.RemoveDuplicates(strutil.ParseStringSlice(validPrincipals, ","), false)
|
||||
allowedPrincipals := strutil.RemoveDuplicates(strutil.ParseStringSlice(principalsAllowedByRole, ","), false)
|
||||
switch {
|
||||
case len(parsedPrincipals) == 0:
|
||||
// There is nothing to process
|
||||
return nil, nil
|
||||
case len(allowedPrincipals) == 0:
|
||||
// User has requested principals to be set, but role is not configured
|
||||
// with any principals
|
||||
return nil, fmt.Errorf("role is not configured to allow any principles")
|
||||
default:
|
||||
// Role was explicitly configured to allow any principal.
|
||||
if principalsAllowedByRole == "*" {
|
||||
return parsedPrincipals, nil
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf(`"valid_principals" not supplied and no default set in the role`)
|
||||
}
|
||||
|
||||
parsedPrincipals := strings.Split(validPrincipals, ",")
|
||||
|
||||
// Role was explicitly configured to allow any principal.
|
||||
if principalsAllowedByRole == "*" {
|
||||
for _, principal := range parsedPrincipals {
|
||||
if !validatePrincipal(allowedPrincipals, principal) {
|
||||
return nil, fmt.Errorf("%v is not a valid value for valid_principals", principal)
|
||||
}
|
||||
}
|
||||
return parsedPrincipals, nil
|
||||
}
|
||||
|
||||
allowedPrincipals := strings.Split(principalsAllowedByRole, ",")
|
||||
for _, principal := range parsedPrincipals {
|
||||
if !validatePrincipal(allowedPrincipals, principal) {
|
||||
return nil, fmt.Errorf(`%v is not a valid value for "valid_principals"`, principal)
|
||||
}
|
||||
}
|
||||
|
||||
return parsedPrincipals, nil
|
||||
}
|
||||
|
||||
func validateValidPrincipalForHosts(role *sshRole) func([]string, string) bool {
|
||||
|
@@ -250,21 +250,43 @@ func (b *backend) calculateCertificateType(data *framework.FieldData, role *sshR
switch requestedCertificateType {
case "user":
if !role.AllowUserCertificates {
return 0, errors.New(`"cert_type" 'user' is not allowed by role`)
return 0, errors.New("cert_type 'user' is not allowed by role")
}
certificateType = ssh.UserCert
case "host":
if !role.AllowHostCertificates {
return 0, errors.New(`"cert_type" 'host' is not allowed by role`)
return 0, errors.New("cert_type 'host' is not allowed by role")
}
certificateType = ssh.HostCert
default:
return 0, errors.New(`"cert_type" must be either 'user' or 'host'`)
return 0, errors.New("cert_type must be either 'user' or 'host'")
}

return certificateType, nil
}

func (b *backend) calculateKeyId(data *framework.FieldData, req *logical.Request, role *sshRole, pubKey ssh.PublicKey) (string, error) {
reqId := data.Get("key_id").(string)

if reqId != "" {
if !role.AllowUserKeyIDs {
return "", fmt.Errorf("setting key_id is not allowed by role")
}
return reqId, nil
}

keyHash := sha256.Sum256(pubKey.Marshal())
keyId := hex.EncodeToString(keyHash[:])

if req.DisplayName != "" {
keyId = fmt.Sprintf("%s-%s", req.DisplayName, keyId)
}

keyId = fmt.Sprintf("vault-%s", keyId)

return keyId, nil
}

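// Editor's sketch (not part of the diff): when no key_id is supplied (or none
// is allowed), the function above yields an ID of the form
// "vault-<token display name>-<hex sha256 of the wire-format public key>", or
// just "vault-<hex>" when there is no display name; this matches the
// "vault-root-22608f..." values in the updated tests. The fingerprint portion
// on its own:
func publicKeyFingerprintSketch(pubKey ssh.PublicKey) string {
	sum := sha256.Sum256(pubKey.Marshal()) // hash of the SSH wire encoding
	return hex.EncodeToString(sum[:])
}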
func (b *backend) calculateCriticalOptions(data *framework.FieldData, role *sshRole) (map[string]string, error) {
|
||||
unparsedCriticalOptions := data.Get("critical_options").(map[string]interface{})
|
||||
if len(unparsedCriticalOptions) == 0 {
|
||||
|
@ -310,7 +332,7 @@ func (b *backend) calculateExtensions(data *framework.FieldData, role *sshRole)
|
|||
}
|
||||
|
||||
if len(notAllowed) != 0 {
|
||||
return nil, fmt.Errorf("Extensions not on allowed list: %v", notAllowed)
|
||||
return nil, fmt.Errorf("extensions %v are not on allowed list", notAllowed)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -362,11 +384,6 @@ func (b *backend) calculateTTL(data *framework.FieldData, role *sshRole) (time.D
|
|||
}
|
||||
|
||||
func (b *creationBundle) sign() (*ssh.Certificate, error) {
|
||||
signingKey, err := ssh.ParsePrivateKey([]byte(b.SigningBundle.Certificate))
|
||||
if err != nil {
|
||||
return nil, errutil.InternalError{Err: fmt.Sprintf("stored SSH signing key cannot be parsed: %v", err)}
|
||||
}
|
||||
|
||||
serialNumber, err := certutil.GenerateSerialNumber()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
|
@ -383,14 +400,14 @@ func (b *creationBundle) sign() (*ssh.Certificate, error) {
|
|||
ValidBefore: uint64(now.Add(b.TTL).In(time.UTC).Unix()),
|
||||
CertType: b.CertificateType,
|
||||
Permissions: ssh.Permissions{
|
||||
CriticalOptions: b.criticalOptions,
|
||||
Extensions: b.extensions,
|
||||
CriticalOptions: b.CriticalOptions,
|
||||
Extensions: b.Extensions,
|
||||
},
|
||||
}
|
||||
|
||||
err = certificate.SignCert(rand.Reader, signingKey)
|
||||
err = certificate.SignCert(rand.Reader, b.Signer)
|
||||
if err != nil {
|
||||
return nil, errutil.InternalError{Err: "Failed to generate signed SSH key"}
|
||||
return nil, fmt.Errorf("failed to generate signed SSH key")
|
||||
}
|
||||
|
||||
return certificate, nil
|
||||
|
|
|
@ -10,7 +10,7 @@ import (
|
|||
|
||||
credAppId "github.com/hashicorp/vault/builtin/credential/app-id"
|
||||
credAppRole "github.com/hashicorp/vault/builtin/credential/approle"
|
||||
credAwsEc2 "github.com/hashicorp/vault/builtin/credential/aws-ec2"
|
||||
credAws "github.com/hashicorp/vault/builtin/credential/aws"
|
||||
credCert "github.com/hashicorp/vault/builtin/credential/cert"
|
||||
credGitHub "github.com/hashicorp/vault/builtin/credential/github"
|
||||
credLdap "github.com/hashicorp/vault/builtin/credential/ldap"
|
||||
|
@ -71,7 +71,7 @@ func Commands(metaPtr *meta.Meta) map[string]cli.CommandFactory {
|
|||
CredentialBackends: map[string]logical.Factory{
|
||||
"approle": credAppRole.Factory,
|
||||
"cert": credCert.Factory,
|
||||
"aws-ec2": credAwsEc2.Factory,
|
||||
"aws": credAws.Factory,
|
||||
"app-id": credAppId.Factory,
|
||||
"github": credGitHub.Factory,
|
||||
"userpass": credUserpass.Factory,
|
||||
|
@ -118,6 +118,7 @@ func Commands(metaPtr *meta.Meta) map[string]cli.CommandFactory {
|
|||
"ldap": &credLdap.CLIHandler{},
|
||||
"okta": &credOkta.CLIHandler{},
|
||||
"cert": &credCert.CLIHandler{},
|
||||
"aws": &credAws.CLIHandler{},
|
||||
"radius": &credUserpass.CLIHandler{DefaultMount: "radius"},
|
||||
},
|
||||
}, nil
|
||||
|
|
|
@ -58,12 +58,12 @@ Usage: vault audit-disable [options] id
|
|||
|
||||
Disable an audit backend.
|
||||
|
||||
Once the audit backend is disabled, no more audit logs will be sent to
|
||||
Once the audit backend is disabled no more audit logs will be sent to
|
||||
it. The data associated with the audit backend isn't affected.
|
||||
|
||||
The "id" parameter should map to the id used with "audit-enable". If
|
||||
no specific ID was specified, then it is the name of the backend (the
|
||||
type of the backend).
|
||||
The "id" parameter should map to the "path" used in "audit-enable". If
|
||||
no path was provided to "audit-enable" you should use the backend
|
||||
type (e.g. "file").
|
||||
|
||||
General Options:
|
||||
` + meta.GeneralOptionsUsage()
|
||||
|
|
|
@ -312,7 +312,7 @@ func (c *AuthCommand) Help() string {
|
|||
helpText := `
|
||||
Usage: vault auth [options] [auth-information]
|
||||
|
||||
Authenticate with Vault with the given token or via any supported
|
||||
Authenticate with Vault using the given token or via any supported
|
||||
authentication backend.
|
||||
|
||||
By default, the -method is assumed to be token. If not supplied via the
|
||||
|
@ -399,7 +399,7 @@ func (h *tokenAuthHandler) Help() string {
|
|||
help := `
|
||||
No method selected with the "-method" flag, so the "auth" command assumes
|
||||
you'll be using raw token authentication. For this, specify the token to
|
||||
authenticate as as the parameter to "vault auth". Example:
|
||||
authenticate as the parameter to "vault auth". Example:
|
||||
|
||||
vault auth 123456
|
||||
|
||||
|
|
|
@ -58,10 +58,10 @@ Usage: vault auth-disable [options] path
|
|||
|
||||
Disable an already-enabled auth provider.
|
||||
|
||||
Once the auth provider is disabled, that path cannot be used anymore
|
||||
Once the auth provider is disabled its path can no longer be used
|
||||
to authenticate. All access tokens generated via the disabled auth provider
|
||||
will be revoked. This command will block until all tokens are revoked.
|
||||
If the command is exited early, the tokens will still be revoked.
|
||||
If the command is exited early the tokens will still be revoked.
|
||||
|
||||
General Options:
|
||||
` + meta.GeneralOptionsUsage()
|
||||
|
|
|
@ -82,7 +82,7 @@ General Options:
|
|||
` + meta.GeneralOptionsUsage() + `
|
||||
Auth Enable Options:
|
||||
|
||||
-description=<desc> Human-friendly description of the purpose for the
|
||||
-description=<desc> Human-friendly description of the purpose of the
|
||||
auth provider. This shows up in the auth -methods command.
|
||||
|
||||
-path=<path> Mount point for the auth provider. This defaults
|
||||
|
|
|
@ -295,12 +295,12 @@ Usage: vault generate-root [options] [key]
|
|||
|
||||
'generate-root' is used to create a new root token.
|
||||
|
||||
Root generation can only be done when the Vault is already unsealed. The
|
||||
Root generation can only be done when the vault is already unsealed. The
|
||||
operation is done online, but requires that a threshold of the current unseal
|
||||
keys be provided.
|
||||
|
||||
One (and only one) of the following must be provided at attempt
|
||||
initialization time:
|
||||
One (and only one) of the following must be provided when initializing the
|
||||
root generation attempt:
|
||||
|
||||
1) A 16-byte, base64-encoded One Time Password (OTP) provided in the '-otp'
|
||||
flag; the token is XOR'd with this value before it is returned once the final
|
||||
|
|
|
@ -245,11 +245,11 @@ func (c *InitCommand) runInit(check bool, initRequest *api.InitRequest) int {
|
|||
c.Ui.Output(fmt.Sprintf(
|
||||
"\n"+
|
||||
"Vault initialized with %d keys and a key threshold of %d. Please\n"+
|
||||
"securely distribute the above keys. When the Vault is re-sealed,\n"+
|
||||
"securely distribute the above keys. When the vault is re-sealed,\n"+
|
||||
"restarted, or stopped, you must provide at least %d of these keys\n"+
|
||||
"to unseal it again.\n\n"+
|
||||
"Vault does not store the master key. Without at least %d keys,\n"+
|
||||
"your Vault will remain permanently sealed.",
|
||||
"your vault will remain permanently sealed.",
|
||||
initRequest.SecretShares,
|
||||
initRequest.SecretThreshold,
|
||||
initRequest.SecretThreshold,
|
||||
|
@ -301,10 +301,10 @@ Usage: vault init [options]
|
|||
Initialize a new Vault server.
|
||||
|
||||
This command connects to a Vault server and initializes it for the
|
||||
first time. This sets up the initial set of master keys and sets up the
|
||||
first time. This sets up the initial set of master keys and the
|
||||
backend data store structure.
|
||||
|
||||
This command can't be called on an already-initialized Vault.
|
||||
This command can't be called on an already-initialized Vault server.
|
||||
|
||||
General Options:
|
||||
` + meta.GeneralOptionsUsage() + `
|
||||
|
|
|
@ -28,7 +28,7 @@ func (c *ListCommand) Run(args []string) int {
|
|||
|
||||
args = flags.Args()
|
||||
if len(args) != 1 || len(args[0]) == 0 {
|
||||
c.Ui.Error("read expects one argument")
|
||||
c.Ui.Error("list expects one argument")
|
||||
flags.Usage()
|
||||
return 1
|
||||
}
|
||||
|
|
|
@ -15,12 +15,13 @@ type MountCommand struct {
|
|||
|
||||
func (c *MountCommand) Run(args []string) int {
|
||||
var description, path, defaultLeaseTTL, maxLeaseTTL string
|
||||
var local bool
|
||||
var local, forceNoCache bool
|
||||
flags := c.Meta.FlagSet("mount", meta.FlagSetDefault)
|
||||
flags.StringVar(&description, "description", "", "")
|
||||
flags.StringVar(&path, "path", "", "")
|
||||
flags.StringVar(&defaultLeaseTTL, "default-lease-ttl", "", "")
|
||||
flags.StringVar(&maxLeaseTTL, "max-lease-ttl", "", "")
|
||||
flags.BoolVar(&forceNoCache, "force-no-cache", false, "")
|
||||
flags.BoolVar(&local, "local", false, "")
|
||||
flags.Usage = func() { c.Ui.Error(c.Help()) }
|
||||
if err := flags.Parse(args); err != nil {
|
||||
|
@ -31,7 +32,7 @@ func (c *MountCommand) Run(args []string) int {
|
|||
if len(args) != 1 {
|
||||
flags.Usage()
|
||||
c.Ui.Error(fmt.Sprintf(
|
||||
"\nMount expects one argument: the type to mount."))
|
||||
"\nmount expects one argument: the type to mount."))
|
||||
return 1
|
||||
}
|
||||
|
||||
|
@ -55,6 +56,7 @@ func (c *MountCommand) Run(args []string) int {
|
|||
Config: api.MountConfigInput{
|
||||
DefaultLeaseTTL: defaultLeaseTTL,
|
||||
MaxLeaseTTL: maxLeaseTTL,
|
||||
ForceNoCache: forceNoCache,
|
||||
},
|
||||
Local: local,
|
||||
}
|
||||
|
@ -93,7 +95,7 @@ Mount Options:
|
|||
the mount. This shows up in the mounts command.
|
||||
|
||||
-path=<path> Mount point for the logical backend. This
|
||||
defauls to the type of the mount.
|
||||
defaults to the type of the mount.
|
||||
|
||||
-default-lease-ttl=<duration> Default lease time-to-live for this backend.
|
||||
If not specified, uses the global default, or
|
||||
|
@ -105,6 +107,11 @@ Mount Options:
|
|||
the previously set value. Set to '0' to
|
||||
explicitly set it to use the global default.
|
||||
|
||||
-force-no-cache Forces the backend to disable caching. If not
|
||||
specified, uses the global default. This does
|
||||
not affect caching of the underlying encrypted
|
||||
data storage.
|
||||
|
||||
-local Mark the mount as a local mount. Local mounts
|
||||
are not replicated nor (if a secondary)
|
||||
removed by replication.
|
||||
|
|
|
@ -28,7 +28,7 @@ func (c *MountTuneCommand) Run(args []string) int {
|
|||
if len(args) != 1 {
|
||||
flags.Usage()
|
||||
c.Ui.Error(fmt.Sprintf(
|
||||
"\n'mount-tune' expects one arguments: the mount path"))
|
||||
"\nmount-tune expects one arguments: the mount path"))
|
||||
return 1
|
||||
}
|
||||
|
||||
|
|
|
@ -42,7 +42,7 @@ func (c *MountsCommand) Run(args []string) int {
|
|||
}
|
||||
sort.Strings(paths)
|
||||
|
||||
columns := []string{"Path | Type | Default TTL | Max TTL | Replication Behavior | Description"}
|
||||
columns := []string{"Path | Type | Default TTL | Max TTL | Force No Cache | Replication Behavior | Description"}
|
||||
for _, path := range paths {
|
||||
mount := mounts[path]
|
||||
defTTL := "system"
|
||||
|
@ -68,7 +68,8 @@ func (c *MountsCommand) Run(args []string) int {
|
|||
replicatedBehavior = "local"
|
||||
}
|
||||
columns = append(columns, fmt.Sprintf(
|
||||
"%s | %s | %s | %s | %s | %s", path, mount.Type, defTTL, maxTTL, replicatedBehavior, mount.Description))
|
||||
"%s | %s | %s | %s | %v | %s | %s", path, mount.Type, defTTL, maxTTL,
|
||||
mount.Config.ForceNoCache, replicatedBehavior, mount.Description))
|
||||
}
|
||||
|
||||
c.Ui.Output(columnize.SimpleFormat(columns))
|
||||
|
|
|
@ -40,7 +40,7 @@ func (c *PathHelpCommand) Run(args []string) int {
|
|||
if strings.Contains(err.Error(), "Vault is sealed") {
|
||||
c.Ui.Error(`Error: Vault is sealed.
|
||||
|
||||
The path-help command requires the Vault to be unsealed so that
|
||||
The path-help command requires the vault to be unsealed so that
|
||||
mount points of secret backends are known.`)
|
||||
} else {
|
||||
c.Ui.Error(fmt.Sprintf(
|
||||
|
@ -67,7 +67,7 @@ Usage: vault path-help [options] path
|
|||
providers provide built-in help. This command looks up and outputs that
|
||||
help.
|
||||
|
||||
The command requires that the Vault be unsealed, because otherwise
|
||||
The command requires that the vault be unsealed, because otherwise
|
||||
the mount points of the backends are unknown.
|
||||
|
||||
General Options:
|
||||
|
|
|
@ -194,11 +194,11 @@ func (c *RekeyCommand) Run(args []string) int {
|
|||
c.Ui.Output(fmt.Sprintf(
|
||||
"\n"+
|
||||
"Vault rekeyed with %d keys and a key threshold of %d. Please\n"+
|
||||
"securely distribute the above keys. When the Vault is re-sealed,\n"+
|
||||
"securely distribute the above keys. When the vault is re-sealed,\n"+
|
||||
"restarted, or stopped, you must provide at least %d of these keys\n"+
|
||||
"to unseal it again.\n\n"+
|
||||
"Vault does not store the master key. Without at least %d keys,\n"+
|
||||
"your Vault will remain permanently sealed.",
|
||||
"your vault will remain permanently sealed.",
|
||||
shares,
|
||||
threshold,
|
||||
threshold,
|
||||
|
@ -361,7 +361,7 @@ Usage: vault rekey [options] [key]
|
|||
a new set of unseal keys or to change the number of shares and the
|
||||
required threshold.
|
||||
|
||||
Rekey can only be done when the Vault is already unsealed. The operation
|
||||
Rekey can only be done when the vault is already unsealed. The operation
|
||||
is done online, but requires that a threshold of the current unseal
|
||||
keys be provided.
|
||||
|
||||
|
|
|
@ -24,7 +24,7 @@ func (c *RemountCommand) Run(args []string) int {
|
|||
if len(args) != 2 {
|
||||
flags.Usage()
|
||||
c.Ui.Error(fmt.Sprintf(
|
||||
"\nRemount expects two arguments: the from and to path"))
|
||||
"\nremount expects two arguments: the from and to path"))
|
||||
return 1
|
||||
}
|
||||
|
||||
|
@ -62,8 +62,8 @@ Usage: vault remount [options] from to
|
|||
|
||||
This command remounts a secret backend that is already mounted to
|
||||
a new path. All the secrets from the old path will be revoked, but
|
||||
the Vault data associated with the backend will be preserved (such
|
||||
as configuration data).
|
||||
the data associated with the backend (such as configuration), will
|
||||
be preserved.
|
||||
|
||||
Example: vault remount secret/ generic/
|
||||
|
||||
|
|
|
@ -26,7 +26,7 @@ func (c *RenewCommand) Run(args []string) int {
|
|||
if len(args) < 1 || len(args) >= 3 {
|
||||
flags.Usage()
|
||||
c.Ui.Error(fmt.Sprintf(
|
||||
"\nRenew expects at least one argument: the lease ID to renew"))
|
||||
"\nrenew expects at least one argument: the lease ID to renew"))
|
||||
return 1
|
||||
}
|
||||
|
||||
|
|
|
@ -26,7 +26,7 @@ func (c *RevokeCommand) Run(args []string) int {
|
|||
if len(args) != 1 {
|
||||
flags.Usage()
|
||||
c.Ui.Error(fmt.Sprintf(
|
||||
"\nRevoke expects one argument: the ID to revoke"))
|
||||
"\nrevoke expects one argument: the ID to revoke"))
|
||||
return 1
|
||||
}
|
||||
leaseId := args[0]
|
||||
|
|
|
@ -36,7 +36,7 @@ func (c *SealCommand) Run(args []string) int {
|
|||
}
|
||||
|
||||
func (c *SealCommand) Synopsis() string {
|
||||
return "Seals the vault server"
|
||||
return "Seals the Vault server"
|
||||
}
|
||||
|
||||
func (c *SealCommand) Help() string {
|
||||
|
@ -47,8 +47,8 @@ Usage: vault seal [options]
|
|||
|
||||
Sealing a vault tells the Vault server to stop responding to any
|
||||
access operations until it is unsealed again. A sealed vault throws away
|
||||
its master key to unlock the data, so it physically is blocked from
|
||||
responding to operations again until the Vault is unsealed again with
|
||||
its master key to unlock the data, so it is physically blocked from
|
||||
responding to operations again until the vault is unsealed with
|
||||
the "unseal" command or via the API.
|
||||
|
||||
This command is idempotent, if the vault is already sealed it does nothing.
|
||||
|
|
|
@ -84,6 +84,7 @@ func (c *ServerCommand) Run(args []string) int {
|
|||
// start logging too early.
|
||||
logGate := &gatedwriter.Writer{Writer: colorable.NewColorable(os.Stderr)}
|
||||
var level int
|
||||
logLevel = strings.ToLower(strings.TrimSpace(logLevel))
|
||||
switch logLevel {
|
||||
case "trace":
|
||||
level = log.LevelTrace
|
||||
|
@@ -173,8 +174,8 @@ func (c *ServerCommand) Run(args []string) int {
 	}
 
 	// Ensure that a backend is provided
-	if config.Backend == nil {
-		c.Ui.Output("A physical backend must be specified")
+	if config.Storage == nil {
+		c.Ui.Output("A storage backend must be specified")
 		return 1
 	}
 
@@ -194,11 +195,11 @@ func (c *ServerCommand) Run(args []string) int {
 
 	// Initialize the backend
 	backend, err := physical.NewBackend(
-		config.Backend.Type, c.logger, config.Backend.Config)
+		config.Storage.Type, c.logger, config.Storage.Config)
 	if err != nil {
 		c.Ui.Output(fmt.Sprintf(
-			"Error initializing backend of type %s: %s",
-			config.Backend.Type, err))
+			"Error initializing storage of type %s: %s",
+			config.Storage.Type, err))
 		return 1
 	}
@@ -224,7 +225,7 @@ func (c *ServerCommand) Run(args []string) int {
 
 	coreConfig := &vault.CoreConfig{
 		Physical:       backend,
-		RedirectAddr:   config.Backend.RedirectAddr,
+		RedirectAddr:   config.Storage.RedirectAddr,
 		HAPhysical:     nil,
 		Seal:           seal,
 		AuditBackends:  c.AuditBackends,
@@ -244,39 +245,39 @@ func (c *ServerCommand) Run(args []string) int {
 
 	var disableClustering bool
 
-	// Initialize the separate HA physical backend, if it exists
+	// Initialize the separate HA storage backend, if it exists
 	var ok bool
-	if config.HABackend != nil {
+	if config.HAStorage != nil {
 		habackend, err := physical.NewBackend(
-			config.HABackend.Type, c.logger, config.HABackend.Config)
+			config.HAStorage.Type, c.logger, config.HAStorage.Config)
 		if err != nil {
 			c.Ui.Output(fmt.Sprintf(
-				"Error initializing backend of type %s: %s",
-				config.HABackend.Type, err))
+				"Error initializing HA storage of type %s: %s",
+				config.HAStorage.Type, err))
 			return 1
 		}
 
 		if coreConfig.HAPhysical, ok = habackend.(physical.HABackend); !ok {
-			c.Ui.Output("Specified HA backend does not support HA")
+			c.Ui.Output("Specified HA storage does not support HA")
 			return 1
 		}
 
 		if !coreConfig.HAPhysical.HAEnabled() {
-			c.Ui.Output("Specified HA backend has HA support disabled; please consult documentation")
+			c.Ui.Output("Specified HA storage has HA support disabled; please consult documentation")
 			return 1
 		}
 
-		coreConfig.RedirectAddr = config.HABackend.RedirectAddr
-		disableClustering = config.HABackend.DisableClustering
+		coreConfig.RedirectAddr = config.HAStorage.RedirectAddr
+		disableClustering = config.HAStorage.DisableClustering
 		if !disableClustering {
-			coreConfig.ClusterAddr = config.HABackend.ClusterAddr
+			coreConfig.ClusterAddr = config.HAStorage.ClusterAddr
 		}
 	} else {
 		if coreConfig.HAPhysical, ok = backend.(physical.HABackend); ok {
-			coreConfig.RedirectAddr = config.Backend.RedirectAddr
-			disableClustering = config.Backend.DisableClustering
+			coreConfig.RedirectAddr = config.Storage.RedirectAddr
+			disableClustering = config.Storage.DisableClustering
 			if !disableClustering {
-				coreConfig.ClusterAddr = config.Backend.ClusterAddr
+				coreConfig.ClusterAddr = config.Storage.ClusterAddr
 			}
 		}
 	}
@ -378,12 +379,12 @@ CLUSTER_SYNTHESIS_COMPLETE:
|
|||
c.reloadFuncsLock = coreConfig.ReloadFuncsLock
|
||||
|
||||
// Compile server information for output later
|
||||
info["backend"] = config.Backend.Type
|
||||
info["storage"] = config.Storage.Type
|
||||
info["log level"] = logLevel
|
||||
info["mlock"] = fmt.Sprintf(
|
||||
"supported: %v, enabled: %v",
|
||||
mlock.Supported(), !config.DisableMlock && mlock.Supported())
|
||||
infoKeys = append(infoKeys, "log level", "mlock", "backend")
|
||||
infoKeys = append(infoKeys, "log level", "mlock", "storage")
|
||||
|
||||
if coreConfig.ClusterAddr != "" {
|
||||
info["cluster address"] = coreConfig.ClusterAddr
|
||||
|
@ -394,16 +395,16 @@ CLUSTER_SYNTHESIS_COMPLETE:
|
|||
infoKeys = append(infoKeys, "redirect address")
|
||||
}
|
||||
|
||||
if config.HABackend != nil {
|
||||
info["HA backend"] = config.HABackend.Type
|
||||
infoKeys = append(infoKeys, "HA backend")
|
||||
if config.HAStorage != nil {
|
||||
info["HA storage"] = config.HAStorage.Type
|
||||
infoKeys = append(infoKeys, "HA storage")
|
||||
} else {
|
||||
// If the backend supports HA, then note it
|
||||
// If the storage supports HA, then note it
|
||||
if coreConfig.HAPhysical != nil {
|
||||
if coreConfig.HAPhysical.HAEnabled() {
|
||||
info["backend"] += " (HA available)"
|
||||
info["storage"] += " (HA available)"
|
||||
} else {
|
||||
info["backend"] += " (HA disabled)"
|
||||
info["storage"] += " (HA disabled)"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -564,12 +565,12 @@ CLUSTER_SYNTHESIS_COMPLETE:
|
|||
core.SetClusterListenerAddrs(clusterAddrs)
|
||||
core.SetClusterSetupFuncs(vault.WrapHandlerForClustering(handler, c.logger))
|
||||
|
||||
// If we're in dev mode, then initialize the core
|
||||
// If we're in Dev mode, then initialize the core
|
||||
if dev {
|
||||
init, err := c.enableDev(core, devRootTokenID)
|
||||
if err != nil {
|
||||
c.Ui.Output(fmt.Sprintf(
|
||||
"Error initializing dev mode: %s", err))
|
||||
"Error initializing Dev mode: %s", err))
|
||||
return 1
|
||||
}
|
||||
|
||||
|
@ -974,7 +975,7 @@ Usage: vault server [options]
|
|||
with "vault unseal" or the API before this server can respond to requests.
|
||||
This must be done for every server.
|
||||
|
||||
If the server is being started against a storage backend that has
|
||||
If the server is being started against a storage backend that is
|
||||
brand new (no existing Vault data in it), it must be initialized with
|
||||
"vault init" or the API first.
|
||||
|
||||
|
|
|
@@ -21,8 +21,8 @@ import (
 // Config is the configuration for the vault server.
 type Config struct {
 	Listeners []*Listener `hcl:"-"`
-	Backend   *Backend    `hcl:"-"`
-	HABackend *Backend    `hcl:"-"`
+	Storage   *Storage    `hcl:"-"`
+	HAStorage *Storage    `hcl:"-"`
 
 	HSM *HSM `hcl:"-"`
 
@@ -51,7 +51,7 @@ func DevConfig(ha, transactional bool) *Config {
 		DisableCache: false,
 		DisableMlock: true,
 
-		Backend: &Backend{
+		Storage: &Storage{
 			Type: "inmem",
 		},
 
@@ -75,11 +75,11 @@ func DevConfig(ha, transactional bool) *Config {
 
 	switch {
 	case ha && transactional:
-		ret.Backend.Type = "inmem_transactional_ha"
+		ret.Storage.Type = "inmem_transactional_ha"
 	case !ha && transactional:
-		ret.Backend.Type = "inmem_transactional"
+		ret.Storage.Type = "inmem_transactional"
 	case ha && !transactional:
-		ret.Backend.Type = "inmem_ha"
+		ret.Storage.Type = "inmem_ha"
 	}
 
 	return ret
@ -95,8 +95,8 @@ func (l *Listener) GoString() string {
|
|||
return fmt.Sprintf("*%#v", *l)
|
||||
}
|
||||
|
||||
// Backend is the backend configuration for the server.
|
||||
type Backend struct {
|
||||
// Storage is the underlying storage configuration for the server.
|
||||
type Storage struct {
|
||||
Type string
|
||||
RedirectAddr string
|
||||
ClusterAddr string
|
||||
|
@ -104,7 +104,7 @@ type Backend struct {
|
|||
Config map[string]string
|
||||
}
|
||||
|
||||
func (b *Backend) GoString() string {
|
||||
func (b *Storage) GoString() string {
|
||||
return fmt.Sprintf("*%#v", *b)
|
||||
}
|
||||
|
||||
|
@ -215,14 +215,14 @@ func (c *Config) Merge(c2 *Config) *Config {
|
|||
result.Listeners = append(result.Listeners, l)
|
||||
}
|
||||
|
||||
result.Backend = c.Backend
|
||||
if c2.Backend != nil {
|
||||
result.Backend = c2.Backend
|
||||
result.Storage = c.Storage
|
||||
if c2.Storage != nil {
|
||||
result.Storage = c2.Storage
|
||||
}
|
||||
|
||||
result.HABackend = c.HABackend
|
||||
if c2.HABackend != nil {
|
||||
result.HABackend = c2.HABackend
|
||||
result.HAStorage = c.HAStorage
|
||||
if c2.HAStorage != nil {
|
||||
result.HAStorage = c2.HAStorage
|
||||
}
|
||||
|
||||
result.HSM = c.HSM
|
||||
|
@@ -349,6 +349,8 @@ func ParseConfig(d string, logger log.Logger) (*Config, error) {
 
 	valid := []string{
 		"atlas",
+		"storage",
+		"ha_storage",
 		"backend",
 		"ha_backend",
 		"hsm",
@@ -366,15 +368,28 @@ func ParseConfig(d string, logger log.Logger) (*Config, error) {
 		return nil, err
 	}
 
-	if o := list.Filter("backend"); len(o.Items) > 0 {
-		if err := parseBackends(&result, o); err != nil {
-			return nil, fmt.Errorf("error parsing 'backend': %s", err)
+	// Look for storage but still support old backend
+	if o := list.Filter("storage"); len(o.Items) > 0 {
+		if err := parseStorage(&result, o, "storage"); err != nil {
+			return nil, fmt.Errorf("error parsing 'storage': %s", err)
+		}
+	} else {
+		if o := list.Filter("backend"); len(o.Items) > 0 {
+			if err := parseStorage(&result, o, "backend"); err != nil {
+				return nil, fmt.Errorf("error parsing 'backend': %s", err)
+			}
 		}
 	}
 
-	if o := list.Filter("ha_backend"); len(o.Items) > 0 {
-		if err := parseHABackends(&result, o); err != nil {
-			return nil, fmt.Errorf("error parsing 'ha_backend': %s", err)
+	if o := list.Filter("ha_storage"); len(o.Items) > 0 {
+		if err := parseHAStorage(&result, o, "ha_storage"); err != nil {
+			return nil, fmt.Errorf("error parsing 'ha_storage': %s", err)
+		}
+	} else {
+		if o := list.Filter("ha_backend"); len(o.Items) > 0 {
+			if err := parseHAStorage(&result, o, "ha_backend"); err != nil {
+				return nil, fmt.Errorf("error parsing 'ha_backend': %s", err)
+			}
 		}
 	}
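The net effect of the parsing change above is that a `storage` stanza is now the preferred way to configure the physical store, while the legacy `backend` stanza keeps working. A minimal sketch of exercising the parser, assuming the exported `ParseConfig` in `command/server` keeps the signature shown in this hunk and that `logformat.NewVaultLogger` is an acceptable way to build the required logxi logger; the Consul keys in the HCL are illustrative only:

```go
package main

import (
	"fmt"
	"os"

	"github.com/hashicorp/vault/command/server"
	"github.com/hashicorp/vault/helper/logformat"
	log "github.com/mgutz/logxi/v1"
)

func main() {
	// New-style stanza; a legacy `backend "consul" { ... }` block would
	// be parsed through the same parseStorage path shown above.
	hcl := `
storage "consul" {
  address            = "127.0.0.1:8500"
  path               = "vault"
  disable_clustering = "true"
}
`
	logger := logformat.NewVaultLogger(log.LevelTrace)
	cfg, err := server.ParseConfig(hcl, logger)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Type and the clustering flag are lifted out; everything else stays
	// in cfg.Storage.Config as raw key/value pairs.
	fmt.Println(cfg.Storage.Type, cfg.Storage.DisableClustering)
}
```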
@ -476,22 +491,22 @@ func isTemporaryFile(name string) bool {
|
|||
(strings.HasPrefix(name, "#") && strings.HasSuffix(name, "#")) // emacs
|
||||
}
|
||||
|
||||
func parseBackends(result *Config, list *ast.ObjectList) error {
|
||||
func parseStorage(result *Config, list *ast.ObjectList, name string) error {
|
||||
if len(list.Items) > 1 {
|
||||
return fmt.Errorf("only one 'backend' block is permitted")
|
||||
return fmt.Errorf("only one %q block is permitted", name)
|
||||
}
|
||||
|
||||
// Get our item
|
||||
item := list.Items[0]
|
||||
|
||||
key := "backend"
|
||||
key := name
|
||||
if len(item.Keys) > 0 {
|
||||
key = item.Keys[0].Token.Value().(string)
|
||||
}
|
||||
|
||||
var m map[string]string
|
||||
if err := hcl.DecodeObject(&m, item.Val); err != nil {
|
||||
return multierror.Prefix(err, fmt.Sprintf("backend.%s:", key))
|
||||
return multierror.Prefix(err, fmt.Sprintf("%s.%s:", name, key))
|
||||
}
|
||||
|
||||
// Pull out the redirect address since it's common to all backends
|
||||
|
@ -516,12 +531,12 @@ func parseBackends(result *Config, list *ast.ObjectList) error {
|
|||
if v, ok := m["disable_clustering"]; ok {
|
||||
disableClustering, err = strconv.ParseBool(v)
|
||||
if err != nil {
|
||||
return multierror.Prefix(err, fmt.Sprintf("backend.%s:", key))
|
||||
return multierror.Prefix(err, fmt.Sprintf("%s.%s:", name, key))
|
||||
}
|
||||
delete(m, "disable_clustering")
|
||||
}
|
||||
|
||||
result.Backend = &Backend{
|
||||
result.Storage = &Storage{
|
||||
RedirectAddr: redirectAddr,
|
||||
ClusterAddr: clusterAddr,
|
||||
DisableClustering: disableClustering,
|
||||
|
@ -531,22 +546,22 @@ func parseBackends(result *Config, list *ast.ObjectList) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
func parseHABackends(result *Config, list *ast.ObjectList) error {
|
||||
func parseHAStorage(result *Config, list *ast.ObjectList, name string) error {
|
||||
if len(list.Items) > 1 {
|
||||
return fmt.Errorf("only one 'ha_backend' block is permitted")
|
||||
return fmt.Errorf("only one %q block is permitted", name)
|
||||
}
|
||||
|
||||
// Get our item
|
||||
item := list.Items[0]
|
||||
|
||||
key := "backend"
|
||||
key := name
|
||||
if len(item.Keys) > 0 {
|
||||
key = item.Keys[0].Token.Value().(string)
|
||||
}
|
||||
|
||||
var m map[string]string
|
||||
if err := hcl.DecodeObject(&m, item.Val); err != nil {
|
||||
return multierror.Prefix(err, fmt.Sprintf("ha_backend.%s:", key))
|
||||
return multierror.Prefix(err, fmt.Sprintf("%s.%s:", name, key))
|
||||
}
|
||||
|
||||
// Pull out the redirect address since it's common to all backends
|
||||
|
@ -571,12 +586,12 @@ func parseHABackends(result *Config, list *ast.ObjectList) error {
|
|||
if v, ok := m["disable_clustering"]; ok {
|
||||
disableClustering, err = strconv.ParseBool(v)
|
||||
if err != nil {
|
||||
return multierror.Prefix(err, fmt.Sprintf("backend.%s:", key))
|
||||
return multierror.Prefix(err, fmt.Sprintf("%s.%s:", name, key))
|
||||
}
|
||||
delete(m, "disable_clustering")
|
||||
}
|
||||
|
||||
result.HABackend = &Backend{
|
||||
result.HAStorage = &Storage{
|
||||
RedirectAddr: redirectAddr,
|
||||
ClusterAddr: clusterAddr,
|
||||
DisableClustering: disableClustering,
|
||||
|
@@ -647,6 +662,7 @@ func parseListeners(result *Config, list *ast.ObjectList) error {
 			"tls_min_version",
 			"tls_cipher_suites",
 			"tls_prefer_server_cipher_suites",
+			"tls_require_and_verify_client_cert",
 			"token",
 		}
 		if err := checkHCLKeys(item.Val, valid); err != nil {
@ -37,7 +37,7 @@ func TestLoadConfigFile(t *testing.T) {
|
|||
},
|
||||
},
|
||||
|
||||
Backend: &Backend{
|
||||
Storage: &Storage{
|
||||
Type: "consul",
|
||||
RedirectAddr: "foo",
|
||||
Config: map[string]string{
|
||||
|
@ -45,7 +45,7 @@ func TestLoadConfigFile(t *testing.T) {
|
|||
},
|
||||
},
|
||||
|
||||
HABackend: &Backend{
|
||||
HAStorage: &Storage{
|
||||
Type: "consul",
|
||||
RedirectAddr: "snafu",
|
||||
Config: map[string]string{
|
||||
|
@ -105,7 +105,7 @@ func TestLoadConfigFile_json(t *testing.T) {
|
|||
},
|
||||
},
|
||||
|
||||
Backend: &Backend{
|
||||
Storage: &Storage{
|
||||
Type: "consul",
|
||||
Config: map[string]string{
|
||||
"foo": "bar",
|
||||
|
@ -171,7 +171,7 @@ func TestLoadConfigFile_json2(t *testing.T) {
|
|||
},
|
||||
},
|
||||
|
||||
Backend: &Backend{
|
||||
Storage: &Storage{
|
||||
Type: "consul",
|
||||
Config: map[string]string{
|
||||
"foo": "bar",
|
||||
|
@ -179,7 +179,7 @@ func TestLoadConfigFile_json2(t *testing.T) {
|
|||
DisableClustering: true,
|
||||
},
|
||||
|
||||
HABackend: &Backend{
|
||||
HAStorage: &Storage{
|
||||
Type: "consul",
|
||||
Config: map[string]string{
|
||||
"bar": "baz",
|
||||
|
@ -234,7 +234,7 @@ func TestLoadConfigDir(t *testing.T) {
|
|||
},
|
||||
},
|
||||
|
||||
Backend: &Backend{
|
||||
Storage: &Storage{
|
||||
Type: "consul",
|
||||
Config: map[string]string{
|
||||
"foo": "bar",
|
||||
|
|
|
@@ -97,6 +97,15 @@ func listenerWrapTLS(
 		}
 		tlsConf.PreferServerCipherSuites = preferServer
 	}
+	if v, ok := config["tls_require_and_verify_client_cert"]; ok {
+		requireClient, err := strconv.ParseBool(v)
+		if err != nil {
+			return nil, nil, nil, fmt.Errorf("invalid value for 'tls_require_and_verify_client_cert': %v", err)
+		}
+		if requireClient {
+			tlsConf.ClientAuth = tls.RequireAndVerifyClientCert
+		}
+	}
 
 	ln = tls.NewListener(ln, tlsConf)
 	props["tls"] = "enabled"
@ -11,7 +11,7 @@
|
|||
"node_id": "foo_node"
|
||||
}
|
||||
}],
|
||||
"backend": {
|
||||
"storage": {
|
||||
"consul": {
|
||||
"foo": "bar",
|
||||
"disable_clustering": "true"
|
||||
|
|
|
@ -12,12 +12,12 @@
|
|||
}
|
||||
}
|
||||
],
|
||||
"backend":{
|
||||
"storage":{
|
||||
"consul":{
|
||||
"foo":"bar"
|
||||
}
|
||||
},
|
||||
"ha_backend":{
|
||||
"ha_storage":{
|
||||
"consul":{
|
||||
"bar":"baz",
|
||||
"disable_clustering": "true"
|
||||
|
|
|
@ -64,8 +64,8 @@ func TestServer_GoodSeparateHA(t *testing.T) {
|
|||
t.Fatalf("bad: %d\n\n%s\n\n%s", code, ui.ErrorWriter.String(), ui.OutputWriter.String())
|
||||
}
|
||||
|
||||
if !strings.Contains(ui.OutputWriter.String(), "HA Backend:") {
|
||||
t.Fatalf("did not find HA Backend: %s", ui.OutputWriter.String())
|
||||
if !strings.Contains(ui.OutputWriter.String(), "HA Storage:") {
|
||||
t.Fatalf("did not find HA Storage: %s", ui.OutputWriter.String())
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@@ -271,7 +271,7 @@ func (c *SSHCommand) defaultRole(mountPoint, ip string) (string, error) {
 }
 
 func (c *SSHCommand) Synopsis() string {
-	return "Initiate a SSH session"
+	return "Initiate an SSH session"
 }
 
 func (c *SSHCommand) Help() string {
@@ -282,12 +282,12 @@ Usage: vault ssh [options] username@ip
 
   This command generates a key and uses it to establish an SSH
   connection with the target machine. This operation requires
-  that SSH backend is mounted and at least one 'role' be registed
-  with vault at priori.
+  that the SSH backend is mounted and at least one 'role' is
+  registered with Vault beforehand.
 
   For setting up SSH backends with one-time-passwords, installation
-  of agent in target machines is required.
-  See [https://github.com/hashicorp/vault-ssh-agent]
+  of vault-ssh-helper or a compatible agent on target machines
+  is required. See [https://github.com/hashicorp/vault-ssh-agent].
 
 General Options:
 ` + meta.GeneralOptionsUsage() + `
|
@ -120,7 +120,7 @@ General Options:
|
|||
Token Options:
|
||||
|
||||
-id="7699125c-d8...." The token value that clients will use to authenticate
|
||||
with vault. If not provided this defaults to a 36
|
||||
with Vault. If not provided this defaults to a 36
|
||||
character UUID. A root token is required to specify
|
||||
the ID of a token.
|
||||
|
||||
|
@ -151,8 +151,8 @@ Token Options:
|
|||
up in the audit log. This can be specified multiple
|
||||
times.
|
||||
|
||||
-orphan If specified, the token will have no parent. Only
|
||||
This prevents the new token from being revoked with
|
||||
-orphan If specified, the token will have no parent. This
|
||||
prevents the new token from being revoked with
|
||||
your token. Requires a root/sudo token to use.
|
||||
|
||||
-no-default-policy If specified, the token will not have the "default"
|
||||
|
|
|
@@ -15,8 +15,11 @@ type TokenRevokeCommand struct {
 func (c *TokenRevokeCommand) Run(args []string) int {
 	var mode string
 	var accessor bool
+	var self bool
+	var token string
 	flags := c.Meta.FlagSet("token-revoke", meta.FlagSetDefault)
 	flags.BoolVar(&accessor, "accessor", false, "")
+	flags.BoolVar(&self, "self", false, "")
 	flags.StringVar(&mode, "mode", "", "")
 	flags.Usage = func() { c.Ui.Error(c.Help()) }
 	if err := flags.Parse(args); err != nil {
@@ -24,15 +27,21 @@ func (c *TokenRevokeCommand) Run(args []string) int {
 	}
 
 	args = flags.Args()
-	if len(args) != 1 {
+	switch {
+	case len(args) == 1 && !self:
+		token = args[0]
+	case len(args) != 0 && self:
 		flags.Usage()
 		c.Ui.Error(fmt.Sprintf(
-			"\ntoken-revoke expects one argument"))
+			"\ntoken-revoke expects no arguments when revoking self"))
+		return 1
+	case len(args) != 1 && !self:
+		flags.Usage()
+		c.Ui.Error(fmt.Sprintf(
+			"\ntoken-revoke expects one argument or the 'self' flag"))
 		return 1
 	}
 
-	token := args[0]
-
 	client, err := c.Client()
 	if err != nil {
 		c.Ui.Error(fmt.Sprintf(
@@ -43,14 +52,22 @@ func (c *TokenRevokeCommand) Run(args []string) int {
 	var fn func(string) error
 	// Handle all 6 possible combinations
 	switch {
-	case !accessor && mode == "":
+	case !accessor && self && mode == "":
+		fn = client.Auth().Token().RevokeSelf
+	case !accessor && !self && mode == "":
 		fn = client.Auth().Token().RevokeTree
-	case !accessor && mode == "orphan":
+	case !accessor && !self && mode == "orphan":
 		fn = client.Auth().Token().RevokeOrphan
-	case !accessor && mode == "path":
+	case !accessor && !self && mode == "path":
 		fn = client.Sys().RevokePrefix
-	case accessor && mode == "":
+	case accessor && !self && mode == "":
 		fn = client.Auth().Token().RevokeAccessor
+	case accessor && self:
+		c.Ui.Error("token-revoke cannot be run on self when 'accessor' flag is set")
+		return 1
+	case self && mode != "":
+		c.Ui.Error("token-revoke cannot be run on self when 'mode' flag is set")
+		return 1
 	case accessor && mode == "orphan":
 		c.Ui.Error("token-revoke cannot be run for 'orphan' mode when 'accessor' flag is set")
 		return 1
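The new `-self` handling above maps directly onto the Go API client calls selected in the switch. A rough sketch of revoking the calling token programmatically, assuming the `github.com/hashicorp/vault/api` client of this era with the address and token supplied through environment variables:

```go
package main

import (
	"log"
	"os"

	"github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig honors VAULT_ADDR; the token is set explicitly here.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	client.SetToken(os.Getenv("VAULT_TOKEN"))

	// Equivalent of `vault token-revoke -self`: the CLI leaves token empty
	// in the -self case and calls RevokeSelf, which revokes the token
	// attached to the client.
	if err := client.Auth().Token().RevokeSelf(""); err != nil {
		log.Fatal(err)
	}
}
```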
@ -99,7 +116,7 @@ Usage: vault token-revoke [options] [token|accessor]
|
|||
Token can be revoked using the token accessor. This can be done by
|
||||
setting the '-accessor' flag. Note that when '-accessor' flag is set,
|
||||
'-mode' should not be set for 'orphan' or 'path'. This is because,
|
||||
a token accessor always revokes the token along with it's child tokens.
|
||||
a token accessor always revokes the token along with its child tokens.
|
||||
|
||||
General Options:
|
||||
` + meta.GeneralOptionsUsage() + `
|
||||
|
@ -110,6 +127,8 @@ Token Options:
|
|||
via '/auth/token/lookup-accessor/<accessor>' endpoint.
|
||||
Accessor is used when there is no access to token ID.
|
||||
|
||||
-self A boolean flag, if set, the operation is performed on the currently
|
||||
authenticated token i.e. lookup-self.
|
||||
|
||||
-mode=value The type of revocation to do. See the documentation
|
||||
above for more information.
|
||||
|
|
|
@ -23,7 +23,7 @@ func (c *UnmountCommand) Run(args []string) int {
|
|||
if len(args) != 1 {
|
||||
flags.Usage()
|
||||
c.Ui.Error(fmt.Sprintf(
|
||||
"\nUnmount expects one argument: the path to unmount"))
|
||||
"\nunmount expects one argument: the path to unmount"))
|
||||
return 1
|
||||
}
|
||||
|
||||
|
|
|
@ -97,7 +97,7 @@ func (c *UnsealCommand) Run(args []string) int {
|
|||
}
|
||||
|
||||
func (c *UnsealCommand) Synopsis() string {
|
||||
return "Unseals the vault server"
|
||||
return "Unseals the Vault server"
|
||||
}
|
||||
|
||||
func (c *UnsealCommand) Help() string {
|
||||
|
@ -105,7 +105,7 @@ func (c *UnsealCommand) Help() string {
|
|||
Usage: vault unseal [options] [key]
|
||||
|
||||
Unseal the vault by entering a portion of the master key. Once all
|
||||
portions are entered, the Vault will be unsealed.
|
||||
portions are entered, the vault will be unsealed.
|
||||
|
||||
Every Vault server initially starts as sealed. It cannot perform any
|
||||
operation except unsealing until it is sealed. Secrets cannot be accessed
|
||||
|
|
|
@ -37,7 +37,7 @@ func (c *UnwrapCommand) Run(args []string) int {
|
|||
case 1:
|
||||
tokenID = args[0]
|
||||
default:
|
||||
c.Ui.Error("Unwrap expects zero or one argument (the ID of the wrapping token)")
|
||||
c.Ui.Error("unwrap expects zero or one argument (the ID of the wrapping token)")
|
||||
flags.Usage()
|
||||
return 1
|
||||
}
|
||||
|
|
|
@ -52,7 +52,7 @@ func IPBelongsToCIDRBlocksString(ipAddr string, cidrList, separator string) (boo
|
|||
return false, fmt.Errorf("invalid IP address")
|
||||
}
|
||||
|
||||
return IPBelongsToCIDRBlocksSlice(ipAddr, strutil.ParseDedupAndSortStrings(cidrList, separator))
|
||||
return IPBelongsToCIDRBlocksSlice(ipAddr, strutil.ParseDedupLowercaseAndSortStrings(cidrList, separator))
|
||||
}
|
||||
|
||||
// IPBelongsToCIDRBlocksSlice checks if the given IP is encompassed by any of the given
|
||||
|
@ -95,7 +95,7 @@ func ValidateCIDRListString(cidrList string, separator string) (bool, error) {
|
|||
return false, fmt.Errorf("missing separator")
|
||||
}
|
||||
|
||||
return ValidateCIDRListSlice(strutil.ParseDedupAndSortStrings(cidrList, separator))
|
||||
return ValidateCIDRListSlice(strutil.ParseDedupLowercaseAndSortStrings(cidrList, separator))
|
||||
}
|
||||
|
||||
// ValidateCIDRListSlice checks if the given list of CIDR blocks are valid
|
||||
|
|
|
@@ -9,6 +9,7 @@ import (
 	"strings"
 
 	"github.com/hashicorp/vault/helper/jsonutil"
+	"github.com/mitchellh/mapstructure"
 )
 
 // Builder is a struct to build a key/value mapping based on a list
@@ -107,6 +108,17 @@ func (b *Builder) add(raw string) error {
 		}
 	}
 
+	// Repeated keys will be converted into a slice
+	if existingValue, ok := b.result[key]; ok {
+		var sliceValue []interface{}
+		if err := mapstructure.WeakDecode(existingValue, &sliceValue); err != nil {
+			return err
+		}
+		sliceValue = append(sliceValue, value)
+		b.result[key] = sliceValue
+		return nil
+	}
+
 	b.result[key] = value
 	return nil
 }
|
|
@ -85,3 +85,36 @@ func TestBuilder_stdinTwice(t *testing.T) {
|
|||
t.Fatal("should error")
|
||||
}
|
||||
}
|
||||
|
||||
func TestBuilder_sameKeyTwice(t *testing.T) {
|
||||
var b Builder
|
||||
err := b.Add("foo=bar", "foo=baz")
|
||||
if err != nil {
|
||||
t.Fatalf("err: %s", err)
|
||||
}
|
||||
|
||||
expected := map[string]interface{}{
|
||||
"foo": []interface{}{"bar", "baz"},
|
||||
}
|
||||
actual := b.Map()
|
||||
if !reflect.DeepEqual(actual, expected) {
|
||||
t.Fatalf("bad: %#v", actual)
|
||||
}
|
||||
}
|
||||
|
||||
func TestBuilder_sameKeyMultipleTimes(t *testing.T) {
|
||||
var b Builder
|
||||
err := b.Add("foo=bar", "foo=baz", "foo=bay", "foo=bax", "bar=baz")
|
||||
if err != nil {
|
||||
t.Fatalf("err: %s", err)
|
||||
}
|
||||
|
||||
expected := map[string]interface{}{
|
||||
"foo": []interface{}{"bar", "baz", "bay", "bax"},
|
||||
"bar": "baz",
|
||||
}
|
||||
actual := b.Map()
|
||||
if !reflect.DeepEqual(actual, expected) {
|
||||
t.Fatalf("bad: %#v", actual)
|
||||
}
|
||||
}
|
||||
|
|
|
@@ -61,7 +61,7 @@ func SanitizePolicies(policies []string, addDefault bool) []string {
 		policies = append(policies, "default")
 	}
 
-	return strutil.RemoveDuplicates(policies)
+	return strutil.RemoveDuplicates(policies, true)
 }
 
 // EquivalentPolicies checks whether the given policy sets are equivalent, as in,
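Because `RemoveDuplicates` now takes an explicit lowercase flag, policy sanitization passes `true` to keep its existing case-insensitive behavior. A hedged usage sketch, assuming the helper lives at `helper/policyutil` as suggested by this file and that the `addDefault` branch appends "default" for a list like this one:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/vault/helper/policyutil"
)

func main() {
	// Mixed-case and padded duplicates collapse into a lowercased,
	// deduplicated set; addDefault=true adds "default" when applicable.
	out := policyutil.SanitizePolicies([]string{"Admins", "admins", "dev "}, true)
	fmt.Println(out) // e.g. [admins default dev]
}
```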
@@ -32,14 +32,14 @@ func StrListSubset(super, sub []string) bool {
 // Parses a comma separated list of strings into a slice of strings.
 // The return slice will be sorted and will not contain duplicate or
 // empty items. The values will be converted to lower case.
-func ParseDedupAndSortStrings(input string, sep string) []string {
+func ParseDedupLowercaseAndSortStrings(input string, sep string) []string {
 	input = strings.TrimSpace(input)
 	parsed := []string{}
 	if input == "" {
 		// Don't return nil
 		return parsed
 	}
-	return RemoveDuplicates(strings.Split(input, sep))
+	return RemoveDuplicates(strings.Split(input, sep), true)
 }
 
 // Parses a comma separated list of `<key>=<value>` tuples into a
@@ -49,7 +49,7 @@ func ParseKeyValues(input string, out map[string]string, sep string) error {
 		return fmt.Errorf("'out is nil")
 	}
 
-	keyValues := ParseDedupAndSortStrings(input, sep)
+	keyValues := ParseDedupLowercaseAndSortStrings(input, sep)
 	if len(keyValues) == 0 {
 		return nil
 	}
@@ -174,19 +174,31 @@ func ParseArbitraryStringSlice(input string, sep string) []string {
 	return ret
 }
 
-// Removes duplicate and empty elements from a slice of strings.
-// This also converts the items in the slice to lower case and
-// returns a sorted slice.
-func RemoveDuplicates(items []string) []string {
+// TrimStrings takes a slice of strings and returns a slice of strings
+// with trimmed spaces
+func TrimStrings(items []string) []string {
+	ret := make([]string, len(items))
+	for i, item := range items {
+		ret[i] = strings.TrimSpace(item)
+	}
+	return ret
+}
+
+// Removes duplicate and empty elements from a slice of strings. This also may
+// convert the items in the slice to lower case and returns a sorted slice.
+func RemoveDuplicates(items []string, lowercase bool) []string {
 	itemsMap := map[string]bool{}
 	for _, item := range items {
-		item = strings.ToLower(strings.TrimSpace(item))
+		item = strings.TrimSpace(item)
+		if lowercase {
+			item = strings.ToLower(item)
+		}
 		if item == "" {
 			continue
 		}
 		itemsMap[item] = true
 	}
-	items = []string{}
+	items = make([]string, 0, len(itemsMap))
 	for item, _ := range itemsMap {
 		items = append(items, item)
 	}
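A quick sketch of the two helpers after this change, using the signatures shown in the hunk above (package path `helper/strutil` assumed); the `lowercase` flag is the only new behavior knob:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/vault/helper/strutil"
)

func main() {
	in := []string{" Foo", "foo", "BAR ", ""}

	// lowercase=true keeps the old behavior: trim, lowercase, dedupe, sort.
	fmt.Println(strutil.RemoveDuplicates(in, true)) // [bar foo]

	// lowercase=false preserves case, so "Foo" and "foo" stay distinct.
	fmt.Println(strutil.RemoveDuplicates(in, false)) // e.g. [BAR Foo foo]

	// TrimStrings only strips surrounding whitespace, keeping order and case.
	fmt.Println(strutil.TrimStrings(in)) // ["Foo" "foo" "BAR" ""]
}
```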
@ -315,3 +315,12 @@ func TestGlobbedStringsMatch(t *testing.T) {
|
|||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestTrimStrings(t *testing.T) {
|
||||
input := []string{"abc", "123", "abcd ", "123 "}
|
||||
expected := []string{"abc", "123", "abcd", "123"}
|
||||
actual := TrimStrings(input)
|
||||
if !reflect.DeepEqual(expected, actual) {
|
||||
t.Fatalf("Bad TrimStrings: expected:%#v, got:%#v", expected, actual)
|
||||
}
|
||||
}
|
||||
|
|
|
@ -274,6 +274,7 @@ func requestAuth(core *vault.Core, r *http.Request, req *logical.Request) *logic
|
|||
te, err := core.LookupToken(v)
|
||||
if err == nil && te != nil {
|
||||
req.ClientTokenAccessor = te.Accessor
|
||||
req.ClientTokenRemainingUses = te.NumUses
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@ -80,6 +80,7 @@ func TestSysMounts_headerAuth(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -89,6 +90,7 @@ func TestSysMounts_headerAuth(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -98,6 +100,7 @@ func TestSysMounts_headerAuth(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -108,6 +111,7 @@ func TestSysMounts_headerAuth(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -117,6 +121,7 @@ func TestSysMounts_headerAuth(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -126,6 +131,7 @@ func TestSysMounts_headerAuth(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
|
|
@ -120,7 +120,7 @@ func handleSysGenerateRootUpdate(core *vault.Core) http.Handler {
|
|||
if req.Key == "" {
|
||||
respondError(
|
||||
w, http.StatusBadRequest,
|
||||
errors.New("'key' must specified in request body as JSON"))
|
||||
errors.New("'key' must be specified in request body as JSON"))
|
||||
return
|
||||
}
|
||||
|
||||
|
|
|
@ -32,6 +32,7 @@ func TestSysMounts(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -41,6 +42,7 @@ func TestSysMounts(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -50,6 +52,7 @@ func TestSysMounts(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -60,6 +63,7 @@ func TestSysMounts(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -69,6 +73,7 @@ func TestSysMounts(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -78,6 +83,7 @@ func TestSysMounts(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -119,6 +125,7 @@ func TestSysMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -128,6 +135,7 @@ func TestSysMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -137,6 +145,7 @@ func TestSysMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -146,6 +155,7 @@ func TestSysMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -156,6 +166,7 @@ func TestSysMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -165,6 +176,7 @@ func TestSysMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -174,6 +186,7 @@ func TestSysMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -183,6 +196,7 @@ func TestSysMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -246,6 +260,7 @@ func TestSysRemount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -255,6 +270,7 @@ func TestSysRemount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -264,6 +280,7 @@ func TestSysRemount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -273,6 +290,7 @@ func TestSysRemount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -283,6 +301,7 @@ func TestSysRemount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -292,6 +311,7 @@ func TestSysRemount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -301,6 +321,7 @@ func TestSysRemount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -310,6 +331,7 @@ func TestSysRemount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -354,6 +376,7 @@ func TestSysUnmount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -363,6 +386,7 @@ func TestSysUnmount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -372,6 +396,7 @@ func TestSysUnmount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -382,6 +407,7 @@ func TestSysUnmount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -391,6 +417,7 @@ func TestSysUnmount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -400,6 +427,7 @@ func TestSysUnmount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -441,6 +469,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -450,6 +479,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -459,6 +489,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -468,6 +499,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -478,6 +510,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -487,6 +520,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -496,6 +530,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -505,6 +540,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -567,6 +603,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("259196400"),
|
||||
"max_lease_ttl": json.Number("259200000"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -576,6 +613,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -585,6 +623,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -594,6 +633,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -604,6 +644,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("259196400"),
|
||||
"max_lease_ttl": json.Number("259200000"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -613,6 +654,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -622,6 +664,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": false,
|
||||
},
|
||||
|
@ -631,6 +674,7 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"config": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("0"),
|
||||
"max_lease_ttl": json.Number("0"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"local": true,
|
||||
},
|
||||
|
@ -656,9 +700,11 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"data": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("259196400"),
|
||||
"max_lease_ttl": json.Number("259200000"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"default_lease_ttl": json.Number("259196400"),
|
||||
"max_lease_ttl": json.Number("259200000"),
|
||||
"force_no_cache": false,
|
||||
}
|
||||
|
||||
testResponseStatus(t, resp, 200)
|
||||
|
@ -687,9 +733,11 @@ func TestSysTuneMount(t *testing.T) {
|
|||
"data": map[string]interface{}{
|
||||
"default_lease_ttl": json.Number("40"),
|
||||
"max_lease_ttl": json.Number("80"),
|
||||
"force_no_cache": false,
|
||||
},
|
||||
"default_lease_ttl": json.Number("40"),
|
||||
"max_lease_ttl": json.Number("80"),
|
||||
"force_no_cache": false,
|
||||
}
|
||||
|
||||
testResponseStatus(t, resp, 200)
|
||||
|
|
|
@ -48,6 +48,10 @@ func TestSysMountConfig(t *testing.T) {
|
|||
t.Fatalf("Expected default lease TTL: %d, got %d",
|
||||
expectedMaxTTL, mountConfig.MaxLeaseTTL)
|
||||
}
|
||||
|
||||
if mountConfig.ForceNoCache == true {
|
||||
t.Fatalf("did not expect force cache")
|
||||
}
|
||||
}
|
||||
|
||||
// testMount sets up a test mount of a generic backend w/ a random path; caller
|
||||
|
|
|
@ -168,7 +168,7 @@ func handleSysRekeyUpdate(core *vault.Core, recovery bool) http.Handler {
|
|||
if req.Key == "" {
|
||||
respondError(
|
||||
w, http.StatusBadRequest,
|
||||
errors.New("'key' must specified in request body as JSON"))
|
||||
errors.New("'key' must be specified in request body as JSON"))
|
||||
return
|
||||
}
|
||||
|
||||
|
|
|
@ -88,7 +88,7 @@ func handleSysUnseal(core *vault.Core) http.Handler {
|
|||
if !req.Reset && req.Key == "" {
|
||||
respondError(
|
||||
w, http.StatusBadRequest,
|
||||
errors.New("'key' must specified in request body as JSON, or 'reset' set to true"))
|
||||
errors.New("'key' must be specified in request body as JSON, or 'reset' set to true"))
|
||||
return
|
||||
}
|
||||
|
||||
|
|
|
@ -13,9 +13,9 @@ import (
|
|||
log "github.com/mgutz/logxi/v1"
|
||||
|
||||
"github.com/hashicorp/go-multierror"
|
||||
"github.com/hashicorp/vault/helper/parseutil"
|
||||
"github.com/hashicorp/vault/helper/errutil"
|
||||
"github.com/hashicorp/vault/helper/logformat"
|
||||
"github.com/hashicorp/vault/helper/parseutil"
|
||||
"github.com/hashicorp/vault/logical"
|
||||
)
|
||||
|
||||
|
@ -587,6 +587,10 @@ func (t FieldType) Zero() interface{} {
|
|||
return map[string]interface{}{}
|
||||
case TypeDurationSecond:
|
||||
return 0
|
||||
case TypeSlice:
|
||||
return []interface{}{}
|
||||
case TypeStringSlice, TypeCommaStringSlice:
|
||||
return []string{}
|
||||
default:
|
||||
panic("unknown type: " + t.String())
|
||||
}
|
||||
|
|
|
@@ -5,6 +5,7 @@ import (
 	"fmt"
 
 	"github.com/hashicorp/vault/helper/parseutil"
+	"github.com/hashicorp/vault/helper/strutil"
 	"github.com/mitchellh/mapstructure"
 )
 
@@ -30,7 +31,8 @@ func (d *FieldData) Validate() error {
 		}
 
 		switch schema.Type {
-		case TypeBool, TypeInt, TypeMap, TypeDurationSecond, TypeString:
+		case TypeBool, TypeInt, TypeMap, TypeDurationSecond, TypeString, TypeSlice,
+			TypeStringSlice, TypeCommaStringSlice:
 			_, _, err := d.getPrimitive(field, schema)
 			if err != nil {
 				return fmt.Errorf("Error converting input %v for field %s: %s", value, field, err)
@@ -105,7 +107,8 @@ func (d *FieldData) GetOkErr(k string) (interface{}, bool, error) {
 	}
 
 	switch schema.Type {
-	case TypeBool, TypeInt, TypeMap, TypeDurationSecond, TypeString:
+	case TypeBool, TypeInt, TypeMap, TypeDurationSecond, TypeString,
+		TypeSlice, TypeStringSlice, TypeCommaStringSlice:
 		return d.getPrimitive(k, schema)
 	default:
 		return nil, false,
@@ -177,6 +180,36 @@ func (d *FieldData) getPrimitive(
 		}
 		return result, true, nil
 
+	case TypeSlice:
+		var result []interface{}
+		if err := mapstructure.WeakDecode(raw, &result); err != nil {
+			return nil, true, err
+		}
+		return result, true, nil
+
+	case TypeStringSlice:
+		var result []string
+		if err := mapstructure.WeakDecode(raw, &result); err != nil {
+			return nil, true, err
+		}
+		return strutil.TrimStrings(result), true, nil
+
+	case TypeCommaStringSlice:
+		var result []string
+		config := &mapstructure.DecoderConfig{
+			Result:           &result,
+			WeaklyTypedInput: true,
+			DecodeHook:       mapstructure.StringToSliceHookFunc(","),
+		}
+		decoder, err := mapstructure.NewDecoder(config)
+		if err != nil {
+			return nil, false, err
+		}
+		if err := decoder.Decode(raw); err != nil {
+			return nil, false, err
+		}
+		return strutil.TrimStrings(result), true, nil
+
 	default:
 		panic(fmt.Sprintf("Unknown type: %s", schema.Type))
 	}
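To illustrate what the new field types buy a backend author, here is a hedged sketch of decoding the same parameter sent either as a comma-separated string or as a list, using the `logical/framework` types from this hunk; the `FieldData` literal mirrors how the package's own tests construct it, and the `allowed_roles` field name is purely illustrative:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/vault/logical/framework"
)

func main() {
	schema := map[string]*framework.FieldSchema{
		"allowed_roles": &framework.FieldSchema{Type: framework.TypeCommaStringSlice},
	}

	// A comma-separated string...
	d1 := &framework.FieldData{
		Raw:    map[string]interface{}{"allowed_roles": "web, api ,worker"},
		Schema: schema,
	}
	fmt.Println(d1.Get("allowed_roles")) // [web api worker]

	// ...and a real list both come back as a trimmed []string.
	d2 := &framework.FieldData{
		Raw:    map[string]interface{}{"allowed_roles": []interface{}{"web", " api"}},
		Schema: schema,
	}
	fmt.Println(d2.Get("allowed_roles")) // [web api]
}
```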
@ -146,6 +146,105 @@ func TestFieldDataGet(t *testing.T) {
|
|||
"foo",
|
||||
0,
|
||||
},
|
||||
|
||||
"slice type, empty slice": {
|
||||
map[string]*FieldSchema{
|
||||
"foo": &FieldSchema{Type: TypeSlice},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"foo": []interface{}{},
|
||||
},
|
||||
"foo",
|
||||
[]interface{}{},
|
||||
},
|
||||
|
||||
"slice type, filled, mixed slice": {
|
||||
map[string]*FieldSchema{
|
||||
"foo": &FieldSchema{Type: TypeSlice},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"foo": []interface{}{123, "abc"},
|
||||
},
|
||||
"foo",
|
||||
[]interface{}{123, "abc"},
|
||||
},
|
||||
|
||||
"string slice type, filled slice": {
|
||||
map[string]*FieldSchema{
|
||||
"foo": &FieldSchema{Type: TypeStringSlice},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"foo": []interface{}{123, "abc"},
|
||||
},
|
||||
"foo",
|
||||
[]string{"123", "abc"},
|
||||
},
|
||||
|
||||
"comma string slice type, comma string with one value": {
|
||||
map[string]*FieldSchema{
|
||||
"foo": &FieldSchema{Type: TypeCommaStringSlice},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"foo": "value1",
|
||||
},
|
||||
"foo",
|
||||
[]string{"value1"},
|
||||
},
|
||||
|
||||
"comma string slice type, comma string with multi value": {
|
||||
map[string]*FieldSchema{
|
||||
"foo": &FieldSchema{Type: TypeCommaStringSlice},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"foo": "value1,value2,value3",
|
||||
},
|
||||
"foo",
|
||||
[]string{"value1", "value2", "value3"},
|
||||
},
|
||||
|
||||
"comma string slice type, nil string slice value": {
|
||||
map[string]*FieldSchema{
|
||||
"foo": &FieldSchema{Type: TypeCommaStringSlice},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"foo": "",
|
||||
},
|
||||
"foo",
|
||||
[]string{},
|
||||
},
|
||||
|
||||
"commma string slice type, string slice with one value": {
|
||||
map[string]*FieldSchema{
|
||||
"foo": &FieldSchema{Type: TypeCommaStringSlice},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"foo": []interface{}{"value1"},
|
||||
},
|
||||
"foo",
|
||||
[]string{"value1"},
|
||||
},
|
||||
|
||||
"comma string slice type, string slice with multi value": {
|
||||
map[string]*FieldSchema{
|
||||
"foo": &FieldSchema{Type: TypeCommaStringSlice},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"foo": []interface{}{"value1", "value2", "value3"},
|
||||
},
|
||||
"foo",
|
||||
[]string{"value1", "value2", "value3"},
|
||||
},
|
||||
|
||||
"comma string slice type, empty string slice value": {
|
||||
map[string]*FieldSchema{
|
||||
"foo": &FieldSchema{Type: TypeCommaStringSlice},
|
||||
},
|
||||
map[string]interface{}{
|
||||
"foo": []interface{}{},
|
||||
},
|
||||
"foo",
|
||||
[]string{},
|
||||
},
|
||||
}
|
||||
|
||||
for name, tc := range cases {
|
||||
|
|
|
@ -13,6 +13,16 @@ const (
|
|||
// TypeDurationSecond represent as seconds, this can be either an
|
||||
// integer or go duration format string (e.g. 24h)
|
||||
TypeDurationSecond
|
||||
|
||||
// TypeSlice represents a slice of any type
|
||||
TypeSlice
|
||||
// TypeStringSlice is a helper for TypeSlice that returns a sanitized
|
||||
// slice of strings
|
||||
TypeStringSlice
|
||||
// TypeCommaStringSlice is a helper for TypeSlice that returns a sanitized
|
||||
// slice of strings and also supports parsing a comma-separated list in
|
||||
// a string field
|
||||
TypeCommaStringSlice
|
||||
)
|
||||
|
||||
func (t FieldType) String() string {
|
||||
|
@ -27,6 +37,8 @@ func (t FieldType) String() string {
|
|||
return "map"
|
||||
case TypeDurationSecond:
|
||||
return "duration (sec)"
|
||||
case TypeSlice, TypeStringSlice, TypeCommaStringSlice:
|
||||
return "slice"
|
||||
default:
|
||||
return "unknown type"
|
||||
}
|
||||
|
|
|
@ -82,9 +82,18 @@ type Request struct {
|
|||
// request path with the MountPoint trimmed off.
|
||||
MountPoint string `json:"mount_point" structs:"mount_point" mapstructure:"mount_point"`
|
||||
|
||||
// MountType is provided so that a logical backend can make decisions
|
||||
// based on the specific mount type (e.g., if a mount type has different
|
||||
// aliases, generating different defaults depending on the alias)
|
||||
MountType string `json:"mount_type" structs:"mount_type" mapstructure:"mount_type"`
|
||||
|
||||
// WrapInfo contains requested response wrapping parameters
|
||||
WrapInfo *RequestWrapInfo `json:"wrap_info" structs:"wrap_info" mapstructure:"wrap_info"`
|
||||
|
||||
// ClientTokenRemainingUses represents the allowed number of uses left on the
|
||||
// token supplied
|
||||
ClientTokenRemainingUses int `json:"client_token_remaining_uses" structs:"client_token_remaining_uses" mapstructure:"client_token_remaining_uses"`
|
||||
|
||||
// For replication, contains the last WAL on the remote side after handling
|
||||
// the request, used for best-effort avoidance of stale read-after-write
|
||||
lastRemoteWAL uint64
|
||||
|
|
|
@ -12,7 +12,7 @@ import (
|
|||
|
||||
log "github.com/mgutz/logxi/v1"
|
||||
|
||||
"github.com/Azure/azure-sdk-for-go/storage"
|
||||
"github.com/Azure/azure-storage-go"
|
||||
"github.com/armon/go-metrics"
|
||||
"github.com/hashicorp/errwrap"
|
||||
)
|
||||
|
@ -59,12 +59,23 @@ func newAzureBackend(conf map[string]string, logger log.Logger) (Backend, error)
|
|||
}
|
||||
|
||||
client, err := storage.NewBasicClient(accountName, accountKey)
|
||||
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to create Azure client: %v", err)
|
||||
return nil, fmt.Errorf("failed to create Azure client: %v", err)
|
||||
}
|
||||
|
||||
client.GetBlobService().CreateContainerIfNotExists(container, storage.ContainerAccessTypePrivate)
|
||||
contObj := client.GetBlobService().GetContainerReference(container)
|
||||
created, err := contObj.CreateIfNotExists()
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to upsert container: %v", err)
|
||||
}
|
||||
if created {
|
||||
err = contObj.SetPermissions(storage.ContainerPermissions{
|
||||
AccessType: storage.ContainerAccessTypePrivate,
|
||||
}, 0, "")
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to set permissions on newly-created container: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
maxParStr, ok := conf["max_parallel"]
|
||||
var maxParInt int
|
||||
|
@ -156,7 +167,8 @@ func (a *AzureBackend) List(prefix string) ([]string, error) {
|
|||
a.permitPool.Acquire()
|
||||
defer a.permitPool.Release()
|
||||
|
||||
list, err := a.client.ListBlobs(a.container, storage.ListBlobsParameters{Prefix: prefix})
|
||||
contObj := a.client.GetContainerReference(a.container)
|
||||
list, err := contObj.ListBlobs(storage.ListBlobsParameters{Prefix: prefix})
|
||||
|
||||
if err != nil {
|
||||
// Break early.
|
||||
|
|
|
@ -9,7 +9,7 @@ import (
|
|||
"github.com/hashicorp/vault/helper/logformat"
|
||||
log "github.com/mgutz/logxi/v1"
|
||||
|
||||
"github.com/Azure/azure-sdk-for-go/storage"
|
||||
"github.com/Azure/azure-storage-go"
|
||||
)
|
||||
|
||||
func TestAzureBackend(t *testing.T) {
|
||||
|
@ -35,7 +35,8 @@ func TestAzureBackend(t *testing.T) {
|
|||
})
|
||||
|
||||
defer func() {
|
||||
cleanupClient.GetBlobService().DeleteContainerIfExists(container)
|
||||
contObj := cleanupClient.GetBlobService().GetContainerReference(container)
|
||||
contObj.DeleteIfExists()
|
||||
}()
|
||||
|
||||
if err != nil {
|
||||
|
|
|
@ -233,10 +233,12 @@ func newConsulBackend(conf map[string]string, logger log.Logger) (Backend, error
|
|||
kv: client.KV(),
|
||||
permitPool: NewPermitPool(maxParInt),
|
||||
serviceName: service,
|
||||
serviceTags: strutil.ParseDedupAndSortStrings(tags, ","),
|
||||
serviceTags: strutil.ParseDedupLowercaseAndSortStrings(tags, ","),
|
||||
checkTimeout: checkTimeout,
|
||||
disableRegistration: disableRegistration,
|
||||
consistencyMode: consistencyMode,
|
||||
notifyActiveCh: make(chan notifyEvent),
|
||||
notifySealedCh: make(chan notifyEvent),
|
||||
}
|
||||
return c, nil
|
||||
}
|
||||
|
@ -321,6 +323,9 @@ func (c *ConsulBackend) Transaction(txns []TxnEntry) error {
|
|||
ops = append(ops, cop)
|
||||
}
|
||||
|
||||
c.permitPool.Acquire()
|
||||
defer c.permitPool.Release()
|
||||
|
||||
ok, resp, _, err := c.kv.Txn(ops, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
|
|
|
@@ -3,7 +3,10 @@ package physical
 import (
 	"context"
 	"errors"
+	"fmt"
+	"net/url"
+	"os"
 	"strings"
 
 	"github.com/coreos/etcd/client"
 	"github.com/coreos/go-semver/semver"
@@ -13,6 +16,7 @@ import (
 var (
 	EtcdSyncConfigError         = errors.New("client setup failed: unable to parse etcd sync field in config")
 	EtcdSyncClusterError        = errors.New("client setup failed: unable to sync etcd cluster")
+	EtcdMultipleBootstrapError  = errors.New("client setup failed: multiple discovery or bootstrap flags specified, use either \"address\" or \"discovery_srv\"")
 	EtcdAddressError            = errors.New("client setup failed: address must be valid URL (ex. 'scheme://host:port')")
 	EtcdSemaphoreKeysEmptyError = errors.New("lock queue is empty")
 	EtcdLockHeldError           = errors.New("lock already held")
@@ -95,3 +99,47 @@ func getEtcdAPIVersion(c client.Client) (string, error) {
 
 	return "3", nil
 }
+
+// Retrieves the config option in order of priority:
+//  1. The named environment variable if it exists
+//  2. The key in the config map
+func getEtcdOption(conf map[string]string, confKey, envVar string) (string, bool) {
+	confVal, inConf := conf[confKey]
+	envVal, inEnv := os.LookupEnv(envVar)
+	if inEnv {
+		return envVal, true
+	}
+	return confVal, inConf
+}
+
+func getEtcdEndpoints(conf map[string]string) ([]string, error) {
+	address, staticBootstrap := getEtcdOption(conf, "address", "ETCD_ADDR")
+	domain, useSrv := getEtcdOption(conf, "discovery_srv", "ETCD_DISCOVERY_SRV")
+	if useSrv && staticBootstrap {
+		return nil, EtcdMultipleBootstrapError
+	}
+
+	if staticBootstrap {
+		endpoints := strings.Split(address, Etcd2MachineDelimiter)
+		// Verify that the machines are valid URLs
+		for _, e := range endpoints {
+			u, urlErr := url.Parse(e)
+			if urlErr != nil || u.Scheme == "" {
+				return nil, EtcdAddressError
+			}
+		}
+		return endpoints, nil
+	}
+
+	if useSrv {
+		discoverer := client.NewSRVDiscover()
+		endpoints, err := discoverer.Discover(domain)
+		if err != nil {
+			return nil, fmt.Errorf("failed to discover etcd endpoints through SRV discovery: %v", err)
+		}
+		return endpoints, nil
+	}
+
+	// Set a default endpoints list if no option was set
+	return []string{"http://127.0.0.1:2379"}, nil
+}
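As written above, an environment variable always wins over its config key, and setting both a static address and discovery_srv (from either source) is rejected with EtcdMultipleBootstrapError. The SRV path delegates the actual lookup to the discoverer from github.com/coreos/etcd/client; for reference, the sketch below illustrates roughly what such DNS SRV discovery does using only the standard library. The service names and URL schemes here are assumptions for illustration, not taken from the discoverer's source, and the domain is a placeholder.

package main

import (
	"fmt"
	"net"
)

// lookupEtcdClientEndpoints resolves SRV records under a domain and turns
// each target/port pair into a client URL. Illustrative stand-in only.
func lookupEtcdClientEndpoints(domain string) ([]string, error) {
	var endpoints []string

	// Assumed record layout: _etcd-client-ssl._tcp.<domain> -> https,
	// _etcd-client._tcp.<domain> -> http.
	for _, svc := range []struct{ name, scheme string }{
		{"etcd-client-ssl", "https"},
		{"etcd-client", "http"},
	} {
		_, addrs, err := net.LookupSRV(svc.name, "tcp", domain)
		if err != nil {
			// A missing record set is not fatal; try the next service name.
			continue
		}
		for _, srv := range addrs {
			endpoints = append(endpoints, fmt.Sprintf("%s://%s:%d", svc.scheme, srv.Target, srv.Port))
		}
	}

	if len(endpoints) == 0 {
		return nil, fmt.Errorf("no etcd SRV records found for %q", domain)
	}
	return endpoints, nil
}

func main() {
	endpoints, err := lookupEtcdClientEndpoints("example.com")
	if err != nil {
		fmt.Println("discovery failed:", err)
		return
	}
	fmt.Println(endpoints)
}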
@@ -4,7 +4,6 @@ import (
 	"context"
 	"encoding/base64"
 	"fmt"
-	"net/url"
 	"os"
 	"path/filepath"
 	"strconv"
@@ -118,23 +117,9 @@ func newEtcd2Backend(conf map[string]string, logger log.Logger) (Backend, error)
 }
 
 func newEtcdV2Client(conf map[string]string) (client.Client, error) {
-	// Set a default machines list and check for an overriding address value.
-	machines := "http://127.0.0.1:2379"
-	if address, ok := conf["address"]; ok {
-		machines = address
-	}
-	machinesEnv := os.Getenv("ETCD_ADDR")
-	if machinesEnv != "" {
-		machines = machinesEnv
-	}
-	machinesParsed := strings.Split(machines, Etcd2MachineDelimiter)
-
-	// Verify that the machines are valid URLs
-	for _, machine := range machinesParsed {
-		u, urlErr := url.Parse(machine)
-		if urlErr != nil || u.Scheme == "" {
-			return nil, EtcdAddressError
-		}
-	}
+	endpoints, err := getEtcdEndpoints(conf)
+	if err != nil {
+		return nil, err
+	}
 
 	// Create a new client from the supplied address and attempt to sync with the
@@ -160,7 +145,7 @@ func newEtcdV2Client(conf map[string]string) (client.Client, error) {
 	}
 
 	cfg := client.Config{
-		Endpoints: machinesParsed,
+		Endpoints: endpoints,
 		Transport: cTransport,
 	}
 
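With the endpoint resolution factored out, the v2 client construction only has to consume the resolved list. A condensed sketch of building and syncing an etcd v2 client from such a list is shown below; the endpoint value is a placeholder, and the transport/timeout choices are illustrative rather than Vault's actual configuration.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/client"
)

func main() {
	// Endpoints as returned by a resolver like getEtcdEndpoints; placeholder value.
	endpoints := []string{"http://127.0.0.1:2379"}

	cfg := client.Config{
		Endpoints:               endpoints,
		Transport:               client.DefaultTransport,
		HeaderTimeoutPerRequest: time.Second,
	}
	c, err := client.New(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Optionally sync the member list so later requests can fail over.
	if err := c.Sync(context.Background()); err != nil {
		log.Printf("sync failed (cluster may be unreachable): %v", err)
	}
	fmt.Println("configured endpoints:", c.Endpoints())
}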
@@ -32,6 +32,9 @@ type EtcdBackend struct {
 	etcd *clientv3.Client
 }
 
+// etcd default lease duration is 60s. set to 15s for faster recovery.
+const etcd3LockTimeoutInSeconds = 15
+
 // newEtcd3Backend constructs an etcd3 backend.
 func newEtcd3Backend(conf map[string]string, logger log.Logger) (Backend, error) {
 	// Get the etcd path from the configuration.
@@ -45,10 +48,9 @@ func newEtcd3Backend(conf map[string]string, logger log.Logger) (Backend, error)
 		path = "/" + path
 	}
 
-	// Set a default machines list and check for an overriding address value.
-	endpoints := []string{"http://127.0.0.1:2379"}
-	if address, ok := conf["address"]; ok {
-		endpoints = strings.Split(address, ",")
+	endpoints, err := getEtcdEndpoints(conf)
+	if err != nil {
+		return nil, err
 	}
 
 	cfg := clientv3.Config{
@@ -228,7 +230,7 @@ type EtcdLock struct {
 
 // Lock is used for mutual exclusion based on the given key.
 func (c *EtcdBackend) LockWith(key, value string) (Lock, error) {
-	session, err := concurrency.NewSession(c.etcd)
+	session, err := concurrency.NewSession(c.etcd, concurrency.WithTTL(etcd3LockTimeoutInSeconds))
 	if err != nil {
 		return nil, err
 	}
@@ -262,7 +264,7 @@ func (c *EtcdLock) Lock(stopCh <-chan struct{}) (<-chan struct{}, error) {
 		}
 		return nil, err
 	}
-	if _, err := c.etcd.Put(ctx, c.etcdMu.Key(), c.value); err != nil {
+	if _, err := c.etcd.Put(ctx, c.etcdMu.Key(), c.value, clientv3.WithLease(c.etcdSession.Lease())); err != nil {
 		return nil, err
 	}
 
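Binding the value key to the session lease is what makes the 15-second TTL effective: if a node dies without calling Unlock, the lease expires and the mutex key and value key disappear together instead of lingering for etcd's 60-second default. The sketch below condenses that pattern with clientv3/concurrency; the endpoint, lock key, and value are placeholders, the import paths are those vendored in this era of the tree, and it is an illustration of the technique rather than the backend's actual lock type.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
)

const lockTTLSeconds = 15 // shorter than etcd's 60s default for faster recovery

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"http://127.0.0.1:2379"}})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// The session owns a lease; if this process dies, the lease (and the
	// lock held under it) expires after roughly lockTTLSeconds.
	session, err := concurrency.NewSession(cli, concurrency.WithTTL(lockTTLSeconds))
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	mu := concurrency.NewMutex(session, "/example/lock")
	ctx := context.Background()
	if err := mu.Lock(ctx); err != nil {
		log.Fatal(err)
	}

	// Write the holder's value under the same lease so it is cleaned up with
	// the lock if the client disappears without unlocking.
	if _, err := cli.Put(ctx, mu.Key(), "held-by-node-1", clientv3.WithLease(session.Lease())); err != nil {
		log.Fatal(err)
	}
	fmt.Println("lock acquired at key", mu.Key())

	if err := mu.Unlock(ctx); err != nil {
		log.Fatal(err)
	}
}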
Some files were not shown because too many files have changed in this diff.