Add an updated `target_ec2_instances` module that is capable of
dynamically splitting target instances across subnets/AZs that are
compatible with the AMI architecture and an instance type suited to
that architecture. Use the `target_ec2_instances` module where
necessary. Ensure that `raft` storage scenarios don't provision
unnecessary infrastructure with a new `target_ec2_shim` module.
After a lot of trial and error, the state of EC2 spot instance
capacity, the associated APIs, and the current support for different
fleet types in the AWS Terraform provider have proven to make using
spot instances for scenario targets too unreliable.
The current state of each method:
* `target_ec2_fleet`: unusable because the `instant` type does not
guarantee fulfillment of either `spot` or `on-demand` instance request
types. The module does support both `on-demand` and `spot` request
types and is capable of bidding across a maximum of four availability
zones, which would make it an attractive choice if the `instant` type
always fulfilled requests. Perhaps a `request` type with a
`wait_for_fulfillment` option, like `aws_spot_fleet_request` has, would
make it more viable for future consideration.
* `target_ec2_spot_fleet`: more reliable when bidding for target
instances that have capacity in the chosen zone. Issues in the AWS
provider prevent us from bidding across multiple zones successfully.
Over the last 2-3 months target capacity for the instance types we'd
prefer to use has dropped dramatically, and the price is near or at the
on-demand price. The volatility for nearly no cost savings means we
should put this option on the shelf for now.
* `target_ec2_instances`: the most reliable method we've got. It is now
capable of automatically determining which subnets and availability
zones to provision targets in, and has been updated to be usable for
both Vault and Consul targets. By default we use the cheapest medium
instance types that we've found to be reliable for testing Vault; see
the sketch below.
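As a rough illustration of how a scenario consumes the module (the
input names here are illustrative, not the module's exact interface):

  module "vault_targets" {
    source = "./modules/target_ec2_instances"

    # The module reads the AMI's architecture, picks a compatible
    # instance type, and spreads the requested instances across the
    # subnets/AZs that can actually host that type.
    ami_id         = var.vault_ami_id
    instance_count = 3
    cluster_name   = "vault-cluster-targets"
  }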
* Update .gitignore
* enos/modules/create_vpc: create a subnet for every availability zone
(see the sketch after this list)
* enos/modules/target_ec2_fleet: bid across the maximum of four
availability zones for targets
* enos/modules/target_ec2_spot_fleet: attempt to make the spot fleet bid
across more availability zones for targets
* enos/modules/target_ec2_instances: create module to use
ec2:RunInstances for scenario targets
* enos/modules/target_ec2_shim: create shim module to satisfy the
target module interface
* enos/scenarios: use target_ec2_shim for backend targets on raft
storage scenarios
* enos/modules/az_finder: remove unused module
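A minimal sketch of the per-AZ subnet creation in `create_vpc`
(assuming the VPC CIDR is carved up with `cidrsubnet`; names are
illustrative):

  data "aws_availability_zones" "available" {
    state = "available"
  }

  resource "aws_subnet" "per_az" {
    for_each = toset(data.aws_availability_zones.available.names)

    vpc_id            = aws_vpc.vpc.id
    availability_zone = each.value
    # Carve a distinct block out of the VPC CIDR for each zone.
    cidr_block = cidrsubnet(var.cidr_block, 8,
      index(data.aws_availability_zones.available.names, each.value))
  }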
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Ryan Cragun <me@ryan.ec>
Determine the allocation pool size for the spot fleet from the
allocation strategy. This allows us to ensure a consistent attribute
plan during re-runs, which avoids rebuilding the target fleets.
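A sketch of the idea (variable names are hypothetical):

  resource "aws_spot_fleet_request" "targets" {
    iam_fleet_role      = var.fleet_role_arn
    target_capacity     = var.instance_count
    allocation_strategy = var.allocation_strategy

    # instance_pools_to_use_count only applies to the lowestPrice
    # strategy. Deriving it from the strategy keeps the planned value
    # identical across re-runs, so Terraform doesn't replace the fleet.
    instance_pools_to_use_count = var.allocation_strategy == "lowestPrice" ? length(var.subnet_ids) : 1

    launch_specification {
      ami           = var.ami_id
      instance_type = var.instance_type
      subnet_id     = var.subnet_ids[0]
    }
  }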
Signed-off-by: Ryan Cragun <me@ryan.ec>
The security groups that allow access to remote machines in Enos
scenarios have been configured to only allow port 22 (SSH) from the
public IP address of the machine executing the Enos scenario. To
achieve this we previously utilized the
`enos_environment.public_ip_address` attribute. Sometime in mid-March
we started seeing sporadic SSH i/o timeout errors when attempting to
execute Enos resources against SSH transport targets. We've only ever
seen this when communicating from Azure-hosted runners to AWS-hosted
machines.
While testing we were able to confirm that in some cases the public IP
address resolved using DNS over UDP4 against the Google and OpenDNS
name servers did not match what was resolved when using the HTTPS/TCP
IP address service hosted by AWS. The Enos data source was implemented
in a way that we'd attempt resolution against a single name server and
only try the next one if the previous name server could not produce a
result. We'd then allow-list that single IP address. That's a problem
if we can resolve two different public IP addresses depending on the
endpoint we ask.
This change utilizes the new `enos_environment.public_ip_addresses`
attribute and its accompanying behavior change. Now the data source
will attempt to resolve our public IP address via name servers hosted
by Google, OpenDNS, Cloudflare, and AWS. We then return the unique set
of these IP addresses and allow-list all of them in our security group.
It is our hope that this resolves the i/o timeout errors, which appear
to be caused by the security group black-holing our attempted access
because the IP we resolved does not match the one we're actually
exiting with.
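A minimal sketch of the new allow-listing (resource names are
illustrative):

  data "enos_environment" "localhost" {}

  resource "aws_security_group" "target" {
    vpc_id = var.vpc_id

    ingress {
      description = "SSH from every public IP address we resolved"
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      # Allow-list the unique set of resolved addresses rather than a
      # single, possibly inconsistent, answer from one name server.
      cidr_blocks = [
        for ip in data.enos_environment.localhost.public_ip_addresses :
        "${ip}/32"
      ]
    }
  }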
Signed-off-by: Ryan Cragun <me@ryan.ec>
The previous strategy for provisioning infrastructure targets was to
use the cheapest instances that could reliably perform as Vault cluster
nodes. With this change we introduce a new model for target node
infrastructure: we've replaced on-demand instances with a spot fleet.
While the spot price fluctuates based on dynamic pricing, capacity,
region, instance type, and platform, cost savings for our most common
combinations range between 20-70%.
This change only includes spot fleet targets for Vault clusters.
We'll be updating our Consul backend bidding in another PR.
* Create a new `vault_cluster` module that handles installing,
configuring, initializing, and unsealing Vault clusters (wired up in
the sketch after this list).
* Create a `target_ec2_instances` module that can provision a group of
instances on-demand.
* Create a `target_ec2_spot_fleet` module that can bid on a fleet of
spot instances.
* Extend every Enos scenario to utilize the spot fleet target acquisition
strategy and the `vault_cluster` module.
* Update our Enos CI modules to handle both the `aws-nuke` permissions
and also the privileges to provision spot fleets.
* Only use us-east-1 and us-west-2 in our scenario matrices, as costs
are lower than in us-west-1.
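Roughly, the wiring in a scenario looks like this (step and output
names are illustrative):

  scenario "smoke" {
    step "create_vault_cluster_targets" {
      module = module.target_ec2_spot_fleet
    }

    step "create_vault_cluster" {
      module = module.vault_cluster

      variables {
        # Hand the acquired instances to the module that installs,
        # configures, initializes, and unseals Vault.
        target_hosts = step.create_vault_cluster_targets.hosts
      }
    }
  }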
Signed-off-by: Ryan Cragun <me@ryan.ec>
* Adding an Enos test for undo logs
* fixing a typo
* feedback
* fixing typo
* running make fmt
* removing a dependency
* var name change
* fixing a variable
* fix builder
* fix product version
* adding required fields
* feedback
* add artifact bundle back
* fmt check
* point to correct instance
* minor fix
* feedback
* feedback
Introducing a new approach to testing Vault artifacts before merge
and after merge/notarization/signing. Rather than running a few static
scenarios across the artifacts, we now have the ability to run a
pseudo-random sample of scenarios across many different build
artifacts.
We've added 20 possible scenarios for the AMD64 and ARM64 binary
bundles, which we've broken into five test groups. On any given push to
a pull request branch, we will now choose a random test group and
execute its corresponding scenarios against the resulting build
artifacts. This gives us greater test coverage while letting us split
the verification across many different pull requests.
The post-merge release testing pipeline behaves in a similar fashion;
however, the artifacts that we use for testing have been notarized and
signed prior to testing. We've also reduced the number of groups so
that we run more scenarios after merging to a release branch.
We intend to take what we've learned building this in GitHub Actions
and roll it into an easier-to-use feature that is native to Enos. Until
then, we'll have to manually add scenarios to each matrix file and
manually number the test groups. It's important to note that GitHub
requires every matrix to include at least one vector, so every artifact
that is being tested must include at least one scenario in order for
all workflows to pass and thus satisfy branch merge requirements.
* Add support for different artifact types to enos-run
* Add support for different runner types to enos-run
* Add arm64 scenarios to build matrix
* Expand build matrices to include different variants
* Update Consul versions in Enos scenarios and matrices
* Refactor enos-run environment
* Add minimum version filtering support to enos-run. This allows us to
automatically exclude scenarios that require a more recent version of
Vault
* Add maximum version filtering support to enos-run. This allows us to
automatically exclude scenarios that require an older version of
Vault
* Fix Node 12 deprecation warnings
* Rename enos-verify-stable to enos-release-testing-oss
* Convert artifactory matrix into enos-release-testing-oss matrices
* Add all Vault editions to Enos scenario matrices
* Fix verify version with complex Vault edition metadata
* Rename the crt-builder to ci-helper
* Add more version helpers to ci-helper and Makefile
* Update CODEOWNERS for quality team
* Add support for filtering matrices by group and version constraints
* Add support for pseudo random test scenario execution
Signed-off-by: Ryan Cragun <me@ryan.ec>
Here we make the following major changes:
* Centralize CRT builder logic into a script utility so that we can share the
logic for building artifacts in CI or locally.
* Simplify the build workflow by calling a reusable workflow many times
instead of repeating the contents.
* Create a workflow that validates whether or not the build workflow and all
child workflows have succeeded to allow for merge protection.
Motivation
* We need branch requirements for the build workflow and all subsequent
integration tests (QT-353)
* We need to ensure that the Enos local builder works (QT-558)
* Debugging build failures can be difficult because one has to
hand-craft the steps to recreate the build
* Merge conflicts between Vault OSS and Vault ENT build workflows are
quite painful. As the build workflow must keep the same file name in
both, we'll reduce the unique content contained in each.
Implementations of building will be unique per edition, so we don't
have to worry about conflict resolution.
* Since we're going to be touching the build workflow to address the
first two items, we might as well try to improve those other issues at
the same time to reduce the overhead of backports and conflicts.
Considerations
* Build logic for Vault OSS and Vault ENT differs
* The Enos local builder was duplicating a lot of what we did in the CRT build
workflow
* Version and other artifact metadata has been an issue before.
Debugging it has been tedious and error-prone.
* The build workflow is full of brittle copy-and-paste that is hard to
understand, especially for all of the release editions in Vault
Enterprise
* Branch check requirements for workflows are incredibly painful to use
for workflows that are dynamic or change often. The required workflows
have to be configured in GitHub settings by administrators. They would
also prevent us from having simple docs PRs, since required integration
workflows always have to run to satisfy branch requirements.
* The Doormat credential requirements that are coming will require us
to modify which event types trigger workflows. This changes those ahead
of time since we're already doing so much to the build workflow. The
only noticeable impact will be that the build workflow no longer runs
on pushes to non-main or release branches. Testing other branches now
requires a workflow_dispatch from the Actions tab or a pull request.
Solutions
* Centralize the logic that determines build metadata and creates
releasable Vault artifacts. Instead of cargo-culting logic multiple
times in the build workflow and the Enos local modules, we now have a
crt-builder script which determines build metadata and also handles
building the UI, Vault, and the package bundle. There are make targets
for all of the available sub-commands. Now what we use in the pipeline
is the same thing as the local builder, and it can be executed locally
by developers. The crt-builder script works in OSS and Enterprise, so
we will never have to deal with the two diverging or with
special-casing things in the build workflow.
* Refactor the bulk of the Vault building into a reusable workflow that we can
call multiple times. This allows us to define Vault builds in a much simpler
manner and makes resolving merge conflicts much easier.
* Rather than trying to maintain a list and manually configure the
branch check requirements for build, we'll trigger a single workflow
that uses the GitHub event system to determine whether the build
workflow (all of the sub-workflows included) has passed. We'll then
create branch restrictions on that single workflow down the line.
Signed-off-by: Ryan Cragun <me@ryan.ec>
- The provider isn't needed and there is an error in the source
anyway.
❯ enos scenario validate managed_keys
Scenario: managed_keys [arch:arm64 backend:raft builder:local distro:ubuntu edition:ent seal:awskms]
Generate: ✅
Init: ❌
Error: Invalid provider registry host
The host "hashicorp.com" given in provider source address
"hashicorp.com/qti/enos" does not offer a Terraform provider registry.
Add our initial Enos integration tests to Vault. The Enos scenario
workflow will automatically be run on branches that are created from
the `hashicorp/vault` repository. See the README.md in ./enos for a
full description of how to compose and execute scenarios locally.
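For example, a local run might look something like this (the scenario
name and filter are illustrative):

❯ enos scenario launch smoke arch:amd64 backend:raft distro:ubuntu
❯ enos scenario destroy smoke arch:amd64 backend:raft distro:ubuntu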
* Simplify the metadata build workflow jobs
* Automatically determine the Go version from go.mod
* Add formatting check for Enos integration scenarios
* Add Enos smoke and upgrade integration scenarios
* Add Consul backend matrix support
* Add Ubuntu and RHEL distro support
* Add Vault edition support
* Add Vault architecture support
* Add Vault builder support
* Add Vault Shamir and awskms auto-unseal support
* Add Raft storage support
* Add Raft auto-join voter verification
* Add Vault version verification
* Add Vault seal verification
* Add in-place upgrade support for all variants
* Add four scenario variants to CI. These test a maximal distribution of
the aforementioned variants with the `linux/amd64` Vault install
bundle.
Signed-off-by: Ryan Cragun <me@ryan.ec>
Co-authored-by: Rebecca Willett <rwillett@hashicorp.com>
Co-authored-by: Jaymala <jaymalasinha@gmail.com>