The design of the session TTLs is based on the Google Chubby approach
(http://research.google.com/archive/chubby-osdi06.pdf). The Session
struct now has an additional TTL field, which attaches an implicit
heartbeat-based failure detector. Heartbeats are tracked by the
current leader and are not persisted via the Raft log, so during a
leader failover we do not retain the last heartbeat times.
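A minimal sketch of the shape this takes, with illustrative names
rather than Consul's exact definitions:

    import "time"

    // The TTL is part of the replicated session state, but the
    // heartbeat timers live only in the leader's memory.
    type Session struct {
        ID   string
        Node string
        TTL  string // e.g. "15s"; empty disables TTL-based expiration
    }

    // Held by the current leader and never written to the Raft log,
    // so this state is lost on leader failover.
    var sessionTimers = map[string]*time.Timer{}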
Similar to Chubby, the TTL represents a lower bound: Consul promises
not to terminate a session before the TTL has expired, but may extend
the expiration past it. This enables us to reset the TTL on a leader
failover. The TTL is also extended when the client heartbeats, so, as
in Chubby, a TTL is extended on creation, heartbeat, or failover.
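All three events funnel into the same reset path; in sketch form
(illustrative names, continuing the example above):

    // Called on session creation, on each heartbeat (renew), and for
    // every known session when a new leader takes over. Extending the
    // timer never violates the contract, since the TTL is only a
    // lower bound on the session's lifetime.
    func resetSessionTTL(id string, ttl time.Duration) {
        if t, ok := sessionTimers[id]; ok {
            t.Reset(ttl)
            return
        }
        sessionTimers[id] = time.AfterFunc(ttl, func() {
            invalidateSession(id) // hypothetical helper that destroys the session
        })
    }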
Additionally, because we must account for the time requests spend in
transit and for the relative clock rates of clients and servers,
Consul takes the conservative approach of internally multiplying the
TTL by 2. This compensates for network latency and clock skew without
violating the lower-bound contract.
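Concretely, the padding is a single multiplication before the timer
is armed (the constant name here is illustrative):

    // A session with TTL "15s" may therefore live up to 30s without
    // a heartbeat, which is allowed because the TTL is a lower bound.
    const ttlMultiplier = 2

    func armSessionTimer(id string, ttl time.Duration) {
        resetSessionTTL(id, ttl*ttlMultiplier)
    }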
Reference: https://docs.google.com/document/d/1Y5-pahLkUaA7Kz4SBU_mehKiyt9yaaUGcBTMZR7lToY/edit?usp=sharing
Don't check the DNS names in TLS certificates when connecting to
other servers.
As of Go 1.3, crypto/tls no longer natively supports partial
verification (verifying the cert issuer but not the hostname), so we
have to disable verification entirely and then do the issuer
verification ourselves. Fortunately, crypto/x509 makes this
relatively straightforward.
If the "server_name" configuration option is passed, we preserve the
existing behavior of checking that server name everywhere.
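A sketch of the resulting verification path, assuming a hypothetical
caPool built from the agent's CA configuration (not Consul's actual
code):

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
    )

    // dialVerifyIssuer disables crypto/tls's built-in verification,
    // which would also check the hostname, then verifies only the
    // issuer chain via crypto/x509. serverName carries the optional
    // "server_name" configuration value; when non-empty, hostname
    // checking is preserved.
    func dialVerifyIssuer(addr, serverName string, caPool *x509.CertPool) (*tls.Conn, error) {
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            return nil, err
        }
        certs := conn.ConnectionState().PeerCertificates
        if len(certs) == 0 {
            conn.Close()
            return nil, fmt.Errorf("no peer certificates presented")
        }
        opts := x509.VerifyOptions{
            Roots:         caPool,
            Intermediates: x509.NewCertPool(),
            DNSName:       serverName, // empty string skips hostname checks
        }
        for _, c := range certs[1:] {
            opts.Intermediates.AddCert(c)
        }
        if _, err := certs[0].Verify(opts); err != nil {
            conn.Close()
            return nil, err
        }
        return conn, nil
    }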
No option is provided to retain the current behavior of checking the
remote certificate against the local node name, since that behavior
seems clearly buggy and unintentional, and I have difficulty imagining
it is actually being used anywhere. It would be relatively
straightforward to restore if desired, however.
This allows us to automatically bootstrap a cluster once 'n' server
nodes have joined. All servers must have the same 'n' set, or they
will fail to join the cluster; no server joins the peer set until it
sees 'n' server nodes.
If the Raft commit index is non-zero, '-expect=n' does nothing,
because it assumes you have already bootstrapped.
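Sketched in Go with illustrative names (the real wiring differs),
the gating condition is:

    // Attempt the initial bootstrap only when no Raft log entries
    // exist yet and the expected number of servers have joined.
    // Every server runs the same check with the same -expect value.
    func maybeBootstrap(commitIndex uint64, servers []string, expect int) bool {
        if commitIndex != 0 {
            return false // already bootstrapped; -expect=n is a no-op
        }
        if len(servers) < expect {
            return false // wait until 'n' server nodes have joined
        }
        return true // safe to form the initial peer set
    }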
Signed-off-by: Robert Xu <robxu9@gmail.com>