a9a04141a3
When creating a TCP proxy bridge for Connect tasks, we are at the mercy of either end for managing the connection state. For long-lived gRPC connections, the proxy can reasonably expect to stay open until the context is cancelled. For the HTTP connections used by Connect native tasks, we see connection disconnects. The proxy gets recreated as needed on follow-up requests, but we also emit a WARN log when the connection is broken. This PR lowers the WARN to a TRACE, because these disconnects are to be expected.

Ideally we would be able to proxy at the HTTP layer, but Consul or the Connect native task could be configured to expect mTLS, preventing Nomad from acting as a man-in-the-middle for those requests. We also can't manage the proxy lifecycle more intelligently, because we have no control over the HTTP client or server and how they wish to manage connection state. What we have now works; it's just noisy.

Fixes #10933
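For illustration, here is a minimal sketch of the kind of TCP proxy copy loop described above, logging broken connections at TRACE instead of WARN. It assumes hashicorp/go-hclog for logging; the function and parameter names are hypothetical and not Nomad's actual hook code.

```go
package proxy

import (
	"io"
	"net"

	hclog "github.com/hashicorp/go-hclog"
)

// proxyConn bridges one accepted connection to destAddr, copying bytes in
// both directions until either side closes. Hypothetical sketch only.
func proxyConn(logger hclog.Logger, src net.Conn, destAddr string) {
	defer src.Close()

	dest, err := net.Dial("tcp", destAddr)
	if err != nil {
		logger.Error("failed to dial destination", "dest", destAddr, "error", err)
		return
	}
	defer dest.Close()

	done := make(chan struct{}, 2)
	pipe := func(dst io.Writer, s io.Reader, dir string) {
		if _, err := io.Copy(dst, s); err != nil {
			// HTTP clients and servers close connections on their own
			// schedule, so a broken connection here is expected; log at
			// TRACE rather than WARN to keep the logs quiet.
			logger.Trace("connection closed", "direction", dir, "error", err)
		}
		done <- struct{}{}
	}

	go pipe(dest, src, "outbound")
	go pipe(src, dest, "inbound")

	// wait for both directions to finish before returning
	<-done
	<-done
}
```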