---
layout: docs
page_title: Networking
description: |-
  Learn about Nomad's networking internals and how it compares with other
  tools.
---
# Networking
Nomad is a workload orchestrator, so it focuses on the scheduling aspects of a
deployment and touches areas such as networking as little as possible.
**Networking in Nomad is usually done via _configuration_ instead of
_infrastructure_**. This means that Nomad provides ways for you to access the
information you need to connect your workloads instead of running additional
components behind the scenes, such as DNS servers and load balancers.
This can be confusing at first since it is quite different from what you may
be used to with other tools. This section explains how networking works in
Nomad, some of the patterns and configurations you are likely to find and use,
and how Nomad differs from other tools in this area.
## Allocation networking
The base unit of scheduling in Nomad is an [allocation][], which means that all
tasks in the same allocation run on the same client and share common resources,
such as disk and networking. Allocations can request access to network
resources, such as ports, using the [`network`][jobspec_network] block. In its
simplest configuration, a `network` block can be defined as:
```hcl
job "..." {
# ...
group "..." {
network {
port "http" {}
}
# ...
}
}
```
Nomad reserves a random port on the client between [`min_dynamic_port`][] and
[`max_dynamic_port`][] that has not yet been allocated and creates a port
mapping from the host network interface to the allocation.
<!-- Source: https://drive.google.com/file/d/1q4a2ab0TyLEPdWiO2DIianAPWuPqLqZ4/view?usp=share_link -->
[![Nomad Port Mapping](/img/networking/port_mapping.png)](/img/networking/port_mapping.png)
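The dynamic port range can be adjusted in the client configuration. A minimal
sketch; the values shown are the documented defaults:
```hcl
client {
  # Dynamic ports are picked from this range. The values below are
  # Nomad's defaults; adjust them to match your environment.
  min_dynamic_port = 20000
  max_dynamic_port = 32000
}
```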
Tasks can read the selected port number from the
[`NOMAD_PORT_<label>`][runtime_environment] environment variable and use it to
bind and expose the workload at the client's IP address and the given port.
The specific configuration process depends on what you are running, but it is
usually done with a configuration file rendered from a [`template`][] or with
values passed directly as command-line arguments:
```hcl
job "..." {
# ...
group "..." {
network {
port "http" {}
}
task "..." {
# ...
config {
args = [
"--port=${NOMAD_PORT_http}",
]
}
}
}
}
```
It is also possible to request a specific port number, instead of a random one,
by setting a [`static`][] value for the `port`. **This should only be used by
specialized workloads**, such as load balancers and system jobs, since manually
managing static ports to avoid scheduling collisions can be hard.
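For example, a load balancer that must always be reachable on a well-known
port could reserve it with a `static` value. A minimal sketch; the port number
is illustrative:
```hcl
job "..." {
  # ...
  group "..." {
    network {
      port "http" {
        # Always bind this port mapping to port 8080 on the host.
        static = 8080
      }
    }
    # ...
  }
}
```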
With the task listening on one of the client's ports, other processes can
access it directly using the client's IP address and port, but first they need
to find these values. This process is called [service discovery][].
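As an illustration, a [`template`][] block can render the addresses of other
workloads discovered at runtime. This sketch assumes a service named
`http-server` registered in Consul:
```hcl
task "..." {
  # ...
  template {
    # Render one line per healthy instance of the "http-server" service.
    data        = <<EOF
{{ range service "http-server" }}
backend = "{{ .Address }}:{{ .Port }}"
{{ end }}
EOF
    destination = "local/backends.conf"
  }
}
```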
When using IP addresses and ports to connect allocations, it is important to
make sure your network topology and routing configuration allow the Nomad
clients to communicate with each other.
## Bridge networking
Linux clients support a network [`mode`][network_mode] called [`bridge`][]. A
bridge network acts like a virtual network switch, allowing processes connected
to the bridge to reach each other while isolating them from others.
When an allocation uses bridge networking, the Nomad agent creates a bridge
called `nomad` (or the value set in [`bridge_network_name`][]) using the
[`bridge` CNI plugin][cni_bridge] if one does not exist yet. Before using this
mode you must first [install the CNI plugins][cni_install] on your clients.
By default, a single bridge is created on each Nomad client.
<!-- Source: https://drive.google.com/file/d/1q4a2ab0TyLEPdWiO2DIianAPWuPqLqZ4/view?usp=share_link -->
[![Nomad Bridge](/img/networking/bridge.png)](/img/networking/bridge.png)
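If the default bridge name conflicts with an existing interface, it can be
changed in the client configuration. A minimal sketch; the name is
illustrative:
```hcl
client {
  # Name of the bridge Nomad creates for bridge networking.
  # Defaults to "nomad".
  bridge_network_name = "nomad-br0"
}
```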
Allocations that use the `bridge` network mode run in an isolated network
namespace and are connected to the bridge. This allows Nomad to map random
ports from the host to the specific port numbers inside the allocation that
the tasks expect.
For example, an HTTP server that listens on port `3000` by default can be
configured with the following `network` block:
```hcl
job "..." {
# ...
group "..." {
network {
mode = "bridge"
port "http" {
to = 3000
}
}
# ...
}
}
```
To allow communication between allocations on different clients, Nomad creates
an `iptables` rule to forward requests from the host network interface to the
bridge. This results in three different network access scopes:
- Tasks that bind to the loopback interface (`localhost` or `127.0.0.1`) are
accessible only from within the allocation.
- Tasks that bind to the bridge (or other general addresses, such as `0.0.0.0`)
without `port` forwarding are only accessible from within the same client.
- Tasks that bind to the bridge (or other general addresses, such as `0.0.0.0`)
with `port` forwarding are accessible from external sources.
~> **Warning:** To prevent any type of external access when using `bridge`
network mode, make sure to bind your workloads to the loopback interface
only.
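For example, a task that should only be reachable by other tasks in the same
allocation could bind to loopback. The `--bind` flag below is a hypothetical
application argument:
```hcl
task "..." {
  # ...
  config {
    args = [
      # Hypothetical flag: listen on the loopback interface only, so the
      # process is reachable just from within this allocation.
      "--bind=127.0.0.1",
    ]
  }
}
```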
Bridge networking is at the core of [service mesh][] and a requirement when
using [Consul Service Mesh][consul_service_mesh].
### Bridge networking with Docker
The Docker daemon manages its own network configuration and creates its own
[bridge network][docker_bridge], network namespaces, and [`iptables`
rules][docker_iptables]. Tasks using the `docker` task driver connect to the
Docker bridge instead of the one created by Nomad and, by default, each
container runs in its own Docker-managed network namespace.
When using `bridge` network mode, Nomad creates a placeholder container using
the image defined in [`infra_image`][] to initialize a Docker network namespace
that is shared by all tasks in the allocation, allowing them to communicate
with each other.
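The placeholder image can be overridden in the Docker driver's plugin
configuration in the Nomad agent. A minimal sketch; the image reference is
illustrative:
```hcl
plugin "docker" {
  config {
    # Image used for the placeholder container that holds the shared
    # network namespace. Only set this to override the built-in default.
    infra_image = "registry.example.com/pause:3.3"
  }
}
```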
The Docker task driver has its own task-level
[`network_mode`][docker_network_mode] configuration. Its default value depends
on the group-level [`network.mode`][network_mode] configuration.
~> **Warning:** The task-level `network_mode` may conflict with the group-level
`network.mode` configuration and generate unexpected results. If you set the
group `network.mode = "bridge"` you should not set the Docker config
`network_mode`.
```hcl
group "..." {
network {
mode = "bridge"
}
task "..." {
driver = "docker"
config {
# This conflicts with the group-level network.mode configuration and
# should not be used.
network_mode = "bridge"
# ...
}
}
}
```
The diagram below illustrates what happens when a Docker task is configured
incorrectly.
<!-- Source: https://drive.google.com/file/d/1q4a2ab0TyLEPdWiO2DIianAPWuPqLqZ4/view?usp=share_link -->
[![Nomad Docker Bridge](/img/networking/docker_bridge.png)](/img/networking/docker_bridge.png)
The tasks in the rightmost allocation are not able to communicate with each
other using their loopback interface because they were placed in different
network namespaces.
Since the group `network.mode` is `bridge`, Nomad creates the placeholder
container to establish a shared network namespace for all tasks, but setting
the task-level `network_mode` to `bridge` places the task in a different
namespace.
This prevents, for example, a task from communicating with its sidecar proxy in
a [service mesh][] deployment.
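To avoid the conflict, omit the task-level `network_mode` and let the
group-level `network.mode` take effect:
```hcl
group "..." {
  network {
    mode = "bridge"
  }

  task "..." {
    driver = "docker"

    config {
      # network_mode is intentionally not set, so the task joins the
      # shared network namespace Nomad creates for the allocation.
      # ...
    }
  }
}
```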
Refer to the [`network_mode`][docker_network_mode] documentation and the
[Networking][docker_networking] section for more information.
-> **Note:** Docker Desktop in non-Linux environments runs a local virtual
machine, adding an extra layer of indirection. Refer to the
[FAQ][faq_docker] for more details.
## Comparing with other tools
### Kubernetes and Docker Compose
Networking in Kubernetes and Docker Compose works differently than in Nomad. To
access a container you use a name such as `db` in Docker Compose or a fully
qualified domain name such as `db.prod.svc.cluster.local` in Kubernetes. This
process relies on additional infrastructure to resolve the hostname and
distribute requests across multiple containers.
Docker Compose allows you to run and manage multiple containers using units
called services.
```yaml
version: "3.9"
services:
web:
build: .
ports:
- "8000:8000"
db:
image: postgres
ports:
- "8001:5432"
```
To access a service from another container, you can reference the service name
directly, for example using `postgres://db:5432`. To enable this pattern,
Docker Compose includes an [internal DNS service][docker_dns] and a load
balancer that are transparent to the user. When running in Swarm mode, Docker
Compose also requires an overlay network to route requests across hosts.
Kubernetes provides the [`Service`][k8s_service] abstraction, which can be used
to declare how a set of Pods is accessed.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
```
To access the Service you use an FQDN such as
`my-service.prod.svc.cluster.local`. This name is resolved by the [DNS
service][k8s_dns], a cluster add-on. Each node also runs a
[`kube-proxy`][k8s_kubeproxy] instance to distribute requests to the Pods
matched by the Service.
You can use the same FQDN networking style with Nomad by using [Consul's DNS
interface][consul_dns], configuring your clients with [DNS
forwarding][consul_dns_forwarding], and deploying a [load
balancer][nomad_load_balancer].
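For example, a job could register a service in Consul so that it becomes
resolvable through Consul DNS. A minimal sketch; the service name is
illustrative:
```hcl
service {
  name     = "my-service"
  port     = "http"
  # Register the service in Consul rather than Nomad's built-in catalog.
  provider = "consul"
}
```
With Consul DNS configured, the service then resolves at a name such as
`my-service.service.consul`.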
Another key difference from Nomad is that in Kubernetes and Docker Compose each
container has its own IP address, which requires a virtual network to map
physical IP addresses to virtual ones. In the case of Docker Compose in Swarm
mode, an [`overlay` network][docker_overlay] is also required to enable traffic
across multiple hosts. This allows multiple containers running the same service
to listen on the same port number.
In Nomad, allocations use the IP address of the client they run on and are
assigned random port numbers, so Nomad service discovery with DNS uses
[`SRV` records][dns_srv] instead of `A` or `AAAA` records.
## Next topics
- [Service Discovery][service discovery]
- [Service Mesh][service mesh]
- [Container Network Interface][cni]
## Additional resources
- [Understanding Networking in Nomad - Karan Sharma](https://mrkaran.dev/posts/nomad-networking-explained/)
- [Understanding Nomad Networking Patterns - Luiz Aoqui, HashiTalks: Canada 2021](https://www.youtube.com/watch?v=wTA5HxB_uuk)
[`bridge_network_name`]: /nomad/docs/configuration/client#bridge_network_name
[`bridge`]: /nomad/docs/job-specification/network#bridge
[`infra_image`]: /nomad/docs/drivers/docker#infra_image
[`max_dynamic_port`]: /nomad/docs/configuration/client#max_dynamic_port
[`min_dynamic_port`]: /nomad/docs/configuration/client#min_dynamic_port
[`static`]: /nomad/docs/job-specification/network#static
[`template`]: /nomad/docs/job-specification/template#template-examples
[allocation]: /nomad/docs/concepts/architecture#allocation
[cni]: /nomad/docs/networking/cni
[cni_bridge]: https://www.cni.dev/plugins/current/main/bridge/
[cni_install]: /nomad/docs/install#post-installation-steps
[consul_dns]: /consul/docs/discovery/dns
[consul_dns_forwarding]: /consul/tutorials/networking/dns-forwarding
[consul_service_mesh]: /nomad/docs/integrations/consul-connect
[dns_srv]: https://en.wikipedia.org/wiki/SRV_record
[docker_bridge]: https://docs.docker.com/network/bridge/
[docker_compose]: https://docs.docker.com/compose/
[docker_dns]: https://docs.docker.com/config/containers/container-networking/#dns-services
[docker_iptables]: https://docs.docker.com/network/iptables/
[docker_network_mode]: /nomad/docs/drivers/docker#network_mode
[docker_networking]: /nomad/docs/drivers/docker#networking
[docker_overlay]: https://docs.docker.com/network/overlay/
[docker_swarm]: https://docs.docker.com/engine/swarm/
[faq_docker]: /nomad/docs/faq#q-how-to-connect-to-my-host-network-when-using-docker-desktop-windows-and-macos
[jobspec_network]: /nomad/docs/job-specification/network
[k8s_dns]: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
[k8s_ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
[k8s_kubeproxy]: https://kubernetes.io/docs/concepts/overview/components/#kube-proxy
[k8s_service]: https://kubernetes.io/docs/concepts/services-networking/service/
[network_mode]: /nomad/docs/job-specification/network#mode
[nomad_load_balancer]: /nomad/tutorials/load-balancing
[runtime_environment]: /nomad/docs/runtime/environment#network-related-variables
[service discovery]: /nomad/docs/networking/service-discovery
[service mesh]: /nomad/docs/networking/service-mesh