Fix formatting

Seth Vargo 2016-10-03 18:23:50 -04:00
parent 6dea6df919
commit c3f06658b6
5 changed files with 471 additions and 418 deletions


@@ -21,55 +21,55 @@ environment variables.
<th>Description</th>
</tr>
<tr>
<td>NOMAD_ALLOC_DIR</td>
<td><tt>NOMAD_ALLOC_DIR</tt></td>
<td>Path to the shared alloc directory</td>
</tr>
<tr>
<td>NOMAD_TASK_DIR</td>
<td><tt>NOMAD_TASK_DIR</tt></td>
<td>Path to the local task directory</td>
</tr>
<tr>
<td>NOMAD_MEMORY_LIMIT</td>
<td><tt>NOMAD_MEMORY_LIMIT</tt></td>
<td>The task's memory limit in MB</td>
</tr>
<tr>
<td>NOMAD_CPU_LIMIT</td>
<td><tt>NOMAD_CPU_LIMIT</tt></td>
<td>The task's CPU limit in MHz</td>
</tr>
<tr>
<td>NOMAD_ALLOC_ID</td>
<td><tt>NOMAD_ALLOC_ID</tt></td>
<td>The allocation ID of the task</td>
</tr>
<tr>
<td>NOMAD_ALLOC_NAME</td>
<td><tt>NOMAD_ALLOC_NAME</tt></td>
<td>The allocation name of the task</td>
</tr>
<tr>
<td>NOMAD_ALLOC_INDEX</td>
<td><tt>NOMAD_ALLOC_INDEX</tt></td>
<td>The allocation index; useful to distinguish instances of task groups</td>
</tr>
<tr>
<td>NOMAD_TASK_NAME</td>
<td><tt>NOMAD_TASK_NAME</tt></td>
<td>The task's name</td>
</tr>
<tr>
<td>NOMAD_IP_"label"</td>
<td><tt>NOMAD_IP_&lt;label&gt;</tt></td>
<td>The IP of the port with the given label</td>
</tr>
<tr>
<td>NOMAD_PORT_"label"</td>
<td><tt>NOMAD_PORT_&lt;label&gt;</tt></td>
<td>The port value with the given label</td>
</tr>
<tr>
<td>NOMAD_ADDR_"label"</td>
<td><tt>NOMAD_ADDR_&lt;label&gt;</tt></td>
<td>The IP:Port pair of the port with the given label</td>
</tr>
<tr>
<td>NOMAD_HOST_PORT_"label"</td>
<td><tt>NOMAD_HOST_PORT_&lt;label&gt;</tt></td>
<td>The host port for the given label if the port is port mapped</td>
</tr>
<tr>
<td>NOMAD_META_"key"</td>
<td><tt>NOMAD_META_&lt;key&gt;</tt></td>
<td>The metadata of the task</td>
</tr>
</table>
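
For illustration, here is a minimal sketch (the task name, driver, and command are assumptions, and it presumes a port labeled `http` has been requested) showing a task whose process reads one of these variables at runtime:

```hcl
task "example" {
  driver = "exec"

  config {
    command = "/bin/sh"
    # The shell reads NOMAD_ADDR_http from its environment at runtime;
    # Nomad populates it with the IP:Port pair for the "http" port label.
    args    = ["-c", "echo listening on $NOMAD_ADDR_http && sleep 300"]
  }
}
```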
@@ -136,7 +136,7 @@ directories can be read through the following environment variables:
The job specification also allows you to specify a `meta` block to supply arbitrary
configuration to a task. This allows you to easily provide job-specific
configuration even if you use the same executable unit in multiple jobs. These
key-value pairs are passed through to the job as `NOMAD_META_"key"={value}`,
key-value pairs are passed through to the job as `NOMAD_META_<key>=<value>`,
where `key` is UPPERCASED from the job specification.
Currently there is no enforcement that the meta values be lowercase, but using
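
As an illustrative sketch (the keys and values below are made up), a `meta` block such as this surfaces in the task's environment as uppercased `NOMAD_META_` variables:

```hcl
meta {
  # Passed to the task as NOMAD_META_OWNER=platform-team
  owner = "platform-team"

  # Passed to the task as NOMAD_META_TIER=frontend
  tier = "frontend"
}
```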


@@ -9,46 +9,67 @@ description: |-
# Job Specification
Jobs can be specified either in [HCL](https://github.com/hashicorp/hcl) or JSON.
HCL is meant to strike a balance between human readable and editable, and machine-friendly.
HCL is meant to strike a balance between being human readable and editable, and
being machine-friendly.
For machine-friendliness, Nomad can also read JSON configurations. In general, we recommend
using the HCL syntax.
For machine-friendliness, Nomad can also read JSON configurations. In general,
we recommend using the HCL syntax.
## HCL Syntax
For a detailed description of HCL general syntax, [see this guide](https://github.com/hashicorp/hcl#syntax).
Here we cover the details of the Job specification for Nomad:
For a detailed description of HCL general syntax, [see this
guide](https://github.com/hashicorp/hcl#syntax). Here we cover the details of
the Job specification for Nomad:
```
# Define a job called my-service
job "my-service" {
# Job should run in the US region
```hcl
# This declares a job named "docs". There can be exactly one
# job declaration per job file.
job "docs" {
# Specify this job should run in the region named "us". Regions
# are defined by the Nomad servers' configuration.
region = "us"
# Spread tasks between us-west-1 and us-east-1
# Spread the tasks in this job between us-west-1 and us-east-1.
datacenters = ["us-west-1", "us-east-1"]
# run this with service scheduler
# Run this job as a "service" type. Each job type has different
# properties. See the documentation below for more examples.
type = "service"
# Rolling updates should be sequential
# Specify this job to have rolling updates, one-at-a-time, with
# 30 second intervals.
update {
stagger = "30s"
max_parallel = 1
}
# A group defines a series of tasks that should be co-located
# on the same client (host). All tasks within a group will be
# placed on the same host.
group "webs" {
# We want 5 web servers
# Specify the number of these tasks we want.
count = 5
# Create a web front end using a docker image
# Create an individual task (unit of work). This particular
# task utilizes a Docker container to front a web application.
task "frontend" {
# Specify the driver to be "docker". Nomad supports
# multiple drivers.
driver = "docker"
# Configuration is specific to each driver.
config {
image = "hashicorp/web-frontend"
}
# The service block tells Nomad how to register this service
# with Consul for service discovery and monitoring.
service {
# This tells Consul to monitor the service on the port
# labeled "http". Since Nomad allocates high dynamic port
# numbers, we use labels to refer to them.
port = "http"
check {
type = "http"
path = "/health"
@@ -56,20 +77,32 @@ job "my-service" {
timeout = "2s"
}
}
# It is possible to set environment variables which will be
# available to the job when it runs.
env {
DB_HOST = "db01.example.com"
DB_USER = "web"
DB_PASSWORD = "loremipsum"
DB_PASS = "loremipsum"
}
# Specify the maximum resources required to run the job,
# including CPU, memory, and bandwidth.
resources {
cpu = 500
memory = 128
cpu = 500 # MHz
memory = 128 # MB
network {
mbits = 100
# Request for a dynamic port
port "http" {
}
# Request for a static port
# This requests a dynamic port named "http". This will
# be something like "46283", but we refer to it via the
# label "http".
port "http" {}
# This requests a static port on 443 on the host. This
# will restrict this task to running once per host, since
# there is only one port 443 on each host.
port "https" {
static = 443
}
@@ -80,12 +113,12 @@ job "my-service" {
}
```
This is a fairly simple example job, but demonstrates many of the features and syntax
of the job specification. The primary "objects" are the job, task group, and task.
Each job file has only a single job, however a job may have multiple task groups,
and each task group may have multiple tasks. Task groups are a set of tasks that
must be co-located on a machine. Groups with a single task and count of one
can be declared outside of a group which is created implicitly.
This is a fairly simple example job, but demonstrates many of the features and
syntax of the job specification. The primary "objects" are the job, task group,
and task. Each job file has only a single job; however, a job may have multiple
task groups, and each task group may have multiple tasks. Task groups are a set
of tasks that must be co-located on a machine. A group with a single task and a
count of one can be declared as a bare task outside of a group block, in which
case the enclosing group is created implicitly.
Constraints can be specified at the job, task group, or task level to restrict
where a task is eligible for running. An example constraint looks like:
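A minimal sketch, using the `attribute`, `operator`, and `value` keys described in the Constraint section below:

```hcl
constraint {
  # Only place this work on Linux clients.
  attribute = "${attr.kernel.name}"
  operator  = "="
  value     = "linux"
}
```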
@@ -103,9 +136,7 @@ This metadata is opaque to Nomad and can be used for any purpose, including
defining constraints on the metadata. Metadata can be specified by:
```
# Setup ELB via metadata and setup foo
meta {
foo = "bar"
elb_mode = "tcp"
elb_check_interval = "10s"
}
@@ -155,22 +186,21 @@ The `job` object supports the following keys:
* `update` - Specifies the task's update strategy. When omitted, rolling
updates are disabled. The `update` block supports the following keys:
* `max_parallel` - `max_parallel` is given as an integer value and specifies
the number of tasks that can be updated at the same time.
* `max_parallel` - integer that specifies the number of tasks that can be
updated at the same time.
* `stagger` - `stagger` introduces a delay between sets of task updates and
is given as an as a time duration. If stagger is provided as an integer,
seconds are assumed. Otherwise the "s", "m", and "h" suffix can be used,
such as "30s".
* `stagger` - introduces a delay between sets of task updates and is given as
a time duration. If stagger is provided as an integer, seconds are assumed.
Otherwise the "s", "m", and "h" suffixes can be used, such as "30s".
An example `update` block:
Here is an example `update` block:
```
```hcl
update {
// Update 3 tasks at a time.
# Update 3 tasks at a time.
max_parallel = 3
// Wait 30 seconds between updates.
# Wait 30 seconds between updates.
stagger = "30s"
}
```
@@ -182,7 +212,7 @@ The `job` object supports the following keys:
timezone to ensure consistent evaluation when Nomad Servers span multiple
time zones. The `periodic` block is optional and supports the following keys:
* `enabled` - `enabled` determines whether the periodic job will spawn child
* `enabled` - determines whether the periodic job will spawn child
jobs. `enabled` defaults to true if the block is included.
* `cron` - A cron expression configuring the interval the job is launched
@@ -190,19 +220,19 @@ The `job` object supports the following keys:
[here](https://github.com/gorhill/cronexpr#implementation) for full
documentation of supported cron specs and the predefined expressions.
* <a id="prohibit_overlap">`prohibit_overlap`</a> - `prohibit_overlap` can
* <a id="prohibit_overlap">`prohibit_overlap`</a> - this can
be set to true to enforce that the periodic job doesn't spawn a new
instance of the job if any of the previous jobs are still running. It
defaults to false.
An example `periodic` block:
Here is an example `periodic` block:
```
```hcl
periodic {
// Launch every 15 minutes
# Launch every 15 minutes
cron = "*/15 * * * * *"
// Do not allow overlapping runs.
# Do not allow overlapping runs.
prohibit_overlap = true
}
```
@@ -250,17 +280,17 @@ The `task` object supports the following keys:
task transitions to the dead state. [Click
here](/docs/jobspec/servicediscovery.html) to learn more about services.
* `env` - A map of key/value representing environment variables that
will be passed along to the running process. Nomad variables are
interpreted when set in the environment variable values. See the table of
interpreted variables [here](/docs/jobspec/interpreted.html).
* `env` - A map of key/value pairs representing environment variables that will
passed along to the running process. Nomad variables are interpreted when set
in the environment variable values. See the table of interpreted variables
[here](/docs/jobspec/interpreted.html).
For example, the environment map below will be interpreted:
```
```hcl
env {
// The value will be interpreted by the client and set to the
// correct value.
# The value will be interpreted by the client and set to the
# correct value.
NODE_CLASS = "${nomad.class}"
}
```
@@ -304,12 +334,12 @@ The `network` object supports the following keys:
* `mbits` (required) - The number of MBits in bandwidth required.
* `port` - `port` is a repeatable object that can be used to specify both
* `port` - a repeatable object that can be used to specify both
dynamic ports and reserved ports. It has the following format:
```
```hcl
port "label" {
// If the `static` field is omitted, a dynamic port will be assigned.
# If the `static` field is omitted, a dynamic port is assigned.
static = 6539
}
```
@@ -332,35 +362,34 @@ The `restart` object supports the following keys:
time duration using the `s`, `m`, and `h` suffixes, such as `30s`. A random
jitter of up to 25% is added to the delay.
* `mode` - Controls the behavior when the task fails more than `attempts`
times in an interval. Possible values are listed below:
* `mode` - Controls the behavior when the task fails more than `attempts` times
in an interval. Possible values are listed below:
* `delay` - `delay` will delay the next restart until the next `interval` is
reached.
* `delay` - delay the next restart until the next `interval` is reached.
* `fail` - `fail` will not restart the task again.
* `fail` - do not restart the task again on failure.
The default `batch` restart policy is:
The default `batch` restart policy is:
```
restart {
```hcl
restart {
attempts = 15
delay = "15s"
interval = "168h" # 7 days
mode = "delay"
}
```
}
```
The default non-batch restart policy is:
The default non-batch restart policy is:
```
restart {
```hcl
restart {
interval = "1m"
attempts = 2
delay = "15s"
mode = "delay"
}
```
}
```
### Constraint
@@ -369,9 +398,9 @@ The `constraint` object supports the following keys:
* `attribute` - Specifies the attribute to examine for the
constraint. See the table of attributes [here](/docs/jobspec/interpreted.html#interpreted_node_vars).
* `operator` - Specifies the comparison operator. Defaults to equality,
and can be `=`, `==`, `is`, `!=`, `not`, `>`, `>=`, `<`, `<=`. The
ordering is compared lexically. The following are equivalent:
* `operator` - Specifies the comparison operator. Defaults to equality, and can
be `=`, `==`, `is`, `!=`, `not`, `>`, `>=`, `<`, `<=`. The ordering is
compared lexically (see the sketch after this list). The following are
equivalent:
* `=`, `==` and `is`
* `!=` and `not`
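
For instance, a hedged sketch of a constraint using an ordering operator (the attribute and version threshold are illustrative):

```hcl
constraint {
  # Lexical comparison: only place work on clients whose kernel
  # version sorts at or above "3.19".
  attribute = "${attr.kernel.version}"
  operator  = ">="
  value     = "3.19"
}
```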
@@ -414,10 +443,10 @@ The `logs` object configures the log rotation policy for a task's `stdout` and
`MB`.
If the amount of disk resource requested for the task is less than the total
amount of disk space needed to retain the rotated set of files, Nomad will return
a validation error when a job is submitted.
amount of disk space needed to retain the rotated set of files, Nomad will
return a validation error when a job is submitted.
```
```hcl
logs {
max_files = 3
max_file_size = 10 # Size is in MB
@@ -459,8 +488,8 @@ The `artifact` object supports the following keys:
[here](https://github.com/hashicorp/go-getter/tree/ef5edd3d8f6f482b775199be2f3734fd20e04d4a#protocol-specific-options-1).
An example is given below:
```
options {
```hcl
options {
# Validate the downloaded artifact
checksum = "md5:c4aa853ad2215426eb7d70a21922e794"
@@ -468,22 +497,22 @@ options {
aws_access_key_id = "<id>"
aws_access_key_secret = "<secret>"
aws_access_token = "<token>"
}
```
}
```
An example of downloading and unzipping an archive is as simple as:
An example of downloading and unzipping an archive is as simple as:
```
artifact {
# The archive will be extracted before the task is run, making
# it easy to ship configurations with your binary.
```hcl
artifact {
# The archive will be extracted before the task is run,
# making it easy to ship configurations with your binary.
source = "https://example.com/my.zip"
options {
checksum = "md5:7f4b3e3b4dd5150d4e5aaaa5efada4c3"
}
}
```
}
```
#### S3 examples
@@ -491,24 +520,27 @@ S3 has several different types of addressing and more detail can be found
[here](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro).
S3 region specific endpoints can be found
[here](http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region)
[here](http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region).
Path based style:
```
```hcl
artifact {
source = "https://s3-us-west-2.amazonaws.com/my-bucket-example/my_app.tar.gz"
}
```
or to override automatic detection in the URL, use the S3-specific syntax:
```
```hcl
artifact {
source = "s3::https://s3-eu-west-1.amazonaws.com/my-bucket-example/my_app.tar.gz"
}
```
Virtual hosted based style:
```
```hcl
artifact {
source = "my-bucket-example.s3-eu-west-1.amazonaws.com/my_app.tar.gz"
}


@@ -15,140 +15,140 @@ which will emit a JSON version of the job.
## JSON Syntax
Below is an example of a JSON object that submits a `Periodic` job to Nomad:
Below is an example of a JSON object that submits a `periodic` job to Nomad:
```
```json
{
"Job": {
"Region": "global",
"ID": "example",
"Name": "example",
"Type": "batch",
"Priority": 50,
"AllAtOnce": false,
"Datacenters": [
"Job":{
"Region":"global",
"ID":"example",
"Name":"example",
"Type":"batch",
"Priority":50,
"AllAtOnce":false,
"Datacenters":[
"dc1"
],
"Constraints": [
"Constraints":[
{
"LTarget": "${attr.kernel.name}",
"RTarget": "linux",
"Operand": "="
"LTarget":"${attr.kernel.name}",
"RTarget":"linux",
"Operand":"="
}
],
"TaskGroups": [
"TaskGroups":[
{
"Name": "cache",
"Count": 1,
"Constraints": null,
"Tasks": [
"Name":"cache",
"Count":1,
"Constraints":null,
"Tasks":[
{
"Name": "redis",
"Driver": "docker",
"User": "foo-user",
"Config": {
"image": "redis:latest",
"port_map": [
"Name":"redis",
"Driver":"docker",
"User":"foo-user",
"Config":{
"image":"redis:latest",
"port_map":[
{
"db": 6379
"db":6379
}
]
},
"Constraints": null,
"Env": {
"foo": "bar",
"baz": "pipe"
"Constraints":null,
"Env":{
"foo":"bar",
"baz":"pipe"
},
"Services": [
"Services":[
{
"Name": "cache-redis",
"Tags": [
"Name":"cache-redis",
"Tags":[
"global",
"cache"
],
"PortLabel": "db",
"Checks": [
"PortLabel":"db",
"Checks":[
{
"Id": "",
"Name": "alive",
"Type": "tcp",
"Command": "",
"Args": null,
"Path": "",
"Protocol": "",
"Interval": 10000000000,
"Timeout": 2000000000
"Id":"",
"Name":"alive",
"Type":"tcp",
"Command":"",
"Args":null,
"Path":"",
"Protocol":"",
"Interval":10000000000,
"Timeout":2000000000
}
]
}
],
"Resources": {
"CPU": 500,
"MemoryMB": 256,
"DiskMB": 300,
"IOPS": 0,
"Networks": [
"Resources":{
"CPU":500,
"MemoryMB":256,
"DiskMB":300,
"IOPS":0,
"Networks":[
{
"ReservedPorts": [
"ReservedPorts":[
{
"Label": "rpc",
"Value": 25566
"Label":"rpc",
"Value":25566
}
],
"DynamicPorts": [
"DynamicPorts":[
{
"Label": "db"
"Label":"db"
}
],
"MBits": 10
"MBits":10
}
]
},
"Meta": {
"foo": "bar",
"baz": "pipe"
"Meta":{
"foo":"bar",
"baz":"pipe"
},
"KillTimeout": 5000000000,
"LogConfig": {
"MaxFiles": 10,
"MaxFileSizeMB": 10
"KillTimeout":5000000000,
"LogConfig":{
"MaxFiles":10,
"MaxFileSizeMB":10
},
"Artifacts": [
"Artifacts":[
{
"GetterSource": "http://foo.com/artifact.tar.gz",
"GetterOptions": {
"checksum": "md5:c4aa853ad2215426eb7d70a21922e794"
"GetterSource":"http://foo.com/artifact.tar.gz",
"GetterOptions":{
"checksum":"md5:c4aa853ad2215426eb7d70a21922e794"
},
"RelativeDest": "local/"
"RelativeDest":"local/"
}
]
}
],
"RestartPolicy": {
"Interval": 300000000000,
"Attempts": 10,
"Delay": 25000000000,
"Mode": "delay"
"RestartPolicy":{
"Interval":300000000000,
"Attempts":10,
"Delay":25000000000,
"Mode":"delay"
},
"Meta": {
"foo": "bar",
"baz": "pipe"
"Meta":{
"foo":"bar",
"baz":"pipe"
}
}
],
"Update": {
"Stagger": 10000000000,
"MaxParallel": 1
"Update":{
"Stagger":10000000000,
"MaxParallel":1
},
"Periodic": {
"Enabled": true,
"Spec": "* * * * *",
"SpecType": "cron",
"ProhibitOverlap": true
"Periodic":{
"Enabled":true,
"Spec":"* * * * *",
"SpecType":"cron",
"ProhibitOverlap":true
},
"Meta": {
"foo": "bar",
"baz": "pipe"
"Meta":{
"foo":"bar",
"baz":"pipe"
}
}
}
@@ -200,11 +200,13 @@ The `Job` object supports the following keys:
An example `Update` block:
```
```json
{
"Update": {
"MaxParallel" : 3,
"Stagger" : 10000000000
}
}
```
* `Periodic` - `Periodic` allows the job to be scheduled at fixed times, dates
@@ -230,13 +232,15 @@ The `Job` object supports the following keys:
An example `periodic` block:
```
```json
{
"Periodic": {
"Spec": "*/15 * * * * *"
"SpecType": "cron",
"Enabled": true,
"ProhibitOverlap": true
}
}
```
### Task Group
@@ -286,10 +290,12 @@ The `Task` object supports the following keys:
For example, the environment map below will be interpreted:
```
```json
{
"Env": {
"NODE_CLASS" : "${nomad.class}"
}
}
```
* `KillTimeout` - `KillTimeout` is a time duration in nanoseconds. It can be
@@ -468,10 +474,12 @@ If the amount of disk resource requested for the task is less than the total
amount of disk space needed to retain the rotated set of files, Nomad will return
a validation error when a job is submitted.
```
"LogConfig: {
```json
{
"LogConfig": {
"MaxFiles": 3,
"MaxFileSizeMB": 10
}
}
```
@@ -506,30 +514,31 @@ The `Artifact` object supports the following keys:
[here](https://github.com/hashicorp/go-getter/tree/ef5edd3d8f6f482b775199be2f3734fd20e04d4a#protocol-specific-options-1).
An example is given below:
```
"GetterOptions": {
```json
{
"GetterOptions": {
"checksum": "md5:c4aa853ad2215426eb7d70a21922e794",
"aws_access_key_id": "<id>",
"aws_access_key_secret": "<secret>",
"aws_access_token": "<token>"
}
}
```
An example of downloading and unzipping an archive is as simple as:
```
"Artifacts": [
```json
{
"Artifacts": [
{
# The archive will be extracted before the task is run, making
# it easy to ship configurations with your binary.
"GetterSource": "https://example.com/my.zip",
"GetterOptions": {
"checksum": "md5:7f4b3e3b4dd5150d4e5aaaa5efada4c3"
}
}
]
]
}
```
#### S3 examples
@@ -541,28 +550,37 @@ S3 region specific endpoints can be found
[here](http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region).
Path based style:
```
"Artifacts": [
```json
{
"Artifacts": [
{
"GetterSource": "https://s3-us-west-2.amazonaws.com/my-bucket-example/my_app.tar.gz",
}
]
]
}
```
or to override automatic detection in the URL, use the S3-specific syntax:
```
"Artifacts": [
```json
{
"Artifacts": [
{
"GetterSource": "s3::https://s3-eu-west-1.amazonaws.com/my-bucket-example/my_app.tar.gz",
}
]
]
}
```
Virtual hosted based style:
```
"Artifacts": [
```json
{
"Artifacts": [
{
"GetterSource": "my-bucket-example.s3-eu-west-1.amazonaws.com/my_app.tar.gz",
}
]
]
}
```


@@ -32,11 +32,9 @@ port will be allocated dynamically by the scheduler, and your service will have
to read an environment variable (see below) to know which port to bind to at
startup.
```
task "webservice" {
...
```hcl
task "example" {
resources {
...
network {
port "http" {}
port "https" {}
@@ -51,11 +49,9 @@ Static ports bind your job to a specific port on the host they're placed on.
Since multiple services cannot share a port, the port must be open in order to
place your task.
```
task "dnsservice" {
...
```hcl
task "example" {
resources {
...
network {
port "dns" {
static = 53
@@ -75,7 +71,7 @@ discovery, and used for the name of the environment variable that indicates
which port your application should bind to. For example, we've labeled this
port `http`:
```
```hcl
port "http" {}
```
@@ -94,15 +90,17 @@ means that your application can listen on a fixed port (it does not need to
read the environment variable) and the dynamic port will be mapped to the port
in your container or VM.
```
driver = "docker"
```hcl
task "example" {
driver = "docker"
port "http" {}
port "http" {}
config {
config {
port_map = {
http = 8080
}
}
}
```


@@ -11,8 +11,8 @@ description: |-
Nomad schedules workloads of various types across a cluster of generic hosts.
Because of this, placement is not known in advance and you will need to use
service discovery to connect tasks to other services deployed across your
cluster. Nomad integrates with [Consul](https://www.consul.io) to provide service
discovery and monitoring.
cluster. Nomad integrates with [Consul](https://www.consul.io) to provide
service discovery and monitoring.
Note that in order to use Consul with Nomad, you will need to configure and
install Consul on your nodes alongside Nomad, or schedule it as a system job.
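
A rough sketch of the system-job approach (the job layout, datacenter name, and binary path are assumptions, not a production Consul configuration):

```hcl
job "consul" {
  datacenters = ["dc1"]

  # A system job runs one instance on every eligible client node.
  type = "system"

  group "consul" {
    task "agent" {
      driver = "exec"

      config {
        # Assumes the Consul binary is already installed at this path.
        command = "/usr/local/bin/consul"
        # The -dev flag is only suitable for experimentation.
        args    = ["agent", "-dev"]
      }
    }
  }
}
```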
@@ -34,18 +34,22 @@ ports.
A brief example of a service definition in a task:
```
```hcl
group "database" {
task "mysql" {
driver = "docker"
service {
tags = ["master", "mysql"]
port = "db"
check {
type = "tcp"
interval = "10s"
timeout = "2s"
}
check {
type = "script"
name = "check_table"
@@ -55,13 +59,14 @@ group "database" {
timeout = "5s"
}
}
resources {
cpu = 500
memory = 1024
network {
mbits = 10
port "db" {
}
port "db" {}
}
}
}