
Docker Manual


IMPORTANT:
Some of the content here is a personal summary/abridgment of the Official Docker Guide. Feel free to refer to the official site if any of the sections written here are unclear.


Docker Intro

Docker provides the ability to package and run an application in a loosely isolated environment called a container, which separates your applications from your infrastructure. The isolation and security allow you to run many containers simultaneously on a given host.

The use of containers to deploy applications is called containerization. Containers are not new, but their use for easily deploying applications is.

Docker provides tooling and a platform to manage the lifecycle of your containers:

  • Develop your application and its supporting components using containers.
  • The container becomes the unit for distributing and testing your application.
  • When you’re ready, deploy your application into your production environment, as a container or an orchestrated service. This works the same whether your production environment is a local data center, a cloud provider, or a hybrid of the two.

Docker Basics

Docker is mainly composed of the following elements (the first three are referred to as Docker Engine by the official site):

  • A server (docker daemon) which is a type of long-running program called a daemon process (the dockerd command).
  • A command line interface (CLI) client that is used by most users (the docker command).
  • A REST API which specifies interfaces that CLI programs can use to talk to the daemon and instruct it what to do.
  • A Docker registry, which stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default.

So we see that Docker uses a client-server architecture. The docker client talks to the docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.
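
As a hedged illustration of this client-server split (endpoints vary by installation; the remote host below is hypothetical), the docker CLI can be pointed at a particular daemon with the -H/--host flag or the DOCKER_HOST environment variable:

$ docker -H unix:///var/run/docker.sock ps        # talk to the local daemon over its Unix socket (the Linux default)
$ DOCKER_HOST=ssh://user@remote-host docker ps    # run the same client command against a daemon on another machine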

Docker Architecture

  • The Docker daemon

    • The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
  • The Docker client

    • The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
  • Docker registries

    • A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry. If you use Docker Datacenter (DDC), it includes Docker Trusted Registry (DTR).
  • Docker objects

    • When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. The section below is a brief overview of some of those objects.

Docker Objects

  • IMAGE
    • An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.
  • CONTAINER

    • A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.

      Fundamentally, a container is nothing but a running process, with some added encapsulation features applied to it in order to keep it isolated from the host and from other containers. One of the most important aspects of container isolation is that each container interacts with its own private filesystem.

      By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

      A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.

  • SERVICE

    • Services allow you to scale containers across multiple Docker daemons, which all work together as a swarm with multiple managers and workers. Each member of a swarm is a Docker daemon, and all the daemons communicate using the Docker API. A service allows you to define states, such as the number of replicas of the service that must be available at any given time.

      By default, the service is load-balanced across all worker nodes. To the consumer, the Docker service appears to be a single application. Docker Engine supports swarm mode in Docker 1.12 and higher.
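
      As a minimal sketch of this idea (assuming a swarm has already been initialized with docker swarm init; the service name my-web is arbitrary), you declare the desired state and the swarm maintains it:

      $ docker service create --name my-web --replicas 3 --publish 8080:80 nginx   # desired state: 3 replicas of nginx
      $ docker service ls                                                          # check how many replicas are running
      $ docker service scale my-web=5                                              # change the desired state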

Installing Docker

Please follow the Official Guide.

Basic Workflow

In general, the development workflow for containerized applications looks like this:

  1. Create a Docker image containing the components of your application
  2. Test those components
  3. Assemble your containers using the image file and supporting infrastructure into a complete application
  4. Test, share, and deploy your complete containerized application.

Quickstart for Building an Image and a Container

First, we could use an existing project to demonstrate some concepts mentioned above.

  1. Run:

    git clone https://github.com/dockersamples/node-bulletin-board
    cd node-bulletin-board/bulletin-board-app

    Then you will see that there is a file called Dockerfile. A Dockerfile describes how to build an image that assembles a container's private filesystem, and can also contain some metadata describing how to run a container based on that image.

    The bulletin board app Dockerfile looks like this:

    # Use the official image as a parent image.
    FROM node:current-slim

    # Set the working directory.
    WORKDIR /usr/src/app

    # Copy the file from your host to your current location.
    COPY package.json .

    # Run the command inside your image filesystem.
    RUN npm install

    # Inform Docker that the container is listening on the specified port at runtime.
    EXPOSE 8080

    # Run the specified command within the container.
    CMD [ "npm", "start" ]

    # Copy the rest of your app's source code from your host to your image filesystem.
    COPY . .

    You can think of these Dockerfile commands as a step-by-step recipe on how to build up your image.

  2. Make sure you’re in the directory node-bulletin-board/bulletin-board-app in a terminal or PowerShell using the cd command. Let’s build your bulletin board image:

    docker build --tag bulletinboard:1.0 .

    You’ll see Docker step through each instruction in your Dockerfile, building up your image as it goes:

    • Start FROM the pre-existing node:current-slim image. This is an official image, built by the node.js vendors and validated by Docker to be a high-quality image containing the Node.js Long Term Support (LTS) interpreter and basic dependencies.

    • Use WORKDIR to specify that all subsequent actions should be taken from the directory /usr/src/app in your image filesystem (never the host’s filesystem).

    • COPY the file package.json from your host to the present location (.) in your image (so in this case, to /usr/src/app/package.json)

    • RUN the command npm install inside your image filesystem (which will read package.json to determine your app’s node dependencies, and install them)

    • COPY in the rest of your app’s source code from your host to your image filesystem.

      If successful, the build process should end with a message Successfully tagged bulletinboard:1.0.

      The steps above built up the filesystem of our image, but there are other lines in your Dockerfile.

    • The CMD directive is the first example of specifying some metadata in your image that describes how to run a container based on this image. In this case, it’s saying that the containerized process that this image is meant to support is npm start.

    • The EXPOSE 8080 informs Docker that the container is listening on port 8080 at runtime.

  3. Now you have an image and you can start a container based on the image:

    docker run --publish 8000:8080 --detach --name bb bulletinboard:1.0

    This has a couple of common flags:

    • --publish asks Docker to forward traffic incoming on the host’s port 8000, to the container’s port 8080. Containers have their own private set of ports, so if you want to reach one from the network, you have to forward traffic to it in this way. Otherwise, firewall rules will prevent all network traffic from reaching your container, as a default security posture.
    • --detach asks Docker to run this container in the background.
    • --name specifies a name with which you can refer to your container in subsequent commands, in this case bb.
  4. Now, you can visit the application in a browser at localhost:8000. At this step, it would be the time to run unit tests, for example. You would normally do everything you could to ensure your container works the way you expected.

    Once you’re satisfied that your bulletin board container works correctly, you can delete it:

    docker rm --force bb

    The --force option stops a running container, so it can be removed. If you stop the container running with docker stop bb first, then you do not need to use --force to remove it.

At this point, you’ve successfully built an image, performed a simple containerization of an application using that image, and confirmed that your app runs successfully in its container.

Quickstart for Sharing Images on Docker Hub

At this point, you've built a containerized application. The final step is to share your images on a registry like Docker Hub, so they can be easily downloaded and run on any destination machine. (First, you need a Docker Hub account; if you need help, follow this guide.)

  1. Once you have an account, you need to create a repository. At this point, you can only specify a repository name, for example, bulletin, and click create.

  2. Now you are ready to share your image on Docker Hub, but there’s one thing you must do first: images must be namespaced correctly to share on Docker Hub. Specifically, you must name images like <Docker ID>/<Repository Name>:<tag>.

    For example, if your username is abc and the name of your created repository is bulletin, then you need to tag your image as abc/bulletin:<whateverVersionHere>. This is because, when you push in the next step, Docker looks specifically for abc/bulletin on Docker Hub; if you misspell the name, the push will not go to the correct location. (The renaming itself is done with docker tag; see the sketch after this list.)

  3. Finally, push your image to Docker Hub. For example, like this:

    docker push abc/bulletin:1.0
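
    For step 2 above, the renaming is done with docker tag, which gives the same image an additional Docker Hub-compatible name; a minimal sketch reusing the hypothetical abc/bulletin example:

    docker tag bulletinboard:1.0 abc/bulletin:1.0    # same image ID, new name in the <Docker ID>/<Repository>:<tag> form
    docker login                                     # authenticate to Docker Hub before pushing
    docker push abc/bulletin:1.0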

Now that your image is available on Docker Hub, you’ll be able to run it anywhere. By moving images around in this way, you no longer need to install any dependencies except Docker on the machines you want to run your software on. The dependencies of containerized applications are completely encapsulated and isolated within your images, which you can share using Docker Hub as described above.

However, there is one thing to keep in mind: at the moment, you’ve only pushed your image to Docker Hub; what about your Dockerfile? A crucial best practice is to keep these in version control (e.g. using git), perhaps alongside your source code for your application. You can add a link or note in your Docker Hub repository description indicating where these files can be found, preserving the record not only of how your image was built, but how it’s meant to be run as a full application.

Starting and Stopping Containers

  • To create a new container from an image and start it, use docker run:

    docker run [options] <image> [command] [argument]

    Usually, you would want to specify a name for your container, which means you add the --name option. For example:

    docker run --name=Ubuntu-Test ubuntu:14.04

    where ubuntu:14.04 would be the ubuntu image with version/tag 14.04. However, if you do not define a name for your newly created container, the daemon will generate a random string name, which you can later check with the docker ps command (see the section Viewing and Removing Containers).

    However, this would only start a container. If you would like to interact with it, you need to add the -i and -t options to enter interactive mode. For example, to get an interactive bash shell in a new Ubuntu container, you could run:

    docker run -it --name=Ubuntu_Test ubuntu:14.04

    Instead of the -i and -t options, you can also use the attach command to connect to a running container:

    docker attach container_id

    (To exit the container's shell, type exit.)

  • To stop a container, you could use the docker stop command:

    docker stop [option] <container_id>

    By default, you get a 10 second grace period when you run docker stop: the container is asked to shut down its services, and if it is still running after those 10 seconds it is killed. You can use the --time option to define a different grace period, expressed in seconds, for example:

    docker stop --time=20 <container_id>

    If you want to immediately kill a docker container without waiting for the grace period to end, use:

    docker kill [option] <container_id>

    Additionally, a useful command is to stop (or kill) all running containers:

    docker stop $(docker ps -a -q)

    If you want to kill those containers, just replace the stop with kill.

Viewing and Removing Unwanted Images

  • To view the images, you could use the command:

    docker images

    which provides a list of the images on your machine. For example, the output might look like this:

    REPOSITORY      TAG            IMAGE ID       CREATED       SIZE
    bulletinboard   1.0            2cd431e8ee37   3 hours ago   182MB
    mysql           8.0            30f937e841c8   4 days ago    541MB
    node            current-slim   8ec3841e41bb   4 days ago    165MB
    ubuntu          latest         1d622ef86b13   4 weeks ago   73.9MB
  • To remove an image, you need to specify both the name and the version (tag) in the format <name>:<version>. For example, to remove the bulletinboard image, you could do:

    docker image rm -f bulletinboard:1.0

    However, if the image is tagged latest, you can just specify the name:

    docker image rm -f ubuntu

    Note:

    • The -f or --force option is only necessary when a container based on the image still exists (running or stopped). Otherwise, the image can be removed with a plain docker image rm.
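
    Two related clean-up commands (standard Docker CLI, not covered above) are handy when you only know the IMAGE ID or want to bulk-remove unused images:

    docker image rm 2cd431e8ee37    # remove an image by the IMAGE ID shown in docker images
    docker image prune              # remove dangling (untagged) images; add -a to remove all images not used by a container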

Viewing and Removing Containers

  • To list all running containers, you can use the command:

    docker ps

    which, for example, could give you:

    CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS                             PORTS                 NAMES
    a22b3d54d5f4   mysql/mysql-server:latest   "/entrypoint.sh mysq…"   18 seconds ago   Up 17 seconds (health: starting)   3306/tcp, 33060/tcp   test-sql-server

    However, if you would also like to see stopped containers, you could use the -a or --all flag:

    docker ps -a

    Some of the other useful options include:

    Option          Default   Description
    --filter, -f    -         Filter output based on conditions provided
    --format        -         Pretty-print containers using a Go template
    --last, -n      -1        Show n last created containers (includes all states)
    --latest, -l    -         Show the latest created container (includes all states)
    --no-trunc      -         Don't truncate output
    --quiet, -q     -         Only display numeric IDs

    The --filter option can be quite useful for selecting the containers you want to see. For example, to list only containers whose name contains the substring test, you could run:

    docker ps --filter "name=test"

    For a full list of examples using --format, please refer to the Docker Doc here. (A small --format sketch is also shown after this list.)

  • On the other hand, to remove unwanted containers, you could use the command:

    docker container rm <container-id>

    where the <container-id> could be seen if you run the docker ps command shown above. If the container is running, you could also force remove it with the --force or -f option.

    For other commands/options related to docker container rm, please refer to the Docker Doc here.
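
    As the small --format sketch promised above (the Go template fields .ID, .Names, and .Status are standard ones), together with a --filter example that is convenient for scripting:

    docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Status}}"
    docker ps -a --filter "status=exited" -q    # only the IDs of exited containers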

Copying Files to and from a Container

  • To copy local files to a container, you can use the command:

    docker cp <localPath>/<fromFile> <container-id>:<pathInContainer>/<toFile>

    For example:

    docker cp foo.txt test-container:/root/foo.txt

    Then you can see the file foo.txt in your container at the path /root/foo.txt.

  • Similarly, to copy a file from a container to your local machine, you can use:

    docker cp <container-id>:<pathInContainer>/<fromFile> <localPath>/<toFile>

Configuring your Docker Daemon

To configure the Docker daemon using a JSON file, create a file at /etc/docker/daemon.json on Linux systems, or C:\ProgramData\docker\config\daemon.json on Windows. On macOS, go to the whale icon in the menu bar > Preferences > Daemon > Advanced.

For example, if you want your Docker daemon to run in debug mode, use TLS, and listen for traffic routed to 192.168.59.3 on port 2376, you would have the following configuration in your daemon.json (you can learn what configuration options are available in the dockerd reference docs):

{
  "debug": true,
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://192.168.59.3:2376"]
}

However, you could also manually start a daemon with the same configuration in your command line:

dockerd --debug \
--tls=true \
--tlscert=/var/docker/server.pem \
--tlskey=/var/docker/serverkey.pem \
--host tcp://192.168.59.3:2376

Note:

  • If you use a daemon.json file and also pass options to the dockerd command manually or using start-up scripts, and these options conflict, Docker fails to start with an error such as:
unable to configure the Docker daemon with file /etc/docker/daemon.json:
the following directives are specified both as a flag and in the configuration
file: hosts: (from flag: [unix:///var/run/docker.sock], from file: [tcp://127.0.0.1:2376])

If you see an error similar to this one and you are starting the daemon manually with flags, you may need to adjust your flags or the daemon.json to remove the conflict.

Returning to the debug example: if you want to enable debug mode, set the debug key to true in the daemon.json file. This method works for every Docker platform and is recommended.

  • If the file is empty, add the following:

    {
      "debug": true
    }
  • If the file already contains JSON, just add the key "debug": true, being careful to add a comma to the end of the line if it is not the last line before the closing bracket. Also verify that if the log-level key is set, it is set to either info or debug. info is the default, and possible values are debug, info, warn, error, fatal.

  • Send a HUP signal to the daemon to cause it to reload its configuration.

    • On Linux hosts, use the following command.

      $ sudo kill -SIGHUP $(pidof dockerd)
    • On Windows hosts, restart Docker.

Note:

  • Instead of following this procedure, you can also stop the Docker daemon and restart it manually with the debug flag -D. However, this may result in Docker restarting with a different environment than the one the hosts’ startup scripts create, and this may make debugging more difficult.

Starting your Docker Daemon Manually

Once Docker is installed, you need to start the Docker daemon. Most Linux distributions use systemctl to start services. If you do not have systemctl, use the service command.

  • systemctl:

    $ sudo systemctl start docker
  • service:

    $ sudo service docker start

Starting your Containers Automatically

Docker provides restart policies to control whether your containers start automatically when they exit, or when Docker restarts. Restart policies ensure that linked containers are started in the correct order. Docker recommends that you use restart policies, and avoid using process managers to start containers.

To configure the restart policy for a container, use the --restart flag when using the docker run command. The value of the --restart flag can be any of the following:

Flag             Description
no               Do not automatically restart the container (the default).
on-failure       Restart the container if it exits due to an error, which manifests as a non-zero exit code.
always           Always restart the container if it stops. If it is manually stopped, it is restarted only when the Docker daemon restarts or the container itself is manually restarted. (See the second bullet in the notes below.)
unless-stopped   Similar to always, except that when the container is stopped (manually or otherwise), it is not restarted even after the Docker daemon restarts.

The following example starts a Redis container and configures it to always restart unless it is explicitly stopped or Docker is restarted.

$ docker run -d --restart unless-stopped redis
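
The on-failure policy also accepts an optional maximum retry count; a minimal sketch (the redis image is just an example):

$ docker run -d --restart on-failure:5 redis    # retry at most 5 times if the container exits with a non-zero code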

Note:

  • A restart policy only takes effect after a container starts successfully (when you run the command specifying the restart policy). In this case, starting successfully means that the container is up for at least 10 seconds and Docker has started monitoring it. This prevents a container which does not start at all from going into a restart loop.
  • If you manually stop a container, its restart policy is ignored until the Docker daemon restarts or the container is manually restarted. This is another attempt to prevent a restart loop.
  • Restart policies only apply to containers. Restart policies for swarm services are configured differently. See the flags related to service restart.

Keep Containers Alive during Daemon Downtime

By default, when the Docker daemon terminates, it shuts down running containers. Starting with Docker Engine 1.12, you can configure the daemon so that containers remain running if the daemon becomes unavailable. This functionality is called live restore.

There are two ways to enable the live restore setting to keep containers alive when the daemon becomes unavailable. Only do one of the following.

  • Add the configuration to the daemon configuration file. On Linux, this defaults to /etc/docker/daemon.json. On Docker Desktop for Mac or Docker Desktop for Windows, select the Docker icon from the task bar, then click Preferences -> Daemon -> Advanced.

    • Use the following JSON to enable live-restore.

      {
        "live-restore": true
      }
    • Then, restart the Docker daemon. On Linux, you can avoid a restart (and avoid any downtime for your containers) by reloading the Docker daemon. If you use systemd, then use the command systemctl reload docker. Otherwise, send a SIGHUP signal to the dockerd process.

  • The other way is to pass the --live-restore flag when you start the dockerd process manually. This approach is not recommended because it does not set up the environment that systemd or another process manager would use when starting the Docker process. This can cause unexpected behavior.

Note:

  • The live restore option only works to restore containers if the daemon options, such as bridge IP addresses and graph driver, did not change. If any of these daemon-level configuration options have changed, the live restore may not work and you may need to manually start the containers.
  • If the daemon is down for a long time, running containers may fill up the FIFO log the daemon normally reads. A full log blocks containers from logging more data. The default buffer size is 64K. If the buffers fill, you must restart the Docker daemon to flush them.

On Linux, you can modify the kernel’s buffer size by changing /proc/sys/fs/pipe-max-size. You cannot modify the buffer size on Docker Desktop for Mac or Docker Desktop for Windows.

  • The live restore option only pertains to standalone containers, and not to swarm services. Swarm services are managed by swarm managers. If swarm managers are not available, swarm services continue to run on worker nodes but cannot be managed until enough swarm managers are available to maintain a quorum.

Running Multiple Services in a Container

This involves writing scripts and editing the Dockerfile. For details please refer to this link.

Viewing Runtime metrics for a Container

There are a couple of ways where you can view the current runtime metrics for your container.

  1. You can use the docker stats command to live stream a container’s runtime metrics. The command supports CPU, memory usage, memory limit, and network IO metrics.

    The following is a sample output from the docker stats command:

    $ docker stats redis1 redis2

    CONTAINER   CPU %   MEM USAGE / LIMIT   MEM %   NET I/O         BLOCK I/O
    redis1      0.07%   796 KB / 64 MB      1.21%   788 B / 648 B   3.568 MB / 512 KB
    redis2      0.07%   2.746 MB / 64 MB    4.29%   1.266 KB /
  2. Using Linux Control Groups

    Q: What is a Linux Control Group?

    Truth be told, certain software applications in the wild may need to be controlled or limited, at least for the sake of stability and, to some degree, security. Far too often, a bug or just bad code can disrupt an entire machine and potentially cripple an entire ecosystem. Fortunately, a way exists to keep those same applications in check. Control groups (cgroups) are a kernel feature that limits, accounts for, and isolates the CPU, memory, disk I/O, and network usage of one or more processes.

    The primary design goal for cgroups was to provide a unified interface for managing processes or whole operating-system-level virtualization, including Linux Containers (LXC). The cgroups framework provides the following:

    • Resource limiting: a group can be configured not to exceed a specified memory limit or use more than the desired amount of processors or be limited to specific peripheral devices.

    • Prioritization: one or more groups may be configured to utilize fewer or more CPUs or disk I/O throughput.

    • Accounting: a group’s resource usage is monitored and measured.

    • Control: groups of processes can be frozen or stopped and restarted.

      For more details, please visit this site.

      Now, for each container, one cgroup is created in each hierarchy. On older systems with older versions of the LXC userland tools, the name of the cgroup is the name of the container. With more recent versions of the LXC tools, the cgroup is lxc/<container_name>.

    For Docker containers using cgroups, the container name is the full ID or long ID of the container. If a container shows up as ae836c95b4c3 in docker ps, its long ID might be something like ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79. You can look it up with docker inspect or docker ps --no-trunc.

    Putting everything together to look at the memory metrics for a Docker container, take a look at /sys/fs/cgroup/memory/docker/<longid>/, where for each subsystem (memory, CPU, and block I/O), one or more pseudo-files exist and contain statistics:

    • Memory metrics: memory.stat
    • CPU metrics: cpuacct.stat
    • Network metrics: these are not directly exposed by the control group, but the information can be gathered from other sources. For details, please follow the link below.

    For a detailed explanation of what you see in all those files, please refer to the Official Docker Documentation.
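
    As a hedged sketch of reading these pseudo-files directly (this assumes the cgroup v1 layout under /sys/fs/cgroup described above; paths differ on cgroup v2 systems, and bb is the container name from the earlier quickstart):

    CID=$(docker inspect --format '{{.Id}}' bb)              # resolve the container's long ID
    cat /sys/fs/cgroup/memory/docker/$CID/memory.stat        # memory metrics
    cat /sys/fs/cgroup/cpuacct/docker/$CID/cpuacct.stat      # CPU metrics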

View Logs for a Particular Container or Service

The docker logs command shows information logged by a running container. The docker service logs command shows information logged by all containers participating in a service. The information that is logged and the format of the log depends almost entirely on the container’s endpoint command.

By default, docker logs or docker service logs shows the command’s output just as it would appear if you ran the command interactively in a terminal. UNIX and Linux commands typically open three I/O streams when they run, called STDIN, STDOUT, and STDERR.

  • STDIN is the command’s input stream (i.e. user input), which may include input from the keyboard or input from another command.
  • STDOUT is usually a command’s normal output
  • STDERR is typically used to output error messages.

By default, docker logs shows the command’s STDOUT and STDERR.
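
A few commonly used docker logs flags (all standard CLI options), shown here against the bb container from the earlier quickstart:

$ docker logs bb                            # everything logged to STDOUT/STDERR so far
$ docker logs --tail 100 --timestamps bb    # only the last 100 lines, with timestamps
$ docker logs --follow bb                   # keep streaming new output, similar to tail -f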

However, in some cases, docker logs may not show useful information unless you take additional steps.

  • If you use a logging driver which sends logs to a file, an external host, a database, or another logging back-end, docker logs may not show useful information.
    • In this case, your logs are processed in other ways and you may choose not to use docker logs (continue to the section Configure logging drivers).
  • If your image runs a non-interactive process such as a web server or a database, that application may send its output to log files instead of STDOUT and STDERR.
    • In this case, the official nginx image shows one workaround, and the official Apache httpd image shows another.
      • The official nginx image creates a symbolic link from /var/log/nginx/access.log to /dev/stdout, and creates another symbolic link from /var/log/nginx/error.log to /dev/stderr, overwriting the log files and causing logs to be sent to the relevant special device instead. See this Dockerfile.
      • The official httpd image changes the httpd application's configuration to write its normal output directly to /proc/self/fd/1 (which is STDOUT) and its errors to /proc/self/fd/2 (which is STDERR). See this Dockerfile.

Configuring Logging Drivers

Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers (I interpret them as driving your logs to a specific place with a specific format).

By default, Docker uses the json-file logging driver to capture all output. With this default driver, a container's log file can be found at:

/var/lib/docker/containers/<container id>/<container id>-json.log

This will contain the same log output as you will see if you run docker logs <container-id>. If you do not specify a logging driver, the default is json-file. Thus, the default output for commands such as docker inspect <CONTAINER> is also JSON.

To change the Docker daemon's default to a specific logging driver, set the value of log-driver to the name of the logging driver in the daemon.json file, which is located in /etc/docker/ on Linux hosts or C:\ProgramData\docker\config\ on Windows Server hosts. The following example explicitly sets the default logging driver to syslog:

{
  "log-driver": "syslog"
}

If the logging driver has configurable options, you can set them in the daemon.json file as a JSON object with the key log-opts. The following example sets two configurable options on the json-file logging driver:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "production_status",
    "env": "os,customer"
  }
}

You can also specify the logging driver directly for a particular container (this overrides the daemon.json defaults):

$ docker run -d --name=<container-name> \
--log-driver json-file \
--log-opt max-size=10m --log-opt max-file=3 --log-opt labels=production_status --log-opt env=os,customer \
<yourImage>:<tag>

where:

  • max-size
    • The maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (k=kb, m=mb, or g=gb). Defaults to -1 (unlimited).
    • Example: --log-opt max-size=10m
  • max-file
    • The maximum number of log files that can be present. If rolling the logs creates excess files, the oldest file is removed. Only effective when max-size is also set. A positive integer. Defaults to 1.
    • Example: --log-opt max-file=3
  • labels
    • Applies when starting the Docker daemon. A comma-separated list of logging-related labels this daemon accepts. Used for advanced log tag options.
    • Example: --log-opt labels=production_status,geo
  • env
    • Applies when starting the Docker daemon. A comma-separated list of logging-related environment variables this daemon accepts. Used for advanced log tag options.
    • Example: --log-opt env=os,customer

For more details on the JSON File logging driver, please visit this link

Note:

  • log-opts configuration options in the daemon.json configuration file must be provided as strings. Boolean and numeric values (such as the value for max-file in the example above) must therefore be enclosed in quotes (").

Now, to find the current default logging driver for the Docker daemon, run docker info and search for Logging Driver. You can use the following command on Linux, macOS, or PowerShell on Windows:

$ docker info --format '{{.LoggingDriver}}'

json-file

To find the current logging driver for a running container, if the daemon is using the json-file logging driver, run the following docker inspect command, substituting the container name or ID for <CONTAINER>:

$ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <CONTAINER>

json-file

Using Environment Variables or Labels with Logging Drivers

Some logging drivers add the value of a container's --env|-e or --label flags to the container's logs. This example starts a container using the Docker daemon's default logging driver (let's assume json-file) but sets the environment variable os=ubuntu.

$ docker run -dit --label production_status=testing -e os=ubuntu alpine sh

If the logging driver supports it, this adds additional fields to the logging output. The following output is generated by the json-file logging driver:

"attrs":{"production_status":"testing","os":"ubuntu"}

Supported Logging Drivers

The following logging drivers are supported. If you are using logging driver plugins, you may see more options.

Driver       Description
none         No logs are available for the container and docker logs does not return any output.
local        Logs are stored in a custom format designed for minimal overhead.
json-file    The logs are formatted as JSON. The default logging driver for Docker.
syslog       Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine.
journald     Writes log messages to journald. The journald daemon must be running on the host machine.
gelf         Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash.
fluentd      Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.
awslogs      Writes log messages to Amazon CloudWatch Logs.
splunk       Writes log messages to splunk using the HTTP Event Collector.
etwlogs      Writes log messages as Event Tracing for Windows (ETW) events. Only available on Windows platforms.
gcplogs      Writes log messages to Google Cloud Platform (GCP) Logging.
logentries   Writes log messages to Rapid7 Logentries.

However, there are limitations for using logging drivers in general as well:

  • Users of Docker Enterprise can make use of “dual logging”, which enables you to use the docker logs command for any logging driver. Refer to reading logs when using remote logging drivers for information about using docker logs to read container logs locally for many third party logging solutions, including:
    • syslog
    • gelf
    • fluentd
    • awslogs
    • splunk
    • etwlogs
    • gcplogs
    • Logentries
  • When using Docker Community Engine, the docker logs command is only available on the following drivers:
    • local
    • json-file
    • journald
  • Reading log information requires decompressing rotated log files, which causes a temporary increase in disk usage (until the log entries from the rotated files are read) and an increased CPU usage while decompressing.
  • The capacity of the host storage where the Docker data directory resides determines the maximum size of the log file information.

Using a Logging Driver Plugin

Docker logging plugins allow you to extend and customize Docker’s logging capabilities beyond those of the built-in logging drivers. A logging service provider can implement their own plugins and make them available on Docker Hub, or a private registry.

  1. First, you need to install the logging driver plugin

    • To install a logging driver plugin, use docker plugin install <org/image>, using the information provided by the plugin developer.
  2. You can list all installed plugins using docker plugin ls, and you can inspect a specific plugin using docker inspect.

  3. If you want to configure the plugin as the default logging driver:

    • After the plugin is installed, you can configure the Docker daemon to use it as the default by setting the plugin’s name as the value of the log-driver key in the daemon.json, as detailed in the previous section. If the logging driver supports additional options, you can set those as the values of the log-opts array in the same file.
  4. If you want to configure a container to use the plugin as the logging driver:

    • After the plugin is installed, you can configure a container to use the plugin as its logging driver by specifying the --log-driver flag to docker run, as detailed in the previous section. If the logging driver supports additional options, you can specify them using one or more --log-opt flags with the option name as the key and the option value as the value.
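
    A minimal sketch of steps 1 and 4 with a hypothetical plugin name (substitute the org/image and options that the plugin developer actually publishes):

    $ docker plugin install example/log-plugin:latest
    $ docker plugin ls
    $ docker run -d --log-driver example/log-plugin --log-opt some-option=value alpine sh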

Customizing the Logging Driver Tag

The tag log option specifies how to format a tag that identifies the container’s log messages. By default, the system uses the first 12 characters of the container ID. To override this behavior, specify a tag option:

$ docker run --log-driver=fluentd --log-opt fluentd-address=myhost.local:24224 --log-opt tag="testTag"

Often, you would like to use a special template markup: a --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}" value yields syslog log lines like:

Aug  7 18:33:19 HOSTNAME hello-world/foobar/5790672ab6a0[9103]: Hello from Docker.

Markup            Description
{{.ID}}           The first 12 characters of the container ID.
{{.FullID}}       The full container ID.
{{.Name}}         The container name.
{{.ImageID}}      The first 12 characters of the container's image ID.
{{.ImageFullID}}  The container's full image ID.
{{.ImageName}}    The name of the image used by the container.
{{.DaemonName}}   The name of the docker program (docker).

Docker Network Overview

One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads. Docker containers and services do not even need to be aware that they are deployed on Docker, or whether their peers are also Docker workloads or not. Whether your Docker hosts run Linux, Windows, or a mix of the two, you can use Docker to manage them in a platform-agnostic way.

However, this topic does not go into OS-specific details about how Docker networks work, so you will not find information about how Docker manipulates iptables rules on Linux or how it manipulates routing rules on Windows servers, and you will not find detailed information about how Docker forms and encapsulates packets or handles encryption.

Network Drivers

Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:

  • bridge:

    • The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
    • This is usually the best choice when you need multiple containers to communicate on the same Docker host.
    • For more details, see the bridge networks section.
  • host:

    • For standalone containers, this will remove network isolation between the container and the Docker host, and use the host’s networking directly. host is only available for swarm services on Docker 17.06 and higher.
    • This is the best choice when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated.
    • For more details, see the host networks section
  • overlay:

    • Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers.
    • This is the best choice when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.
    • For more details, see the overlay networks section
  • macvlan:

    • Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack.
    • This is the best choice when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.
    • For more details, see the macvlan networks section
  • none:

    • For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services.
    • For more details, see the none networks section
  • Network plugins:

    • You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors. See the vendor’s documentation for installing and using a given network plugin.
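
As a hedged illustration of choosing one of these drivers at container start (the flags are standard; alpine and the ip addr command are arbitrary choices):

$ docker run --rm --network bridge alpine ip addr   # default driver; the container gets its own network namespace
$ docker run --rm --network host alpine ip addr     # shares the host's network stack (Linux only)
$ docker run --rm --network none alpine ip addr     # only a loopback interface, no external connectivity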

Using bridge networks

In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network.

The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other. However, bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network.

Now, for bridge networks, we can have either the default bridge network or the user-defined bridge network.

When you start Docker, a default bridge network (also called bridge) is created automatically, and newly-started containers connect to it unless otherwise specified. You can also create user-defined custom bridge networks. User-defined bridge networks are superior to the default bridge network.

  • User-defined bridges provide automatic DNS resolution between containers.

    Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.

    Imagine an application with a web front-end and a database back-end. If you call your containers web and db, the web container can connect to the db container at db, no matter which Docker host the application stack is running on.

    If you run the same application stack on the default bridge network, you need to manually create links between the containers (using the legacy --link flag). These links need to be created in both directions, so you can see this gets complex with more than two containers which need to communicate. Alternatively, you can manipulate the /etc/hosts files within the containers, but this creates problems that are difficult to debug.

  • User-defined bridges provide better isolation.

    All containers without a --network specified are attached to the default bridge network. This can be a risk, as unrelated stacks/services/containers are then able to communicate.

    Using a user-defined network provides a scoped network in which only containers attached to that network are able to communicate.

    If you use a user-defined bridge, during a container’s lifetime, you can connect or disconnect it from user-defined networks on the fly. To remove a container from the default bridge network, you need to stop the container and recreate it with different network options.

  • Each user-defined network creates a configurable bridge.

    If your containers use the default bridge network, you can configure it, but all the containers use the same settings, such as MTU and iptables rules. In addition, configuring the default bridge network happens outside of Docker itself, and requires a restart of Docker.

    User-defined bridge networks are created and configured using docker network create. If different groups of applications have different network requirements, you can configure each user-defined bridge separately, as you create it.

  • Linked containers on the default bridge network share environment variables.

    Originally, the only way to share environment variables between two containers was to link them using the --link flag. This type of variable sharing is not possible with user-defined networks. However, there are superior ways to share environment variables. A few ideas:

    • Multiple containers can mount a file or directory containing the shared information, using a Docker volume.

    • Multiple containers can be started together using docker-compose and the compose file can define the shared variables.

    • You can use swarm services instead of standalone containers, and take advantage of shared secrets and configs.

In general, containers connected to the same user-defined bridge network effectively expose all ports to each other. For a port to be accessible to containers or non-Docker hosts on different networks, that port must be published using the -p or --publish flag.

Creating and Removing a user-defined bridge

  • You can use the docker network create <networkName> command to create a user-defined bridge network.

    For example:

    $ docker network create --driver bridge my-net

    or, since bridge is also the default:

    $ docker network create my-net

    You can also specify the subnet, the IP address range, the gateway, and other options. See the Docker network create reference or the output of docker network create --help for details; a small sketch follows this list.

  • If you don't need a user-defined bridge, you can use the docker network rm <networkName> command to remove a user-defined bridge network. If containers are currently connected to the network, disconnect them first.

    For example, if you have all containers disconnected from the my-net bridge, and you want to remove it:

    $ docker network rm my-net
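
    As the sketch promised above, docker network create also accepts addressing options; the subnet and gateway below are arbitrary private values:

    $ docker network create --driver bridge \
        --subnet 172.28.0.0/16 \
        --gateway 172.28.0.1 \
        my-net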

Connecting and Disconnecting from a user-defined bridge

  • You can specify the connection to a user-defined network in two ways:

    • with the --network flag when you create a container
    • with the docker network connect <networkName> <containerName> when you have a running container
    1. When you create a new container, you can specify one or more --network flags. This example connects a Nginx container to the my-net network. It also publishes port 80 in the container to port 8080 on the Docker host, so external clients can access that port. Any other container connected to the my-net network has access to all ports on the my-nginx container, and vice versa.

      $ docker create --name my-nginx \
      --network my-net \
      --publish 8080:80 \
      nginx:latest
    2. To connect a running container to an existing user-defined bridge, use the docker network connect command. The following command connects an already-running my-nginx container to an already-existing my-net network:

      $ docker network connect my-net my-nginx

      Note:

      • If you need IPv6 support for Docker containers, you need to enable the option on the Docker daemon and reload its configuration, before creating any IPv6 networks or assigning containers IPv6 addresses.
        When you create your network, you can specify the --ipv6 flag to enable IPv6. You can’t selectively disable IPv6 support on the default bridge network.
  • To disconnect a running container from a user-defined bridge, use the docker network disconnect command. The following command disconnects the my-nginx container from the my-net network.

    $ docker network disconnect my-net my-nginx

Enabling Forwarding from Containers to Outside World

By default, traffic from containers connected to the default bridge network is not forwarded to the outside world. To enable forwarding, you need to change two settings. These are not Docker commands and they affect the Docker host’s kernel.

  1. Configure the Linux kernel to allow IP forwarding.
    $ sysctl net.ipv4.conf.all.forwarding=1
  2. Then change the policy for the iptables FORWARD policy from DROP to ACCEPT.
    $ sudo iptables -P FORWARD ACCEPT
    These settings do not persist across a reboot, so you may need to add them to a start-up script.
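
    One hedged way to persist the first setting across reboots on systemd-based distributions (the file name is arbitrary):

    $ echo "net.ipv4.conf.all.forwarding=1" | sudo tee /etc/sysctl.d/99-docker-forwarding.conf
    $ sudo sysctl --system    # reload all sysctl configuration files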

Viewing and Configuring your user-defined bridge

  1. First, after you have created your user-defined bridge, you can use docker network ls to view the networks that docker has:

    $ docker network ls

    NETWORK ID     NAME       DRIVER    SCOPE
    9210b6312956   test-net   bridge    local
    17e324f45964   bridge     bridge    local
    6ed54d316334   host       host      local
    7092879f2cc8   none       null      local
  2. Then you can use docker inspect <network-id> to see details about your bridge:

    $ docker inspect test-net
    [
        {
            "Name": "test-net",
            "Id": "9210b6312956eb510fc1ad59f3ebc1ac14270fc27d44f3b98d0c71bcb5409d83",
            "Created": "2020-06-01T09:37:22.875610678+08:00",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": {},
                "Config": [
                    {
                        "Subnet": "172.18.0.0/16",
                        "Gateway": "172.18.0.1"
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Ingress": false,
            "ConfigFrom": {
                "Network": ""
            },
            "ConfigOnly": false,
            "Containers": {},
            "Options": {},
            "Labels": {}
        }
    ]

    here you see that:

    • there are no containers attached to it: "Containers": {}
    • your Gateway is: "Gateway": "172.18.0.1"
  3. Now, to test your user-defined bridge, you can create 4 containers: the first two will be connected to the bridge you created, the third will only be connected to the default bridge, and the fourth will be connected to both:

    $ docker run -dit --name alpine1 --network test-net alpine ash

    $ docker run -dit --name alpine2 --network test-net alpine ash

    $ docker run -dit --name alpine3 alpine ash

    $ docker run -dit --name alpine4 --network test-net alpine ash

    $ docker network connect bridge alpine4

    Note:

    • Since you can only specify one network when you use the --network flag for docker run, if you need to connect to more than one network, you need to use docker network connect command with a running container.
  4. Now, if all the 4 containers are running properly, you can then inspect the bridge and the test-net to see that the containers are included properly in the "containers" section. You should also be able to see the IP address that each one has been assigned.

  5. Now, you can test their ability to communicate by attaching to each of them from your terminal (so that you can type into stdin and see stdout) with the docker container attach <container-id> command.

    On user-defined networks like test-net, containers can not only communicate by IP address, but can also resolve a container name to an IP address. This capability is called automatic service discovery. However, you will see that containers not connected to the same bridge cannot communicate with each other (both alpine1 and alpine2 cannot communicate with alpine3).

    $ docker container attach alpine1

    # ping -c 2 alpine2

    PING alpine2 (172.18.0.3): 56 data bytes
    64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.085 ms
    64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.090 ms

    --- alpine2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.085/0.087/0.090 ms

    # ping -c 2 alpine4

    PING alpine4 (172.18.0.4): 56 data bytes
    64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.076 ms
    64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.091 ms

    --- alpine4 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.076/0.083/0.091 ms

    # ping -c 2 alpine1

    PING alpine1 (172.18.0.2): 56 data bytes
    64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.026 ms
    64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.054 ms

    --- alpine1 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.026/0.040/0.054 ms

    From alpine1, you should not be able to connect to alpine3 at all, since it is not on the test-net network:

    # ping -c 2 alpine3

    ping: bad address 'alpine3'

    Ctrl+p Ctrl+q

    To exit the attached terminal, use Ctrl+p then Ctrl+q.

  6. Finally, remember that alpine4 is connected to both the default bridge network and test-net. It should be able to reach all of the other containers. However, you will need to address alpine3 by its IP address, which is the behavior of a default bridge. Attach to it and run the tests.

    $ docker container attach alpine4

    # ping -c 2 alpine1

    PING alpine1 (172.18.0.2): 56 data bytes
    64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.074 ms
    64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.082 ms

    --- alpine1 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.074/0.078/0.082 ms

    # ping -c 2 alpine2

    PING alpine2 (172.18.0.3): 56 data bytes
    64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.075 ms
    64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.080 ms

    --- alpine2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.075/0.077/0.080 ms

    # ping -c 2 alpine3
    ping: bad address 'alpine3'

    # ping -c 2 172.17.0.2

    PING 172.17.0.2 (172.17.0.2): 56 data bytes
    64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.089 ms
    64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms

    --- 172.17.0.2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.075/0.082/0.089 ms

    # ping -c 2 alpine4

    PING alpine4 (172.18.0.4): 56 data bytes
    64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.033 ms
    64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.064 ms

    --- alpine4 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.033/0.048/0.064 ms
  7. Now that you have finished your tests, you can stop and remove/disconnect all containers and the test-net network.

    $ docker container stop alpine1 alpine2 alpine3 alpine4

    $ docker container rm alpine1 alpine2 alpine3 alpine4

    $ docker network rm test-net

    or if you still need those containers:

    $ docker container stop alpine1 alpine2 alpine3 alpine4

    $ docker network disconnect test-net alpine1

    $ docker network disconnect test-net alpine2

    $ docker network disconnect test-net alpine4

    $ docker network rm test-net

Using host networks

Using overlay networks

The overlay network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it (including swarm service containers) to communicate securely when encryption is enabled.

When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:

  • an overlay network called ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
  • a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.

You can also create user-defined overlay networks using docker network create, in the same way that you can create user-defined bridge networks. Remember that services or containers can be connected to more than one network at a time, and that services or containers can only communicate across networks they are each connected to.

Creating an Overlay Network With Swarm

  1. First, there are two prerequisites you need to fulfill:

    • Firewall rules for Docker daemons using overlay networks
      You need the following ports open to traffic to and from each Docker host participating on an overlay network:

      • TCP port 2377 for cluster management communications
      • TCP and UDP port 7946 for communication among nodes
      • UDP port 4789 for overlay network traffic
    • Before you can create an overlay network, you need to either initialize your Docker daemon as a swarm manager using docker swarm init or join it to an existing swarm using docker swarm join. Either of these creates the default ingress overlay network which is used by swarm services by default. You need to do this even if you never plan to use swarm services. Afterward, you can create additional user-defined overlay networks.

  2. To create an overlay network for use with swarm services, use a command like the following:

    $ docker network create -d overlay my-overlay

    To create an overlay network which can be used by swarm services or standalone containers to communicate with other standalone containers running on other Docker daemons, add the --attachable flag:

    $ docker network create -d overlay --attachable my-attachable-overlay
  3. Encryption:

    All swarm service management traffic is encrypted by default, using the AES algorithm in GCM mode. Manager nodes in the swarm rotate the key used to encrypt gossip data every 12 hours.

    To encrypt application data as well, add --opt encrypted when creating the overlay network. This enables IPSEC encryption at the level of the vxlan. This encryption imposes a non-negligible performance penalty, so you should test this option before using it in production.

    So, for example, to encrypt your application data as well:

    $ docker network create --opt encrypted --driver overlay --attachable my-attachable-multi-host-network
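    To confirm that the option was applied, you can inspect the network afterwards. This is a minimal check using the network name from the example above; the exact contents of the Options map may differ on your setup:

    $ docker network inspect my-attachable-multi-host-network --format '{{ .Options }}'
    map[encrypted:]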
  4. Now, you can customize your default ingress network, your docker_gwbridge, and your user-defined overlay networks (see the dedicated sections below).

  5. Now, you have four options for how to use overlay networks:

    • Use the default overlay network, which demonstrates how to use the default overlay network that Docker sets up for you automatically when you initialize or join a swarm. This network is not the best choice for production systems.

    • Use user-defined overlay networks which shows how to create and use your own custom overlay networks, to connect services. This is recommended for services running in production.

    • Use an overlay network for standalone containers shows how to communicate between standalone containers on different Docker daemons using an overlay network.

    • Communicate between a container and a swarm service, which sets up communication between a standalone container and a swarm service, using an attachable overlay network. This is supported in Docker 17.06 and higher.

Use the default overlay network

In this example, you start an alpine service and examine the characteristics of the network from the point of view of the individual service containers.

  1. Prerequisites:

    This requires three physical or virtual Docker hosts which can all communicate with one another, all running new installations of Docker 17.03 or higher. This tutorial assumes that the three hosts are running on the same network with no firewall involved.

    These hosts will be referred to as manager, worker-1, and worker-2. The manager host will function as both a manager and a worker, which means it can both run service tasks and manage the swarm. worker-1 and worker-2 will function as workers only.

    Note:

    • To open the ports specified above, you can use the following commands (on both the manager and the workers):
      firewall-cmd --add-port=22/tcp --permanent
      firewall-cmd --add-port=2376/tcp --permanent
      firewall-cmd --add-port=2377/tcp --permanent
      firewall-cmd --add-port=7946/tcp --permanent
      firewall-cmd --add-port=7946/udp --permanent
      firewall-cmd --add-port=4789/udp --permanent
      Afterwards, reload the firewall:

      firewall-cmd --reload

      Then restart Docker:

      systemctl restart docker

      For other means of doing the same thing, please visit this link: https://www.digitalocean.com/community/tutorials/how-to-configure-the-linux-firewall-for-docker-swarm-on-ubuntu-16-04#method-2-%E2%80%94-opening-docker-swarm-ports-using-firewalld
  2. On manager, initialize the swarm. If the host only has one network interface, the --advertise-addr flag is optional.

    $ docker swarm init --advertise-addr=<IP-ADDRESS-OF-MANAGER>

    Make a note of the text that is printed, as this contains the token that you will use to join worker-1 and worker-2 to the swarm. It is a good idea to store the token in a password manager.
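    If you did not save that output, you can print the worker join command again at any time from the manager:

    $ docker swarm join-token worker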

  3. On worker-1, join the swarm. If the host only has one network interface, the --advertise-addr flag is optional.

    $ docker swarm join --token <TOKEN> \
    --advertise-addr <IP-ADDRESS-OF-WORKER-1> \
    <IP-ADDRESS-OF-MANAGER>:2377
  4. On worker-2, join the swarm. If the host only has one network interface, the --advertise-addr flag is optional.

    $ docker swarm join --token <TOKEN> \
    --advertise-addr <IP-ADDRESS-OF-WORKER-2> \
    <IP-ADDRESS-OF-MANAGER>:2377
  5. On manager, list all the nodes. This command can only be run from a manager node.

    $ docker node ls

    ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
    d68ace5iraw6whp7llvgjpu48 * ip-172-31-34-146 Ready Active Leader
    nvp5rwavvb8lhdggo8fcf7plg ip-172-31-35-151 Ready Active
    ouvx2l7qfcxisoyms8mtkgahw ip-172-31-36-89 Ready Active
  6. You can also use the --filter flag to filter by role:

    $ docker node ls --filter role=manager

    ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
    d68ace5iraw6whp7llvgjpu48 * ip-172-31-34-146 Ready Active Leader

    $ docker node ls --filter role=worker

    ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
    nvp5rwavvb8lhdggo8fcf7plg ip-172-31-35-151 Ready Active
    ouvx2l7qfcxisoyms8mtkgahw ip-172-31-36-89 Ready Active
  7. List the Docker networks on manager, worker-1, and worker-2 and notice that each of them now has an overlay network called ingress and a bridge network called docker_gwbridge. Only the listing for manager is shown here:

    $ docker network ls

    NETWORK ID NAME DRIVER SCOPE
    495c570066be bridge bridge local
    961c6cae9945 docker_gwbridge bridge local
    ff35ceda3643 host host local
    trtnl4tqnc3n ingress overlay swarm
    c8357deec9cb none null local
  8. The docker_gwbridge connects the ingress network to the Docker host’s network interface so that traffic can flow to and from swarm managers and workers. If you create swarm services and do not specify a network, they are connected to the ingress network. It is recommended that you use separate overlay networks for each application or group of applications which will work together. In the next procedure, you will create two overlay networks and connect a service to each of them.

  9. CREATE THE SERVICES

    On manager, create a new overlay network called nginx-net:

    $ docker network create -d overlay nginx-net

    You don't need to create the overlay network on the other nodes, because it will be automatically created when one of those nodes starts running a service task which requires it.

    On manager, create a 5-replica Nginx service connected to nginx-net. The service will publish port 80 to the outside world. All of the service task containers can communicate with each other without opening any ports.

    Note: Services can only be created on a manager.

    $ docker service create \
    --name my-nginx \
    --publish target=80,published=80 \
    --replicas=5 \
    --network nginx-net \
    nginx

    The default publish mode of ingress, which is used when you do not specify a mode for the --publish flag, means that if you browse to port 80 on manager, worker-1, or worker-2, you will be connected to port 80 on one of the 5 service tasks, even if no tasks are currently running on the node you browse to. If you want to publish the port using host mode, you can add mode=host to the --publish flag. However, you should also use --mode global instead of --replicas=5 in this case, since only one service task can bind a given port on a given node.
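    Once the tasks are up, you can verify the routing mesh with a plain HTTP request against any node; the address below is a placeholder for one of your hosts:

    $ curl http://<IP-ADDRESS-OF-WORKER-1>:80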

  10. Run docker service ls to monitor the progress of service bring-up, which may take a few seconds.

  11. Inspect the nginx-net network on manager, worker-1, and worker-2. Remember that you did not need to create it manually on worker-1 and worker-2 because Docker created it for you. The output will be long, but notice the Containers and Peers sections. Containers lists all service tasks (or standalone containers) connected to the overlay network from that host.

    From manager, inspect the service using docker service inspect my-nginx and notice the information about the ports and endpoints used by the service.
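    For example, to jump straight to the published ports of the service from the manager (a minimal sketch; the --format filter simply narrows the docker service inspect output):

    $ docker service inspect my-nginx --format '{{ json .Endpoint.Ports }}'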

  12. Create a new network nginx-net-2, then update the service to use this network instead of nginx-net:

    $ docker network create -d overlay nginx-net-2
    $ docker service update \
    --network-add nginx-net-2 \
    --network-rm nginx-net \
    my-nginx
  13. Run docker service ls to verify that the service has been updated and all tasks have been redeployed. Run docker network inspect nginx-net to verify that no containers are connected to it. Run the same command for nginx-net-2 and notice that all the service task containers are connected to it.

    Note: Even though overlay networks are automatically created on swarm worker nodes as needed, they are not automatically removed.

  14. Clean up the service and the networks. From manager, run the following commands. The manager will direct the workers to remove the networks automatically.

    $ docker service rm my-nginx
    $ docker network rm nginx-net nginx-net-2

Using a user-defined overlay

  1. First you need to create the user-defined overlay network.

    $ docker network create -d overlay my-overlay
  2. Start a service using the overlay network and publishing port 80 to port 8080 on the Docker host.

    $ docker service create \
    --name my-nginx \
    --network my-overlay \
    --replicas 1 \
    --publish published=8080,target=80 \
    nginx:latest
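    Because the published port goes through the routing mesh, you can check the service from any node in the swarm (localhost here assumes you run the command on one of the swarm hosts):

    $ curl http://localhost:8080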
  3. Run docker network inspect my-overlay and verify that the my-nginx service task is connected to it, by looking at the Containers section.

  4. Inspect the my-overlay network on manager, worker-1, and worker-2. Remember that you did not need to create it manually on worker-1 and worker-2 because Docker created it for you. The output will be long, but notice the Containers and Peers sections. Containers lists all service tasks (or standalone containers) connected to the overlay network from that host.

  5. Remove the service and the network from the manager once you have verified that everything works.

    $ docker service rm my-nginx

    $ docker network rm my-overlay

Using an overlay network for standalone containers

This example refers to the two nodes in our swarm as host1 and host2. This example also uses Linux hosts, but the same commands work on Windows.

This example demonstrates DNS container discovery: specifically, how to communicate between standalone containers on different Docker daemons using an overlay network. In short, the steps are:

  • On host1, initialize the node as a swarm (manager).
  • On host2, join the node to the swarm (worker).
  • On host1, create an attachable overlay network (test-net).
  • On host1, run an interactive alpine container (alpine1) on test-net.
  • On host2, run an interactive, and detached, alpine container (alpine2) on test-net.
  • On host1, from within a session of alpine1, ping alpine2.
  1. Set up the swarm.

    a. On host1, initialize a swarm (and if prompted, use --advertise-addr to specify the IP address for the interface that communicates with other hosts in the swarm, for instance, the private IP address on AWS).
    b. On host2, join the swarm as instructed above, usually in the form:

    $ docker swarm join --token <your_token> <your_ip_address>:2377
    This node joined a swarm as a worker.
  2. On host1, create an attachable overlay network called test-net:

    $ docker network create --driver=overlay --attachable test-net
    uqsof8phj3ak0rq9k86zta6ht

    Notice the returned NETWORK ID – you will see it again when you connect to it from host2.

  3. On host1, start an interactive (-it) container (alpine1) that connects to test-net:

    $ docker run -it --name alpine1 --network test-net alpine
    / #
  4. On host2, list the available networks, and notice that test-net does not yet exist (it is only created on a host once a container on that host requires it; see the next step):

    $ docker network ls
    NETWORK ID NAME DRIVER SCOPE
    ec299350b504 bridge bridge local
    66e77d0d0e9a docker_gwbridge bridge local
    9f6ae26ccb82 host host local
    omvdxqrda80z ingress overlay swarm
    b65c952a4b2b none null local
  5. On host2, start a detached (-d) and interactive (-it) container (alpine2) that connects to test-net:

    $ docker run -dit --name alpine2 --network test-net alpine
    fb635f5ece59563e7b8b99556f816d24e6949a5f6a5b1fbd92ca244db17a4342

    Note:

    • Automatic DNS container discovery only works with unique container names.
    • Being detached means the container is no longer listening to the command line where docker run was run. By design, containers started in detached mode exit when the root process used to run the container exits, unless you also specify the --rm option. If you use -d with --rm, the container is removed when it exits or when the daemon exits, whichever happens first.
  6. On host2, verify that test-net was created (and has the same NETWORK ID as test-net on host1):

    $ docker network ls
    NETWORK ID NAME DRIVER SCOPE
    ...
    uqsof8phj3ak test-net overlay swarm
  7. On host1, ping alpine2 within the interactive terminal of alpine1:

    / # ping -c 2 alpine2
    PING alpine2 (10.0.0.5): 56 data bytes
    64 bytes from 10.0.0.5: seq=0 ttl=64 time=0.600 ms
    64 bytes from 10.0.0.5: seq=1 ttl=64 time=0.555 ms

    --- alpine2 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 0.555/0.577/0.600 ms
  8. The two containers communicate over the overlay network that connects the two hosts. If you run another alpine container on host2 that is not detached, you can ping alpine1 from host2 (here the --rm option is added for automatic container cleanup):

    $ docker run -it --rm --name alpine3 --network test-net alpine
    / # ping -c 2 alpine1
    / # exit
  9. On host1, close the alpine1 session (which also stops the container):

    / # exit
  10. Clean up your containers and networks:

    You must stop and remove the containers on each host independently because Docker daemons operate independently and these are standalone containers. You only have to remove the network on host1 because when you stop alpine2 on host2, test-net disappears (as it is no longer required).

    a. On host2, stop alpine2, check that test-net was removed, then remove alpine2:

    $ docker container stop alpine2
    $ docker network ls
    $ docker container rm alpine2

    b. On host1, remove alpine1 and test-net:

    $ docker container rm alpine1
    $ docker network rm test-net

Customizing your ingress network

Most users never need to configure the ingress network, but Docker 17.05 and higher allow you to do so. This can be useful if the automatically-chosen subnet conflicts with one that already exists on your network, or you need to customize other low-level network settings such as the MTU.

Customizing the ingress network involves removing and recreating it. This is usually done before you create any services in the swarm. If you have existing services which publish ports, those services need to be removed before you can remove the ingress network.

During the time that no ingress network exists, existing services which do not publish ports continue to function but are not load-balanced. This affects services which publish ports, such as a WordPress service which publishes port 80.

Inspect the ingress network using docker network inspect ingress, and remove any services whose containers are connected to it. These are services that publish ports, such as a WordPress service which publishes port 80. If any such services are still running, the next step fails.
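One way to see which containers are still attached to the ingress network is a format filter on the inspect output (a minimal sketch; the output depends on your services and also includes the ingress sandbox endpoint):

$ docker network inspect ingress --format '{{ range .Containers }}{{ .Name }} {{ end }}'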

  1. Remove the existing ingress network:

    $ docker network rm ingress

    WARNING! Before removing the routing-mesh network, make sure all the nodes
    in your swarm run the same docker engine version. Otherwise, removal may not
    be effective and functionality of newly created ingress networks will be
    impaired.
    Are you sure you want to continue? [y/N]
  2. Create a new overlay network using the --ingress flag, along with the custom options you want to set. This example sets the MTU to 1200, sets the subnet to 10.11.0.0/16, and sets the gateway to 10.11.0.2.

    $ docker network create \
    --driver overlay \
    --ingress \
    --subnet=10.11.0.0/16 \
    --gateway=10.11.0.2 \
    --opt com.docker.network.driver.mtu=1200 \
    my-ingress

    Note:

    • You can give your ingress network a name other than ingress, but you can only have one ingress network at a time. An attempt to create a second one fails.
  3. Restart the services that you stopped in the first step.

Customizing your docker_gwbridge network

docker_gwbridge connects the overlay networks (including the ingress network) to an individual Docker daemon’s physical network (swarm host). Docker creates it automatically when you initialize a swarm or join a Docker host to a swarm, but it is not a Docker device. It exists in the kernel of the Docker host. If you need to customize its settings, you must do so before joining the Docker host to the swarm, or after temporarily removing the host from the swarm.

  1. Stop Docker, and delete the existing docker_gwbridge interface.

    $ sudo ip link set docker_gwbridge down

    $ sudo ip link del dev docker_gwbridge
  2. Start Docker. Do not join or initialize the swarm.

  3. Create or re-create the docker_gwbridge bridge manually with your custom settings, using the docker network create command. This example uses the subnet 10.11.0.0/16. For a full list of customizable options, see Bridge driver options.

    $ docker network create \
    --subnet 10.11.0.0/16 \
    --opt com.docker.network.bridge.name=docker_gwbridge \
    --opt com.docker.network.bridge.enable_icc=false \
    --opt com.docker.network.bridge.enable_ip_masquerade=true \
    docker_gwbridge
  4. Initialize or join the swarm. Since the bridge already exists, Docker does not create it with automatic settings.

Customizing your user-defined overlay network

Using macvlan networks

Using none networks

Using MySQL Server in Docker

  1. You first need to set up your MySQL server in Docker via this Official Guide
  2. Then refer to the MySQL Manual for further reference on using MySQL

Creating and Mounting a Volume to MySQL data

  1. First, to create a volume, you need to run:

    $ docker volume create <my-vol>

    This creates a volume whose data is stored on the host at /var/lib/docker/volumes/<my-vol>/_data
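    You can confirm where the data lives with docker volume inspect, shown here with a format filter (<my-vol> is whatever name you chose):

    $ docker volume inspect <my-vol> --format '{{ .Mountpoint }}'
    /var/lib/docker/volumes/<my-vol>/_data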

  2. You can list the volumes you have with:

    $ docker volume ls

    At this point, if you see other volumes that were created temporarily and you don't want them, you can remove them with:

    $ docker volume rm <my-vol>

    or use the prune subcommand, which deletes all local volumes that are not used by any container:

    $ docker volume prune
  3. Now, you can mount a volume into your container with either the -v or --mount option. To mount a volume at the directory where MySQL stores its data, mount it at /var/lib/mysql. The examples below produce the same result (they use the image mysql:latest and create a container named devtest).

    • --mount

      $ docker run -d \
      --name devtest \
      --mount source=myvol2,target=/var/lib/mysql \
      mysql:latest
    • -v

      $ docker run -d \
      --name devtest \
      -v myvol2:/var/lib/mysql \
      mysql:latest
    • To mount multiple volumes, you can do:

      $ docker run -d \
      --name devtest \
      -v myvol2:/var/lib/mysql \
      -v myvol3:/var \
      mysql:latest

      Note:

      • If the volume name you specify does not exist, Docker creates a volume with that name for you.
      • Multiple containers can mount the same volume at the same time (a minimal sketch follows this list).
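    A minimal sketch of two containers sharing one volume (the names shared-vol and writer, and the alpine image, are illustrative and not part of the MySQL example above):

    $ docker run -d --name writer -v shared-vol:/data alpine sh -c 'echo hello > /data/msg && sleep 3600'
    $ docker run --rm -v shared-vol:/data alpine cat /data/msg
    hello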
  4. You can then use docker inspect devtest to verify that the volume was created and mounted correctly. Look for the Mounts section:

    "Mounts": [
    {
    "Type": "volume",
    "Name": "myvol2",
    "Source": "/var/lib/docker/volumes/myvol2/_data",
    "Destination": "/app",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
    }
    ],

    This shows that the mount is a volume, it shows the correct source and destination, and that the mount is read-write.

  5. If you want to remove a volume, you need to stop and remove the container first, and then remove the volume. Note that volume removal is a separate step.

    $ docker container stop devtest

    $ docker container rm devtest

    $ docker volume rm myvol2

Use a Volume for Read-Only Purpose

For some development applications, the container needs to write into the volume so that changes are propagated back to the Docker host. At other times, the container only needs read access to the data. Remember that multiple containers can mount the same volume, and it can be mounted read-write for some of them and read-only for others, at the same time.

Suppose you want to access the data in the volume nginx-vol for the following examples.

The following examples modify the devtest example above but mount the volume as read-only, by adding readonly or ro to the (empty by default) list of options, after the mount point within the container. If multiple options are present, separate them by commas.

The --mount and -v examples have the same result.

  • --mount

    $ docker run -d \
    --name=nginxtest \
    --mount source=nginx-vol,destination=/usr/share/nginx/html,readonly \
    nginx:latest
  • -v

    $ docker run -d \
    --name=nginxtest \
    -v nginx-vol:/usr/share/nginx/html:ro \
    nginx:latest

Now you can use docker inspect nginxtest to verify that the readonly mount was created correctly. Look for the Mounts section, where you see "RW": false:

"Mounts": [
{
"Type": "volume",
"Name": "nginx-vol",
"Source": "/var/lib/docker/volumes/nginx-vol/_data",
"Destination": "/usr/share/nginx/html",
"Driver": "local",
"Mode": "",
"RW": false,
"Propagation": ""
}
],
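If you only need the read-write flag, a format filter over the same output works as well (a minimal sketch; index picks the first entry of the Mounts list):

$ docker inspect nginxtest --format '{{ (index .Mounts 0).RW }}'
false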

Overview of Docker Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Using Compose is basically a three-step process:

  1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
  2. Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
  3. Run docker-compose up and Compose starts and runs your entire app.

A docker-compose.yml looks like this:

version: '2.0'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
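With this file saved as docker-compose.yml in the project directory, the whole stack can be managed with a few standard commands (the service names come from the file above):

$ docker-compose up -d      # build images if needed, then start web and redis in the background
$ docker-compose ps         # list the running services
$ docker-compose down       # stop and remove the containers and the default network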

Installing Docker Compose

Docker Compose depends on the Docker engine to work properly. After you have installed the Docker engine, please refer to this link to install Docker Compose.

QuickStart with Docker Compose

For getting a sense of how Docker Compose works, please visit this Docker Documentation.

Some important steps are shown here:

  • Step 5: Edit the Compose file to add a bind mount

    Edit docker-compose.yml in your project directory to add a bind mount for the web service:

    version: '3'
    services:
      web:
        build: .
        ports:
          - "5000:5000"
        volumes:
          - .:/code
        environment:
          FLASK_ENV: development
      redis:
        image: "redis:alpine"

    The new volumes key mounts the project directory (the current directory) on the host to /code inside the container. Re-run docker-compose up so the updated Compose file takes effect. Afterwards, this bind mount lets you modify the code on the fly, without having to rebuild the image. The environment key sets the FLASK_ENV environment variable, which tells flask run to run in development mode and reload the code on change. This mode should only be used in development.

  • Step 6: Re-build and run the app with Compose

    From your project directory, type docker-compose up to build the app with the updated Compose file, and run it.

    $ docker-compose up
    Creating network "composetest_default" with the default driver
    Creating composetest_web_1 ...
    Creating composetest_redis_1 ...
    Creating composetest_web_1
    Creating composetest_redis_1 ... done
    Attaching to composetest_web_1, composetest_redis_1
    web_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
    ...