IMPORTANT:
Some of the content here is a personal summary/abbreviation of the Official Docker Guide. Feel free to refer to the official site if any of the sections written here are unclear.
Docker Intro
Docker provides the ability to package and run an application in a loosely isolated environment called a container, which separates your applications from your infrastructure. The isolation and security allow you to run many containers simultaneously on a given host.
The use of containers to deploy applications is called containerization. Containers are not new, but their use for easily deploying applications is.
Docker provides tooling and a platform to manage the lifecycle of your containers:
- Develop your application and its supporting components using containers.
- The container becomes the unit for distributing and testing your application.
- When you're ready, deploy your application into your production environment, as a container or an orchestrated service. This works the same whether your production environment is a local data center, a cloud provider, or a hybrid of the two.
Docker Basics
Docker is mainly composed of the following elements (the first three are referred to as the Docker Engine by the official site):
- A server (the Docker daemon), which is a type of long-running program called a daemon process (the `dockerd` command).
- A command line interface (CLI) client that is used by most users (the `docker` command).
- A REST API which specifies interfaces that CLI programs can use to talk to the daemon and instruct it what to do.
- A Docker registry, which stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default.
So we see that Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.
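As a small illustration of this client/server split, the client decides which daemon to contact via the `DOCKER_HOST` environment variable; the address below is just an example, not a real daemon:

```shell
# Point the docker CLI at a (hypothetical) remote daemon.
# With DOCKER_HOST unset, the client talks to the local daemon's socket.
export DOCKER_HOST="tcp://192.168.59.3:2376"
echo "client will contact: ${DOCKER_HOST}"
```

Unset the variable again (or open a new shell) to go back to the local daemon.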
Docker Architecture
The Docker daemon
- The Docker daemon (`dockerd`) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
The Docker client
- The Docker client (`docker`) is the primary way that many Docker users interact with Docker. When you use commands such as `docker run`, the client sends these commands to `dockerd`, which carries them out. The `docker` command uses the Docker API. The Docker client can communicate with more than one daemon.
Docker registries
- A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry. If you use Docker Datacenter (DDC), it includes Docker Trusted Registry (DTR).
Docker objects
- When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects. The section below is a brief overview of some of those objects.
Docker Objects
IMAGE
- An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization. For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.
CONTAINER
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
Fundamentally, a container is nothing but a running process, with some added encapsulation features applied to it in order to keep it isolated from the host and from other containers. One of the most important aspects of container isolation is that each container interacts with its own private filesystem.
By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container's network, storage, or other underlying subsystems are from other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
SERVICE
Services allow you to scale containers across multiple Docker daemons, which all work together as a swarm with multiple managers and workers. Each member of a swarm is a Docker daemon, and all the daemons communicate using the Docker API. A service allows you to define a desired state, such as the number of replicas of the service that must be available at any given time.
By default, the service is load-balanced across all worker nodes. To the consumer, the Docker service appears to be a single application. Docker Engine supports swarm mode in Docker 1.12 and higher.
Installing Docker
Please follow the Official Guide.
Basic Workflow
In general, the development workflow for containerized applications looks like this:
- Create a Docker image containing the components of your application.
- Test those components.
- Assemble your containers, using the image file and supporting infrastructure, into a complete application.
- Test, share, and deploy your complete containerized application.
Quickstart for Building an Image and a Container
First, we could use an existing project to demonstrate some concepts mentioned above.
Run:
```shell
git clone https://github.com/dockersamples/node-bulletin-board
cd node-bulletin-board/bulletin-board-app
```
Then you will see that there is a file called `Dockerfile`. Dockerfiles describe how to build an image and assemble a container with a private filesystem, and can also contain some metadata describing how to run a container based on this image.
The bulletin board app `Dockerfile` looks like this:
```dockerfile
# Use the official image as a parent image.
FROM node:current-slim

# Set the working directory.
WORKDIR /usr/src/app

# Copy the file from your host to your current location.
COPY package.json .

# Run the command inside your image filesystem.
RUN npm install

# Inform Docker that the container is listening on the specified port at runtime.
EXPOSE 8080

# Run the specified command within the container.
CMD [ "npm", "start" ]

# Copy the rest of your app's source code from your host to your image filesystem.
COPY . .
```
You can think of these `Dockerfile` commands as a step-by-step recipe on how to build up your image.
Make sure you're in the directory `node-bulletin-board/bulletin-board-app` in a terminal or PowerShell, using the `cd` command. Let's build your bulletin board image:
```shell
docker build --tag bulletinboard:1.0 .
```
You'll see Docker step through each instruction in your `Dockerfile`, building up your image as it goes:
- Start FROM the pre-existing `node:current-slim` image. This is an official image, built by the Node.js vendors and validated by Docker to be a high-quality image containing the Node.js Long Term Support (LTS) interpreter and basic dependencies.
- Use WORKDIR to specify that all subsequent actions should be taken from the directory `/usr/src/app` in your image filesystem (never the host's filesystem).
- COPY the file `package.json` from your host to the present location (`.`) in your image (so in this case, to `/usr/src/app/package.json`).
- RUN the command `npm install` inside your image filesystem (which will read `package.json` to determine your app's node dependencies, and install them).
- COPY in the rest of your app's source code from your host to your image filesystem.
If successful, the build process should end with a message `Successfully tagged bulletinboard:1.0`.
The steps above built up the filesystem of our image, but there are other lines in your `Dockerfile`:
- The CMD directive is the first example of specifying some metadata in your image that describes how to run a container based on this image. In this case, it's saying that the containerized process this image is meant to support is `npm start`.
- The EXPOSE 8080 line informs Docker that the container is listening on port 8080 at runtime.
Now you have an image, and you can start a container based on the image:
```shell
docker run --publish 8000:8080 --detach --name bb bulletinboard:1.0
```
This has a couple of common flags:
- `--publish` asks Docker to forward traffic incoming on the host's port 8000 to the container's port 8080. Containers have their own private set of ports, so if you want to reach one from the network, you have to forward traffic to it in this way. Otherwise, firewall rules will prevent all network traffic from reaching your container, as a default security posture.
- `--detach` asks Docker to run this container in the background.
- `--name` specifies a name with which you can refer to your container in subsequent commands, in this case `bb`.
Now, you can visit the application in a browser at `localhost:8000`. At this step, it would be the time to run unit tests, for example. You would normally do everything you could to ensure your container works the way you expect.
Once you're satisfied that your bulletin board container works correctly, you can delete it:
```shell
docker rm --force bb
```
The `--force` option stops a running container so it can be removed. If you stop the container with `docker stop bb` first, then you do not need to use `--force` to remove it.
At this point, you've successfully built an image, performed a simple containerization of an application using that image, and confirmed that your app runs successfully in its container.
Quickstart for Sharing Images on Docker Hub
At this point, you've built a containerized application. The final step is to share your images on a registry like Docker Hub, so they can be easily downloaded and run on any destination machine. (First, you need a Docker Hub account; if you need help, follow this guide.)
Once you have an account, you need to create a repository. At this point, you can only specify a repository name, for example, `bulletin`, and click create.
Now you are ready to share your image on Docker Hub, but there's one thing you must do first: images must be namespaced correctly to share on Docker Hub. Specifically, you must name images like `<Docker ID>/<Repository Name>:<tag>`.
For example, if your username is `abc` and the name of your created repository is `bulletin`, then you need to tag your image as `abc/bulletin:<whateverVersionHere>`. This is because, when you push in the next step, Docker will look specifically at the location `abc/bulletin` on Docker Hub. If you spell something wrong, it will not go to the correct location.
Finally, push your image to Docker Hub. For example:
```shell
docker push abc/bulletin:1.0
```
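The namespacing rule above is easy to script. A minimal sketch, reusing the example names from the text (`abc` and `bulletin` are just placeholders):

```shell
# Assemble the fully qualified reference <Docker ID>/<Repository Name>:<tag>.
docker_id="abc"
repository="bulletin"
tag="1.0"
image_ref="${docker_id}/${repository}:${tag}"
echo "${image_ref}"   # abc/bulletin:1.0
```

You would then pass `${image_ref}` to `docker tag` and `docker push`.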
Now that your image is available on Docker Hub, you’ll be able to run it anywhere. By moving images around in this way, you no longer need to install any dependencies except Docker on the machines you want to run your software on. The dependencies of containerized applications are completely encapsulated and isolated within your images, which you can share using Docker Hub as described above.
However, there is one thing to keep in mind: at the moment, you've only pushed your image to Docker Hub; what about your `Dockerfile`? A crucial best practice is to keep these in version control (e.g. using git), perhaps alongside the source code for your application. You can add a link or note in your Docker Hub repository description indicating where these files can be found, preserving the record not only of how your image was built, but of how it's meant to be run as a full application.
Starting and Stopping Containers
To create a new container from an image and start it, use `docker run`:
```shell
docker run [options] <image> [command] [arguments]
```
Usually, you would want to specify a name for your container, which means you add the `--name` option. For example:
```shell
docker run --name=Ubuntu-Test ubuntu:14.04
```
where `ubuntu:14.04` is the `ubuntu` image with version/tag `14.04`. However, if you do not define a name for your newly created container, the daemon will generate a random string name, which you can later check with the `docker ps` command (see the section Viewing and Removing Containers).
However, this would only start a container. If you would like to interact with it, you need to add the `-i` and `-t` options to enter interactive mode. For example, to enter the Ubuntu bash in the container created in the above example, you could run:
```shell
docker run -it --name=Ubuntu_Test ubuntu:14.04
```
Instead of using the `-i` or `-t` options, you can use the `attach` command to connect to a running container as well:
```shell
docker attach <container_id>
```
(To exit the container, you can type `exit`.)
To stop a container, you could use the `docker stop` command:
```shell
docker stop [options] <container_id>
```
By default, you get a 10 second grace period if you run `docker stop`. This means that the `stop` command asks the container's main process to terminate, and kills it if it has not exited after that 10 second period. You can use the `--time` option to define a different grace period expressed in seconds, for example:
```shell
docker stop --time=20 <container_id>
```
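What the grace period means for the process inside the container can be illustrated without Docker at all: `docker stop` first sends SIGTERM, giving the process a chance to clean up, and only sends SIGKILL once the grace period expires. A toy sketch of the polite half of that sequence, using a plain shell process:

```shell
# Start a process that cleans up when asked to terminate.
sh -c 'trap "echo cleaning up; exit 0" TERM; while true; do sleep 1; done' &
pid=$!
sleep 1               # give the trap time to be installed
kill -TERM "$pid"     # the polite signal "docker stop" sends first
wait "$pid"
echo "exited with status $?"
```

A process that ignores SIGTERM would instead be forcibly killed when the grace period runs out, just like with `docker kill`.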
If you want to immediately kill a container without waiting for the grace period to end, use:
```shell
docker kill [options] <container_id>
```
Additionally, a useful command is to stop (or kill) all running containers:
```shell
docker stop $(docker ps -a -q)
```
If you want to kill those containers, just replace the `stop` with `kill`.
Viewing and Removing Unwanted Images
To view your images, you could use the command:
```shell
docker images
```
which provides you with a list of the images that you have created. For example, they might look like this:
```
REPOSITORY      TAG            IMAGE ID       CREATED       SIZE
bulletinboard   1.0            2cd431e8ee37   3 hours ago   182MB
mysql           8.0            30f937e841c8   4 days ago    541MB
node            current-slim   8ec3841e41bb   4 days ago    165MB
ubuntu          latest         1d622ef86b13   4 weeks ago   73.9MB
```
To remove an image, you need to specify both the name and the version (tag) in the format `<name>:<version>`. For example, to remove the `bulletinboard` image, you could do:
```shell
docker image rm -f bulletinboard:1.0
```
However, if it is at the `latest` version/tag, then you can just specify the name:
```shell
docker image rm -f ubuntu
```
Note:
- The `-f` or `--force` option is only necessary when a container based on the image exists. If no container references the image, it can be removed with just `docker image rm`.
Viewing and Removing Containers
To list all running containers, you can use the command:
```shell
docker ps
```
which, for example, could give you:
```
CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS                             PORTS                 NAMES
a22b3d54d5f4   mysql/mysql-server:latest   "/entrypoint.sh mysq…"   18 seconds ago   Up 17 seconds (health: starting)   3306/tcp, 33060/tcp   test-sql-server
```
However, if you would also like to see stopped containers, you could use the `-a` or `--all` flag:
```shell
docker ps -a
```
Some of the other useful options include:

Option | Default | Description |
---|---|---|
`--filter`, `-f` | - | Filter output based on conditions provided |
`--format` | - | Pretty-print containers using a Go template |
`--last`, `-n` | -1 | Show n last created containers (includes all states) |
`--latest`, `-l` | - | Show the latest created container (includes all states) |
`--no-trunc` | - | Don't truncate output |
`--quiet`, `-q` | - | Only display numeric IDs |

The `--filter` option can be quite useful to select the containers you want to see. For example, if you only want to see containers whose names contain the substring `test`, you could run:
```shell
docker ps --filter "name=test"
```
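To get an intuition for what a name filter will match, the same substring matching can be approximated client-side by grepping sample `docker ps` output. The rows below are made up for illustration; the real `--filter` is applied by the daemon, not by your shell:

```shell
# Illustrative docker ps output; "name=test" matches rows whose NAMES
# column contains the substring "test".
ps_output='CONTAINER ID   IMAGE   NAMES
a22b3d54d5f4   mysql   test-sql-server
b9c3e1a20d11   nginx   web-frontend'
echo "$ps_output" | grep "test"
```

Only the `test-sql-server` row survives, which is what `docker ps --filter "name=test"` would show.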
For a full list of examples using `--format`, please refer to the Docker Doc here.
On the other hand, to remove unwanted containers, you could use the command:
```shell
docker container rm <container-id>
```
where the `<container-id>` can be seen if you run the `docker ps` command shown above. If the container is running, you can force remove it with the `--force` or `-f` option.
For other commands/options related to `docker container rm`, please refer to the Docker Doc here.
Copying Files to and from a Container
To copy local files to a container, you can use the command:
```shell
docker cp <localPath>/<fromFile> <container-id>:<pathInContainer>/<toFile>
```
For example:
```shell
docker cp foo.txt test-container:/root/foo.txt
```
Then you can see the file `foo.txt` in your container at the path `/root/foo.txt`.
Similarly, to copy a file from a container to your local machine, you can use:
```shell
docker cp <container-id>:<pathInContainer>/<fromFile> <localPath>/<toFile>
```
Configuring your Docker Daemon
To configure the Docker daemon using a JSON file, create a file at `/etc/docker/daemon.json` on Linux systems, or `C:\ProgramData\docker\config\daemon.json` on Windows. On macOS, go to the whale in the taskbar > Preferences > Daemon > Advanced.
For example, if you want your Docker daemon to run in debug mode, use TLS, and listen for traffic routed to 192.168.59.3 on port 2376, you would have a configuration like the following in your `daemon.json` (you can learn what configuration options are available in the dockerd reference docs; the certificate paths below are examples):
```json
{
  "debug": true,
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://192.168.59.3:2376"]
}
```
However, you could also manually start a daemon with the same configuration on the command line:
```shell
dockerd --debug \
  --tls=true \
  --tlscert=/var/docker/server.pem \
  --tlskey=/var/docker/serverkey.pem \
  --host tcp://192.168.59.3:2376
```
Note:
- If you use a `daemon.json` file and also pass options to the `dockerd` command manually or using start-up scripts, and these options conflict, Docker fails to start with an error such as:
```
unable to configure the Docker daemon with file /etc/docker/daemon.json:
the following directives are specified both as a flag and in the configuration file: hosts
```
If you see an error similar to this one and you are starting the daemon manually with flags, you may need to adjust your flags or the `daemon.json` to remove the conflict.
This means if you want to enter debug mode, you can set the `debug` key to `true` in the `daemon.json` file. This method works for every Docker platform and is recommended.
If the file is empty, add the following:
```json
{
  "debug": true
}
```
If the file already contains JSON, just add the key `"debug": true`, being careful to add a comma to the end of the line if it is not the last line before the closing bracket. Also verify that if the `log-level` key is set, it is set to either `info` or `debug`. `info` is the default, and possible values are `debug`, `info`, `warn`, `error`, and `fatal`.
Then send a HUP signal to the daemon to cause it to reload its configuration. On Linux hosts, use the following command:
```shell
sudo kill -SIGHUP $(pidof dockerd)
```
On Windows hosts, restart Docker.
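Putting the two settings just mentioned together, a `daemon.json` that enables debug mode and makes the log level explicit might look like this (a sketch; both keys are optional and `info` is the default log level):

```json
{
  "debug": true,
  "log-level": "debug"
}
```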
Note:
- Instead of following this procedure, you can also stop the Docker daemon and restart it manually with the debug flag `-D`. However, this may result in Docker restarting with a different environment than the one the host's startup scripts create, and this may make debugging more difficult.
Starting your Docker Daemon Manually
Once Docker is installed, you need to start the Docker daemon. Most Linux distributions use `systemctl` to start services. If you do not have `systemctl`, use the `service` command.
Using `systemctl`:
```shell
sudo systemctl start docker
```
Using `service`:
```shell
sudo service docker start
```
Starting your Containers Automatically
Docker provides restart policies to control whether your containers start automatically when they exit, or when Docker restarts. Restart policies ensure that linked containers are started in the correct order. Docker recommends that you use restart policies, and avoid using process managers to start containers.
To configure the restart policy for a container, use the `--restart` flag when using the `docker run` command. The value of the `--restart` flag can be any of the following:
Flag | Description |
---|---|
`no` | Do not automatically restart the container. (the default) |
`on-failure` | Restart the container if it exits due to an error, which manifests as a non-zero exit code. |
`always` | Always restart the container if it stops. If it is manually stopped, it is restarted only when the Docker daemon restarts or the container itself is manually restarted. (See the second bullet listed in restart policy details.) |
`unless-stopped` | Similar to `always`, except that when the container is stopped (manually or otherwise), it is not restarted even after the Docker daemon restarts. |
The following example starts a Redis container and configures it to always restart unless it is explicitly stopped or Docker is restarted:
```shell
docker run -d --restart unless-stopped redis
```
Note:
- A restart policy only takes effect after a container starts successfully (when you run the command specifying the restart policy). In this case, starting successfully means that the container is up for at least 10 seconds and Docker has started monitoring it. This prevents a container which does not start at all from going into a restart loop.
- If you manually stop a container, its restart policy is ignored until the Docker daemon restarts or the container is manually restarted. This is another attempt to prevent a restart loop.
- Restart policies only apply to containers. Restart policies for swarm services are configured differently. See the flags related to service restart.
Keep Containers Alive during Daemon Downtime
By default, when the Docker daemon terminates, it shuts down running containers. Starting with Docker Engine 1.12, you can configure the daemon so that containers remain running if the daemon becomes unavailable. This functionality is called live restore.
There are two ways to enable the live restore setting to keep containers alive when the daemon becomes unavailable. Only do one of the following.
The first way is to add the configuration to the daemon configuration file. On Linux, this defaults to `/etc/docker/daemon.json`. On Docker Desktop for Mac or Docker Desktop for Windows, select the Docker icon from the task bar, then click Preferences -> Daemon -> Advanced. Use the following JSON to enable live restore:
```json
{
  "live-restore": true
}
```
Then, restart the Docker daemon. On Linux, you can avoid a restart (and avoid any downtime for your containers) by reloading the Docker daemon. If you use `systemd`, use the command `systemctl reload docker`. Otherwise, send a SIGHUP signal to the `dockerd` process.
The other way is to pass the `--live-restore` flag when you start the `dockerd` process manually. This approach is not recommended because it does not set up the environment that systemd or another process manager would use when starting the Docker process. This can cause unexpected behavior.
Note:
- The live restore option only works to restore containers if the daemon options, such as bridge IP addresses and graph driver, did not change. If any of these daemon-level configuration options have changed, the live restore may not work and you may need to manually start the containers.
- If the daemon is down for a long time, running containers may fill up the FIFO log the daemon normally reads. A full log blocks containers from logging more data. The default buffer size is 64K. If the buffers fill, you must restart the Docker daemon to flush them. On Linux, you can modify the kernel's buffer size by changing `/proc/sys/fs/pipe-max-size`. You cannot modify the buffer size on Docker Desktop for Mac or Docker Desktop for Windows.
- The live restore option only pertains to standalone containers, not to swarm services. Swarm services are managed by swarm managers. If swarm managers are not available, swarm services continue to run on worker nodes but cannot be managed until enough swarm managers are available to maintain a quorum.
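On a Linux host you can check the current pipe buffer ceiling directly; the value varies by system, with 1048576 bytes being a common default:

```shell
# Read the kernel's maximum pipe buffer size, in bytes.
cat /proc/sys/fs/pipe-max-size
```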
Running Multiple Services in a Container
This involves writing scripts and editing the Dockerfile. For details please refer to this link.
Viewing Runtime metrics for a Container
There are a couple of ways you can view the current runtime metrics for your container.
You can use the `docker stats` command to live-stream a container's runtime metrics. The command supports CPU, memory usage, memory limit, and network I/O metrics.
The following is a sample output from the `docker stats` command:
```
$ docker stats redis1 redis2
CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O
redis1              0.07%               796 KB / 64 MB        1.21%               788 B / 648 B       3.568 MB / 512 KB
redis2              0.07%               2.746 MB / 64 MB      4.29%               1.266 KB / 648 B    12.4 MB / 0 B
```
Using Linux Control Groups
Q: What is a Linux Control Group?
Truth be told, certain software applications in the wild may need to be controlled or limited, at least for the sake of stability and, to some degree, security. Far too often, a bug or just bad code can disrupt an entire machine and potentially cripple an entire ecosystem. Fortunately, a way exists to keep those same applications in check. Control groups (cgroups) are a kernel feature that limits, accounts for, and isolates the CPU, memory, disk I/O, and network usage of one or more processes.
The primary design goal for cgroups was to provide a unified interface to manage processes or whole operating-system-level virtualization, including Linux Containers, or LXC (a topic I plan to revisit in more detail in a follow-up article). The cgroups framework provides the following:
Resource limiting: a group can be configured not to exceed a specified memory limit or use more than the desired amount of processors or be limited to specific peripheral devices.
Prioritization: one or more groups may be configured to utilize fewer or more CPUs or disk I/O throughput.
Accounting: a group’s resource usage is monitored and measured.
Control: groups of processes can be frozen or stopped and restarted.
For more details, please visit this site.
Now, for each container, one cgroup is created in each hierarchy. On older systems with older versions of the LXC userland tools, the name of the cgroup is the name of the container. With more recent versions of the LXC tools, the cgroup is `lxc/<container_name>`.
For Docker containers using cgroups, the container name is the full ID or long ID of the container. If a container shows up as `ae836c95b4c3` in `docker ps`, its long ID might be something like `ae836c95b4c3c9e9179e0e91015512da89fdec91612f63cebae57df9a5444c79`. You can look it up with `docker inspect` or `docker ps --no-trunc`.
Putting everything together, to look at the metrics for a Docker container, take a look at `/sys/fs/cgroup/memory/docker/<longid>/`, where for each subsystem (memory, CPU, and block I/O), one or more pseudo-files exist and contain statistics:
- Memory metrics: `memory.stat`
- CPU metrics: `cpuacct.stat`
- Network metrics: these are not directly exposed by the control group, but the information can be gathered from other sources. For details, please follow the link below.
For a detailed explanation of what you see in all those files, please refer to the Official Docker Documentation.
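The pseudo-files are plain text, one `key value` pair per line, so standard tools are enough to pull out a single metric. A sketch using invented sample contents of `memory.stat`:

```shell
# Illustrative excerpt of /sys/fs/cgroup/memory/docker/<longid>/memory.stat
# (the numbers are made up); extract the rss value with awk.
stat='cache 11492564992
rss 1930993664
mapped_file 306728960'
echo "$stat" | awk '$1 == "rss" { print $2 }'
```

On a real host you would replace the here-string with the actual pseudo-file path.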
View Logs for a Particular Container or Service
The `docker logs` command shows information logged by a running container. The `docker service logs` command shows information logged by all containers participating in a service. The information that is logged and the format of the log depend almost entirely on the container's endpoint command.
By default, `docker logs` or `docker service logs` shows the command's output just as it would appear if you ran the command interactively in a terminal. UNIX and Linux commands typically open three I/O streams when they run, called STDIN, STDOUT, and STDERR.
- STDIN is the command's input stream (i.e. user input), which may include input from the keyboard or input from another command.
- STDOUT is usually a command's normal output.
- STDERR is typically used to output error messages.
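The three streams are easy to see in action in any shell; here stdout and stderr are redirected to separate files:

```shell
# Write one line to stdout and one to stderr, captured separately.
sh -c 'echo normal output; echo error output 1>&2' >out.txt 2>err.txt
cat out.txt   # normal output
cat err.txt   # error output
```

`docker logs` captures both of these streams from the container's main process.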
By default, `docker logs` shows the command's STDOUT and STDERR.
However, in some cases, `docker logs` may not show useful information unless you take additional steps.
- If you use a logging driver which sends logs to a file, an external host, a database, or another logging back-end, `docker logs` may not show useful information. In this case, your logs are processed in other ways and you may choose not to use `docker logs` (continue to the section Configuring Logging Drivers).
- If your image runs a non-interactive process such as a web server or a database, that application may send its output to log files instead of STDOUT and STDERR. In this case, the official `nginx` image shows one workaround, and the official Apache `httpd` image shows another:
  - The official nginx image creates a symbolic link from `/var/log/nginx/access.log` to `/dev/stdout`, and creates another symbolic link from `/var/log/nginx/error.log` to `/dev/stderr`, overwriting the log files and causing logs to be sent to the relevant special device instead. See this Dockerfile.
  - The official httpd image changes the httpd application's configuration to write its normal output directly to `/proc/self/fd/1` (which is STDOUT) and its errors to `/proc/self/fd/2` (which is STDERR). See this Dockerfile.
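The nginx trick generalizes to your own images: if your application insists on writing to log files, you can symlink those files to the container's standard streams in your Dockerfile. A sketch, where the `/var/log/myapp` paths are hypothetical:

```dockerfile
# Redirect the app's log files to stdout/stderr so `docker logs` sees them.
RUN mkdir -p /var/log/myapp \
 && ln -sf /dev/stdout /var/log/myapp/access.log \
 && ln -sf /dev/stderr /var/log/myapp/error.log
```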
Configuring Logging Drivers
Docker includes multiple logging mechanisms to help you get information from running containers and services. These mechanisms are called logging drivers (I interpret them as driving your logs to a specific place with a specific format).
By default, Docker uses the `json-file` logging driver to log all output. If you use this default driver, a container's log file can be found at:
```
/var/lib/docker/containers/<container id>/<container id>-json.log
```
This file contains the same log output as you see when you run `docker logs <container-id>`. If you do not specify a logging driver, the default is `json-file`. The output of commands such as `docker inspect <CONTAINER>` is also JSON.
To configure the Docker daemon to use a specific logging driver by default, set the value of `log-driver` to the name of the logging driver in the `daemon.json` file, which is located in `/etc/docker/` on Linux hosts or `C:\ProgramData\docker\config\` on Windows server hosts.
The following example explicitly sets the default logging driver to `syslog`:
```json
{
  "log-driver": "syslog"
}
```
If the logging driver has configurable options, you can set them in the `daemon.json` file as a JSON object with the key `log-opts`. The following example sets two configurable options on the `json-file` logging driver:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```
You can also specify the logging driver and its options directly for a particular container (this overrides the `daemon.json` defaults). A sketch, with placeholder image and container names:
```shell
docker run -d --name=<container-name> \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  <image>
```
where:
- `max-size`: The maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (k=kb, m=mb, or g=gb). Defaults to -1 (unlimited). Example: `--log-opt max-size=10m`
- `max-file`: The maximum number of log files that can be present. If rolling the logs creates excess files, the oldest file is removed. Only effective when `max-size` is also set. A positive integer. Defaults to 1. Example: `--log-opt max-file=3`
- `labels`: Applies when starting the Docker daemon. A comma-separated list of logging-related labels this daemon accepts. Used for advanced log tag options. Example: `--log-opt labels=production_status,geo`
- `env`: Applies when starting the Docker daemon. A comma-separated list of logging-related environment variables this daemon accepts. Used for advanced log tag options. Example: `--log-opt env=os,customer`
For more details on the JSON File logging driver, please visit this link.
Note:
- `log-opts` configuration options in the `daemon.json` configuration file must be provided as strings. Boolean and numeric values (such as the value for `max-file` in the example above) must therefore be enclosed in quotes (`"`).
Now, To find the current default logging driver for the Docker daemon, run docker info
and search for Logging Driver. You can use the following command on Linux, macOS, or PowerShell on Windows:
1 | $ docker info --format '{{.LoggingDriver}}' |
To find the current logging driver for a running container, if the daemon is using the json-file logging driver, run the following `docker inspect` command, substituting the container name or ID for `<CONTAINER>`:

```bash
$ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <CONTAINER>
```
Use environment variables or labels with logging drivers
Some logging drivers add the value of a container’s `--env|-e` or `--label` flags to the container’s logs. This example starts a container using the Docker daemon’s default logging driver (let’s assume json-file) but sets the environment variable `os=ubuntu`:

```bash
$ docker run -dit --label production_status=testing -e os=ubuntu alpine sh
```
If the logging driver supports it, this adds additional fields to the logging output. The following output is generated by the json-file logging driver:

```json
"attrs":{"production_status":"testing","os":"ubuntu"}
```
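As a quick illustration, a consumer can pull those extra fields back out of a json-file log line with ordinary JSON parsing. The surrounding `log`/`stream` fields below are an assumed, shortened shape of a json-file entry:

```python
import json

# Assumed, shortened shape of one json-file log entry containing
# the "attrs" fragment shown above.
line = '{"log":"hello\\n","stream":"stdout","attrs":{"production_status":"testing","os":"ubuntu"}}'

entry = json.loads(line)
print(entry["attrs"]["os"])  # ubuntu
```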
Supported Logging Drivers
The following logging drivers are supported. If you are using logging driver plugins, you may see more options.
| Driver | Description |
|---|---|
| `none` | No logs are available for the container and `docker logs` does not return any output. |
| `local` | Logs are stored in a custom format designed for minimal overhead. |
| `json-file` | The logs are formatted as JSON. The default logging driver for Docker. |
| `syslog` | Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine. |
| `journald` | Writes log messages to journald. The journald daemon must be running on the host machine. |
| `gelf` | Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash. |
| `fluentd` | Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine. |
| `awslogs` | Writes log messages to Amazon CloudWatch Logs. |
| `splunk` | Writes log messages to splunk using the HTTP Event Collector. |
| `etwlogs` | Writes log messages as Event Tracing for Windows (ETW) events. Only available on Windows platforms. |
| `gcplogs` | Writes log messages to Google Cloud Platform (GCP) Logging. |
| `logentries` | Writes log messages to Rapid7 Logentries. |
However, there are limitations for using logging drivers in general as well:

- Users of Docker Enterprise can make use of “dual logging”, which enables you to use the `docker logs` command for any logging driver. Refer to reading logs when using remote logging drivers for information about using `docker logs` to read container logs locally for many third party logging solutions, including: `syslog`, `gelf`, `fluentd`, `awslogs`, `splunk`, `etwlogs`, `gcplogs`, `logentries`.
- When using Docker Community Engine, the `docker logs` command is only available on the following drivers: `local`, `json-file`, `journald`.
- Reading log information requires decompressing rotated log files, which causes a temporary increase in disk usage (until the log entries from the rotated files are read) and an increased CPU usage while decompressing.
- The capacity of the host storage where the Docker data directory resides determines the maximum size of the log file information.
Using a Logging Driver Plugin
Docker logging plugins allow you to extend and customize Docker’s logging capabilities beyond those of the built-in logging drivers. A logging service provider can implement their own plugins and make them available on Docker Hub, or a private registry.
First, you need to install the logging driver plugin:

- To install a logging driver plugin, use `docker plugin install <org/image>`, using the information provided by the plugin developer.
- You can list all installed plugins using `docker plugin ls`, and you can inspect a specific plugin using `docker inspect`.

If you want to configure the plugin as the default logging driver:

- After the plugin is installed, you can configure the Docker daemon to use it as the default by setting the plugin’s name as the value of the `log-driver` key in the `daemon.json`, as detailed in the previous section. If the logging driver supports additional options, you can set those as the values of the `log-opts` array in the same file.

If you want to configure a container to use the plugin as the logging driver:

- After the plugin is installed, you can configure a container to use the plugin as its logging driver by specifying the `--log-driver` flag to `docker run`, as detailed in the previous section. If the logging driver supports additional options, you can specify them using one or more `--log-opt` flags with the option name as the key and the option value as the value.
Customizing the Logging Driver Tag
The `tag` log option specifies how to format a tag that identifies the container’s log messages. By default, the system uses the first 12 characters of the container ID. To override this behavior, specify a `tag` option:

```bash
$ docker run --log-driver=fluentd --log-opt fluentd-address=myhost.local:24224 --log-opt tag="testTag"
```

Often, you would like to use a special template markup: a `--log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}"` value yields syslog log lines like:

```
Aug 7 18:33:19 HOSTNAME hello-world/foobar/5790672ab6a0[9103]: Hello from Docker.
```
| Markup | Description |
|---|---|
| `{{.ID}}` | The first 12 characters of the container ID. |
| `{{.FullID}}` | The full container ID. |
| `{{.Name}}` | The container name. |
| `{{.ImageID}}` | The first 12 characters of the container’s image ID. |
| `{{.ImageFullID}}` | The container’s full image ID. |
| `{{.ImageName}}` | The name of the image used by the container. |
| `{{.DaemonName}}` | The name of the docker program (docker). |
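To see how such a tag template expands, here is a small Python sketch that mimics the substitution (this is an illustration, not Docker's actual Go-template engine); the container values are the ones from the syslog example above:

```python
# Sketch: expand a Go-template-style tag such as
# {{.ImageName}}/{{.Name}}/{{.ID}} for a hypothetical container.
def expand_tag(template, info):
    out = template
    for key, value in info.items():
        out = out.replace("{{." + key + "}}", value)
    return out

container = {
    "ID": "5790672ab6a0",      # first 12 characters of the container ID
    "Name": "foobar",
    "ImageName": "hello-world",
}

print(expand_tag("{{.ImageName}}/{{.Name}}/{{.ID}}", container))
# hello-world/foobar/5790672ab6a0
```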
Docker Network Overview
One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads. Docker containers and services do not even need to be aware that they are deployed on Docker, or whether their peers are also Docker workloads or not. Whether your Docker hosts run Linux, Windows, or a mix of the two, you can use Docker to manage them in a platform-agnostic way.
However, this topic does not go into OS-specific details about how Docker networks work, so you will not find information about how Docker manipulates `iptables` rules on Linux or how it manipulates routing rules on Windows servers, and you will not find detailed information about how Docker forms and encapsulates packets or handles encryption.
Network Drivers
Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:
- `bridge`: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate. This is usually the best choice when you need multiple containers to communicate on the same Docker host. For more details, see the bridge networks section.
- `host`: For standalone containers, this removes network isolation between the container and the Docker host, and uses the host’s networking directly. `host` is only available for swarm services on Docker 17.06 and higher. This is the best choice when the network stack should not be isolated from the Docker host, but you want other aspects of the container to be isolated. For more details, see the host networks section.
- `overlay`: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers. This is the best choice when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services. For more details, see the overlay networks section.
- `macvlan`: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host’s network stack. This is the best choice when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address. For more details, see the macvlan networks section.
- `none`: For this container, disable all networking. Usually used in conjunction with a custom network driver. `none` is not available for swarm services. For more details, see the none networks section.
- Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors. See the vendor’s documentation for installing and using a given network plugin.
Using `bridge` networks
In terms of Docker, a `bridge` network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other. However, `bridge` networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network.
Now, for bridge networks, we can have either the default `bridge` network or a user-defined `bridge` network. When you start Docker, a default `bridge` network (also called `bridge`) is created automatically, and newly-started containers connect to it unless otherwise specified. You can also create user-defined custom `bridge` networks. User-defined bridge networks are superior to the default bridge network:
- **User-defined bridges provide automatic DNS resolution between containers.** Containers on the default bridge network can only access each other by IP addresses, unless you use the `--link` option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias. Imagine an application with a web front-end and a database back-end. If you call your containers `web` and `db`, the `web` container can connect to the `db` container at `db`, no matter which Docker host the application stack is running on. If you run the same application stack on the default bridge network, you need to manually create links between the containers (using the legacy `--link` flag). These links need to be created in both directions, so you can see this gets complex with more than two containers which need to communicate. Alternatively, you can manipulate the `/etc/hosts` files within the containers, but this creates problems that are difficult to debug.
- **User-defined bridges provide better isolation.** All containers without a `--network` specified are attached to the default bridge network. This can be a risk, as unrelated stacks/services/containers are then able to communicate. Using a user-defined network provides a scoped network in which only containers attached to that network are able to communicate.
- **Containers can be attached and detached from user-defined networks on the fly.** If you use a user-defined bridge, during a container’s lifetime, you can connect or disconnect it from user-defined networks on the fly. To remove a container from the default bridge network, you need to stop the container and recreate it with different network options.
- **Each user-defined network creates a configurable bridge.** If your containers use the default bridge network, you can configure it, but all the containers use the same settings, such as MTU and `iptables` rules. In addition, configuring the default bridge network happens outside of Docker itself, and requires a restart of Docker. User-defined bridge networks are created and configured using `docker network create`. If different groups of applications have different network requirements, you can configure each user-defined bridge separately, as you create it.
- **Linked containers on the default bridge network share environment variables.** Originally, the only way to share environment variables between two containers was to link them using the `--link` flag. This type of variable sharing is not possible with user-defined networks. However, there are superior ways to share environment variables. A few ideas: multiple containers can mount a file or directory containing the shared information, using a Docker volume; multiple containers can be started together using `docker-compose` and the compose file can define the shared variables; you can use `swarm` services instead of standalone containers, and take advantage of shared secrets and configs.
In general, containers connected to the same user-defined `bridge` network effectively expose all ports to each other. For a port to be accessible to containers or non-Docker hosts on different networks, that port must be published using the `-p` or `--publish` flag.
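The `-p` flag value maps a host port to a container port. Here is a tiny Python sketch (an illustration, not Docker's parser) of how a publish spec like `8080:80` splits into those two ports:

```python
# Sketch: split a publish spec such as "8080:80" (or "127.0.0.1:8080:80")
# into (host_port, container_port). Docker's real parser handles more forms.
def parse_publish(spec):
    parts = spec.split(":")
    return int(parts[-2]), int(parts[-1])

print(parse_publish("8080:80"))           # (8080, 80)
print(parse_publish("127.0.0.1:8080:80")) # (8080, 80), bound to one host address
```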
Creating and Removing a user-defined bridge
You can use the `docker network create <networkName>` command to create a user-defined `bridge` network. For example:

```bash
$ docker network create --driver bridge my-net
```

or, since `bridge` is also the default:

```bash
$ docker network create my-net
```

You can also specify the subnet, the IP address range, the gateway, and other options. See the Docker network create reference or the output of `docker network create --help` for details.

If you don’t need a user-defined `bridge`, you can use the `docker network rm <networkName>` command to remove it. If containers are currently connected to the network, disconnect them first. For example, if you have all containers disconnected from the `my-net` bridge, and you want to remove it:

```bash
$ docker network rm my-net
```
Connecting and Disconnecting from a user-defined bridge
You can specify the connection to a user-defined network in 2 ways:

- with the `--network` flag when you create a container
- with `docker network connect <networkName> <containerName>` when you have a running container

When you create a new container, you can specify one or more `--network` flags. This example connects an Nginx container to the `my-net` network. It also publishes port `80` in the container to port `8080` on the Docker host, so external clients can access that port. Any other container connected to the `my-net` network has access to all ports on the `my-nginx` container, and vice versa.

```bash
$ docker create --name my-nginx \
    --network my-net \
    --publish 8080:80 \
    nginx:latest
```

To connect a running container to an existing user-defined bridge, use the `docker network connect` command. The following command connects an already-running `my-nginx` container to an already-existing `my-net` network:

```bash
$ docker network connect my-net my-nginx
```

Note:

- If you need IPv6 support for Docker containers, you need to enable the option on the Docker daemon and reload its configuration, before creating any IPv6 networks or assigning containers IPv6 addresses. When you create your network, you can specify the `--ipv6` flag to enable IPv6. You can’t selectively disable IPv6 support on the default bridge network.

To disconnect a running container from a user-defined `bridge`, use the `docker network disconnect` command. The following command disconnects the `my-nginx` container from the `my-net` network:

```bash
$ docker network disconnect my-net my-nginx
```
Enabling Forwarding from Containers to Outside World
By default, traffic from containers connected to the default `bridge` network is not forwarded to the outside world. To enable forwarding, you need to change two settings. These are not Docker commands and they affect the Docker host’s kernel.

- Configure the Linux kernel to allow IP forwarding:

```bash
$ sysctl net.ipv4.conf.all.forwarding=1
```

- Then change the policy for the iptables `FORWARD` chain from `DROP` to `ACCEPT`:

```bash
$ sudo iptables -P FORWARD ACCEPT
```

These settings do not persist across a reboot, so you may need to add them to a start-up script.
Viewing and Configuring your user-defined bridge
First, after you have created your user-defined `bridge`, you can use `docker network ls` to view the networks that Docker has:

```bash
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9210b6312956        test-net            bridge              local
17e324f45964        bridge              bridge              local
6ed54d316334        host                host                local
7092879f2cc8        none                null                local
```

Then you can use `docker inspect <network-id>` to see details about your `bridge`:

```bash
$ docker inspect test-net
[
    {
        "Name": "test-net",
        "Id": "9210b6312956eb510fc1ad59f3ebc1ac14270fc27d44f3b98d0c71bcb5409d83",
        "Created": "2020-06-01T09:37:22.875610678+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
```

Here you see that:

- there are no containers attached to it: `"Containers": {}`
- your Gateway is: `"Gateway": "172.18.0.1"`
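You can sanity-check the IPAM values reported by `docker inspect` with Python's standard `ipaddress` module; for example, the gateway above is the first usable address inside the network's subnet:

```python
import ipaddress

# The IPAM values reported by `docker inspect test-net` above.
subnet = ipaddress.ip_network("172.18.0.0/16")
gateway = ipaddress.ip_address("172.18.0.1")

# The gateway lies inside the subnet, and is its first usable host address.
print(gateway in subnet)                # True
print(gateway == next(subnet.hosts()))  # True
```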
Now, to test your user-defined `bridge`, you can create 4 containers: the first two will be connected to the bridge you created, the third will only be connected to the default bridge, and the fourth connected to both:

```bash
$ docker run -dit --name alpine1 --network test-net alpine ash
$ docker run -dit --name alpine2 --network test-net alpine ash
$ docker run -dit --name alpine3 alpine ash
$ docker run -dit --name alpine4 --network test-net alpine ash
$ docker network connect bridge alpine4
```

Note:

- Since you can only specify one network when you use the `--network` flag for `docker run`, if you need to connect to more than one network, you need to use the `docker network connect` command with a running container.
Now, if all the 4 containers are running properly, you can then inspect the `bridge` and the `test-net` networks to see that the containers are included properly in the `"Containers"` section. You should also be able to see the IP address that each one has been assigned.

Now, you can test their communication ability by connecting each one of them to the terminal (so that you can do the `stdin` and see the `stdout` in your terminal) with the `docker container attach <container-id>` command.

On user-defined networks like `test-net`, containers can not only communicate by IP address, but can also resolve a container name to an IP address. This capability is called automatic service discovery. However, you will see that containers not connected to the same `bridge` cannot communicate with each other (both `alpine1` and `alpine2` cannot communicate with `alpine3`):
```bash
$ docker container attach alpine1

# ping -c 2 alpine2
PING alpine2 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.085 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.090 ms

--- alpine2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.085/0.087/0.090 ms

# ping -c 2 alpine4
PING alpine4 (172.18.0.4): 56 data bytes
64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.076 ms
64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.091 ms

--- alpine4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.076/0.083/0.091 ms

# ping -c 2 alpine1
PING alpine1 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.026 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.054 ms

--- alpine1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.026/0.040/0.054 ms

# From alpine1, you should not be able to connect to alpine3 at all,
# since it is not on the test-net network.
# ping -c 2 alpine3
ping: bad address 'alpine3'
```

To exit the attached terminal, use `Ctrl+p` then `Ctrl+q`.
Finally, remember that `alpine4` is connected to both the default `bridge` network and `test-net`. It should be able to reach all of the other containers. However, you will need to address `alpine3` by its IP address, which is the behavior of the default `bridge`. Attach to it and run the tests:

```bash
$ docker container attach alpine4

# ping -c 2 alpine1
PING alpine1 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.074 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.082 ms

--- alpine1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.074/0.078/0.082 ms

# ping -c 2 alpine2
PING alpine2 (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.075 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.080 ms

--- alpine2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.075/0.077/0.080 ms

# ping -c 2 alpine3
ping: bad address 'alpine3'

# ping -c 2 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.089 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms

--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.075/0.082/0.089 ms

# ping -c 2 alpine4
PING alpine4 (172.18.0.4): 56 data bytes
64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.033 ms
64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.064 ms

--- alpine4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.033/0.048/0.064 ms
```

Now, you have finished your testing, and you can stop and remove all containers and the `test-net` network:

```bash
$ docker container stop alpine1 alpine2 alpine3 alpine4
$ docker container rm alpine1 alpine2 alpine3 alpine4
$ docker network rm test-net
```

or if you still need those containers:
```bash
$ docker container stop alpine1 alpine2 alpine3 alpine4
$ docker network disconnect test-net alpine1
$ docker network disconnect test-net alpine2
$ docker network disconnect test-net alpine4
$ docker network rm test-net
```
Using `host` networks
Using `overlay` networks
The `overlay` network driver creates a distributed network among multiple Docker daemon hosts. This network sits on top of (overlays) the host-specific networks, allowing containers connected to it (including swarm service containers) to communicate securely when encryption is enabled.
When you initialize a `swarm` or join a Docker host to an existing `swarm`, two new networks are created on that Docker host:

- an `overlay` network called `ingress`, which handles control and data traffic related to swarm services. When you create a `swarm` service and do not connect it to a user-defined `overlay` network, it connects to the `ingress` network by default. You can create user-defined `overlay` networks using `docker network create`, in the same way that you can create user-defined `bridge` networks. Remember that services or containers can be connected to more than one network at a time, and that services or containers can only communicate across networks they are each connected to.
- a `bridge` network called `docker_gwbridge`, which connects the individual Docker daemon to the other daemons participating in the swarm.
Creating an Overlay Network With Swarm
First, there are two prerequisites you need to fulfill:

- Firewall rules for Docker daemons using overlay networks. You need the following ports open to traffic to and from each Docker host participating on an overlay network:
  - TCP port 2377 for cluster management communications
  - TCP and UDP port 7946 for communication among nodes
  - UDP port 4789 for overlay network traffic
- Before you can create an `overlay` network, you need to either initialize your Docker daemon as a swarm manager using `docker swarm init` or join it to an existing swarm using `docker swarm join`. Either of these creates the default `ingress` overlay network which is used by swarm services by default. You need to do this even if you never plan to use swarm services. Afterward, you can create additional user-defined `overlay` networks.
To create an `overlay` network for use with `swarm` services, use a command like the following:

```bash
$ docker network create -d overlay my-overlay
```

To create an overlay network which can be used by swarm services or standalone containers to communicate with other standalone containers running on other Docker daemons, add the `--attachable` flag:

```bash
$ docker network create -d overlay --attachable my-attachable-overlay
```
Encryption:

All swarm service management traffic is encrypted by default, using the AES algorithm in GCM mode. Manager nodes in the swarm rotate the key used to encrypt gossip data every 12 hours.

To encrypt application data as well, add `--opt encrypted` when creating the `overlay` network. This enables IPSEC encryption at the level of the vxlan. This encryption imposes a non-negligible performance penalty, so you should test this option before using it in production. So, for example, to encrypt your application data as well:

```bash
$ docker network create --opt encrypted --driver overlay --attachable my-attachable-multi-host-network
```
Now, you can customize your default `ingress` network, your `docker_gwbridge`, and your user-defined `overlay` networks.

Now, you have 4 options on how to use this `overlay` system:

- Use the default `overlay` network, which demonstrates how to use the default overlay network that Docker sets up for you automatically when you initialize or join a swarm. This network is not the best choice for production systems.
- Use user-defined `overlay` networks, which shows how to create and use your own custom overlay networks to connect services. This is recommended for services running in production.
- Use an `overlay` network for standalone containers, which shows how to communicate between standalone containers on different Docker daemons using an overlay network.
- Communicate between a container and a swarm service, which sets up communication between a standalone container and a swarm service, using an attachable overlay network. This is supported in Docker 17.06 and higher.
Use the default `overlay` network
In this example, you start an `alpine` service and examine the characteristics of the network from the point of view of the individual service containers.
Prerequisites:
This requires three physical or virtual Docker hosts which can all communicate with one another, all running new installations of Docker 17.03 or higher. This tutorial assumes that the three hosts are running on the same network with no firewall involved.
These hosts will be referred to as `manager`, `worker-1`, and `worker-2`. The `manager` host will function as both a manager and a worker, which means it can both run service tasks and manage the swarm. `worker-1` and `worker-2` will function as workers only.
- To open those ports specified above, you can use the following commands (on both worker and manager):

```bash
firewall-cmd --add-port=22/tcp --permanent
firewall-cmd --add-port=2376/tcp --permanent
firewall-cmd --add-port=2377/tcp --permanent
firewall-cmd --add-port=7946/tcp --permanent
firewall-cmd --add-port=7946/udp --permanent
firewall-cmd --add-port=4789/udp --permanent
```

Afterwards, reload the firewall:

```bash
firewall-cmd --reload
```

Then restart Docker:

```bash
systemctl restart docker
```

For other means of doing the same thing, please visit [this link](https://www.digitalocean.com/community/tutorials/how-to-configure-the-linux-firewall-for-docker-swarm-on-ubuntu-16-04#method-2-%E2%80%94-opening-docker-swarm-ports-using-firewalld)
On `manager`, initialize the swarm. If the host only has one network interface, the `--advertise-addr` flag is optional.

```bash
$ docker swarm init --advertise-addr=<IP-ADDRESS-OF-MANAGER>
```

Make a note of the text that is printed, as this contains the token that you will use to join `worker-1` and `worker-2` to the swarm. It is a good idea to store the token in a password manager.

On `worker-1`, join the swarm. If the host only has one network interface, the `--advertise-addr` flag is optional.

```bash
$ docker swarm join --token <TOKEN> \
    --advertise-addr <IP-ADDRESS-OF-WORKER-1> \
    <IP-ADDRESS-OF-MANAGER>:2377
```

On `worker-2`, join the swarm. If the host only has one network interface, the `--advertise-addr` flag is optional.

```bash
$ docker swarm join --token <TOKEN> \
    --advertise-addr <IP-ADDRESS-OF-WORKER-2> \
    <IP-ADDRESS-OF-MANAGER>:2377
```

On `manager`, list all the nodes. This command can only be done from a manager.

```bash
$ docker node ls
ID                           HOSTNAME           STATUS   AVAILABILITY   MANAGER STATUS
d68ace5iraw6whp7llvgjpu48 *  ip-172-31-34-146   Ready    Active         Leader
nvp5rwavvb8lhdggo8fcf7plg    ip-172-31-35-151   Ready    Active
ouvx2l7qfcxisoyms8mtkgahw    ip-172-31-36-89    Ready    Active
```

You can also use the `--filter` flag to filter by role:

```bash
$ docker node ls --filter role=manager
ID                           HOSTNAME           STATUS   AVAILABILITY   MANAGER STATUS
d68ace5iraw6whp7llvgjpu48 *  ip-172-31-34-146   Ready    Active         Leader

$ docker node ls --filter role=worker
ID                           HOSTNAME           STATUS   AVAILABILITY   MANAGER STATUS
nvp5rwavvb8lhdggo8fcf7plg    ip-172-31-35-151   Ready    Active
ouvx2l7qfcxisoyms8mtkgahw    ip-172-31-36-89    Ready    Active
```

List the Docker networks on `manager`, `worker-1`, and `worker-2` and notice that each of them now has an overlay network called `ingress` and a `bridge` network called `docker_gwbridge`. Only the listing for manager is shown here:
```bash
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
495c570066be        bridge              bridge              local
961c6cae9945        docker_gwbridge     bridge              local
ff35ceda3643        host                host                local
trtnl4tqnc3n        ingress             overlay             swarm
c8357deec9cb        none                null                local
```

The `docker_gwbridge` connects the `ingress` network to the Docker host’s network interface so that traffic can flow to and from swarm managers and workers. If you create `swarm` services and do not specify a network, they are connected to the `ingress` network. It is recommended that you use separate overlay networks for each application or group of applications which will work together. In the next procedure, you will create two overlay networks and connect a service to each of them.

CREATE THE SERVICES
On `manager`, create a new `overlay` network called `nginx-net`:

```bash
$ docker network create -d overlay nginx-net
```

You don’t need to create the `overlay` network on the other nodes, because it will be automatically created when one of those nodes starts running a service task which requires it.

On `manager`, create a 5-replica Nginx service connected to `nginx-net`. The service will publish port 80 to the outside world. All of the service task containers can communicate with each other without opening any ports.

Note: Services can only be created on a manager.
1
2
3
4
5
6$ docker service create \
--name my-nginx \
--publish target=80,published=80 \
--replicas=5 \
--network nginx-net \
nginxThe default publish mode of
ingress
, which is used when you do not specify a mode for the--publish
flag, means that if you browse to port 80 onmanager
,worker-1
, orworker-2
, you will be connected to port 80 on one of the 5 service tasks, even if no tasks are currently running on the node you browse to. If you want to publish the port using host mode, you can add mode=host to the –publish output. However, you should also use--mode global
instead of--replicas=5
in this case, since only one service task can bind a given port on a given node.Run
Run `docker service ls` to monitor the progress of service bring-up, which may take a few seconds.

Inspect the `nginx-net` network on `manager`, `worker-1`, and `worker-2`. Remember that you did not need to create it manually on `worker-1` and `worker-2` because Docker created it for you. The output will be long, but notice the `Containers` and `Peers` sections. `Containers` lists all service tasks (or standalone containers) connected to the overlay network from that host.

From `manager`, inspect the service using `docker service inspect my-nginx` and notice the information about the ports and endpoints used by the service.

Create a new network `nginx-net-2`, then update the service to use this network instead of `nginx-net`:

```shell
$ docker network create -d overlay nginx-net-2
$ docker service update \
  --network-add nginx-net-2 \
  --network-rm nginx-net \
  my-nginx
```

Run `docker service ls` to verify that the service has been updated and all tasks have been redeployed. Run `docker network inspect nginx-net` to verify that no containers are connected to it. Run the same command for `nginx-net-2` and notice that all the service task containers are connected to it.

Note: Even though overlay networks are automatically created on swarm worker nodes as needed, they are not automatically removed.

Clean up the service and the networks. From `manager`, run the following commands. The manager will direct the workers to remove the networks automatically.

```shell
$ docker service rm my-nginx
$ docker network rm nginx-net nginx-net-2
```
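For reference, the same service can also be expressed declaratively; a minimal sketch as a version 3 stack file (file name and layout are illustrative, deployed with `docker stack deploy -c docker-stack.yml <stack>`):

```yaml
version: '3'
services:
  my-nginx:
    image: nginx
    ports:
      - "80:80"
    deploy:
      replicas: 5
    networks:
      - nginx-net
networks:
  nginx-net:
    driver: overlay
```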
Using a user-defined overlay

First you need to create the user-defined `overlay` network.

```shell
$ docker network create -d overlay my-overlay
```

Start a service using the `overlay` network, publishing port `80` in the container to port `8080` on the Docker host.

```shell
$ docker service create \
  --name my-nginx \
  --network my-overlay \
  --replicas 1 \
  --publish published=8080,target=80 \
  nginx:latest
```

Run `docker network inspect my-overlay` and verify that the `my-nginx` service task is connected to it, by looking at the `Containers` section.

Inspect the `my-overlay` network on `manager`, `worker-1`, and `worker-2`. Remember that you did not need to create it manually on `worker-1` and `worker-2` because Docker created it for you. The output will be long, but notice the `Containers` and `Peers` sections. `Containers` lists all service tasks (or standalone containers) connected to the overlay network from that host.

Remove the service and the network on the `manager` once you have verified that everything works:

```shell
$ docker service rm my-nginx
$ docker network rm my-overlay
```
Using an overlay network for standalone containers (recommended)

This example refers to the two nodes in our swarm as `host1` and `host2`. It uses Linux hosts, but the same commands work on Windows.

This example demonstrates DNS container discovery – specifically, how to communicate between standalone containers on different Docker daemons using an `overlay` network. In short, the steps are:

- On `host1`, initialize the node as a swarm (`manager`).
- On `host2`, join the node to the swarm (`worker`).
- On `host1`, create an attachable overlay network (`test-net`).
- On `host1`, run an interactive alpine container (`alpine1`) on `test-net`.
- On `host2`, run an interactive, detached alpine container (`alpine2`) on `test-net`.
- On `host1`, from within a session of `alpine1`, ping `alpine2`.
Set up the swarm.

a. On `host1`, initialize a swarm (and if prompted, use `--advertise-addr` to specify the IP address for the interface that communicates with other hosts in the swarm, for instance, the private IP address on AWS).

b. On `host2`, join the swarm as instructed above, usually in the form:

```shell
$ docker swarm join --token <your_token> <your_ip_address>:2377
This node joined a swarm as a worker.
```

On `host1`, create an attachable overlay network called `test-net`:

```shell
$ docker network create --driver=overlay --attachable test-net
uqsof8phj3ak0rq9k86zta6ht
```

Notice the returned NETWORK ID – you will see it again when you connect to `test-net` from `host2`.

On `host1`, start an interactive (`-it`) container (`alpine1`) that connects to `test-net`:

```shell
$ docker run -it --name alpine1 --network test-net alpine
/ #
```

On `host2`, list the available networks – notice that `test-net` does not yet exist (because no container on this host requires it yet; see the next step):

```shell
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ec299350b504        bridge              bridge              local
66e77d0d0e9a        docker_gwbridge     bridge              local
9f6ae26ccb82        host                host                local
omvdxqrda80z        ingress             overlay             swarm
b65c952a4b2b        none                null                local
```

On `host2`, start a detached (`-d`) and interactive (`-it`) container (`alpine2`) that connects to `test-net`:

```shell
$ docker run -dit --name alpine2 --network test-net alpine
fb635f5ece59563e7b8b99556f816d24e6949a5f6a5b1fbd92ca244db17a4342
```

where:
- Automatic DNS container discovery only works with unique container names.
- Being `detached` means the container is no longer attached to the command line where `docker run` was run. By design, containers started in detached mode exit when the root process used to run the container exits, unless you also specify the `--rm` option. If you use `-d` with `--rm`, the container is removed when it exits or when the daemon exits, whichever happens first.
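A quick sketch of the `-d`/`--rm` interaction (container name and command are illustrative):

```shell
# Runs detached; the container is removed automatically once `sleep 5` exits
$ docker run -d --rm --name tmp-alpine alpine sleep 5
```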
On `host2`, verify that `test-net` was created (and has the same NETWORK ID as `test-net` on `host1`):

```shell
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
...
uqsof8phj3ak        test-net            overlay             swarm
```

On `host1`, ping `alpine2` within the interactive terminal of `alpine1`:

```shell
/ # ping -c 2 alpine2
PING alpine2 (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: seq=0 ttl=64 time=0.600 ms
64 bytes from 10.0.0.5: seq=1 ttl=64 time=0.555 ms

--- alpine2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.555/0.577/0.600 ms
```

The two containers communicate over the `overlay` network connecting the two hosts. If you run another `alpine` container on `host2` that is not detached, you can ping `alpine1` from `host2` (here we add the `--rm` option for automatic container cleanup):

```shell
$ docker run -it --rm --name alpine3 --network test-net alpine
/ # ping -c 2 alpine1
/ # exit
```

On `host1`, close the `alpine1` session (which also stops the container):

```shell
/ # exit
```
Clean up your containers and networks:

You must stop and remove the containers on each host independently because Docker daemons operate independently and these are standalone containers. You only have to remove the network on `host1`, because when you stop `alpine2` on `host2`, `test-net` disappears (as it is no longer required there).

a. On `host2`, stop `alpine2`, check that `test-net` was removed, then remove `alpine2`:

```shell
$ docker container stop alpine2
$ docker network ls
$ docker container rm alpine2
```

b. On `host1`, remove `alpine1` and `test-net`:

```shell
$ docker container rm alpine1
$ docker network rm test-net
```
Customizing your ingress network

Most users never need to configure the `ingress` network, but Docker 17.05 and higher allow you to do so. This can be useful if the automatically-chosen subnet conflicts with one that already exists on your network, or if you need to customize other low-level network settings such as the MTU.

Customizing the `ingress` network involves removing and re-creating it. This is usually done before you create any services in the swarm. If you have existing services which publish ports, those services need to be removed before you can remove the `ingress` network.

During the time that no `ingress` network exists, existing services which do not publish ports continue to function but are not load-balanced. Services which do publish ports are affected, such as a WordPress service which publishes port 80.

Inspect the `ingress` network using `docker network inspect ingress`, and remove any services whose containers are connected to it. These are the services that publish ports, such as a WordPress service which publishes port 80. If any such services are still running, the next step fails.
Remove the existing `ingress` network:

```shell
$ docker network rm ingress
WARNING! Before removing the routing-mesh network, make sure all the nodes
in your swarm run the same docker engine version. Otherwise, removal may not
be effective and functionality of newly created ingress networks will be
impaired.
Are you sure you want to continue? [y/N]
```

Create a new overlay network using the `--ingress` flag, along with the custom options you want to set. This example sets the `MTU` to `1200`, the `subnet` to `10.11.0.0/16`, and the `gateway` to `10.11.0.2`.

```shell
$ docker network create \
  --driver overlay \
  --ingress \
  --subnet=10.11.0.0/16 \
  --gateway=10.11.0.2 \
  --opt com.docker.network.driver.mtu=1200 \
  my-ingress
```

Note:

- You can give your `ingress` network a name other than `ingress`, but you can only have one `ingress` network. An attempt to create a second one fails.
Restart the services that you stopped in the first step.
Customizing your docker_gwbridge network

The `docker_gwbridge` connects the overlay networks (including the `ingress` network) to an individual Docker daemon's physical network (the `swarm` host). Docker creates it automatically when you initialize a `swarm` or join a Docker host to a `swarm`, but it is not a Docker device. It exists in the kernel of the Docker host. If you need to customize its settings, you must do so before joining the Docker host to the swarm, or after temporarily removing the host from the swarm.

Stop Docker, and delete the existing `docker_gwbridge` interface:

```shell
$ sudo ip link set docker_gwbridge down
$ sudo ip link del dev docker_gwbridge
```

Start Docker. Do not join or initialize the swarm.

Create or re-create the `docker_gwbridge` bridge manually with your custom settings, using the `docker network create` command. This example uses the subnet `10.11.0.0/16`. For a full list of customizable options, see Bridge driver options.

```shell
$ docker network create \
  --subnet 10.11.0.0/16 \
  --opt com.docker.network.bridge.name=docker_gwbridge \
  --opt com.docker.network.bridge.enable_icc=false \
  --opt com.docker.network.bridge.enable_ip_masquerade=true \
  docker_gwbridge
```

Initialize or join the `swarm`. Since the bridge already exists, Docker does not create it with automatic settings.
Customizing your user-defined overlay network

Using macvlan networks

Using none networks
Using MySQL Server in Docker

- You need to first set up your SQL server in Docker via this Official Guide
- Then refer to the MySQL Manual for further reference on using MySQL
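As a minimal sketch of such a setup (container name and password are illustrative; `MYSQL_ROOT_PASSWORD` is the standard variable for the official `mysql` image):

```shell
# Start a MySQL server container in the background
$ docker run -d --name some-mysql \
    -e MYSQL_ROOT_PASSWORD=my-secret-pw \
    mysql:latest
```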
Creating and Mounting a Volume to MySQL data

First, to create a volume, you need to run:

```shell
$ docker volume create <my-vol>
```

This will create a volume at your local directory `/var/lib/docker/volumes/<my-vol>/_data`.

You can list the volumes you have with:

```shell
$ docker volume ls
```

At this point, if you see some other volumes that were temporarily created and you don't want them, you can use:

```shell
$ docker volume rm <my-vol>
```

or use the `prune` keyword, which will delete all volumes that are not used by any containers:

```shell
$ docker volume prune
```
Now, you can mount a volume to your container with either the `-v` or `--mount` option. To mount a volume over the directory where `mysql` stores its data, you need to mount to `/var/lib/mysql` (for a Microsoft SQL Server image, the data directory would be `/var/opt/mssql` instead). The examples below produce the same result (this example uses the image called `mysql:latest` and creates a container `devtest`).

`--mount`:

```shell
$ docker run -d \
  --name devtest \
  --mount source=myvol2,target=/var/lib/mysql \
  mysql:latest
```

`-v`:

```shell
$ docker run -d \
  --name devtest \
  -v myvol2:/var/lib/mysql \
  mysql:latest
```

To mount multiple volumes, you can do:

```shell
$ docker run -d \
  --name devtest \
  -v myvol2:/var/lib/mysql \
  -v myvol3:/var \
  mysql:latest
```

Note:

- If the volume you specified does not exist, `docker` will create one with that name for you.
- It is also allowed to mount the same volume into multiple containers.
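A sketch of the shared-volume case (container and volume names are illustrative):

```shell
# Two containers mounting the same named volume
$ docker run -d --name writer -v shared-vol:/data alpine \
    sh -c 'echo hello > /data/msg && sleep 300'
$ docker run --rm --name reader -v shared-vol:/data alpine cat /data/msg
```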
You can then use `docker inspect devtest` to verify that the volume was created and mounted correctly. Look for the `Mounts` section:

```json
"Mounts": [
    {
        "Type": "volume",
        "Name": "myvol2",
        "Source": "/var/lib/docker/volumes/myvol2/_data",
        "Destination": "/var/lib/mysql",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],
```

This shows that the mount is a volume, it shows the correct source and destination, and that the mount is read-write.

If you want to remove a volume, you need to stop and remove the container, and then remove the volume. Note that volume removal is a separate step.

```shell
$ docker container stop devtest
$ docker container rm devtest
$ docker volume rm myvol2
```
Use a Volume for Read-Only Purpose

For some development applications, the container needs to write into the bind mount so that changes are propagated back to the Docker host. At other times, the container only needs read access to the data. Remember that multiple containers can mount the same volume, and it can be mounted read-write for some of them and read-only for others, at the same time.

Suppose you want to access the data in the volume `nginx-vol` for the following examples.

The following modifies the example above but mounts the directory as a read-only volume, by adding `readonly` or `ro` to the (empty by default) list of options, after the mount point within the container. If multiple options are present, separate them by commas.

The `--mount` and `-v` examples have the same result.
`--mount`:

```shell
$ docker run -d \
  --name=nginxtest \
  --mount source=nginx-vol,destination=/usr/share/nginx/html,readonly \
  nginx:latest
```

`-v`:

```shell
$ docker run -d \
  --name=nginxtest \
  -v nginx-vol:/usr/share/nginx/html:ro \
  nginx:latest
```
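To illustrate the read-write/read-only mix mentioned above, a sketch (container names are illustrative):

```shell
# Same volume: read-write in one container, read-only in another
$ docker run -d --name editor -v nginx-vol:/usr/share/nginx/html nginx:latest
$ docker run -d --name viewer -v nginx-vol:/usr/share/nginx/html:ro nginx:latest
```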
Now you can use `docker inspect nginxtest` to verify that the read-only mount was created correctly. Look for the `Mounts` section, where you see `"RW": false`:

```json
"Mounts": [
    {
        "Type": "volume",
        "Name": "nginx-vol",
        "Source": "/var/lib/docker/volumes/nginx-vol/_data",
        "Destination": "/usr/share/nginx/html",
        "Driver": "local",
        "Mode": "",
        "RW": false,
        "Propagation": ""
    }
],
```
Overview of Docker Compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a `YAML` file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.

Using Compose is basically a three-step process:

- Define your app's environment with a Dockerfile so it can be reproduced anywhere.
- Define the services that make up your app in `docker-compose.yml` so they can be run together in an isolated environment.
- Run `docker-compose up` and Compose starts and runs your entire app.
A `docker-compose.yml` starts with a version declaration:

```yaml
version: '2.0'
```
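A minimal sketch of a full file, assuming a web service plus a redis cache (service names and images are illustrative):

```yaml
version: '2.0'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis:alpine"
```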
Installing Docker Compose
Docker Compose depends on the Docker engine to work properly. After you have installed the Docker engine, please refer to this link to install Docker Compose.
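Once installed, you can sanity-check the installation:

```shell
# Prints the installed Compose version (exact output format varies)
$ docker-compose --version
```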
QuickStart with Docker Compose
For getting a sense of how `docker-compose` works, please visit this Docker Documentation.
Some important steps are shown here:
Step 5: Edit the Compose file to add a bind mount

Edit `docker-compose.yml` in your project directory to add a bind mount for the `web` service:

```yaml
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    environment:
      FLASK_ENV: development
  redis:
    image: "redis:alpine"
```

The new `volumes` key mounts the project directory (current directory) on the host to `/code` inside the container. You need to re-build the image for this to take effect. Afterwards, the mounted volume allows you to modify the code on the fly, without having to rebuild the image. The `environment` key sets the `FLASK_ENV` environment variable, which tells `flask run` to run in development mode and reload the code on change. This mode should only be used in development.

Step 6: Re-build and run the app with Compose
From your project directory, type `docker-compose up` to build the app with the updated Compose file, and run it.

```shell
$ docker-compose up
Creating network "composetest_default" with the default driver
Creating composetest_web_1 ...
Creating composetest_redis_1 ...
Creating composetest_web_1
Creating composetest_redis_1 ... done
Attaching to composetest_web_1, composetest_redis_1
web_1   | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
...
```