
Here is what happens: Docker adds a bridge named ‘docker0’ to the Linux OS, and that bridge is an isolated network defined in software.
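A quick way to see that bridge, assuming a standard Linux install where the ip tool is available, is to list the docker0 interface on the host:

ip addr show docker0

The docker0 interface shows up alongside eth0, lo, and the rest of the host's interfaces; containers attached to the default bridge get their addresses from its subnet.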
Docker network interface name install#
Once you install Docker on Linux, a ‘default’ networking configuration is applied. With the docker service create command shown below, Docker would create a my-eth and a my-other-eth interface in each task (container) backing the service.
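To see that default configuration, listing the networks right after installation should show the three networks Docker creates out of the box (the exact IDs will differ on your machine):

docker network ls

NETWORK ID     NAME      DRIVER    SCOPE
xxxxxxxxxxxx   bridge    bridge    local
xxxxxxxxxxxx   host      host      local
xxxxxxxxxxxx   none      null      local

The bridge entry here is the docker0 network described above.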
Docker network interface name drivers#
All Docker installations represent the docker0 network with bridge, and Docker connects containers to bridge by default. Run ifconfig on the Linux host to view the bridge network.

Docker does not allow you to connect a container to the host network and to any other Docker bridge network at the same time. I will try to illustrate the reason with an example: hypothetically, C1 would be connected to the host network (--net=host) and to a Docker bridge network Br1 (--net=Br1), and a second container, let us say C2, is connected to Br1. With that setup, my guess is that the host network would be visible from C2, and I suppose this is the reason why Docker automatically prevents us from unintentionally exposing the host network to non-host containers.

With interface names, the service from the previous section would be created like this:

docker service create \
  --network name=my-network,iface=my-eth \
  --network name=my-other-network,iface=my-other-eth \
  nginx

On Windows, you can also specify the name of a network to the HNS service; this applies to all network drivers. Ordinarily, when you create a container network using docker network create, the network name that you provide is used by the Docker service but not by the HNS service.

When you run the following command in your console, Docker returns a JSON object describing the bridge network, including information regarding which containers run on the network, the options set, and the subnet.
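That command is presumably docker network inspect pointed at the default bridge network:

docker network inspect bridge

The JSON that comes back includes, among other fields, "Containers" (the containers attached to the network), "Options" (the bridge options, such as the docker0 interface name), and "IPAM" with the subnet; the exact addresses and container list depend entirely on what is running on your host.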
