Well, hello everyone. I'm back to continue my exploration of default Docker networking. If you haven't already read (or it has been a while since you read) Exploring Default Docker Networking Part 1, I'd recommend checking it out. In that post, I explain what "default Docker networking" means, then put on my headlamp and climbing gear as I go deep down into the layer 1 (physical) and layer 2 (data link) aspects of container networking and how Linux networking concepts like virtual ethernet links and bridges provide the "magic."
In this post, I'm going to continue the exploration into layer 3 (network), shining the light on how container network traffic is routed, NATed, and filtered. So finish that protein bar, take a drink from your canteen, and let's shed some light on this next part of our journey!
Our default Docker bridge network exploration map
Part 1 ended with a network topology drawing of the container networking discussed throughout that post. I've expanded that topology to include details beyond the Linux bridge, veths, and containers we explored there, adding network details and information from outside the layer 2 space of the containers we'll use in today's exploration.

The additions to this topology drawing include:
- A fourth container called web has been added. This container is running a web server on port 80 that has been "published" (made available outside the container network) on port 81.
- The Linux host's primary network link, ens160, with its IP address of 172.16.211.128, has been added.
- The network processing layer from Linux that provides routing, filtering, and other network functions has been added.
- Two additional hosts in the lab network outside of the Linux host running the containers have been shown, along with basic connectivity indications.
A ping in the dark…
We're going to start with a simple test to determine whether we can ping from one container to the primary IP address of the Linux host hosting the container. Specifically, from C1 to ens160's IP address of 172.16.211.128.
root@c1:/# ping 172.16.211.128
PING 172.16.211.128 (172.16.211.128) 56(84) bytes of data.
64 bytes from 172.16.211.128: icmp_seq=1 ttl=64 time=1.80 ms
64 bytes from 172.16.211.128: icmp_seq=2 ttl=64 time=0.051 ms
64 bytes from 172.16.211.128: icmp_seq=3 ttl=64 time=0.092 ms
^C
--- 172.16.211.128 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2021ms
rtt min/avg/max/mdev = 0.051/0.646/1.796/0.813 ms
The success is probably not all that surprising, and it may not even feel all that satisfying. I mean, we network engineers ping between two hosts all the time, right? So, why bother? Let me break it down for you.
First up, remember that the container and the Linux host are on two different layer 2 networks. The container is on 172.17.0.0/16, and the host is on 172.16.211.0/24. We would assume this type of traffic would involve routing, so let's check the routing table on the devices involved.
The container's table below shows us that the default route is used to reach the 172.16.211.0/24 network.
root@c1:/# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
And the host's table shows that there is actually a route for 172.17.0.0/16 through the docker0 interface.
root@expert-cws:~# ip route
default via 172.16.211.2 dev ens160 proto dhcp src 172.16.211.128 metric 100
172.16.211.0/24 dev ens160 proto kernel scope link src 172.16.211.128
172.16.211.2 dev ens160 proto dhcp scope link src 172.16.211.128 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
Nothing seems out of the ordinary, but don't leave just yet. I swear there's a point to this part of the exploration.
Let's look at the packets themselves, using the Linux tool tcpdump to watch the ICMP traffic on ens160.
root@expert-cws:~# tcpdump -n -i ens160 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens160, link-type EN10MB (Ethernet), capture size 262144 bytes
I started up the capture and then issued another ping from C1 to the Linux host. But I didn't see any packets captured on interface ens160, despite the pings being successful. See, I told you it would get interesting. 🙂
Let's change our capture to the docker0 interface instead.
root@expert-cws:~# tcpdump -n -i docker0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
14:51:53.427337 IP 172.17.0.2 > 172.16.211.128: ICMP echo request, id 38, seq 1, length 64
14:51:53.427373 IP 172.16.211.128 > 172.17.0.2: ICMP echo reply, id 38, seq 1, length 64
Okay, that looks better. But why are we seeing the traffic on the docker0 interface when the destination was the address assigned to the ens160 interface? Well, because the traffic never actually reaches the ens160 interface. The networking stack within the Linux system processes the traffic, and since it is all internal to the system, there is no need for the traffic to ever touch the network link/adapter.
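If you'd like to see that local-delivery decision for yourself, ip route get asks the kernel how it would handle a packet toward a given destination. Running it on the host should report that the address is local and handled internally rather than sent out ens160 (the exact output varies a bit from system to system):

ip route get 172.16.211.128
# typically reports something along the lines of: local 172.16.211.128 dev lo src 172.16.211.128 ...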
Then why does it show up on the docker0 interface at all? Why can't the networking stack just process it directly and leave all "interfaces" out of it? This is because of the network isolation that is used as part of Docker networking. Recall from Part 1 how, when we ran ip link from within the container, we only saw the container's interface and not the other interfaces on the host. And when we ran the command from the host, we did NOT see the container interfaces in the list. We only saw the host side of the veth pair. As a reminder, here is the command from the container host.
root@expert-cws:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:75:99:27 brd ff:ff:ff:ff:ff:ff
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:9a:0c:8a:ee brd ff:ff:ff:ff:ff:ff
97: vethb192fa8@if96: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 46:10:b9:df:52:8b brd ff:ff:ff:ff:ff:ff link-netnsid 0
99: veth055569e@if98: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 52:07:4f:3e:11:c6 brd ff:ff:ff:ff:ff:ff link-netnsid 1
101: veth3a3ee0b@if100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether 9e:51:13:75:53:52 brd ff:ff:ff:ff:ff:ff link-netnsid 2
105: vethd8a9fa5@if104: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
    link/ether d2:86:8f:ab:75:0b brd ff:ff:ff:ff:ff:ff link-netnsid 3
The blue link number 97 represents the host side of the veth that connects to link number 96 on C1. But where is link number 96 in the list?
Linux network namespaces enter the story
The answer is that link number 96, the eth0 interface for C1, is in a different network namespace from the default one that the other links on the host belong to.
Linux namespaces are an abstraction within Linux that allows system resources to be isolated from each other. Namespaces can be set up for many different types of resources, including processes, mount points, and networks. In fact, these namespaces are key to how Docker containers run as isolated instances, separate from each other and from the host they run on.
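Docker creates and manages its network namespaces itself (so they won't show up under the ip netns command by default), but if you'd like a feel for the mechanism, here is a minimal sketch using a throwaway namespace. The name "demo" is just an example, not something Docker creates:

ip netns add demo            # create a new, empty network namespace
ip netns exec demo ip link   # inside it, only a (down) loopback interface exists
ip netns delete demo         # clean up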
We can view the network namespaces on our host with the lsns (list namespaces) command.
root@expert-cws:~# lsns --type=net
        NS TYPE NPROCS    PID USER   NETNSID    NSFS                           COMMAND
4026531992 net     375      1 root   unassigned /run/docker/netns/default      /sbin/init maybe-ubiquity
4026532622 net       1  81590 uuidd  unassigned                                /usr/sbin/uuidd --socket-a
4026532675 net       1   1090 rtkit  unassigned                                /usr/libexec/rtkit-daemon
4026532749 net       2 134263 expert unassigned                                /usr/share/code/code --typ
4026532808 net       1 267673 root   0          /run/docker/netns/74fa6636a15f bash
4026532872 net       1 267755 root   1          /run/docker/netns/e12672b07df8 bash
4026532921 net       6 133573 expert unassigned                                /opt/google/chrome/chrome
4026532976 net       1 133575 expert unassigned                                /opt/google/chrome/nacl_he
4026533050 net       1 267840 root   2          /run/docker/netns/5cab1255c9ae bash
4026533115 net       1 268958 root   3          /run/docker/netns/c54dcb1bd674 /bin/bash
Each of the entries in the list colored blue represents one of the four containers we are running now, with the PID column identifying the specific process tied to each container. We can determine the PID for a container by inspecting it.
root@expert-cws:~# docker inspect c1 | jq .[0].State.Pid
267673
And with that, we can now run the command to view the network links from within the container's network namespace.
root@expert-cws:~# nsenter -t 267673 -n ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
96: eth0@if97: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
And BAM! There we have link number 96.
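As a side note, if you don't feel like copying PIDs around, the two steps can be combined into one line using docker inspect's format option. This is just a convenience, shown here as a sketch:

nsenter -t "$(docker inspect -f '{{.State.Pid}}' c1)" -n ip link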
And this brings us back to the question: why didn't the host network stack process the ping directly from the container? Why do we see the traffic on the docker0 interface? Because the networking "stack" is really the network namespace, and the container's network namespace, where the ping originated, is different from the default network namespace, where the IP address for the ens160 interface lives. It is the virtual ethernet "cable" that allows traffic from the container namespace to reach the default namespace, through the docker0 interface. And once the traffic arrives at the docker0 interface, the networking stack can process the request and send the reply, all through the docker0 interface.
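If you want to see that "virtual cable" mechanism outside of Docker, here is a rough sketch you could try by hand. The namespace name, the interface names, and the 10.99.0.0/24 addressing are all made up for the example; Docker does the equivalent work for you when it creates a container:

ip netns add demo
ip link add veth-host type veth peer name veth-cont    # create the veth pair (the "virtual cable")
ip link set veth-cont netns demo                       # move one end into the namespace
ip addr add 10.99.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo ip addr add 10.99.0.2/24 dev veth-cont
ip netns exec demo ip link set veth-cont up
ip netns exec demo ping -c 1 10.99.0.1                 # traffic crosses namespaces over the veth
ip netns delete demo                                   # clean up (this removes the veth pair too)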
Pinging beyond the gates… er, host
So we've now seen how network isolation is accomplished with Linux namespaces, and the impact that has on which interfaces are involved in the network processing of traffic. For our next test, let's send traffic outside of the Linux host where our containers are running, with a ping to another host, Host01, from the network topology.
root@c1:/# ping -c 1 172.16.211.1
PING 172.16.211.1 (172.16.211.1) 56(84) bytes of data.
64 bytes from 172.16.211.1: icmp_seq=1 ttl=63 time=0.271 ms

--- 172.16.211.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms
I sent a single ping packet from the container, and we can see that it was successful. Before sending the ping, I started a packet capture on both the docker0 and ens160 interfaces to capture the traffic along the way and compare the differences as the traffic arrived in the default network namespace from the container and as it was sent out from the host toward its destination (as well as the return trip).
# Capture on the docker0 interface
root@expert-cws:~# tcpdump -n -i docker0 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
17:11:13.047823 IP 172.17.0.2 > 172.16.211.1: ICMP echo request, id 41, seq 1, length 64
17:11:13.048061 IP 172.16.211.1 > 172.17.0.2: ICMP echo reply, id 41, seq 1, length 64

# Capture on the ens160 interface
root@expert-cws:~# tcpdump -n -i ens160 icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens160, link-type EN10MB (Ethernet), capture size 262144 bytes
17:11:13.047856 IP 172.16.211.128 > 172.16.211.1: ICMP echo request, id 41, seq 1, length 64
17:11:13.048024 IP 172.16.211.1 > 172.16.211.128: ICMP echo reply, id 41, seq 1, length 64
Take a look at the output above. The blue lines are the echo requests sent from the container, and the green lines are the echo replies from the other host. The bold red addresses in the requests represent the source addresses from the container, and the bold orange addresses indicate the destination addresses for the reply packets. On the docker0 captures, the addresses shown are the IP addresses assigned to the C1 interface, which would be expected. However, on the ens160 capture, the addresses have been translated to the IP address of the Linux host machine's ens160 interface.
That's right, our old friend Network Address Translation (NAT) shows up in container networking as well. In fact, so does NAT's very useful cousin, PAT (Port Address Translation), but I'm getting ahead of myself.
Entering the Docker networking story… iptables!
The networks created by Docker to support a bridge-type network are built to be private and to use IP address space that is NOT reachable from outside the Docker-managed network. However, many services deployed and managed with Docker do require connectivity beyond the small number of containers making up the service and running on the host. Docker leverages the same network concept used elsewhere to solve this problem: Network (and Port) Address Translation (NAT/PAT). And similar to how we've seen Docker leveraging Linux constructs like bridges and namespaces, Docker uses iptables to perform the address translation and filtering involved here as well.
Before we dig into how iptables is involved in these traffic flows, I wanted to give a quick caveat. Network traffic processing and flow through the underbelly of Linux is a complicated topic, and iptables is both a powerful and complicated tool. I plan to break the topic down here on the blog to describe and explain what is happening under the hood of Docker networking in as simple and clear a way as possible. But a thorough exploration of iptables and Linux networking would be worthy of several blog posts on their own.
With iptables, rules are created and applied to the processing of network traffic as it is handled by Linux. These rules are applied at different points in the processing of traffic to accomplish a variety of different tasks. Rules can be applied:
- Before any routing decision is made (PREROUTING).
- As traffic destined for the local host arrives (INPUT).
- As traffic created by the local host is sent (OUTPUT).
- As traffic "passing through" the local host is processed (FORWARD).
- After the routing decision is made (POSTROUTING).
And the rules that are written can do a variety of things to the traffic (a generic rule is sketched just after this list):
- Traffic can be blocked/denied.
- Traffic can be allowed/permitted.
- Traffic can be redirected elsewhere.
- Traffic can have its source or destination addresses modified (NAT/PAT).
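To make that anatomy concrete, here are two purely illustrative rules (don't add these to the lab host; they have nothing to do with Docker). Each one names a table (-t), appends to a chain (-A), lists match criteria, and ends with a target (-j) that decides what happens to matching traffic:

iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT      # allow inbound SSH to this host
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE         # source-NAT traffic leaving eth0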
Rules are added to one of the "tables" that iptables manages. The two tables worth mentioning now are the filter and nat tables. The filter table holds rules primarily concerned with whether traffic is allowed or blocked, while the nat table has rules related to address translation. Let's look at the nat table and see if we can find what caused the translation of the ICMP traffic from our example.
root@expert-cws:~# iptables -L -v -t nat
Chain PREROUTING (policy ACCEPT 251 packets, 67083 bytes)
 pkts bytes target     prot opt in     out     source               destination
   18  1292 DOCKER     all  --  any    any     anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 247 packets, 66747 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 19468 packets, 1264K bytes)
 pkts bytes target     prot opt in     out     source               destination
    3   252 DOCKER     all  --  any    any     anywhere            !localhost/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 19472 packets, 1264K bytes)
 pkts bytes target      prot opt in     out       source               destination
   31  1925 MASQUERADE  all  --  any    !docker0  172.17.0.0/16        anywhere

Chain DOCKER (2 references)
 pkts bytes target     prot opt in       out     source               destination
    7   588 RETURN     all  --  docker0  any     anywhere             anywhere
Look at the rule in the POSTROUTING chain colored blue. This is the rule that caused the translation we saw. Now, let's break down the parts of the rule that are used to match traffic for processing.
- protocol = all
  - Match traffic of any protocol type
- in = any / out = !docker0
  - Match traffic coming IN any interface and going OUT any interface other than docker0
  - Traffic going OUT docker0 would be headed toward a container
- source = anywhere / destination = anywhere
  - Match traffic from or to any address
The "target = MASQUERADE" part describes the action this rule will take. You may be more familiar with targets like DROP or ACCEPT that show up in the filter table, but the nat table has a different set of targets that indicate the type of translation that will occur. MASQUERADE is a type of source address translation (SNAT) that translates the source network address of the traffic to the address assigned to the interface the traffic is routed OUT of.
Consider the echo request sent from the container against this rule.
- An echo request matches the "all" protocol.
- The packet came in the docker0 interface (in = any) and will be going out ens160 (out = !docker0).
- The source and destination are certainly "anywhere."
When the traffic was processed against this rule, the MASQUERADE target/action was taken to SNAT the source address to the IP address of the ens160 interface, which is exactly what we saw happen.
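For reference, the rule we just walked through corresponds roughly to the command below. I'm showing it only to connect the listing back to iptables syntax; Docker installs and manages this rule itself, so there is no need to add it by hand:

iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE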
Look out! There's a web (server) ahead!
So far we've used some ICMP traffic with ping to look at how containers can reach external networks and hosts. But what about when a container is running a service like a web server that is designed to be available to external users? Let's end our discussion with this example. To get started, we're going to need a web server.
There is a multitude of web servers that can be run as Docker containers, but for our exploration here, I'm going to keep it very simple and use the HTTP server that is included with Python and the standard "python:3" Docker image maintained by the Python Software Foundation and Docker.
# Start the container in the background
root@expert-cws:~# docker run -tid --rm --name web --hostname web -p 172.16.211.128:81:80 python:3 /bin/bash

# Attach to the running container
root@expert-cws:~# docker attach web

# Start a basic web server
root@web:/# python -m http.server 80
Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...
The "docker run" command should be familiar from the commands we ran in Part 1, but there is a new option included. We need to "publish" the container's ports that should be made available to external hosts. A container can have no ports published, or many dozens of ports, depending on the unique needs of that service.
In the command above, I'm publishing port 80 from the container to port 81 on the host server's IP address of 172.16.211.128.
If I had left off the IP address to publish the service on, Docker would have made the web server available on any/all IP addresses on the underlying host. Leaving off an explicit IP address when publishing a service is common; however, I find being explicit a better strategy. This is somewhat of a personal preference in application design.
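For comparison, here are the two forms side by side. These are just illustrative variations on the command above, using the same example names and ports:

# Publish only on a specific host address (what I did above):
docker run -tid --rm --name web --hostname web -p 172.16.211.128:81:80 python:3 /bin/bash

# Publish on all host addresses (the more common shorthand):
docker run -tid --rm --name web --hostname web -p 81:80 python:3 /bin/bash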
I can now attempt to access the web server from Host01.
Excellent. By browsing to the IP address of the Linux host on port 81, I'm greeted with a directory listing from the container where the Python web server is running.
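If you'd rather test from a terminal than a browser, a simple request from Host01 exercises the same path (curl is just an assumed client here; any HTTP client works):

curl http://172.16.211.128:81/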
Tracing the web traffic with packets and tables
Let's finish our exploration today by examining the incoming web traffic and the translation rules that connect everything together.
We need to change up our packet capture commands to capture the web traffic on both the ens160 and docker0 interfaces. As traffic arrives at the Linux host, it will be destined to TCP port 81 and then translated to TCP port 80 before it is sent on to the container.
# Capture traffic on the Linux host interface
root@expert-cws:~# tcpdump -n -i ens160 'tcp port 81'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ens160, link-type EN10MB (Ethernet), capture size 262144 bytes
18:34:59.147085 IP 172.16.211.1.64534 > 172.16.211.128.81: Flags [SEW], seq 3761281905, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 1727838954 ecr 0,sackOK,eol], length 0
18:34:59.147191 IP 172.16.211.128.81 > 172.16.211.1.64534: Flags [S.E], seq 3294439992, ack 3761281906, win 65160, options [mss 1460,sackOK,TS val 3650251894 ecr 1727838954,nop,wscale 7], length 0
.
.

# Capture traffic being sent to the containers
root@expert-cws:~# tcpdump -n -i docker0 'tcp port 80'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on docker0, link-type EN10MB (Ethernet), capture size 262144 bytes
18:34:59.147133 IP 172.16.211.1.64534 > 172.17.0.5.80: Flags [SEW], seq 3761281905, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 1727838954 ecr 0,sackOK,eol], length 0
18:34:59.147178 IP 172.17.0.5.80 > 172.16.211.1.64534: Flags [S.E], seq 3294439992, ack 3761281906, win 65160, options [mss 1460,sackOK,TS val 3650251894 ecr 1727838954,nop,wscale 7], length 0
.
.
I've limited the output above to just the start of the request, where we can see the translation at work.
In the output above, the blue lines represent the initial request packet from the web browser to the server, and the green lines are the first packet sent back to establish the session. By looking at the bold red and orange addresses, you can see the destination address translation (DNAT) at work in the communications. The source addresses are left unchanged; in fact, in the logs from the container below, you can see the IP address of Host01.
root@web:/# python -m http.server 80
Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...
172.16.211.1 - - [07/Sep/2022 18:30:07] "GET / HTTP/1.1" 200 -
We can once again look at the nat table using iptables and find the rule that provides this behavior.
root@expert-cws:~# iptables -L -v -t nat -n
Chain PREROUTING (policy ACCEPT 42 packets, 4004 bytes)
 pkts bytes target     prot opt in     out     source               destination
   38  2712 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 37 packets, 3584 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 15523 packets, 1010K bytes)
 pkts bytes target     prot opt in     out     source               destination
   17  1092 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 15530 packets, 1010K bytes)
 pkts bytes target      prot opt in     out       source               destination
   42  2849 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0
    0     0 MASQUERADE  tcp  --  *      *         172.17.0.5           172.17.0.5           tcp dpt:80

Chain DOCKER (2 references)
 pkts bytes target     prot opt in        out     source               destination
   14  1176 RETURN     all  --  docker0   *       0.0.0.0/0            0.0.0.0/0
    7   448 DNAT       tcp  --  !docker0  *       0.0.0.0/0            172.16.211.128       tcp dpt:81 to:172.17.0.5:80
The rule in blue has the target set to DNAT, along with a destination of 172.16.211.128 and a translation of "tcp dpt:81 to:172.17.0.5:80". The rule is applied during both the PREROUTING and OUTPUT stages of network processing by using the ability within iptables for a rule to target another chain, in this case the DOCKER chain.
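As with the MASQUERADE rule earlier, this DNAT rule is created and managed by Docker, but for reference it corresponds roughly to a command like this (illustrative only; don't add it yourself):

iptables -t nat -A DOCKER ! -i docker0 -p tcp -d 172.16.211.128 --dport 81 -j DNAT --to-destination 172.17.0.5:80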
A quick stop at the filter table
There is one final stop I want to make in our exploration of these traffic flows before finishing up. So far our iptables commands have targeted the nat table (-t nat). Let's take a look at the filter table, where the ACCEPT/DROP rules are found.
root@expert-cws:~# iptables -L -v -t filter -n
Chain INPUT (policy ACCEPT 231K packets, 26M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
  141 15676 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0
  141 15676 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0
39461  118M ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
   13   952 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
30852 1266K ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
    6   504 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 215K packets, 18M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER (1 references)
 pkts bytes target     prot opt in       out      source               destination
    7   448 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.5           tcp dpt:80

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in      out       source               destination
30852 1266K DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0
70326  120M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
 pkts bytes target     prot opt in     out      source               destination
    0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0
30852 1266K RETURN     all  --  *      *        0.0.0.0/0            0.0.0.0/0

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination
70326  120M RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0
Most of the DOCKER-related aspects found in the filter table are there to ensure the network isolation of containers. However, the rule in blue that I've indicated above is essential to how services are exposed from a container to the outside world. This rule will ACCEPT TCP port 80 traffic destined for 172.17.0.5 (the web container) that arrives on any interface other than docker0 and goes out the docker0 interface. This rule uses the container's actual IP address and port because the filtering happens after the DNAT from the nat table.
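And once more for reference, that Docker-managed ACCEPT rule lines up roughly with a command like the following (shown only to tie the listing back to iptables syntax; Docker maintains it for you):

iptables -t filter -A DOCKER -d 172.17.0.5/32 ! -i docker0 -o docker0 -p tcp --dport 80 -j ACCEPT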
The light at the end of the default Docker networking journey
And so we find ourselves at the end of this exploration of default Docker networking. Looking around the group, I'm glad to see that we didn't lose anyone along the way, but I know it was a deep one. And you might not believe me, but even after another 3,500 words on the topic of Docker networking (a total of over 7,000 between Parts 1 and 2), there's plenty more to explore on the subject. Overlay networks, how DNS works for containers, custom network plugins, and (gasp) Kubernetes networking are all out there for you to explore!
My goal for this short series was to help give you a foundation on which you can continue to build your knowledge of container networking, and to make the topic less mysterious or daunting for network engineers new to it. It can be very easy to become intimidated when working through container introductions that "just work" but don't explain "why they work" or "how they work." If I did my job right, the magic isn't so magical anymore.
Here are a few links to other resources worth checking out for more information on the topic.
- In Season 2 of NetDevOps Live, Matt Johnson joined me to do a deep dive into container networking. His session was fantastic, and I reviewed it when getting ready for this post. I highly recommend it as another great resource.
- The Docker documentation on networking is excellent. I referenced it often when putting this post together.
- The man pages for Linux namespaces and iptables are excellent resources about these important technologies that enable Docker networking.
- And check out the man page for tcpdump if you'd like to do more packet capturing.
And as always, please let me know what you thought of this post in the comments or over on Twitter. What should I "explore" next here on the blog? Thanks for reading!
Follow Cisco Learning & Certifications
Twitter | Facebook | LinkedIn | Instagram
Use #CiscoCert to join the conversation.