Quite a while back I started messing around with Docker, after the forum software I use elsewhere started making builds for it and eventually made Docker the only supported deployment method. So I got into Docker fairly early, and although I didn’t delve into the depths of it, I did dabble with making my own containers for certain things. I got used to the way the networking worked, and was happy to proxy the web applications I wanted to make publicly accessible via nginx, or to open ports on the box for the applications that weren’t web based. Somewhere along the line, after blindly upgrading without reading the release notes (damn you, apt-get upgrade), the networking changed a bit. Something called docker-proxy came into being (if it wasn’t there from the start, I’m not sure), and my iptables rules were being messed with. I discovered via a security alert from my hosting provider that all of my containers’ published ports had been exposed to the internet.
After a lot of trial and error, I managed to fix the issue and get Docker working the way I wanted it to work, that is:
- Allow the containers to communicate with each other
- Open a port on the host only where direct access is needed
- Proxy HTTP apps via nginx
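The nginx part is the usual reverse-proxy setup. A minimal sketch, assuming a container publishing port 8080 to localhost (the server name and port here are placeholders, not from my actual config):

```nginx
server {
    listen 80;
    server_name app.example.com;

    location / {
        # Forward requests to the container's port, published on localhost only.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```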
The way to achieve this was to modify /etc/default/docker and add --iptables=false to the startup options. This prevents Docker from messing around with iptables too much. It will still add some rules to the FORWARD chain to allow containers to communicate with each other, which I can live with.
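On Debian/Ubuntu with the sysvinit-style packaging, the change looks something like this (the exact variable name depends on your distribution; systemd-based installs pass the flag via a drop-in unit file instead):

```shell
# /etc/default/docker
# Stop the Docker daemon from creating iptables rules itself.
DOCKER_OPTS="--iptables=false"
```

Remember to restart the Docker daemon after changing this file for it to take effect.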
The last thing to do was to add the following four iptables rules:
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 \! -o docker0 -j MASQUERADE
iptables -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i docker0 \! -o docker0 -j ACCEPT
iptables -A FORWARD -i docker0 -o docker0 -j ACCEPT
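As a rough sketch of what each of those rules does (assuming the default docker0 bridge on 172.17.0.0/16; adjust the subnet if yours differs):

```shell
#!/bin/sh
# NAT outbound traffic from containers so they can reach the internet.
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
# Allow reply packets back in to containers on established connections.
iptables -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Allow containers to initiate connections to the outside world.
iptables -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
# Allow container-to-container traffic across the bridge.
iptables -A FORWARD -i docker0 -o docker0 -j ACCEPT
```

Note these rules are not persistent across reboots on their own; something like iptables-persistent (or your distribution's equivalent) is needed to reload them at boot.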
Now everything works the way I expect it to.