I've had a pihole on my home network for years. I set it up on a pi3b natively and it ticked along doing its thing, until it didn't. The SD card wore out and became read-only. It took me a while to discover that this was the issue, as everything still worked, but you couldn't make any changes. Well, you could - everything seemed to work, but nothing was actually being written. Fortunately I had a Teleporter backup of the pihole. I quickly spun up a VM with a pihole docker instance and loaded the data from the backup. I then pointed all the machines at the new IP.[1] In the meantime, I ordered M.2 SATA SSDs and USB enclosures, and worked out how to get my two Raspberry Pis to boot off a USB-attached SSD.[2]

When I set the pi up again on the new SSD, I decided that the convenience and ease of the docker setup was ideal, so I used it there too. I ran both instances of pihole but kept them separate initially. I eventually found a script to sync the piholes via git; at a later date I found gravity-sync, which is just brilliant.

I configured all my hosts to use the pi as primary DNS and the pihole VM as secondary. This worked brilliantly until, a few months later, I noticed a weird issue: internet access seemed a bit slow. It took a short while to open an initial connection to a site, but after that things were fine. Investigating, I discovered the pi wasn't responding to ping. The SSD had died. I RMA'd the SSD and ordered a replacement; they're fairly cheap and the RMA process would take a while. In the meantime, I configured everything on the network to point to the pihole VM as primary.

While waiting for the new SSD to arrive, I started thinking about load balancing. Previously, the pi had been answering queries from most of the user devices, while the VM mostly answered queries for my internal servers/services. NGINX can do load balancing, I was already running a few instances for various things, and I also use Nginx Proxy Manager to handle just about everything, so I figured I could probably use it to do the balancing as well.

It turns out you can't use Nginx Proxy Manager to do load balancing. Yet.[3] Its interface doesn't allow for it.[4]

I set up a new nginx docker instance using the official alpine-slim image; since I'm only going to use it for load balancing, I don't need bells and whistles, and the image is only ~11 MB. I created a file called stream.conf and included it in nginx.conf.

stream.conf:

stream {
        upstream dns_servers {
                random;
                server 192.168.0.2:53;
                server 192.168.0.3:53;
        }
        server {
                listen 53;
                proxy_pass dns_servers;
        }
        server {
                listen 53 udp;
                proxy_pass dns_servers;
        }
}
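
For completeness, the top of nginx.conf ends up looking roughly like this - the paths and numbers are illustrative rather than a copy of my exact file, but they show where the stream include goes and the worker_connections value I had to raise (see footnote 4):

worker_processes  auto;

events {
        # the default is far too low for a DNS load balancer - see footnote 4
        worker_connections  4096;
}

# stream {} must sit at the top level, outside any http {} block
include /etc/nginx/stream.conf;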

I also made sure to publish port 53, both TCP and UDP, on the docker container.
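
The run command looks something like this - the container name, host config paths and image tag are placeholders rather than my exact setup:

docker run -d --name dns-lb \
        -p 53:53/tcp -p 53:53/udp \
        -v /srv/dns-lb/nginx.conf:/etc/nginx/nginx.conf:ro \
        -v /srv/dns-lb/stream.conf:/etc/nginx/stream.conf:ro \
        nginx:alpine-slim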

I've now configured the load balancer as primary DNS, with one of the pihole instances as secondary.[5] So far it's working really well. I did notice some delays here and there, but I solved that; see footnote 4.

One drawback of this method is that the piholes only see queries coming from the load balancer; they don't see the actual source of the query. This is not a huge issue for me. I've also configured my firewall to capture direct external DNS query attempts and redirect them to the load balancer, with some source and destination NAT thrown in.
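
In iptables terms, that redirect boils down to something like the rules below. My firewall isn't configured with these exact commands, and the LAN interface (eth0) and load balancer address (192.168.0.10) are stand-ins, but it shows the destination NAT plus the hairpin source NAT that makes replies flow back through the firewall:

# rewrite LAN DNS queries aimed at outside servers so they hit the load balancer instead
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 ! -d 192.168.0.10 \
        -j DNAT --to-destination 192.168.0.10:53
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 53 ! -d 192.168.0.10 \
        -j DNAT --to-destination 192.168.0.10:53

# source NAT the redirected traffic so the reply returns via the firewall;
# otherwise clients drop answers arriving from an address they never queried
iptables -t nat -A POSTROUTING -o eth0 -p udp --dport 53 -d 192.168.0.10 -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 53 -d 192.168.0.10 -j MASQUERADE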

Photo by Christophe Hautier on Unsplash


  1. A quick summary of the setup: a pihole container with a cloudflared container to tunnel DNS over HTTPS, both containers with their own local network IPs via a macvlan docker network (a rough sketch of this is at the end of these notes). The only drawback is that the docker host can't communicate with the two containers. There are workarounds for that, but I didn't bother. 

  2. For the pi3b, it was a matter of writing a utility image to an SD card and booting off it. For the pi4b, nothing additional needed to be done. 

  3. This issue has been open since 2019, and there's still no sign of v3, although there are recent commits to the branch, so it's not abandoned. 

  4. I just discovered you CAN get it working if you put the necessary config directives into a conf file in the stream config directory. HOWEVER, there may not be enough worker_connections configured; the directive doesn't seem to be set in any of the container's nginx config files, which means it will default to either 512 or 1024 - definitely not enough for a DNS load balancer. I noticed my load balancer logs complaining about insufficient workers while I was testing this, so I had to increase it. 

  5. And I've set the other pihole as tertiary for those systems that support three DNS server entries. I should also note that on the pihole docker hosts themselves, I've set the other pihole as primary DNS with an external DNS server as secondary. This gets around the problem of the pihole docker host not being able to communicate with its own macvlan-networked containers. 
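
For anyone curious about the macvlan arrangement in footnote 1, it boils down to something like this. The subnet, parent interface, addresses and image tags are examples rather than my actual values, and the usual pihole environment variables and volumes are left out for brevity:

# a macvlan network that gives containers their own addresses on the LAN
docker network create -d macvlan \
        --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
        -o parent=eth0 dns_net

# pihole with its own IP on the LAN
docker run -d --name pihole --network dns_net --ip 192.168.0.2 pihole/pihole:latest

# cloudflared providing the DNS-over-HTTPS upstream for the pihole
docker run -d --name cloudflared --network dns_net --ip 192.168.0.4 \
        cloudflare/cloudflared:latest proxy-dns --address 0.0.0.0 --port 53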
