When running self-hosted web services at home, the reality is that these services will be accessed from the outside at least some of the time, and from the inside probably most of the time.
This means addressing requires a dual setup: one that works inside the network, and one that works outside.
This duality stems from the fact that I'm relying on IPv4 addressing and NAT, which means the external IP and the internal IP will be different.
An internal DNS service is needed to resolve the internal addressing, while the outside world will most likely connect through a single entry point.
Before I had a properly configured DNS server, when everything was on DHCP things would sometimes work, but when they didn't, figuring out why wasn't easy.
When static IPs were involved, making changes was a nightmare, as the IP addresses were littered all over my configuration files and scripts (on multiple hosts).
Loving the ease of use of my new dockerized home-server realm, I was looking for something I could easily start up as a Docker container and potentially duplicate, that would be easy to configure, and that would have good logging/auditing trails to help figure out situations where things didn't work as expected.
Searching around, I eventually chose AdGuard Home, a self-hosted web-based DNS service that filters ads and provides some DNS-centric security controls.
The main appeal for me (other than the ad-blocking and security) was the relatively easy DNS rewrite rules. These allow static assignment of IPs to hostnames, but also easy wildcard-hostname matching, with the option not only to alias (aka CNAME) to an existing domain name, but also to ignore internal rules when a hostname needs to resolve from the outside.
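For illustration, here is roughly how a couple of rewrite rules end up in AdGuardHome.yaml. This is a sketch: the hostnames and IPs are made up, and depending on the AdGuard Home version the rewrites list may live under a different top-level section of the config file (it is usually easier to manage them from the web UI under Filters / DNS rewrites):

```yaml
rewrites:
  - domain: nas.lan            # static assignment: one hostname, one IP
    answer: 192.168.0.20
  - domain: '*.example.com'    # wildcard: every subdomain resolves internally...
    answer: 192.168.0.101      # ...to the reverse proxy on the docker host
```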
Choosing where to run the DNS Server
A DNS server is a critical part of a network, and if it isn't functional, all basic operations are practically moot. Hosting a DNS server in a Docker container has its own set of risks and complications, as with any shared resource.
One option is to NOT use containers, but instead host AdGuard Home on a Raspberry Pi. I must say it does have some appeal: the idea of a single service housed by a single piece of hardware (maybe two), with physical separation of function (if something doesn't work, disconnect-and-reconnect the power).
But I'm a software dude, and multiple Raspberry Pis mean multiple power supplies, multiple network cables, heat management, etc. - all hardware concerns I don't really want to worry about more than I already do.
I already run 2 separate (physical) Linux servers at home; they are underutilized as such, and that's where my affection for Docker started.
So I looked at the concerns of running AdGuard on one (or both) of my servers in a Docker container, and what problems I would need to address.
What can go wrong?!
Networking is probably the most important aspect of this setup. The Docker container must be available on the LAN in such a way that the router, just like any other machine, can access it.
Also, I might want to run multiple instances of the service, either on separate servers, or on the same one but with different subnet addresses (because the answers might need to be different for different subnets).
This is where I dove deep into Linux virtual networking, specifically macvlan and ipvlan Docker networks.
The way I see macvlan, in extremely oversimplified terms, it's a 'virtual NIC' with its own MAC address, attached at the same point as the physical NIC it connects through, allowing a container to 'reside on the LAN' the physical NIC is attached to. A separate MAC address, for example, allows an external DHCP server to assign the container an IP address on the LAN. But mainly, it allows complete separation of traffic between the host and the container.
One complication it introduces is the requirement of putting the physical NIC in promiscuous mode, which means overloading the system with all the traffic visible to the NIC.
ipvlan takes the concept a step further, utilizing the same MAC address as the physical NIC, but assigning an IP address that resides on the LAN yet is separate from the host's IP.
This virtualization allows us to assign a static (or random) LAN IP address to the container and have it communicate with the other machines on the network.
There is no need to publish specific ports; the container (almost) gets 'first class citizen' status on the network, yet shares all other resources with the Linux host.
IMPORTANT note: Communication with the host OS isn't possible on this interface; to achieve that, a bridge (or host) Docker network will be needed.
ipvlan is superior to bridge networks for a service like DNS, because DNS always needs to expose itself on port 53, and some Linux distributions come pre-baked with an internal DNS resolver listening on 127.0.0.53:53. Having a completely different address to bind to frees us from worrying about what is already in place.
It also allows us to run multiple instances on the same port, but with different IP addresses.
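A quick way to see whether something (like systemd-resolved's stub listener) is already holding port 53 on the host - a sketch, assuming a systemd-based distribution:

```shell
# Show any listener already bound to port 53 on the host
sudo ss -lntup 'sport = :53'

# If it's systemd-resolved and you ever DO need port 53 on the host itself,
# the stub listener can be turned off (NOT required for the ipvlan approach):
#   set DNSStubListener=no in /etc/systemd/resolved.conf
#   sudo systemctl restart systemd-resolved
```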
Having learned all about the networking options, I set out to configure my take on an internal DNS server.
How I set it up...
At this point I'm switching to specifics, as I think the theory and thought process were explained above.
Given the following plan:

```
# Domain Name (visible from the outside): example.com
# Internal domain suffix (only inside):   lan
#
# IP addresses, hostnames, and whether they are accessible internally or externally
# IP Address     hostname      network                accessible
192.168.0.1      gw            LAN                    internal
192.168.0.101    docker_host   LAN (eno0 interface)   internal
# docker_host is hosting the following services
*:80/443         proxy         bridge
192.168.0.53     dns1          ipvlan
192.168.0.153    dns2          ipvlan
```
With the setup listed above, the GW will assign dns1 as its default DNS server, with dns2 as its backup. For the sake of this article both DNS servers will run on the same machine, but it would be recommended to run them on separate servers.
First you'll need to set up an ipvlan network. It will be the same network for both services, since they both have an IP address on the same LAN.
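Creating it is a one-time step on the Docker host. A sketch, assuming the physical NIC is eno0 and the LAN is 192.168.0.0/24 with the gateway at 192.168.0.254 (adjust to your own interface and subnet):

```shell
# Create a shared ipvlan network named "lan", attached to the physical NIC.
# Both dns1 and dns2 will get static addresses from this subnet.
docker network create -d ipvlan \
  -o parent=eno0 \
  --subnet=192.168.0.0/24 \
  --gateway=192.168.0.254 \
  lan
```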
Now, with the following docker-compose YAML file, we can define both DNS servers:
NOTE: this docker-compose file contains a commented-out network definition that you may choose to use instead. However, I found that defining an external ipvlan network allows you to share the LAN network across multiple docker-compose stacks.
Because these 2 servers are going to be practically identical except for their IP addresses and data stores, I'll split the docker-compose definition into 2 files: a shared base template and a main file that extends it.
And here is the main docker-compose file:

```yaml
networks:
  dns: ### c ###
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 10.53.53.0/24
          gateway: 10.53.53.254
  lan:
    external: true ### d ###
    name: lan
    ## A docker-compose implementation you may choose to use
    ## instead of an externally defined one.
    # driver: ipvlan
    # driver_opts:
    #   parent: eno0
    # ipam:
    #   driver: default
    #   config:
    #     - gateway: 192.168.0.254
    #       subnet: 192.168.0.0/24

services:
  dns1: ### e ###
    hostname: dns1
    extends:
      file: adguard.docker-compose.base.yaml
      service: adguard
    networks:
      lan:
        ipv4_address: 192.168.0.53
      dns:
        ipv4_address: 10.53.53.1
    volumes:
      - ./dns1/work:/opt/adguardhome/work:rw
      - ./dns1/conf:/opt/adguardhome/conf:rw

  dns2: ### e ###
    hostname: dns2
    extends:
      file: adguard.docker-compose.base.yaml
      service: adguard
    networks:
      lan:
        ipv4_address: 192.168.0.153
      dns:
        ipv4_address: 10.53.53.2
    volumes:
      - ./dns2/work:/opt/adguardhome/work:rw
      - ./dns2/conf:/opt/adguardhome/conf:rw
```
It's already a long post and there's a lot going on in the file, so in this final stretch I'll break down what is going on in the configuration.
Defining the template
Section ### a ### refers to the base file (adguard.docker-compose.base.yaml). Here the AdGuard Docker container basics are defined; everything that both servers will share is laid out there.
Here is what's defined:
- Manual DNS definitions, matching those defined from within AdGuard Home (later on). The reasoning here is to prevent circular DNS queries, as this DNS server is going to be serving the Linux host as well.
- Always restart the container; we don't want DNS to ever be down.
- Section ### b ### is optional; uncomment it in case you want to disable IPv6.
- Some additional Docker housekeeping values (memory limiting, time synchronization with the host, etc.); most common configuration can go here.
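The base file itself isn't shown above, so here is a minimal sketch of what adguard.docker-compose.base.yaml could look like, reconstructed from the points above. The image tag, memory limit, and the exact IPv6 sysctl are my assumptions, not the original file:

```yaml
services:
  adguard: ### a ###
    image: adguard/adguardhome:latest
    restart: always              # we never want DNS to be down
    dns:                         # manual DNS for the container itself,
      - 127.0.0.1                # preventing circular queries through the host
    ### b ### uncomment to disable IPv6 inside the container (assumed sysctl)
    # sysctls:
    #   - net.ipv6.conf.all.disable_ipv6=1
    mem_limit: 256m              # housekeeping: cap memory usage
    volumes:
      - /etc/localtime:/etc/localtime:ro   # keep time in sync with the host
```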
The Bridge Network
As mentioned earlier, a bridge network is required for the Linux host to be able to communicate with the containers; I chose to give mine static IPs as well. This will allow me to specify the DNS server as the primary resolver of the host.
I chose 10.53.53.0/24 as it's not in Docker's default bridge network assignments and doesn't overlap anything on my small network; you might want to choose a different range.
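With static addresses on the bridge, pointing the host at the containers is straightforward. A sketch of the host side (how you actually set the resolver depends on the distribution - plain resolv.conf, systemd-resolved, netplan, etc.):

```
# /etc/resolv.conf on the docker host
nameserver 10.53.53.1   # dns1, reachable over the bridge network
nameserver 10.53.53.2   # dns2 as backup
```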
dns1 and dns2 are defined pretty much the same (see the sections marked ### e ###). They both extend the adguard service defined in the base file.
Where they differ is in their storage: the work and conf volumes should not be shared between the containers. (There is a solution to sync them properly, but I have yet to walk that path.)
And of course, their IP addresses need to be different.
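Once both containers are up, each instance can be queried directly from another machine on the LAN to confirm they answer independently (nas.lan here is a hypothetical internal hostname):

```shell
dig +short nas.lan @192.168.0.53    # ask dns1
dig +short nas.lan @192.168.0.153   # ask dns2
```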
There are some additional steps that need to happen outside, in the router and on the host, but I don't want to stretch this longer than I already have. I feel that what I presented here was the hardest (for me) to investigate and get right.
Running a critical service in a network, and running it correctly, is no small feat. AdGuard Home truly makes it easy, but there are some steps that are important to perform correctly, and those are what I laid out here.
ipvlan is a very powerful tool which most of the time isn't needed, but for a DNS server it is absolutely a MUST.
I no longer feel lost in my own network.