On Mon, 30 Sep 2024 at 10:57, Luca Di Gregorio <luc...@gmail.com> wrote:
> I'm trying to figure out how to configure a redundant dhcp server.

If you stick to static entries for the hosts on your network, you can
just set up N+1 dhcpd daemons with identical configurations. They all
give the same answer, and a client doesn't mind getting another copy a
few milliseconds later: it is already configured by then, so it drops
the redundant answer.
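As an illustration, a minimal dhcpd.conf along those lines, identical
on every server, might look like this (subnet, MAC and addresses are
made up):

  subnet 192.168.1.0 netmask 255.255.255.0 {
      option routers 192.168.1.1;
      option domain-name-servers 192.168.1.1;

      # one static entry per host; the same on all N+1 servers
      host printer {
          hardware ethernet 00:11:22:33:44:55;
          fixed-address 192.168.1.50;
      }
  }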

> In dhcpd(8) I see that the options -y and -Y implement "synchronisation of the
> lease allocations to a number of dhcpd daemons", anyway, in dhcpd.conf(5)
> I can't find anywhere any statement that sets the dhcp server as 'primary'
> or 'backup'.

I think any daemon started with -Y will send out updates as it acts on
dhcp requests, and any daemon started with -y will listen for such
updates and, when one arrives, update its lease file with the
information learned from the sender.
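For example (untested sketch; interface names are made up, and you
should check dhcpd(8) for the exact -y/-Y argument forms), both
servers could run the same command, multicasting sync messages on a
shared segment while serving dhcp on the client-facing interface:

  # serve dhcp on em0, send and receive sync messages on em1
  dhcpd -y em1 -Y em1 em0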

Which of the two hears a request first and manages to send out an
update would be a race, I guess, but if they were carp'ed, the carp
backup would have a fully consistent view by the time it starts
serving dhcp requests after the main carp node disappears. The first
failover would be quick and clean; failing back again should perhaps
wait until every dhcp client has done a renewal, so that the currently
active dhcpd has sent its updates back before the service moves to the
normal node.
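A sketch of the carp side, with made-up addresses: on the master,
/etc/hostname.carp0 could contain

  inet 192.168.1.2 255.255.255.0 192.168.1.255 vhid 1 carpdev em0 pass mekmitasdigoat

and the backup would use the same line plus a higher advskew (say
"advskew 100"), so it only takes over when the master goes away.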

> Is a carp-like mechanism put in place with synchronisation?
> That is, in carp I can see the "state master" or "state backup",
> I would expect something similar for dhcpd's in synchronisation mode.

For the parallel to carp'ed firewalls: this is more like pfsync than
the carp part.

> Or, dhcrelay(8) must be used for redundancy? If yes, how?

As the number of networks grows, I would certainly think about having
a small net somewhere with N+1 dhcpds that sync with each other, and
then have all the dynamic networks relay their dhcp requests to this
"pool" of dhcp servers, as sketched below.

-- 
May the most significant bit of your life be positive.
