On Wed, Dec 27, 2023 at 06:04:22PM +0000, Guilherme Janczak wrote:

> On Wed, Dec 27, 2023 at 09:54:14AM +0100, Otto Moerbeek wrote:
> > On Mon, Dec 18, 2023 at 04:42:09PM +0000, Guilherme Janczak wrote:
> > 
> > > On Mon, Dec 18, 2023 at 07:31:37AM +0100, Otto Moerbeek wrote:
> > > > On Sun, Dec 17, 2023 at 05:37:50PM +0100, Otto Moerbeek wrote:
> > > >
> > > > > On Fri, Dec 15, 2023 at 02:54:19PM +0100, Otto Moerbeek wrote:
> > > > >
> > > > > > On Sun, Dec 10, 2023 at 09:57:08AM +0100, Otto Moerbeek wrote:
> > > > > >
> > > > > > > On Fri, Dec 01, 2023 at 09:18:32PM +0000, guilherme.janc...@yandex.com wrote:
> > > > > > >
> > > > > > > > >Synopsis:      Repeated NTP peers in OpenNTPD
> > > > > > > > >Category:      user
> > > > > > > > >Environment:
> > > > > > > >         System      : OpenBSD 7.4
> > > > > > > >         Details     : OpenBSD 7.4 (GENERIC.MP) #0: Sun Oct 22 12:13:42 MDT 2023
> > > > > > > >                          r...@syspatch-74-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> > > > > > > >
> > > > > > > >         Architecture: OpenBSD.amd64
> > > > > > > >         Machine     : amd64
> > > > > > > > >Description:
> > > > > > > >         If the same address/domain is specified multiple times in
> > > > > > > >         OpenNTPD's configuration file, or if multiple domains resolve
> > > > > > > >         to the same IP address, OpenNTPD will treat the same IP address
> > > > > > > >         as if it were multiple peers.
> > > > > > > > >How-To-Repeat:
> > > > > > > >         This can be tested by appending `server 127.0.0.1` multiple
> > > > > > > >         times to the configuration file.
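That first repro can be scripted. A sketch using a throwaway scratch file (illustrative only, not part of the report), so the live /etc/ntpd.conf is untouched:

```shell
# Build a throwaway config that repeats the same peer address.
conf=$(mktemp)
for i in 1 2 3; do
	echo 'server 127.0.0.1' >> "$conf"
done
cat "$conf"
# One would then run it (as root, shown for illustration):
#   ntpd -d -f "$conf"
```

With the patch under discussion, only one of the three identical entries should end up as a peer.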
> > > > > > > >
> > > > > > > >         Alternatively, assuming a default OpenNTPD configuration file
> > > > > > > >         from OpenBSD 7.4, the following entries can be added to
> > > > > > > >         /etc/hosts:
> > > > > > > >         127.0.0.1       time.cloudflare.com
> > > > > > > >         127.0.0.1       pool.ntp.org
> > > > > > > >
> > > > > > > >         I noticed this bug using the default 7.4 configuration file. It
> > > > > > > >         can happen because time.cloudflare.com is part of pool.ntp.org:
> > > > > > > >         https://www.ntppool.org/scores/162.159.200.1
> > > > > > > >         https://www.ntppool.org/scores/162.159.200.123
> > > > > > > > >Fix:
> > > > > > > >         Removing the `server time.cloudflare.com` line from the
> > > > > > > >         configuration file is a simple fix the user can make, but
> > > > > > > >         OpenNTPD should check whether an IP address it tries to add
> > > > > > > >         to the list of peers is already a peer, and ignore it if so.
> > > > > > > >         If a server is added with the `server` (not `servers`)
> > > > > > > >         keyword in the configuration file, OpenNTPD should try the
> > > > > > > >         next IP the domain resolves to, if applicable.
> > > > > > > >
> > > > > > >
> > > > > > > Thanks for the report, I'll take a look.
> > > > > > >
> > > > > > >   -Otto
> > > > > > >
> > > > > >
> > > > > > Due to various reasons this is all a bit complicated; I have not
> > > > > > found a nice solution yet. Some patience required.
> > > > > >
> > > > > >     -Otto
> > > > > >
> > > > >
> > > > > Found some more time. Try this. The approach is: we assume the
> > > > > addresses of a pool (from the "servers" keyword) vary over time. So if
> > > > > one of the pool addresses is in the address list of a peer ("server")
> > > > > we skip it and try to re-resolve the pool, looking for a new address, as
> > > > > we already did earlier when a pool member does not respond.
> > > > >
> > > > > I decided not to implement "the move to another address of the list"
> > > > > in the "server" case. The address logic is already complex enough.
> > > > >
> > > > >       -Otto
> > > > >
> > > > > Index: client.c
> > > > > ===================================================================
> > > > > RCS file: /home/cvs/src/usr.sbin/ntpd/client.c,v
> > > > > diff -u -p -r1.117 client.c
> > > > > --- client.c  24 Mar 2022 07:37:19 -0000      1.117
> > > > > +++ client.c  17 Dec 2023 09:13:16 -0000
> > > > > @@ -108,6 +108,7 @@ client_nextaddr(struct ntp_peer *p)
> > > > >               return (-1);
> > > > >
> > > > >       if (p->addr_head.a == NULL) {
> > > > > +             log_debug("kicking query for %s",  p->addr_head.name);
> > > > >               priv_dns(IMSG_HOST_DNS, p->addr_head.name, p->id);
> > > > >               p->state = STATE_DNS_INPROGRESS;
> > > > >               return (-1);
> > > > > @@ -145,7 +146,15 @@ client_query(struct ntp_peer *p)
> > > > >       }
> > > > >
> > > > >       if (p->state < STATE_DNS_DONE || p->addr == NULL)
> > > > > -             return (-1);
> > > > > +             return (0);
> > > > > +
> > > > > +     /* if we're a pool member and a peer has taken our address, signal */
> > > > > +     if (p->addr_head.pool != 0 && addr_already_used(&p->addr->ss)) {
> > > > > +             log_debug("pool addr %s used by peer, will reresolve pool",
> > > > > +                 log_sockaddr((struct sockaddr *)&p->addr->ss));
> > > > > +             p->senderrors++;
> > > > > +             return (0);
> > > > > +     }
> > > > >
> > > > >       if (p->query.fd == -1) {
> > > > >               struct sockaddr *sa = (struct sockaddr *)&p->addr->ss;
> > > > > Index: ntp.c
> > > > > ===================================================================
> > > > > RCS file: /home/cvs/src/usr.sbin/ntpd/ntp.c,v
> > > > > diff -u -p -r1.171 ntp.c
> > > > > --- ntp.c     6 Dec 2023 15:51:53 -0000       1.171
> > > > > +++ ntp.c     17 Dec 2023 09:13:16 -0000
> > > > > @@ -54,8 +54,6 @@ int ntp_dispatch_imsg(void);
> > > > >  int  ntp_dispatch_imsg_dns(void);
> > > > >  void peer_add(struct ntp_peer *);
> > > > >  void peer_remove(struct ntp_peer *);
> > > > > -int  inpool(struct sockaddr_storage *,
> > > > > -         struct sockaddr_storage[MAX_SERVERS_DNS], size_t);
> > > > >
> > > > >  void
> > > > >  ntp_sighdlr(int sig)
> > > > > @@ -524,23 +522,52 @@ ntp_dispatch_imsg(void)
> > > > >       return (0);
> > > > >  }
> > > > >
> > > > > -int
> > > > > +static int
> > > > > +addr_equal(struct sockaddr_storage *a, struct sockaddr_storage *b)
> > > > > +{
> > > > > +     if (a->ss_family != b->ss_family)
> > > > > +             return 0;
> > > > > +     if (a->ss_family == AF_INET) {
> > > > > +             if (((struct sockaddr_in *)a)->sin_addr.s_addr ==
> > > > > +                 ((struct sockaddr_in *)b)->sin_addr.s_addr)
> > > > > +                     return 1;
> > > > > +     } else if (memcmp(&((struct sockaddr_in6 *)a)->sin6_addr,
> > > > > +         &((struct sockaddr_in6 *)b)->sin6_addr,
> > > > > +         sizeof(struct sockaddr_in6)) == 0) {
> > > > > +             return 1;
> > > > > +     }
> > > > > +     return 0;
> > > > > +}
> > > > > +
> > > > > +static int
> > > > >  inpool(struct sockaddr_storage *a,
> > > > >      struct sockaddr_storage old[MAX_SERVERS_DNS], size_t n)
> > > > >  {
> > > > >       size_t i;
> > > > >
> > > > >       for (i = 0; i < n; i++) {
> > > > > -             if (a->ss_family != old[i].ss_family)
> > > > > +             if (addr_equal(a, &old[i]))
> > > > > +                     return 1;
> > > > > +     }
> > > > > +     return 0;
> > > > > +}
> > > > > +
> > > > > +/* check to see if an address is already listed by a "server" peer;
> > > > > +   in that case, it does not get added to a pool */
> > > > > +int
> > > > > +addr_already_used(struct sockaddr_storage *a)
> > > > > +{
> > > > > +     struct ntp_peer *peer;
> > > > > +     struct ntp_addr *addr;
> > > > > +
> > > > > +     TAILQ_FOREACH(peer, &conf->ntp_peers, entry) {
> > > > > +             if (peer->addr_head.pool != 0)
> > > > >                       continue;
> > > > > -             if (a->ss_family == AF_INET) {
> > > > > -                     if (((struct sockaddr_in *)a)->sin_addr.s_addr ==
> > > > > -                         ((struct sockaddr_in *)&old[i])->sin_addr.s_addr)
> > > > > +             addr = peer->addr_head.a;
> > > > > +             while (addr != NULL) {
> > > > > +                     if (addr_equal(a, &addr->ss))
> > > > >                               return 1;
> > > > > -             } else if (memcmp(&((struct sockaddr_in6 *)a)->sin6_addr,
> > > > > -                 &((struct sockaddr_in6 *)&old[i])->sin6_addr,
> > > > > -                 sizeof(struct sockaddr_in6)) == 0) {
> > > > > -                     return 1;
> > > > > +                     addr = addr->next;
> > > > >               }
> > > > >       }
> > > > >       return 0;
> > > > > @@ -628,8 +655,11 @@ ntp_dispatch_imsg_dns(void)
> > > > >                                               free(h);
> > > > >                                               continue;
> > > > >                                       }
> > > > > -                                     if (inpool(&h->ss, existing,
> > > > > -                                         n)) {
> > > > > +                                     if (addr_already_used(&h->ss) ||
> > > > > +                                         inpool(&h->ss, existing, n)) {
> > > > > +                                             log_debug("skipping address %s for %s",
> > > > > +                                                 log_sockaddr((struct sockaddr *)
> > > > > +                                                 &h->ss), peer->addr_head.name);
> > > > >                                               free(h);
> > > > >                                               continue;
> > > > >                                       }
> > > > > @@ -660,8 +690,11 @@ ntp_dispatch_imsg_dns(void)
> > > > >                       }
> > > > >                       if (dlen != 0)
> > > > >                               fatalx("IMSG_HOST_DNS: dlen != 0");
> > > > > -                     if (peer->addr_head.pool)
> > > > > -                             peer_remove(peer);
> > > > > +                     if (peer->addr_head.pool) {
> > > > > +                             if (peercount > addrcount)
> > > > > +                                     peer_remove(peer);
> > > > > +                             peer->state = STATE_NONE;
> > > > > +                     }
> > > > >                       else
> > > > >                               client_addr_init(peer);
> > > > >                       break;
> > > > > Index: ntpd.h
> > > > > ===================================================================
> > > > > RCS file: /home/cvs/src/usr.sbin/ntpd/ntpd.h,v
> > > > > diff -u -p -r1.152 ntpd.h
> > > > > --- ntpd.h    27 Nov 2022 13:19:00 -0000      1.152
> > > > > +++ ntpd.h    17 Dec 2023 09:13:16 -0000
> > > > > @@ -371,6 +371,7 @@ int       server_dispatch(int, struct ntpd_con
> > > > >  int  client_peer_init(struct ntp_peer *);
> > > > >  int  client_addr_init(struct ntp_peer *);
> > > > >  int  client_nextaddr(struct ntp_peer *);
> > > > > +int  addr_already_used(struct sockaddr_storage *);
> > > > >  int  client_query(struct ntp_peer *);
> > > > >  int  client_dispatch(struct ntp_peer *, u_int8_t, u_int8_t);
> > > > >  void client_log_error(struct ntp_peer *, const char *, int);
> > > > >
> > > >
> > > > Previous diff had a write after free.
> > > >
> > > >         -Otto
> > > >
> > > > Index: client.c
> > > > ===================================================================
> > > > RCS file: /home/cvs/src/usr.sbin/ntpd/client.c,v
> > > > diff -u -p -r1.117 client.c
> > > > --- client.c    24 Mar 2022 07:37:19 -0000      1.117
> > > > +++ client.c    18 Dec 2023 06:20:13 -0000
> > > > @@ -108,6 +108,7 @@ client_nextaddr(struct ntp_peer *p)
> > > >                 return (-1);
> > > >
> > > >         if (p->addr_head.a == NULL) {
> > > > +               log_debug("kicking query for %s",  p->addr_head.name);
> > > >                 priv_dns(IMSG_HOST_DNS, p->addr_head.name, p->id);
> > > >                 p->state = STATE_DNS_INPROGRESS;
> > > >                 return (-1);
> > > > @@ -145,7 +146,15 @@ client_query(struct ntp_peer *p)
> > > >         }
> > > >
> > > >         if (p->state < STATE_DNS_DONE || p->addr == NULL)
> > > > -               return (-1);
> > > > +               return (0);
> > > > +
> > > > +       /* if we're a pool member and a peer has taken our address, signal */
> > > > +       if (p->addr_head.pool != 0 && addr_already_used(&p->addr->ss)) {
> > > > +               log_debug("pool addr %s used by peer, will reresolve pool",
> > > > +                   log_sockaddr((struct sockaddr *)&p->addr->ss));
> > > > +               p->senderrors++;
> > > > +               return (0);
> > > > +       }
> > > >
> > > >         if (p->query.fd == -1) {
> > > >                 struct sockaddr *sa = (struct sockaddr *)&p->addr->ss;
> > > > Index: ntp.c
> > > > ===================================================================
> > > > RCS file: /home/cvs/src/usr.sbin/ntpd/ntp.c,v
> > > > diff -u -p -r1.171 ntp.c
> > > > --- ntp.c       6 Dec 2023 15:51:53 -0000       1.171
> > > > +++ ntp.c       18 Dec 2023 06:20:13 -0000
> > > > @@ -54,8 +54,6 @@ int   ntp_dispatch_imsg(void);
> > > >  int    ntp_dispatch_imsg_dns(void);
> > > >  void   peer_add(struct ntp_peer *);
> > > >  void   peer_remove(struct ntp_peer *);
> > > > -int    inpool(struct sockaddr_storage *,
> > > > -           struct sockaddr_storage[MAX_SERVERS_DNS], size_t);
> > > >
> > > >  void
> > > >  ntp_sighdlr(int sig)
> > > > @@ -524,23 +522,52 @@ ntp_dispatch_imsg(void)
> > > >         return (0);
> > > >  }
> > > >
> > > > -int
> > > > +static int
> > > > +addr_equal(struct sockaddr_storage *a, struct sockaddr_storage *b)
> > > > +{
> > > > +       if (a->ss_family != b->ss_family)
> > > > +               return 0;
> > > > +       if (a->ss_family == AF_INET) {
> > > > +               if (((struct sockaddr_in *)a)->sin_addr.s_addr ==
> > > > +                   ((struct sockaddr_in *)b)->sin_addr.s_addr)
> > > > +                       return 1;
> > > > +       } else if (memcmp(&((struct sockaddr_in6 *)a)->sin6_addr,
> > > > +           &((struct sockaddr_in6 *)b)->sin6_addr,
> > > > +           sizeof(struct sockaddr_in6)) == 0) {
> > > > +               return 1;
> > > > +       }
> > > > +       return 0;
> > > > +}
> > > > +
> > > > +static int
> > > >  inpool(struct sockaddr_storage *a,
> > > >      struct sockaddr_storage old[MAX_SERVERS_DNS], size_t n)
> > > >  {
> > > >         size_t i;
> > > >
> > > >         for (i = 0; i < n; i++) {
> > > > -               if (a->ss_family != old[i].ss_family)
> > > > +               if (addr_equal(a, &old[i]))
> > > > +                       return 1;
> > > > +       }
> > > > +       return 0;
> > > > +}
> > > > +
> > > > +/* check to see if an address is already listed by a "server" peer;
> > > > +   in that case, it does not get added to a pool */
> > > > +int
> > > > +addr_already_used(struct sockaddr_storage *a)
> > > > +{
> > > > +       struct ntp_peer *peer;
> > > > +       struct ntp_addr *addr;
> > > > +
> > > > +       TAILQ_FOREACH(peer, &conf->ntp_peers, entry) {
> > > > +               if (peer->addr_head.pool != 0)
> > > >                         continue;
> > > > -               if (a->ss_family == AF_INET) {
> > > > -               if (((struct sockaddr_in *)a)->sin_addr.s_addr ==
> > > > -                   ((struct sockaddr_in *)&old[i])->sin_addr.s_addr)
> > > > +               addr = peer->addr_head.a;
> > > > +               while (addr != NULL) {
> > > > +                       if (addr_equal(a, &addr->ss))
> > > >                                 return 1;
> > > > -               } else if (memcmp(&((struct sockaddr_in6 *)a)->sin6_addr,
> > > > -                   &((struct sockaddr_in6 *)&old[i])->sin6_addr,
> > > > -                   sizeof(struct sockaddr_in6)) == 0) {
> > > > -                       return 1;
> > > > +                       addr = addr->next;
> > > >                 }
> > > >         }
> > > >         return 0;
> > > > @@ -628,8 +655,11 @@ ntp_dispatch_imsg_dns(void)
> > > >                                                 free(h);
> > > >                                                 continue;
> > > >                                         }
> > > > -                                       if (inpool(&h->ss, existing,
> > > > -                                           n)) {
> > > > +                                       if (addr_already_used(&h->ss) ||
> > > > +                                           inpool(&h->ss, existing, n)) {
> > > > +                                               log_debug("skipping address %s for %s",
> > > > +                                                   log_sockaddr((struct sockaddr *)
> > > > +                                                   &h->ss), peer->addr_head.name);
> > > >                                                 free(h);
> > > >                                                 continue;
> > > >                                         }
> > > > @@ -660,8 +690,12 @@ ntp_dispatch_imsg_dns(void)
> > > >                         }
> > > >                         if (dlen != 0)
> > > >                                 fatalx("IMSG_HOST_DNS: dlen != 0");
> > > > -                       if (peer->addr_head.pool)
> > > > -                               peer_remove(peer);
> > > > +                       if (peer->addr_head.pool) {
> > > > +                               if (peercount > addrcount)
> > > > +                                       peer_remove(peer);
> > > > +                               else
> > > > +                                       peer->state = STATE_NONE;
> > > > +                       }
> > > >                         else
> > > >                                 client_addr_init(peer);
> > > >                         break;
> > > > Index: ntpd.h
> > > > ===================================================================
> > > > RCS file: /home/cvs/src/usr.sbin/ntpd/ntpd.h,v
> > > > diff -u -p -r1.152 ntpd.h
> > > > --- ntpd.h      27 Nov 2022 13:19:00 -0000      1.152
> > > > +++ ntpd.h      18 Dec 2023 06:20:13 -0000
> > > > @@ -371,6 +371,7 @@ int server_dispatch(int, struct ntpd_con
> > > >  int    client_peer_init(struct ntp_peer *);
> > > >  int    client_addr_init(struct ntp_peer *);
> > > >  int    client_nextaddr(struct ntp_peer *);
> > > > +int    addr_already_used(struct sockaddr_storage *);
> > > >  int    client_query(struct ntp_peer *);
> > > >  int    client_dispatch(struct ntp_peer *, u_int8_t, u_int8_t);
> > > >  void   client_log_error(struct ntp_peer *, const char *, int);
> > > >
> > > 
> > > That solves the bug with the default config file. I can see how a
> > > repeated peer is marked as "not resolved," and how ntpd tries to find a
> > > new unique IP every poll interval, but as you mentioned, it only checks
> > > if a pool address matches a peer address.
> > > 
> > > Testing with this config file:
> > > ```
> > > servers br.pool.ntp.org
> > > servers time.cloudflare.com
> > > 
> > > constraint from "9.9.9.9"              # quad9 v4 without DNS
> > > constraint from "2620:fe::fe"          # quad9 v6 without DNS
> > > constraints from "www.google.com"      # intentionally not 8.8.8.8
> > > ```
> > > 
> > > I still see repeated peers (162.159.200.1 and 162.159.200.123):
> > > ```
> > > $ ntpctl -s all
> > > 8/8 peers valid, constraint offset -2s, clock unsynced
> > > 
> > > peer
> > >    wt tl st  next  poll          offset       delay      jitter
> > > 162.159.200.1 from pool br.pool.ntp.org
> > >     1 10  3   28s   33s         0.147ms     9.531ms     0.246ms
> > > 168.181.125.82 from pool br.pool.ntp.org
> > >     1 10  2   27s   34s        11.375ms    38.489ms     0.436ms
> > > 162.159.200.123 from pool br.pool.ntp.org
> > >     1 10  3    6s   34s        -0.585ms    11.994ms     0.351ms
> > > 138.99.160.244 from pool br.pool.ntp.org
> > >     1 10  3    1s   32s         1.017ms    18.735ms     1.252ms
> > > 2606:4700:f1::123 from pool time.cloudflare.com
> > >     1 10  3   30s   34s         1.029ms    14.623ms     3.945ms
> > > 2606:4700:f1::1 from pool time.cloudflare.com
> > >     1 10  3   30s   33s        -0.617ms    11.538ms     0.381ms
> > > 162.159.200.1 from pool time.cloudflare.com
> > >     1 10  3    2s   32s        -1.867ms    21.497ms     6.333ms
> > > 162.159.200.123 from pool time.cloudflare.com
> > >     1 10  3    5s   32s         2.403ms    14.003ms     6.336ms
> > > ```
> > > 
> > > Trying multiple pool.ntp.org regions helps find one currently serving a
> > > time.cloudflare.com address.
> > > 
> > > Also, I can't seem to get the debug messages. They don't show up with
> > > `ntpd -v` or `ntpd -dv`.
> > > 
> > 
> > New version that fixes most (though not all) cases.
> > 
> > The case that is not solved: if there is overlap in addresses, a
> > peer with fixed addresses may never be able to use a slot, as a pool
> > may "squat" that address. I think that's acceptable.
> > 
> >     -Otto
> > 
> > Index: client.c
> > ===================================================================
> > RCS file: /home/cvs/src/usr.sbin/ntpd/client.c,v
> > diff -u -p -r1.118 client.c
> > --- client.c        20 Dec 2023 15:36:36 -0000      1.118
> > +++ client.c        26 Dec 2023 17:09:06 -0000
> > @@ -108,6 +108,7 @@ client_nextaddr(struct ntp_peer *p)
> >             return (-1);
> >  
> >     if (p->addr_head.a == NULL) {
> > +           log_debug("kicking query for %s",  p->addr_head.name);
> >             priv_dns(IMSG_HOST_DNS, p->addr_head.name, p->id);
> >             p->state = STATE_DNS_INPROGRESS;
> >             return (-1);
> > @@ -145,7 +146,15 @@ client_query(struct ntp_peer *p)
> >     }
> >  
> >     if (p->state < STATE_DNS_DONE || p->addr == NULL)
> > -           return (-1);
> > +           return (0);
> > +
> > +   /* if we're a pool member and a peer has taken our address, signal */
> > +   if (p->addr_head.pool != 0 && addr_already_used(&p->addr->ss, 
> > p->addr_head.pool)) {
> > +           log_debug("pool addr %s used by peer, will reresolve pool",
> > +           p->senderrors++;
> > +           return (0);
> > +   }
> >  
> >     if (p->query.fd == -1) {
> >             struct sockaddr *sa = (struct sockaddr *)&p->addr->ss;
> > Index: ntp.c
> > ===================================================================
> > RCS file: /home/cvs/src/usr.sbin/ntpd/ntp.c,v
> > diff -u -p -r1.172 ntp.c
> > --- ntp.c   20 Dec 2023 15:36:36 -0000      1.172
> > +++ ntp.c   26 Dec 2023 17:09:06 -0000
> > @@ -54,8 +54,6 @@ int       ntp_dispatch_imsg(void);
> >  int        ntp_dispatch_imsg_dns(void);
> >  void       peer_add(struct ntp_peer *);
> >  void       peer_remove(struct ntp_peer *);
> > -int        inpool(struct sockaddr_storage *,
> > -       struct sockaddr_storage[MAX_SERVERS_DNS], size_t);
> >  
> >  void
> >  ntp_sighdlr(int sig)
> > @@ -519,23 +517,52 @@ ntp_dispatch_imsg(void)
> >     return (0);
> >  }
> >  
> > -int
> > +static int
> > +addr_equal(struct sockaddr_storage *a, struct sockaddr_storage *b)
> > +{
> > +   if (a->ss_family != b->ss_family)
> > +           return 0;
> > +   if (a->ss_family == AF_INET) {
> > +           if (((struct sockaddr_in *)a)->sin_addr.s_addr ==
> > +               ((struct sockaddr_in *)b)->sin_addr.s_addr)
> > +                   return 1;
> > +   } else if (memcmp(&((struct sockaddr_in6 *)a)->sin6_addr,
> > +       &((struct sockaddr_in6 *)b)->sin6_addr,
> > +       sizeof(struct sockaddr_in6)) == 0) {
> > +           return 1;
> > +   }
> > +   return 0;
> > +}
> > +
> > +static int
> >  inpool(struct sockaddr_storage *a,
> >      struct sockaddr_storage old[MAX_SERVERS_DNS], size_t n)
> >  {
> >     size_t i;
> >  
> >     for (i = 0; i < n; i++) {
> > -           if (a->ss_family != old[i].ss_family)
> > +           if (addr_equal(a, &old[i]))
> > +                   return 1;
> > +   }
> > +   return 0;
> > +}
> > +
> > +/* check to see if an address is already listed;
> > +   in that case, it does not get added to a pool */
> > +int
> > +addr_already_used(struct sockaddr_storage *a, int skippool)
> > +{
> > +   struct ntp_peer *peer;
> > +   struct ntp_addr *addr;
> > +
> > +   TAILQ_FOREACH(peer, &conf->ntp_peers, entry) {
> > +           if (peer->addr_head.pool == skippool)
> >                     continue;
> > -           if (a->ss_family == AF_INET) {
> > -                   if (((struct sockaddr_in *)a)->sin_addr.s_addr ==
> > -                       ((struct sockaddr_in *)&old[i])->sin_addr.s_addr)
> > +           addr = peer->addr_head.a;
> > +           while (addr != NULL) {
> > +                   if (addr_equal(a, &addr->ss))
> >                             return 1;
> > -           } else if (memcmp(&((struct sockaddr_in6 *)a)->sin6_addr,
> > -               &((struct sockaddr_in6 *)&old[i])->sin6_addr,
> > -               sizeof(struct sockaddr_in6)) == 0) {
> > -                   return 1;
> > +                   addr = addr->next;
> >             }
> >     }
> >     return 0;
> > @@ -623,8 +650,11 @@ ntp_dispatch_imsg_dns(void)
> >                                             free(h);
> >                                             continue;
> >                                     }
> > -                                   if (inpool(&h->ss, existing,
> > -                                       n)) {
> > +                                   if (addr_already_used(&h->ss, peer->addr_head.pool) ||
> > +                                       inpool(&h->ss, existing, n)) {
> > +                                           log_debug("skipping address %s for %s",
> > +                                               log_sockaddr((struct sockaddr *)
> > +                                               &h->ss), peer->addr_head.name);
> >                                             free(h);
> >                                             continue;
> >                                     }
> > @@ -654,8 +684,12 @@ ntp_dispatch_imsg_dns(void)
> >                     }
> >                     if (dlen != 0)
> >                             fatalx("IMSG_HOST_DNS: dlen != 0");
> > -                   if (peer->addr_head.pool)
> > -                           peer_remove(peer);
> > +                   if (peer->addr_head.pool) {
> > +                           if (peercount > addrcount)
> > +                                   peer_remove(peer);
> > +                           else
> > +                                   peer->state = STATE_NONE;
> > +                   }
> >                     else
> >                             client_addr_init(peer);
> >                     break;
> > Index: ntpd.h
> > ===================================================================
> > RCS file: /home/cvs/src/usr.sbin/ntpd/ntpd.h,v
> > diff -u -p -r1.153 ntpd.h
> > --- ntpd.h  20 Dec 2023 15:36:36 -0000      1.153
> > +++ ntpd.h  26 Dec 2023 17:09:06 -0000
> > @@ -371,6 +371,7 @@ int     server_dispatch(int, struct ntpd_con
> >  int        client_peer_init(struct ntp_peer *);
> >  int        client_addr_init(struct ntp_peer *);
> >  int        client_nextaddr(struct ntp_peer *);
> > +int        addr_already_used(struct sockaddr_storage *, int);
> >  int        client_query(struct ntp_peer *);
> >  int        client_dispatch(struct ntp_peer *, u_int8_t, u_int8_t);
> >  void       client_log_error(struct ntp_peer *, const char *, int);
> > 
> 
> 
> With two `server` entries that resolve to the same IP, I still get repeated peers.
> 
> Config file:
> ```
> server br.pool.ntp.org
> server time.cloudflare.com
> ```
> 
> jan-z87-obsd# ntpd -dvv
> ntp engine ready
> kicking query for br.pool.ntp.org
> kicking query for time.cloudflare.com
> Reset constraint info
> trying to resolve br.pool.ntp.org
> resolve br.pool.ntp.org done: 4
> trying to resolve time.cloudflare.com
> resolve time.cloudflare.com done: 4
> reply from 162.159.200.123: offset 0.004251 delay 0.018725, next query 5s
> reply from 162.159.200.123: offset 0.004367 delay 0.019038, next query 8s
> reply from 162.159.200.123: offset 0.000310 delay 0.010170, next query 8s
> reply from 162.159.200.123: offset 0.000472 delay 0.010892, next query 8s
> reply from 162.159.200.123: offset 0.000274 delay 0.009892, next query 6s
> reply from 162.159.200.123: offset 0.000536 delay 0.010931, next query 6s
> peer 162.159.200.123 now valid
> reply from 162.159.200.123: offset 0.000302 delay 0.009911, next query 9s
> peer 162.159.200.123 now valid
> reply from 162.159.200.123: offset 0.000538 delay 0.010939, next query 7s
> reply from 162.159.200.123: offset 0.000601 delay 0.010461, next query 6s
> reply from 162.159.200.123: offset 0.000588 delay 0.010966, next query 5s
> 
> 
> $ ntpctl -s all
> 2/2 peers valid, clock unsynced
> 
> peer
>    wt tl st  next  poll          offset       delay      jitter
> 162.159.200.123 br.pool.ntp.org
>     1  8  3   16s   31s         1.143ms    12.211ms     3.838ms
> 162.159.200.123 time.cloudflare.com
>     1  8  3   17s   31s         1.015ms    11.525ms     3.961ms
> 
> 

Right, but I don't expect to have time to look into that case soon.
As the current diff is an improvement over the existing situation,
I'd still like to commit it.

        -Otto
