On 17/07/19(Wed) 17:34, Alexander Bluhm wrote:
> On Tue, Jul 16, 2019 at 09:01:24PM -0300, Martin Pieuchot wrote:
> > On 16/07/19(Tue) 22:45, Alexander Bluhm wrote:
> > > Hi,
> > >
> > > Convert struct pkpcb malloc(9) to pool_get(9).  PCB for pfkey is
> > > only used in process context, so pass PR_WAITOK to pool_init(9).
> > > The possible sleep in pool_put(9) should not hurt, as pfkeyv2_detach()
> > > is only called by soclose(9).
> >
> > In that case should we pass PR_RWLOCK as well to pool_init(9)?
> 
> I ran full regress with pool_init( PR_WAITOK|PR_RWLOCK ) on i386.
> There is no difference.  I think both ways work.  Is there an
> advantage to using an rwlock instead of a mutex?  It is not in a hot
> path.  If you like, I change it to PR_RWLOCK.  I have the same diff
> for the routing socket.

The ipl of the pool is IPL_NONE.  This is logical because this code is
only used in process context.  However, as soon as pool_put/get(9) are
used with *and* without the KERNEL_LOCK(), a deadlock is possible due
to any interrupt grabbing the KERNEL_LOCK().  There are two ways to
avoid this: raise the ipl to IPL_MPFLOOR, or use PR_RWLOCK.
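
For comparison, the two variants would look roughly like this (a
sketch only, not tested; the pool name and arguments follow the diff
below):

```c
/* Option 1: raise the pool ipl to IPL_MPFLOOR, so the pool's
 * internal mutex blocks the interrupts that could contend for
 * the KERNEL_LOCK(): */
pool_init(&pkpcb_pool, sizeof(struct pkpcb), 0,
    IPL_MPFLOOR, PR_WAITOK, "pkpcb", NULL);

/* Option 2: keep IPL_NONE but have the pool use an rwlock
 * instead of a mutex; fine here since the pool is only ever
 * touched from process context: */
pool_init(&pkpcb_pool, sizeof(struct pkpcb), 0,
    IPL_NONE, PR_WAITOK | PR_RWLOCK, "pkpcb", NULL);
```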

For both routing and pfkey, the socket lock is currently the
KERNEL_LOCK().  After analysis I don't see how pr_attach() can be
moved out of the lock without pr_detach().  So it should not really
matter.

I'd say put it without the flag :)  ok mpi@ 

> > > Index: net/pfkeyv2.c
> > > ===================================================================
> > > RCS file: /data/mirror/openbsd/cvs/src/sys/net/pfkeyv2.c,v
> > > retrieving revision 1.197
> > > diff -u -p -r1.197 pfkeyv2.c
> > > --- net/pfkeyv2.c 4 Feb 2019 21:40:52 -0000       1.197
> > > +++ net/pfkeyv2.c 13 Jul 2019 13:59:19 -0000
> > > @@ -129,6 +129,7 @@ extern struct pool ipsec_policy_pool;
> > >
> > >  extern struct radix_node_head **spd_tables;
> > >
> > > +struct pool pkpcb_pool;
> > >  #define PFKEY_MSG_MAXSZ 4096
> > >  const struct sockaddr pfkey_addr = { 2, PF_KEY, };
> > >  struct domain pfkeydomain;
> > > @@ -251,6 +252,8 @@ pfkey_init(void)
> > >   srpl_rc_init(&pkptable.pkp_rc, keycb_ref, keycb_unref, NULL);
> > >   rw_init(&pkptable.pkp_lk, "pfkey");
> > >   SRPL_INIT(&pkptable.pkp_list);
> > > + pool_init(&pkpcb_pool, sizeof(struct pkpcb), 0,
> > > +     IPL_NONE, PR_WAITOK, "pkpcb", NULL);
> > >  }
> > >
> > >
> > > @@ -266,13 +269,13 @@ pfkeyv2_attach(struct socket *so, int pr
> > >   if ((so->so_state & SS_PRIV) == 0)
> > >           return EACCES;
> > >
> > > - kp = malloc(sizeof(struct pkpcb), M_PCB, M_WAITOK | M_ZERO);
> > > + kp = pool_get(&pkpcb_pool, PR_WAITOK|PR_ZERO);
> > >   so->so_pcb = kp;
> > >   refcnt_init(&kp->kcb_refcnt);
> > >
> > >   error = soreserve(so, PFKEYSNDQ, PFKEYRCVQ);
> > >   if (error) {
> > > -         free(kp, M_PCB, sizeof(struct pkpcb));
> > > +         pool_put(&pkpcb_pool, kp);
> > >           return (error);
> > >   }
> > >
> > > @@ -326,7 +329,7 @@ pfkeyv2_detach(struct socket *so)
> > >
> > >   so->so_pcb = NULL;
> > >   KASSERT((so->so_state & SS_NOFDREF) == 0);
> > > - free(kp, M_PCB, sizeof(struct pkpcb));
> > > + pool_put(&pkpcb_pool, kp);
> > >
> > >   return (0);
> > >  }
> > >
> 
