Re: pf generic packet delay
* Martin Pieuchot [2018-02-23 10:04]:
> On 23/02/18(Fri) 04:08, Henning Brauer wrote:
> > * Martin Pieuchot [2018-02-21 09:37]:
> > > On 21/02/18(Wed) 02:37, Henning Brauer wrote:
> > > I'd suggest moving the pool allocation and the function in net/pf*.c
> > > and only have a function call under #if NPF > 0.
> >
> > worth discussing, but imo that part doesn't really have all that much
> > to do with pf.
>
> It is only used by pf(4). All the code is under #if NPF > 0, so it *is*
> related to PF. We have been reducing the #ifdef maze through the years
> for maintainability reasons. I don't see the point of reintroducing
> them for taste.

you're jumping the gun from seeing #ifdef NPF - that is really used here
to keep stuff out of the ramdisks. Unfortunately NPF has become quite a
kitchen-sink ifdef for that purpose. The delay functionality is network
stack and not pf in my book, with pf being one of the possible triggers.
But anyway, this is a corner case and I don't care too much, and the
code can easily be moved later. It actually has the welcome side effect
that the pool limit can be adjusted easily.

> If you can easily avoid using ph_cookie, then please do it. Otherwise
> you're adding a maintenance burden by writing fragile code that will
> subtly break when somebody starts using ph_cookie for something else.

I don't see that in this case, but anyway, not worth splitting hairs
over. Not that I had much anyway. Here's the HND-MUC version.
Index: sys/sys/mbuf.h
===================================================================
RCS file: /cvs/src/sys/sys/mbuf.h,v
retrieving revision 1.235
diff -u -p -r1.235 mbuf.h
--- sys/sys/mbuf.h	11 Feb 2018 00:24:13 -0000	1.235
+++ sys/sys/mbuf.h	11 Feb 2018 04:47:44 -0000
@@ -100,10 +100,11 @@ struct pkthdr_pf {
 	struct inpcb	*inp;	/* connected pcb for outgoing packet */
 	u_int32_t	 qid;	/* queue id */
 	u_int16_t	 tag;	/* tag id */
+	u_int16_t	 delay;	/* delay packet by X ms */
 	u_int8_t	 flags;
 	u_int8_t	 routed;
 	u_int8_t	 prio;
-	u_int8_t	 pad[3];
+	u_int8_t	 pad;
 };
 
 /* pkthdr_pf.flags */
Index: sys/net/if.c
===================================================================
RCS file: /cvs/src/sys/net/if.c,v
retrieving revision 1.549
diff -u -p -r1.549 if.c
--- sys/net/if.c	20 Mar 2018 08:58:19 -0000	1.549
+++ sys/net/if.c	1 Apr 2018 13:26:42 -0000
@@ -681,6 +681,11 @@ if_enqueue(struct ifnet *ifp, struct mbu
 	struct ifqueue *ifq;
 	int error;
 
+#if NPF > 0
+	if (m->m_pkthdr.pf.delay > 0)
+		return (pf_delay_pkt(m, ifp->if_index));
+#endif
+
 #if NBRIDGE > 0
 	if (ifp->if_bridgeport && (m->m_flags & M_PROTO1) == 0) {
 		KERNEL_LOCK();
Index: sys/net/pf.c
===================================================================
RCS file: /cvs/src/sys/net/pf.c,v
retrieving revision 1.1063
diff -u -p -r1.1063 pf.c
--- sys/net/pf.c	6 Mar 2018 17:35:53 -0000	1.1063
+++ sys/net/pf.c	17 Mar 2018 21:55:18 -0000
@@ -161,7 +161,7 @@ struct pf_test_ctx {
 
 struct pool		 pf_src_tree_pl, pf_rule_pl, pf_queue_pl;
 struct pool		 pf_state_pl, pf_state_key_pl, pf_state_item_pl;
-struct pool		 pf_rule_item_pl, pf_sn_item_pl;
+struct pool		 pf_rule_item_pl, pf_sn_item_pl, pf_pktdelay_pl;
 
 void			 pf_add_threshold(struct pf_threshold *);
 int			 pf_check_threshold(struct pf_threshold *);
@@ -258,6 +258,7 @@ void			 pf_state_key_link_inpcb(struct p
 			    struct inpcb *);
 void			 pf_state_key_unlink_inpcb(struct pf_state_key *);
 void			 pf_inpcb_unlink_state_key(struct inpcb *);
+void			 pf_pktenqueue_delayed(void *);
 
 #if NPFLOG > 0
 void			 pf_log_matches(struct pf_pdesc *, struct pf_rule *,
@@ -273,7 +274,8 @@ struct pf_pool_limit pf_pool_limits[PF_L
 	{ &pf_src_tree_pl,	PFSNODE_HIWAT,		PFSNODE_HIWAT },
 	{ &pf_frent_pl,		PFFRAG_FRENT_HIWAT,	PFFRAG_FRENT_HIWAT },
 	{ &pfr_ktable_pl,	PFR_KTABLE_HIWAT,	PFR_KTABLE_HIWAT },
-	{ &pfr_kentry_pl,	PFR_KENTRY_HIWAT,	PFR_KENTRY_HIWAT }
+	{ &pfr_kentry_pl,	PFR_KENTRY_HIWAT,	PFR_KENTRY_HIWAT },
+	{ &pf_pktdelay_pl,	PF_PKTDELAY_MAXPKTS,	PF_PKTDELAY_MAXPKTS }
 };
 
 #define STATE_LOOKUP(i, k, d, s, m)	\
@@ -3488,6 +3490,8 @@ pf_rule_to_actions(struct pf_rule *r, st
 		a->set_prio[0] = r->set_prio[0];
 		a->set_prio[1] = r->set_prio[1];
 	}
+	if (r->rule_flag & PFRULE_SETDELAY)
+		a->delay = r->delay;
 }
 
 #define PF_TEST_ATTRIB(t, a)	\
@@ -3983,6 +3987,7 @@ pf_create_state(struct pf_pdesc *pd, str
 #endif /* NPFSYNC > 0 */
 	s->set_prio[0] = act->set_prio[0];
Re: pf generic packet delay
On 23/02/18(Fri) 04:08, Henning Brauer wrote:
> * Martin Pieuchot [2018-02-21 09:37]:
> > On 21/02/18(Wed) 02:37, Henning Brauer wrote:
> > I'd suggest moving the pool allocation and the function in net/pf*.c
> > and only have a function call under #if NPF > 0.
>
> worth discussing, but imo that part doesn't really have all that much
> to do with pf.

It is only used by pf(4). All the code is under #if NPF > 0, so it *is*
related to PF. We have been reducing the #ifdef maze through the years
for maintainability reasons. I don't see the point of reintroducing
them for taste.

> > I'd suggest defining your own structure containing a timeout and a mbuf
> > pointer instead of abusing ph_cookie. Since you're already allocating
> > something it doesn't matter much and your code won't be broken by
> > a future refactoring.
>
> dlg pointed me to ph_cookie, I was about to use my own structure.
> "ph_cookie is for exactly that"

ph_cookie is not for that. ph_cookie is a left-over because the wifi
stack abused `rcvif' to attach a node to management frames.

The problem with using ph_cookie all over the place is that you don't
know who owns it. Is it the driver, like in the wireless case? Is it
pf(4), like in your case? Is it a subsystem, like in the pipex case?

If you can easily avoid using ph_cookie, then please do it. Otherwise
you're adding a maintenance burden by writing fragile code that will
subtly break when somebody starts using ph_cookie for something else.
Re: pf generic packet delay
* Martin Pieuchot [2018-02-21 09:37]:
> On 21/02/18(Wed) 02:37, Henning Brauer wrote:
> I'd suggest moving the pool allocation and the function in net/pf*.c
> and only have a function call under #if NPF > 0.

worth discussing, but imo that part doesn't really have all that much
to do with pf.

> I'd suggest defining your own structure containing a timeout and a mbuf
> pointer instead of abusing ph_cookie. Since you're already allocating
> something it doesn't matter much and your code won't be broken by
> a future refactoring.

dlg pointed me to ph_cookie, I was about to use my own structure.
"ph_cookie is for exactly that"

> You're leaking a mbuf if the interface has been detached.

hah! true. fixed locally (the obvious: else m_freem(m);)

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services GmbH, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS. Virtual & Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/
Re: pf generic packet delay
On 21/02/18(Wed) 02:37, Henning Brauer wrote:
> Here comes generic delay functionality for pf.
> The manpage bits are missing for the moment, but it's really simple to
> use:
>
> match in set delay 1
>
> delay is in ms. should I change the parser to explicitly require
> "ms", as in "match in set delay 1ms"?
> I have a pool_sethardlimit as a "last resort" style upper limit of
> delayed packets held, hardcoded to 1 right now - does that need to
> be tunable? if not - what value? I'll obviously make it a define at
> least.
> delay range is 1-65535ms - is 65s too excessive, aka do we need to
> impose a lower upper limit?
>
> I would welcome a few tests (and then oks from ppl with the right @) so
> that we can go further once the basic functionality is in. And yes,
> things like a delay range with a random value in between chosen can be
> added later. Getting the basic delay machinery, foremost in if.c, right
> is the main goal now, the pf side is relatively easy to twiddle with.

I'd suggest moving the pool allocation and the function in net/pf*.c
and only have a function call under #if NPF > 0.

More comments inline.

> Index: sys/net/if.c
> ===================================================================
> RCS file: /cvs/src/sys/net/if.c,v
> retrieving revision 1.547
> diff -u -p -r1.547 if.c
> --- sys/net/if.c	20 Feb 2018 03:46:45 -0000	1.547
> +++ sys/net/if.c	21 Feb 2018 01:26:37 -0000
> @@ -127,6 +127,10 @@
>  
>  #if NPF > 0
>  #include
> +#include
> +
> +#define IF_TIMEOUTPL_LIMIT	1
> +struct pool iftimeopl;

I'd suggest defining your own structure containing a timeout and a mbuf
pointer instead of abusing ph_cookie. Since you're already allocating
something it doesn't matter much and your code won't be broken by
a future refactoring.

>  #endif
>  
>  void	if_attachsetup(struct ifnet *);
> @@ -164,6 +168,10 @@ void	ifa_print_all(void);
>  
>  void	if_qstart_compat(struct ifqueue *);
>  
> +#if NPF > 0
> +void	if_enqueue_delayed(void *);
> +#endif
> +
>  /*
>   * interface index map
>   *
> @@ -261,6 +269,14 @@ ifinit(void)
>  	}
>  
>  	net_tick(&net_tick_to);
> +
> +#if NPF > 0
> +	pool_init(&iftimeopl, sizeof(struct timeout), 0, IPL_SOFTNET, 0,
> +	    "iftimeout", NULL);
> +	pool_sethardlimit(&iftimeopl, IF_TIMEOUTPL_LIMIT, "iftimeout pool "
> +	    "limit exceeded, dropping packets", 60);
> +/* XXX hardlimit as last resort safeguard, limit? */
> +#endif
>  }
>  
>  static struct if_idxmap if_idxmap = {
> @@ -674,12 +690,44 @@ if_qstart_compat(struct ifqueue *ifq)
>  	KERNEL_UNLOCK();
>  }
>  
> +#if NPF > 0
> +void
> +if_enqueue_delayed(void *arg)
> +{
> +	struct mbuf	*m = arg;
> +	struct ifnet	*ifp;
> +	struct timeout	*to = m->m_pkthdr.ph_cookie;
> +
> +	ifp = if_get(m->m_pkthdr.ph_ifidx);
> +	if (ifp != NULL) {
> +		if_enqueue(ifp, m);
> +		if_put(ifp);
> +	}

You're leaking a mbuf if the interface has been detached.

> +	pool_put(&iftimeopl, to);
> +}
> +#endif
> +
>  int
>  if_enqueue(struct ifnet *ifp, struct mbuf *m)
>  {
>  	unsigned int idx;
>  	struct ifqueue *ifq;
>  	int error;
> +
> +#if NPF > 0
> +	if (m->m_pkthdr.pf.delay > 0) {
> +		struct timeout *to;
> +
> +		if ((to = pool_get(&iftimeopl, PR_NOWAIT)) == NULL)
> +			return (ENOBUFS);
> +		timeout_set(to, if_enqueue_delayed, m);
> +		timeout_add_msec(to, m->m_pkthdr.pf.delay);
> +		m->m_pkthdr.pf.delay = 0;
> +		m->m_pkthdr.ph_cookie = to;
> +		m->m_pkthdr.ph_ifidx = ifp->if_index;
> +		return (0);
> +	}
> +#endif
>  
>  #if NBRIDGE > 0
>  	if (ifp->if_bridgeport && (m->m_flags & M_PROTO1) == 0) {
> Index: sys/net/pf.c
> ===================================================================
> RCS file: /cvs/src/sys/net/pf.c,v
> retrieving revision 1.1061
> diff -u -p -r1.1061 pf.c
> --- sys/net/pf.c	18 Feb 2018 21:45:30 -0000	1.1061
> +++ sys/net/pf.c	21 Feb 2018 01:16:00 -0000
> @@ -3484,6 +3484,8 @@ pf_rule_to_actions(struct pf_rule *r, st
>  		a->set_prio[0] = r->set_prio[0];
>  		a->set_prio[1] = r->set_prio[1];
>  	}
> +	if (r->rule_flag & PFRULE_SETDELAY)
> +		a->delay = r->delay;
>  }
>  
>  #define PF_TEST_ATTRIB(t, a)	\
> @@ -3979,6 +3981,7 @@ pf_create_state(struct pf_pdesc *pd, str
>  #endif /* NPFSYNC > 0 */
>  	s->set_prio[0] = act->set_prio[0];
>  	s->set_prio[1] = act->set_prio[1];
> +	s->delay = act->delay;
>  	SLIST_INIT(&s->src_nodes);
>  
>  	switch (pd->proto) {
> @@ -7025,6 +7028,7 @@ done:
>  		if (s->state_flags & PFSTATE_SETPRIO)
>  			pd.m->m_pkthdr.pf.prio = s->set_prio[0];
>  	}
> +	pd.m->m_pkthdr.pf.delay = s->delay;
>  } el