this introduces mpsafety to ifqueue operations. this follows on from some mopping up of (now) inappropriate use of ifqueues and the IF_ and IFQ_ operations on them. in most cases those ifqueues have been replaced with mbuf_lists or mbuf_queues.
the reason we want mpsafe ifqueues is so we can run the network stack without the big lock, but still run drivers on another cpu (which may or may not need the big lock). we do this by adding a mutex in struct ifqueue to protect the queue data structures.

that ends up sending us down a rabbit hole. ifqueues are like bufqs (or bufqs are like ifqueues) in that they abstract a set of operations (enqueue, dequeue) on top of different queueing disciplines (priq or hfsc). they also support switching disciplines at runtime. so the ifq operations are now functions which wrap discipline code. priq has been factored out and hfsc has been massaged slightly. the ifq code manages the locking, the accounting (ifq_len), and freeing the mbufs (to avoid locking in the mbuf layer while holding an ifq lock). a discipline just has to accept or reject a queue op.

however, a lot of drivers use IFQ_POLL, which gives them a reference to an mbuf which is still on the ifqueue. now that we're expecting multiple cpus to operate on a queue, it is possible that one cpu could be between IFQ_POLL and IFQ_DEQUEUE calls while another cpu is purging the queue. ie, the first cpu will end up with a use after free.

i was going to provide an IFQ_REQUEUE api so drivers could cleanly dequeue an mbuf, attempt to transmit it, and requeue it if there's a temporary failure. that worked great from an mbuf and queue consistency point of view, but kenjiro cho pointed out that it can make statistics in some disciplines hard. eg, requeue on hfsc in my code caused the bandwidth estimates to go wrong on the next dequeue.

the alternative i came up with was to break the dequeue operation apart into something like a database transaction. a driver that wants to try to transmit an mbuf calls ifq_deq_begin to get a reference to the mbuf and hold the ifqueue mutex. if it figures out it can tx the mbuf, it calls ifq_deq_commit to properly take the mbuf and release the lock.
if it can't take the mbuf it calls ifq_deq_rollback, which basically gives up the mutex.

this diff implements all that, and does a naive conversion of most drivers using IFQ_POLL over to ifq_deq_begin/commit/rollback. the exceptions to that are bge, de, and vge. bge and vge have been modified to only use IFQ_DEQUEUE after checking for enough space on the tx ring. de has been changed to simplify its working by using m_defrag, but still uses a transaction.

it is a big diff, but i have been beating on it a bit and i'm confident with it. all the risk in my mind is in the driver changes. i would appreciate testing by everyone though, but in particular de and vge users.

cheers, dlg

Index: share/man/man9/ifq_enq.9 =================================================================== RCS file: share/man/man9/ifq_enq.9 diff -N share/man/man9/ifq_enq.9 --- /dev/null 1 Jan 1970 00:00:00 -0000 +++ share/man/man9/ifq_enq.9 12 Nov 2015 05:50:12 -0000 @@ -0,0 +1,140 @@ +.\" $OpenBSD$ +.\" +.\" Copyright (c) 2015 David Gwynne <d...@openbsd.org> +.\" +.\" Permission to use, copy, modify, and distribute this software for any +.\" purpose with or without fee is hereby granted, provided that the above +.\" copyright notice and this permission notice appear in all copies. +.\" +.\" THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES +.\" WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF +.\" MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR +.\" ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES +.\" WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN +.\" ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF +.\" OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
+.\" +.Dd $Mdocdate: November 12 2015 $ +.Dt IFQ_ENQ 9 +.Os +.Sh NAME +.Nm ifq_enq , +.Nm ifq_deq , +.Nm ifq_deq_begin , +.Nm ifq_deq_commit , +.Nm ifq_deq_rollback , +.Nm ifq_purge , +.Nm ifq_len , +.Nm ifq_empty +.Nd Interface send queue API +.Sh SYNOPSIS +.In net/if_var.h +.Ft int +.Fn ifq_enq "struct ifqueue *ifq" "struct mbuf *m" +.Ft struct mbuf * +.Fn ifq_deq "struct ifqueue *ifq" +.Ft struct mbuf * +.Fn ifq_deq_begin "struct ifqueue *ifq" +.Ft void +.Fn ifq_deq_commit "struct ifqueue *ifq" "struct mbuf *m" +.Ft void +.Fn ifq_deq_rollback "struct ifqueue *ifq" "struct mbuf *m" +.Ft unsigned int +.Fn ifq_purge "struct ifqueue *ifq" +.Ft unsigned int +.Fn ifq_len "struct ifqueue *ifq" +.Ft unsigned int +.Fn ifq_empty "struct ifqueue *ifq" +.Sh DESCRIPTION +The ifqueue API provides implementations of data structures and +operations for the network stack to queue mbufs for a network driver +to dequeue from its start routine for transmission. +.Bl -tag -width Ds +.It Fn ifq_enq "struct ifqueue *ifq" "struct mbuf *m" +Enqueue mbuf +.Fa m +on the +.Fa ifq +interface send queue. +If the queue rejects the packet it will be freed with +.Xr m_freem 9 +and counted as a drop. +.It Fn ifq_deq "struct ifqueue *ifq" +Dequeue the next mbuf to be transmitted from the +.Fa ifq +interface send queue. +.It Fn ifq_deq_begin "struct ifqueue *ifq" +Get a reference to the next mbuf to be transmitted from the +.Fa ifq +interface send queue. +If an mbuf is to be transmitted, also acquire a lock on the send queue +to exclude modification or freeing of the referenced mbuf. +The mbuf must not be freed, or have its length (m_pkthdr.len) or +cookie (m_pkthdr.ph_cookie) modified until it has been dequeued +completely with +.Fn ifq_deq_commit . +.It Fn ifq_deq_commit "struct ifqueue *ifq" "struct mbuf *m" +Dequeue the mbuf +.Fa m +that was referenced by a previous call to +.Fn ifq_deq_begin +and release the lock on +.Fa ifq . 
+.It Fn ifq_deq_rollback "struct ifqueue *ifq" "struct mbuf *m" +Release the lock on the interface send queue +.Fa ifq +that was acquired while a reference to +.Fa m +was being held. +.It Fn ifq_purge "struct ifqueue *ifq" +Free all the mbufs on the interface send queue +.Fa ifq . +Freed mbufs will be accounted as drops. +.It Fn ifq_len "struct ifqueue *ifq" +Return the number of mbufs on the interface send queue +.Fa ifq . +Note that while +.Fn ifq_len +may report that mbufs are on the queue, the current queue +discipline may not make them available for dequeueing with +.Fn ifq_deq +or +.Fn ifq_deq_begin . +.It Fn ifq_empty "struct ifqueue *ifq" +Return whether the interface send queue +.Fa ifq +is empty. +.El +.Sh CONTEXT +.Fn ifq_enq , +.Fn ifq_deq , +.Fn ifq_deq_begin , +.Fn ifq_deq_commit , +.Fn ifq_deq_rollback , +.Fn ifq_purge , +.Fn ifq_len , +and +.Fn ifq_empty +can be called during autoconf, from process context, or from interrupt context. +.Sh RETURN VALUES +.Fn ifq_enq +returns 0 if the mbuf was successfully queued, or non-zero if the mbuf was freed. +.Pp +.Fn ifq_deq +and +.Fn ifq_deq_begin +return the next mbuf to be transmitted by the interface. +If no packet is available for transmission, +.Dv NULL +is returned. +.Pp +.Fn ifq_purge +returns the number of mbufs that were removed from the queue and freed. +.Pp +.Fn ifq_len +returns the number of mbufs on the queue. +.Pp +.Fn ifq_empty +returns a non-zero value if the queue is empty, otherwise 0. 
+.Sh SEE ALSO +.Xr m_freem 9 Index: share/man/man9/Makefile =================================================================== RCS file: /cvs/src/share/man/man9/Makefile,v retrieving revision 1.251 diff -u -p -r1.251 Makefile --- share/man/man9/Makefile 2 Nov 2015 09:21:48 -0000 1.251 +++ share/man/man9/Makefile 12 Nov 2015 05:50:12 -0000 @@ -18,7 +18,7 @@ MAN= aml_evalnode.9 atomic_add_int.9 ato ieee80211.9 ieee80211_crypto.9 ieee80211_input.9 ieee80211_ioctl.9 \ ieee80211_node.9 ieee80211_output.9 ieee80211_proto.9 \ ieee80211_radiotap.9 \ - if_rxr_init.9 iic.9 intro.9 inittodr.9 intr_barrier.9 \ + if_rxr_init.9 ifq_enq.9 iic.9 intro.9 inittodr.9 intr_barrier.9 \ kern.9 km_alloc.9 knote.9 kthread.9 ktrace.9 \ loadfirmware.9 lock.9 log.9 \ malloc.9 membar_sync.9 mbuf.9 mbuf_tags.9 md5.9 mi_switch.9 \ @@ -217,6 +217,9 @@ MLINKS+=ieee80211_proto.9 ieee80211_prot MLINKS+=if_rxr_init.9 if_rxr_get.9 if_rxr_init.9 if_rxr_put.9 \ if_rxr_init.9 if_rxr_inuse.9 if_rxr_init.9 if_rxr_ioctl.9 \ if_rxr_init.9 if_rxr_info_ioctl.9 +MLINKS+=ifq_enq.9 ifq_deq.9 ifq_enq.9 ifq_deq_begin.9 \ + ifq_enq.9 ifq_deq_commit.9 ifq_enq.9 ifq_deq_rollback.9 \ + ifq_enq.9 ifq_purge.9 ifq_enq.9 ifq_len.9 ifq_enq.9 ifq_empty.9 MLINKS+=iic.9 iic_acquire_bus.9 iic.9 iic_release_bus.9 iic.9 iic_exec.9 \ iic.9 iic_smbus_write_byte.9 iic.9 iic_smbus_read_byte.9 \ iic.9 iic_smbus_receive_byte.9 Index: sys/arch/armv7/imx/imxenet.c =================================================================== RCS file: /cvs/src/sys/arch/armv7/imx/imxenet.c,v retrieving revision 1.17 diff -u -p -r1.17 imxenet.c --- sys/arch/armv7/imx/imxenet.c 27 Oct 2015 15:07:56 -0000 1.17 +++ sys/arch/armv7/imx/imxenet.c 12 Nov 2015 05:50:12 -0000 @@ -816,16 +816,17 @@ imxenet_start(struct ifnet *ifp) return; for (;;) { - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; if (imxenet_encap(sc, m_head)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; break; 
} - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); ifp->if_opackets++; Index: sys/arch/armv7/omap/if_cpsw.c =================================================================== RCS file: /cvs/src/sys/arch/armv7/omap/if_cpsw.c,v retrieving revision 1.28 diff -u -p -r1.28 if_cpsw.c --- sys/arch/armv7/omap/if_cpsw.c 27 Oct 2015 15:07:56 -0000 1.28 +++ sys/arch/armv7/omap/if_cpsw.c 12 Nov 2015 05:50:12 -0000 @@ -465,11 +465,9 @@ cpsw_start(struct ifnet *ifp) break; } - IFQ_POLL(&ifp->if_snd, m); + IFQ_DEQUEUE(&ifp->if_snd, m); if (m == NULL) break; - - IFQ_DEQUEUE(&ifp->if_snd, m); dm = rdp->tx_dm[sc->sc_txnext]; error = bus_dmamap_load_mbuf(sc->sc_bdt, dm, m, BUS_DMA_NOWAIT); Index: sys/arch/armv7/sunxi/sxie.c =================================================================== RCS file: /cvs/src/sys/arch/armv7/sunxi/sxie.c,v retrieving revision 1.10 diff -u -p -r1.10 sxie.c --- sys/arch/armv7/sunxi/sxie.c 27 Oct 2015 15:07:56 -0000 1.10 +++ sys/arch/armv7/sunxi/sxie.c 12 Nov 2015 05:50:12 -0000 @@ -471,17 +471,19 @@ sxie_start(struct ifnet *ifp) m = NULL; head = NULL; trynext: - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) return; if (m->m_pkthdr.len > SXIE_MAX_PKT_SIZE) { + ifq_deq_commit(&ifp->if_snd, m); printf("sxie_start: packet too big\n"); m_freem(m); return; } if (sc->txf_inuse > 1) { + ifq_deq_rollback(&ifp->if_snd, m); printf("sxie_start: tx fifos in use.\n"); ifp->if_flags |= IFF_OACTIVE; return; @@ -505,7 +507,7 @@ trynext: /* transmit to PHY from fifo */ SXISET4(sc, SXIE_TXCR0 + (fifo * 4), 1); ifp->if_timer = 5; - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); #if NBPFILTER > 0 if (ifp->if_bpf) Index: sys/arch/octeon/dev/if_cnmac.c =================================================================== RCS file: /cvs/src/sys/arch/octeon/dev/if_cnmac.c,v retrieving revision 1.29 diff -u -p -r1.29 if_cnmac.c --- sys/arch/octeon/dev/if_cnmac.c 28 Oct 2015 14:04:17 -0000 1.29 +++ 
sys/arch/octeon/dev/if_cnmac.c 12 Nov 2015 05:50:12 -0000 @@ -1019,24 +1019,12 @@ octeon_eth_start(struct ifnet *ifp) /* XXX assume that OCTEON doesn't buffer packets */ if (__predict_false(!cn30xxgmx_link_status(sc->sc_gmx_port))) { /* dequeue and drop them */ - while (1) { - IFQ_DEQUEUE(&ifp->if_snd, m); - if (m == NULL) - break; -#if 0 -#ifdef DDB - m_print(m, "cd", printf); -#endif - printf("%s: drop\n", sc->sc_dev.dv_xname); -#endif - m_freem(m); - IF_DROP(&ifp->if_snd); - } + IFQ_PURGE(&ifp->if_snd); goto last; } for (;;) { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (__predict_false(m == NULL)) break; @@ -1048,11 +1036,12 @@ octeon_eth_start(struct ifnet *ifp) * and bail out. */ if (octeon_eth_send_queue_is_full(sc)) { + ifq_deq_rollback(&ifp->if_snd, m); return; } /* XXX */ - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); OCTEON_ETH_TAP(ifp, m, BPF_DIRECTION_OUT); Index: sys/arch/sgi/dev/if_iec.c =================================================================== RCS file: /cvs/src/sys/arch/sgi/dev/if_iec.c,v retrieving revision 1.16 diff -u -p -r1.16 if_iec.c --- sys/arch/sgi/dev/if_iec.c 25 Oct 2015 13:22:09 -0000 1.16 +++ sys/arch/sgi/dev/if_iec.c 12 Nov 2015 05:50:12 -0000 @@ -760,12 +760,14 @@ iec_start(struct ifnet *ifp) for (;;) { /* Grab a packet off the queue. */ - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; - if (sc->sc_txpending == IEC_NTXDESC) + if (sc->sc_txpending == IEC_NTXDESC) { + ifq_deq_rollback(&ifp->if_snd, m0); break; + } /* * Get the next available transmit descriptor. 
@@ -779,7 +781,7 @@ iec_start(struct ifnet *ifp) DPRINTF(IEC_DEBUG_START, ("iec_start: len = %d, nexttx = %d\n", len, nexttx)); - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); if (len <= IEC_TXD_BUFSIZE) { /* * If the packet is small enough, Index: sys/arch/sgi/dev/if_mec.c =================================================================== RCS file: /cvs/src/sys/arch/sgi/dev/if_mec.c,v retrieving revision 1.31 diff -u -p -r1.31 if_mec.c --- sys/arch/sgi/dev/if_mec.c 25 Oct 2015 13:22:09 -0000 1.31 +++ sys/arch/sgi/dev/if_mec.c 12 Nov 2015 05:50:12 -0000 @@ -738,11 +738,12 @@ mec_start(struct ifnet *ifp) for (;;) { /* Grab a packet off the queue. */ - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; if (sc->sc_txpending == MEC_NTXDESC) { + ifq_deq_rollback(&ifp->if_snd, m0); break; } @@ -763,7 +764,7 @@ mec_start(struct ifnet *ifp) DPRINTF(MEC_DEBUG_START, ("mec_start: len = %d, nexttx = %d\n", len, nexttx)); - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); if (len < ETHER_PAD_LEN) { /* * I don't know if MEC chip does auto padding, Index: sys/arch/sgi/hpc/if_sq.c =================================================================== RCS file: /cvs/src/sys/arch/sgi/hpc/if_sq.c,v retrieving revision 1.18 diff -u -p -r1.18 if_sq.c --- sys/arch/sgi/hpc/if_sq.c 25 Oct 2015 13:22:09 -0000 1.18 +++ sys/arch/sgi/hpc/if_sq.c 12 Nov 2015 05:50:12 -0000 @@ -671,7 +671,7 @@ sq_start(struct ifnet *ifp) /* * Grab a packet off the queue. 
*/ - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; m = NULL; @@ -696,6 +696,7 @@ sq_start(struct ifnet *ifp) BUS_DMA_NOWAIT) != 0) { MGETHDR(m, M_DONTWAIT, MT_DATA); if (m == NULL) { + ifq_deq_rollback(&ifp->if_snd, m0); printf("%s: unable to allocate Tx mbuf\n", sc->sc_dev.dv_xname); break; @@ -703,6 +704,7 @@ if (len > MHLEN) { MCLGET(m, M_DONTWAIT); if ((m->m_flags & M_EXT) == 0) { + ifq_deq_rollback(&ifp->if_snd, m0); printf("%s: unable to allocate Tx " "cluster\n", sc->sc_dev.dv_xname); @@ -722,6 +724,7 @@ if ((err = bus_dmamap_load_mbuf(sc->sc_dmat, dmamap, m, BUS_DMA_NOWAIT)) != 0) { + ifq_deq_rollback(&ifp->if_snd, m0); printf("%s: unable to load Tx buffer, " "error = %d\n", sc->sc_dev.dv_xname, err); @@ -734,6 +737,7 @@ * the packet. */ if (dmamap->dm_nsegs > sc->sc_nfreetx) { + ifq_deq_rollback(&ifp->if_snd, m0); /* * Not enough free descriptors to transmit this * packet. We haven't committed to anything yet, @@ -751,7 +755,7 @@ break; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); #if NBPFILTER > 0 /* * Pass the packet to any BPF listeners. 
Index: sys/arch/socppc/dev/if_tsec.c =================================================================== RCS file: /cvs/src/sys/arch/socppc/dev/if_tsec.c,v retrieving revision 1.39 diff -u -p -r1.39 if_tsec.c --- sys/arch/socppc/dev/if_tsec.c 6 Nov 2015 11:35:48 -0000 1.39 +++ sys/arch/socppc/dev/if_tsec.c 12 Nov 2015 05:50:12 -0000 @@ -527,24 +527,25 @@ tsec_start(struct ifnet *ifp) idx = sc->sc_tx_prod; while ((sc->sc_txdesc[idx].td_status & TSEC_TX_TO1) == 0) { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; error = tsec_encap(sc, m, &idx); if (error == ENOBUFS) { + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } if (error == EFBIG) { - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); m_freem(m); /* give up: drop it */ ifp->if_oerrors++; continue; } /* Now we are committed to transmit the packet. */ - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); #if NBPFILTER > 0 if (ifp->if_bpf) Index: sys/arch/sparc/dev/be.c =================================================================== RCS file: /cvs/src/sys/arch/sparc/dev/be.c,v retrieving revision 1.52 diff -u -p -r1.52 be.c --- sys/arch/sparc/dev/be.c 25 Oct 2015 13:22:09 -0000 1.52 +++ sys/arch/sparc/dev/be.c 12 Nov 2015 05:50:12 -0000 @@ -261,11 +261,9 @@ bestart(ifp) cnt = sc->sc_no_td; for (;;) { - IFQ_POLL(&ifp->if_snd, m); + IFQ_DEQUEUE(&ifp->if_snd, m); if (m == NULL) break; - - IFQ_DEQUEUE(&ifp->if_snd, m); #if NBPFILTER > 0 /* Index: sys/arch/sparc/dev/qe.c =================================================================== RCS file: /cvs/src/sys/arch/sparc/dev/qe.c,v retrieving revision 1.42 diff -u -p -r1.42 qe.c --- sys/arch/sparc/dev/qe.c 25 Oct 2015 13:22:09 -0000 1.42 +++ sys/arch/sparc/dev/qe.c 12 Nov 2015 05:50:12 -0000 @@ -201,11 +201,9 @@ qestart(ifp) bix = sc->sc_last_td; for (;;) { - IFQ_POLL(&ifp->if_snd, m); + IFQ_DEQUEUE(&ifp->if_snd, m); if (m == NULL) break; - - IFQ_DEQUEUE(&ifp->if_snd, m); 
#if NBPFILTER > 0 /* Index: sys/arch/sparc64/dev/vnet.c =================================================================== RCS file: /cvs/src/sys/arch/sparc64/dev/vnet.c,v retrieving revision 1.47 diff -u -p -r1.47 vnet.c --- sys/arch/sparc64/dev/vnet.c 25 Oct 2015 13:22:09 -0000 1.47 +++ sys/arch/sparc64/dev/vnet.c 12 Nov 2015 05:50:12 -0000 @@ -1101,24 +1101,26 @@ vnet_start(struct ifnet *ifp) start = prod = sc->sc_tx_prod & (sc->sc_vd->vd_nentries - 1); while (sc->sc_vd->vd_desc[prod].hdr.dstate == VIO_DESC_FREE) { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; count = sc->sc_tx_prod - sc->sc_tx_cons; if (count >= (sc->sc_vd->vd_nentries - 1) || map->lm_count >= map->lm_nentries) { + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } buf = pool_get(&sc->sc_pool, PR_NOWAIT|PR_ZERO); if (buf == NULL) { + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } m_copydata(m, 0, m->m_pkthdr.len, buf + VNET_ETHER_ALIGN); - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); #if NBPFILTER > 0 /* @@ -1176,24 +1178,26 @@ vnet_start_desc(struct ifnet *ifp) u_int prod, count; for (;;) { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; count = sc->sc_tx_prod - sc->sc_tx_cons; if (count >= (sc->sc_vd->vd_nentries - 1) || map->lm_count >= map->lm_nentries) { + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; return; } buf = pool_get(&sc->sc_pool, PR_NOWAIT|PR_ZERO); if (buf == NULL) { + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; return; } m_copydata(m, 0, m->m_pkthdr.len, buf); - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); #if NBPFILTER > 0 /* Index: sys/arch/vax/if/if_qe.c =================================================================== RCS file: /cvs/src/sys/arch/vax/if/if_qe.c,v retrieving revision 1.36 diff -u -p -r1.36 if_qe.c --- sys/arch/vax/if/if_qe.c 27 Oct 2015 15:20:13 -0000 
1.36 +++ sys/arch/vax/if/if_qe.c 12 Nov 2015 05:50:12 -0000 @@ -443,7 +443,7 @@ qestart(struct ifnet *ifp) continue; } idx = sc->sc_nexttx; - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) goto out; /* @@ -458,11 +458,12 @@ panic("qestart"); if ((i + sc->sc_inq) >= (TXDESCS - 1)) { + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; goto out; } - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); #if NBPFILTER > 0 if (ifp->if_bpf) Index: sys/dev/ic/aic6915.c =================================================================== RCS file: /cvs/src/sys/dev/ic/aic6915.c,v retrieving revision 1.18 diff -u -p -r1.18 aic6915.c --- sys/dev/ic/aic6915.c 25 Oct 2015 12:48:46 -0000 1.18 +++ sys/dev/ic/aic6915.c 12 Nov 2015 05:50:12 -0000 @@ -363,7 +363,7 @@ sf_start(struct ifnet *ifp) /* * Grab a packet off the queue. */ - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; m = NULL; @@ -385,6 +385,7 @@ sf_start(struct ifnet *ifp) BUS_DMA_WRITE|BUS_DMA_NOWAIT) != 0) { MGETHDR(m, M_DONTWAIT, MT_DATA); if (m == NULL) { + ifq_deq_rollback(&ifp->if_snd, m0); printf("%s: unable to allocate Tx mbuf\n", sc->sc_dev.dv_xname); break; @@ -392,6 +393,7 @@ if (m0->m_pkthdr.len > MHLEN) { MCLGET(m, M_DONTWAIT); if ((m->m_flags & M_EXT) == 0) { + ifq_deq_rollback(&ifp->if_snd, m0); printf("%s: unable to allocate Tx " "cluster\n", sc->sc_dev.dv_xname); m_freem(m); @@ -403,6 +405,7 @@ error = bus_dmamap_load_mbuf(sc->sc_dmat, dmamap, m, BUS_DMA_WRITE|BUS_DMA_NOWAIT); if (error) { + ifq_deq_rollback(&ifp->if_snd, m0); printf("%s: unable to load Tx buffer, " "error = %d\n", sc->sc_dev.dv_xname, error); m_freem(m); @@ -413,7 +416,7 @@ /* * WE ARE NOW COMMITTED TO TRANSMITTING THE PACKET. 
*/ - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); if (m != NULL) { m_freem(m0); m0 = m; Index: sys/dev/ic/an.c =================================================================== RCS file: /cvs/src/sys/dev/ic/an.c,v retrieving revision 1.66 diff -u -p -r1.66 an.c --- sys/dev/ic/an.c 25 Oct 2015 12:48:46 -0000 1.66 +++ sys/dev/ic/an.c 12 Nov 2015 05:50:12 -0000 @@ -1097,18 +1097,19 @@ an_start(struct ifnet *ifp) DPRINTF(("an_start: not running %d\n", ic->ic_state)); break; } - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) { DPRINTF2(("an_start: no pending mbuf\n")); break; } if (sc->sc_txd[cur].d_inuse) { + ifq_deq_rollback(&ifp->if_snd, m); DPRINTF2(("an_start: %x/%d busy\n", sc->sc_txd[cur].d_fid, cur)); ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); ifp->if_opackets++; #if NBPFILTER > 0 if (ifp->if_bpf) Index: sys/dev/ic/bwi.c =================================================================== RCS file: /cvs/src/sys/dev/ic/bwi.c,v retrieving revision 1.120 diff -u -p -r1.120 bwi.c --- sys/dev/ic/bwi.c 11 Nov 2015 10:07:25 -0000 1.120 +++ sys/dev/ic/bwi.c 12 Nov 2015 05:50:12 -0000 @@ -7209,7 +7209,6 @@ bwi_start(struct ifnet *ifp) if (m == NULL) break; - if (m->m_len < sizeof(*eh)) { m = m_pullup(m, sizeof(*eh)); if (m == NULL) { Index: sys/dev/ic/dc.c =================================================================== RCS file: /cvs/src/sys/dev/ic/dc.c,v retrieving revision 1.145 diff -u -p -r1.145 dc.c --- sys/dev/ic/dc.c 25 Oct 2015 12:48:46 -0000 1.145 +++ sys/dev/ic/dc.c 12 Nov 2015 05:50:12 -0000 @@ -2618,7 +2618,7 @@ dc_start(struct ifnet *ifp) idx = sc->dc_cdata.dc_tx_prod; while(sc->dc_cdata.dc_tx_chain[idx].sd_mbuf == NULL) { - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; @@ -2628,7 +2628,7 @@ dc_start(struct ifnet *ifp) /* note: dc_coal breaks the poll-and-dequeue rule. 
* if dc_coal fails, we lose the packet. */ - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); if (dc_coal(sc, &m_head)) { ifp->if_flags |= IFF_OACTIVE; break; @@ -2636,6 +2636,9 @@ dc_start(struct ifnet *ifp) } if (dc_encap(sc, m_head, &idx)) { + if ((sc->dc_flags & DC_TX_COALESCE) == 0) + ifq_deq_rollback(&ifp->if_snd, m_head); + ifp->if_flags |= IFF_OACTIVE; break; } @@ -2644,7 +2647,7 @@ dc_start(struct ifnet *ifp) if (sc->dc_flags & DC_TX_COALESCE) { /* if mbuf is coalesced, it is already dequeued */ } else - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); /* * If there's a BPF listener, bounce a copy of this frame Index: sys/dev/ic/elink3.c =================================================================== RCS file: /cvs/src/sys/dev/ic/elink3.c,v retrieving revision 1.88 diff -u -p -r1.88 elink3.c --- sys/dev/ic/elink3.c 25 Oct 2015 12:48:46 -0000 1.88 +++ sys/dev/ic/elink3.c 12 Nov 2015 05:50:13 -0000 @@ -954,7 +954,7 @@ epstart(struct ifnet *ifp) startagain: /* Sneak a peek at the next packet */ - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) return; @@ -973,7 +973,7 @@ startagain: if (len + pad > ETHER_MAX_LEN) { /* packet is obviously too large: toss it */ ++ifp->if_oerrors; - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); m_freem(m0); goto readcheck; } @@ -983,6 +983,7 @@ startagain: bus_space_write_2(iot, ioh, EP_COMMAND, SET_TX_AVAIL_THRESH | ((len + pad + 4) >> sc->txashift)); /* not enough room in FIFO */ + ifq_deq_rollback(&ifp->if_snd, m0); ifp->if_flags |= IFF_OACTIVE; return; } else { @@ -990,7 +991,7 @@ startagain: SET_TX_AVAIL_THRESH | EP_THRESH_DISABLE); } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); if (m0 == NULL) return; Index: sys/dev/ic/fxp.c =================================================================== RCS file: /cvs/src/sys/dev/ic/fxp.c,v retrieving revision 1.123 diff -u -p -r1.123 fxp.c --- 
sys/dev/ic/fxp.c 25 Oct 2015 12:48:46 -0000 1.123 +++ sys/dev/ic/fxp.c 12 Nov 2015 05:50:13 -0000 @@ -689,19 +689,22 @@ fxp_start(struct ifnet *ifp) txs = txs->tx_next; - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; if (bus_dmamap_load_mbuf(sc->sc_dmat, txs->tx_map, m0, BUS_DMA_NOWAIT) != 0) { MGETHDR(m, M_DONTWAIT, MT_DATA); - if (m == NULL) + if (m == NULL) { + ifq_deq_rollback(&ifp->if_snd, m0); break; + } if (m0->m_pkthdr.len > MHLEN) { MCLGET(m, M_DONTWAIT); if (!(m->m_flags & M_EXT)) { m_freem(m); + ifq_deq_rollback(&ifp->if_snd, m0); break; } } @@ -710,11 +713,12 @@ fxp_start(struct ifnet *ifp) if (bus_dmamap_load_mbuf(sc->sc_dmat, txs->tx_map, m, BUS_DMA_NOWAIT) != 0) { m_freem(m); + ifq_deq_rollback(&ifp->if_snd, m0); break; } } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); if (m != NULL) { m_freem(m0); m0 = m; Index: sys/dev/ic/gem.c =================================================================== RCS file: /cvs/src/sys/dev/ic/gem.c,v retrieving revision 1.114 diff -u -p -r1.114 gem.c --- sys/dev/ic/gem.c 25 Oct 2015 12:48:46 -0000 1.114 +++ sys/dev/ic/gem.c 12 Nov 2015 05:50:13 -0000 @@ -1657,7 +1657,7 @@ gem_start(struct ifnet *ifp) return; while (sc->sc_txd[sc->sc_tx_prod].sd_mbuf == NULL) { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; @@ -1685,12 +1685,13 @@ gem_start(struct ifnet *ifp) if ((sc->sc_tx_cnt + map->dm_nsegs) > (GEM_NTXDESC - 2)) { bus_dmamap_unload(sc->sc_dmatag, map); + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } /* We are now committed to transmitting the packet. 
*/ - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); #if NBPFILTER > 0 /* @@ -1736,7 +1737,7 @@ gem_start(struct ifnet *ifp) return; drop: - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); m_freem(m); ifp->if_oerrors++; } Index: sys/dev/ic/hme.c =================================================================== RCS file: /cvs/src/sys/dev/ic/hme.c,v retrieving revision 1.75 diff -u -p -r1.75 hme.c --- sys/dev/ic/hme.c 25 Oct 2015 12:48:46 -0000 1.75 +++ sys/dev/ic/hme.c 12 Nov 2015 05:50:13 -0000 @@ -644,7 +644,7 @@ hme_start(struct ifnet *ifp) return; while (sc->sc_txd[sc->sc_tx_prod].sd_mbuf == NULL) { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; @@ -672,12 +672,13 @@ hme_start(struct ifnet *ifp) if ((HME_TX_RING_SIZE - (sc->sc_tx_cnt + map->dm_nsegs)) < 5) { bus_dmamap_unload(sc->sc_dmatag, map); + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } /* We are now committed to transmitting the packet. 
*/ - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); #if NBPFILTER > 0 /* @@ -732,7 +733,7 @@ hme_start(struct ifnet *ifp) return; drop: - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); m_freem(m); ifp->if_oerrors++; } Index: sys/dev/ic/lemac.c =================================================================== RCS file: /cvs/src/sys/dev/ic/lemac.c,v retrieving revision 1.22 diff -u -p -r1.22 lemac.c --- sys/dev/ic/lemac.c 25 Oct 2015 12:48:46 -0000 1.22 +++ sys/dev/ic/lemac.c 12 Nov 2015 05:50:13 -0000 @@ -641,13 +641,14 @@ lemac_ifstart(struct ifnet *ifp) struct mbuf *m0; int tx_pg; - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; if ((sc->sc_csr.csr_tqc = LEMAC_INB(sc, LEMAC_REG_TQC)) >= lemac_txmax) { sc->sc_cntrs.cntr_txfull++; + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } @@ -662,11 +663,12 @@ lemac_ifstart(struct ifnet *ifp) */ if (tx_pg == 0 || tx_pg > sc->sc_lastpage) { sc->sc_cntrs.cntr_txnospc++; + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); /* * The first four bytes of each transmit buffer are for Index: sys/dev/ic/malo.c =================================================================== RCS file: /cvs/src/sys/dev/ic/malo.c,v retrieving revision 1.109 diff -u -p -r1.109 malo.c --- sys/dev/ic/malo.c 4 Nov 2015 12:11:59 -0000 1.109 +++ sys/dev/ic/malo.c 12 Nov 2015 05:50:13 -0000 @@ -1028,14 +1028,15 @@ malo_start(struct ifnet *ifp) } else { if (ic->ic_state != IEEE80211_S_RUN) break; - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; if (sc->sc_txring.queued >= MALO_TX_RING_COUNT - 1) { + ifq_deq_rollback(&ifp->if_snd, m0); ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); #if NBPFILTER > 0 if (ifp->if_bpf != NULL) bpf_mtap(ifp->if_bpf, m0, BPF_DIRECTION_OUT); 
Index: sys/dev/ic/pgt.c =================================================================== RCS file: /cvs/src/sys/dev/ic/pgt.c,v retrieving revision 1.79 diff -u -p -r1.79 pgt.c --- sys/dev/ic/pgt.c 25 Oct 2015 12:48:46 -0000 1.79 +++ sys/dev/ic/pgt.c 12 Nov 2015 05:50:13 -0000 @@ -2120,15 +2120,17 @@ pgt_start(struct ifnet *ifp) for (; sc->sc_dirtyq_count[PGT_QUEUE_DATA_LOW_TX] < PGT_QUEUE_FULL_THRESHOLD && !IFQ_IS_EMPTY(&ifp->if_snd);) { pd = TAILQ_FIRST(&sc->sc_freeq[PGT_QUEUE_DATA_LOW_TX]); - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; if (m->m_pkthdr.len <= PGT_FRAG_SIZE) { error = pgt_load_tx_desc_frag(sc, PGT_QUEUE_DATA_LOW_TX, pd); - if (error) + if (error) { + ifq_deq_rollback(&ifp->if_snd, m); break; - IFQ_DEQUEUE(&ifp->if_snd, m); + } + ifq_deq_commit(&ifp->if_snd, m); m_copydata(m, 0, m->m_pkthdr.len, pd->pd_mem); pgt_desc_transmit(sc, PGT_QUEUE_DATA_LOW_TX, pd, m->m_pkthdr.len, 0); @@ -2142,8 +2144,10 @@ pgt_start(struct ifnet *ifp) * even support a full two.) 
*/ if (sc->sc_dirtyq_count[PGT_QUEUE_DATA_LOW_TX] + 2 > - PGT_QUEUE_FULL_THRESHOLD) + PGT_QUEUE_FULL_THRESHOLD) { + ifq_deq_rollback(&ifp->if_snd, m); break; + } pd2 = TAILQ_NEXT(pd, pd_link); error = pgt_load_tx_desc_frag(sc, PGT_QUEUE_DATA_LOW_TX, pd); @@ -2157,9 +2161,11 @@ pgt_start(struct ifnet *ifp) pd_link); } } - if (error) + if (error) { + ifq_deq_rollback(&ifp->if_snd, m); break; - IFQ_DEQUEUE(&ifp->if_snd, m); + } + ifq_deq_commit(&ifp->if_snd, m); m_copydata(m, 0, PGT_FRAG_SIZE, pd->pd_mem); pgt_desc_transmit(sc, PGT_QUEUE_DATA_LOW_TX, pd, PGT_FRAG_SIZE, 1); @@ -2168,7 +2174,7 @@ pgt_start(struct ifnet *ifp) pgt_desc_transmit(sc, PGT_QUEUE_DATA_LOW_TX, pd2, m->m_pkthdr.len - PGT_FRAG_SIZE, 0); } else { - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); ifp->if_oerrors++; m_freem(m); m = NULL; Index: sys/dev/ic/re.c =================================================================== RCS file: /cvs/src/sys/dev/ic/re.c,v retrieving revision 1.182 diff -u -p -r1.182 re.c --- sys/dev/ic/re.c 2 Nov 2015 00:08:50 -0000 1.182 +++ sys/dev/ic/re.c 12 Nov 2015 05:50:13 -0000 @@ -1853,29 +1853,31 @@ re_start(struct ifnet *ifp) idx = sc->rl_ldata.rl_txq_prodidx; for (;;) { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; if (sc->rl_ldata.rl_txq[idx].txq_mbuf != NULL) { KASSERT(idx == sc->rl_ldata.rl_txq_considx); + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } error = re_encap(sc, m, &idx); if (error != 0 && error != ENOBUFS) { + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } else if (error != 0) { - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); m_freem(m); ifp->if_oerrors++; continue; } /* now we are committed to transmit the packet */ - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); queued++; #if NBPFILTER > 0 Index: sys/dev/ic/rt2560.c =================================================================== RCS file: 
/cvs/src/sys/dev/ic/rt2560.c,v retrieving revision 1.74 diff -u -p -r1.74 rt2560.c --- sys/dev/ic/rt2560.c 4 Nov 2015 12:11:59 -0000 1.74 +++ sys/dev/ic/rt2560.c 12 Nov 2015 05:50:13 -0000 @@ -1948,15 +1948,16 @@ rt2560_start(struct ifnet *ifp) } else { if (ic->ic_state != IEEE80211_S_RUN) break; - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; if (sc->txq.queued >= RT2560_TX_RING_COUNT - 1) { + ifq_deq_rollback(&ifp->if_snd, m0); ifp->if_flags |= IFF_OACTIVE; sc->sc_flags |= RT2560_DATA_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); #if NBPFILTER > 0 if (ifp->if_bpf != NULL) bpf_mtap(ifp->if_bpf, m0, BPF_DIRECTION_OUT); Index: sys/dev/ic/rt2661.c =================================================================== RCS file: /cvs/src/sys/dev/ic/rt2661.c,v retrieving revision 1.84 diff -u -p -r1.84 rt2661.c --- sys/dev/ic/rt2661.c 4 Nov 2015 12:11:59 -0000 1.84 +++ sys/dev/ic/rt2661.c 12 Nov 2015 05:50:13 -0000 @@ -1952,15 +1952,16 @@ rt2661_start(struct ifnet *ifp) } else { if (ic->ic_state != IEEE80211_S_RUN) break; - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; if (sc->txq[0].queued >= RT2661_TX_RING_COUNT - 1) { + ifq_deq_rollback(&ifp->if_snd, m0); /* there is no place left in this ring */ ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); #if NBPFILTER > 0 if (ifp->if_bpf != NULL) bpf_mtap(ifp->if_bpf, m0, BPF_DIRECTION_OUT); Index: sys/dev/ic/rtw.c =================================================================== RCS file: /cvs/src/sys/dev/ic/rtw.c,v retrieving revision 1.92 diff -u -p -r1.92 rtw.c --- sys/dev/ic/rtw.c 4 Nov 2015 12:11:59 -0000 1.92 +++ sys/dev/ic/rtw.c 12 Nov 2015 05:50:13 -0000 @@ -2755,7 +2755,7 @@ rtw_dequeue(struct ifnet *ifp, struct rt *mp = NULL; - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) { DPRINTF(sc, RTW_DEBUG_XMIT, 
("%s: no frame ready\n", __func__)); @@ -2764,12 +2764,13 @@ rtw_dequeue(struct ifnet *ifp, struct rt if (rtw_txring_choose(sc, tsbp, tdbp, RTW_TXPRIMD) == -1) { DPRINTF(sc, RTW_DEBUG_XMIT, ("%s: no descriptor\n", __func__)); + ifq_deq_rollback(&ifp->if_snd, m0); *if_flagsp |= IFF_OACTIVE; sc->sc_if.if_timer = 1; return 0; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); if (m0 == NULL) { DPRINTF(sc, RTW_DEBUG_XMIT, ("%s: no frame/ring ready\n", __func__)); Index: sys/dev/ic/smc83c170.c =================================================================== RCS file: /cvs/src/sys/dev/ic/smc83c170.c,v retrieving revision 1.23 diff -u -p -r1.23 smc83c170.c --- sys/dev/ic/smc83c170.c 25 Oct 2015 12:48:46 -0000 1.23 +++ sys/dev/ic/smc83c170.c 12 Nov 2015 05:50:13 -0000 @@ -352,7 +352,7 @@ epic_start(struct ifnet *ifp) /* * Grab a packet off the queue. */ - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; m = NULL; @@ -380,12 +380,15 @@ epic_start(struct ifnet *ifp) bus_dmamap_unload(sc->sc_dmat, dmamap); MGETHDR(m, M_DONTWAIT, MT_DATA); - if (m == NULL) + if (m == NULL) { + ifq_deq_rollback(&ifp->if_snd, m0); break; + } if (m0->m_pkthdr.len > MHLEN) { MCLGET(m, M_DONTWAIT); if ((m->m_flags & M_EXT) == 0) { m_freem(m); + ifq_deq_rollback(&ifp->if_snd, m0); break; } } @@ -393,10 +396,12 @@ epic_start(struct ifnet *ifp) m->m_pkthdr.len = m->m_len = m0->m_pkthdr.len; error = bus_dmamap_load_mbuf(sc->sc_dmat, dmamap, m, BUS_DMA_WRITE|BUS_DMA_NOWAIT); - if (error) + if (error) { + ifq_deq_rollback(&ifp->if_snd, m0); break; + } } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); if (m != NULL) { m_freem(m0); m0 = m; Index: sys/dev/ic/smc91cxx.c =================================================================== RCS file: /cvs/src/sys/dev/ic/smc91cxx.c,v retrieving revision 1.42 diff -u -p -r1.42 smc91cxx.c --- sys/dev/ic/smc91cxx.c 25 Oct 2015 12:48:46 -0000 1.42 +++ sys/dev/ic/smc91cxx.c 12 Nov 2015 
05:50:13 -0000 @@ -548,7 +548,7 @@ smc91cxx_start(ifp) /* * Peek at the next packet. */ - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) return; @@ -568,7 +568,7 @@ smc91cxx_start(ifp) if ((len + pad) > (ETHER_MAX_LEN - ETHER_CRC_LEN)) { printf("%s: large packet discarded\n", sc->sc_dev.dv_xname); ifp->if_oerrors++; - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); m_freem(m); goto readcheck; } @@ -620,6 +620,7 @@ smc91cxx_start(ifp) ifp->if_timer = 5; ifp->if_flags |= IFF_OACTIVE; + ifq_deq_rollback(&ifp->if_snd, m); return; } @@ -645,7 +646,7 @@ smc91cxx_start(ifp) * Get the packet from the kernel. This will include the Ethernet * frame header, MAC address, etc. */ - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); /* * Push the packet out to the card. Index: sys/dev/ic/ti.c =================================================================== RCS file: /cvs/src/sys/dev/ic/ti.c,v retrieving revision 1.18 diff -u -p -r1.18 ti.c --- sys/dev/ic/ti.c 25 Oct 2015 12:48:46 -0000 1.18 +++ sys/dev/ic/ti.c 12 Nov 2015 05:50:13 -0000 @@ -1965,7 +1965,7 @@ ti_start(struct ifnet *ifp) prodidx = sc->ti_tx_saved_prodidx; while(sc->ti_cdata.ti_tx_chain[prodidx] == NULL) { - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; @@ -1980,12 +1980,13 @@ ti_start(struct ifnet *ifp) error = ti_encap_tigon2(sc, m_head, &prodidx); if (error) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; break; } /* now we are committed to transmit the packet */ - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); pkts++; /* Index: sys/dev/isa/if_ef_isapnp.c =================================================================== RCS file: /cvs/src/sys/dev/isa/if_ef_isapnp.c,v retrieving revision 1.31 diff -u -p -r1.31 if_ef_isapnp.c --- sys/dev/isa/if_ef_isapnp.c 25 Oct 2015 13:13:06 -0000 1.31 +++ sys/dev/isa/if_ef_isapnp.c 12 Nov 2015 05:50:13 -0000 
@@ -243,7 +243,7 @@ efstart(ifp) return; startagain: - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) return; @@ -254,7 +254,7 @@ startagain: if (len + pad > ETHER_MAX_LEN) { ifp->if_oerrors++; - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); m_freem(m0); goto startagain; } @@ -262,6 +262,7 @@ startagain: if (bus_space_read_2(iot, ioh, EF_W1_FREE_TX) < len + pad + 4) { bus_space_write_2(iot, ioh, EP_COMMAND, SET_TX_AVAIL_THRESH | ((len + pad) >> 2)); + ifq_deq_rollback(&ifp->if_snd, m0); ifp->if_flags |= IFF_OACTIVE; return; } else { @@ -277,7 +278,7 @@ startagain: bpf_mtap(ifp->if_bpf, m0, BPF_DIRECTION_OUT); #endif - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); if (m0 == NULL) /* XXX not needed */ return; Index: sys/dev/isa/if_ex.c =================================================================== RCS file: /cvs/src/sys/dev/isa/if_ex.c,v retrieving revision 1.41 diff -u -p -r1.41 if_ex.c --- sys/dev/isa/if_ex.c 25 Oct 2015 13:13:06 -0000 1.41 +++ sys/dev/isa/if_ex.c 12 Nov 2015 05:50:13 -0000 @@ -389,7 +389,7 @@ ex_start(struct ifnet *ifp) * more packets left, or the card cannot accept any more yet. 
*/ while (!(ifp->if_flags & IFF_OACTIVE)) { - IFQ_POLL(&ifp->if_snd, opkt); + opkt = ifq_deq_begin(&ifp->if_snd); if (opkt == NULL) break; @@ -414,7 +414,7 @@ ex_start(struct ifnet *ifp) avail = -i; DODEBUG(Sent_Pkts, printf("i=%d, avail=%d\n", i, avail);); if (avail >= len + XMT_HEADER_LEN) { - IFQ_DEQUEUE(&ifp->if_snd, opkt); + ifq_deq_commit(&ifp->if_snd, opkt); #ifdef EX_PSA_INTR /* @@ -519,6 +519,7 @@ ex_start(struct ifnet *ifp) ifp->if_opackets++; m_freem(opkt); } else { + ifq_deq_rollback(&ifp->if_snd, opkt); ifp->if_flags |= IFF_OACTIVE; DODEBUG(Status, printf("OACTIVE start\n");); } Index: sys/dev/pci/if_bce.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_bce.c,v retrieving revision 1.47 diff -u -p -r1.47 if_bce.c --- sys/dev/pci/if_bce.c 25 Oct 2015 13:04:28 -0000 1.47 +++ sys/dev/pci/if_bce.c 12 Nov 2015 05:50:13 -0000 @@ -535,7 +535,7 @@ bce_start(struct ifnet *ifp) while (txsfree > 0) { /* Grab a packet off the queue. */ - IFQ_POLL(&ifp->if_snd, m0); + IFQ_DEQUEUE(&ifp->if_snd, m0); if (m0 == NULL) break; @@ -546,9 +546,6 @@ bce_start(struct ifnet *ifp) (sc->bce_txsnext + BCE_NRXDESC) * MCLBYTES); ctrl = m0->m_pkthdr.len & CTRL_BC_MASK; ctrl |= CTRL_SOF | CTRL_EOF | CTRL_IOC; - - /* WE ARE NOW COMMITTED TO TRANSMITTING THE PACKET. */ - IFQ_DEQUEUE(&ifp->if_snd, m0); #if NBPFILTER > 0 /* Pass the packet to any BPF listeners. */ Index: sys/dev/pci/if_bge.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_bge.c,v retrieving revision 1.372 diff -u -p -r1.372 if_bge.c --- sys/dev/pci/if_bge.c 10 Nov 2015 20:23:50 -0000 1.372 +++ sys/dev/pci/if_bge.c 12 Nov 2015 05:50:13 -0000 @@ -3985,7 +3985,7 @@ bge_cksum_pad(struct mbuf *m) * pointers to descriptors. 
*/ int -bge_encap(struct bge_softc *sc, struct mbuf *m_head, int *txinc) +bge_encap(struct bge_softc *sc, struct mbuf *m, int *txinc) { struct bge_tx_bd *f = NULL; u_int32_t frag, cur; @@ -3995,20 +3995,20 @@ bge_encap(struct bge_softc *sc, struct m cur = frag = (sc->bge_tx_prodidx + *txinc) % BGE_TX_RING_CNT; - if (m_head->m_pkthdr.csum_flags) { - if (m_head->m_pkthdr.csum_flags & M_IPV4_CSUM_OUT) + if (m->m_pkthdr.csum_flags) { + if (m->m_pkthdr.csum_flags & M_IPV4_CSUM_OUT) csum_flags |= BGE_TXBDFLAG_IP_CSUM; - if (m_head->m_pkthdr.csum_flags & (M_TCP_CSUM_OUT | - M_UDP_CSUM_OUT)) { + if (m->m_pkthdr.csum_flags & + (M_TCP_CSUM_OUT | M_UDP_CSUM_OUT)) { csum_flags |= BGE_TXBDFLAG_TCP_UDP_CSUM; - if (m_head->m_pkthdr.len < ETHER_MIN_NOPAD && - bge_cksum_pad(m_head) != 0) + if (m->m_pkthdr.len < ETHER_MIN_NOPAD && + bge_cksum_pad(m) != 0) return (ENOBUFS); } } if (sc->bge_flags & BGE_JUMBO_FRAME && - m_head->m_pkthdr.len > ETHER_MAX_LEN) + m->m_pkthdr.len > ETHER_MAX_LEN) csum_flags |= BGE_TXBDFLAG_JUMBO_FRAME; if (!(BGE_CHIPREV(sc->bge_chipid) == BGE_CHIPREV_5700_BX)) @@ -4019,7 +4019,7 @@ bge_encap(struct bge_softc *sc, struct m * less than eight bytes. If we encounter a teeny mbuf * at the end of a chain, we can pad. Otherwise, copy. */ - if (bge_compact_dma_runt(m_head) != 0) + if (bge_compact_dma_runt(m) != 0) return (ENOBUFS); doit: @@ -4030,13 +4030,13 @@ doit: * the fragment pointers. Stop when we run out * of fragments or hit the end of the mbuf chain. */ - switch (bus_dmamap_load_mbuf(sc->bge_dmatag, dmamap, m_head, + switch (bus_dmamap_load_mbuf(sc->bge_dmatag, dmamap, m, BUS_DMA_NOWAIT)) { case 0: break; case EFBIG: - if (m_defrag(m_head, M_DONTWAIT) == 0 && - bus_dmamap_load_mbuf(sc->bge_dmatag, dmamap, m_head, + if (m_defrag(m, M_DONTWAIT) == 0 && + bus_dmamap_load_mbuf(sc->bge_dmatag, dmamap, m, BUS_DMA_NOWAIT) == 0) break; @@ -4045,10 +4045,6 @@ doit: return (ENOBUFS); } - /* Check if we have enough free send BDs. 
*/ - if (sc->bge_txcnt + *txinc + dmamap->dm_nsegs >= BGE_TX_RING_CNT) - goto fail_unload; - for (i = 0; i < dmamap->dm_nsegs; i++) { f = &sc->bge_rdata->bge_tx_ring[frag]; if (sc->bge_cdata.bge_tx_chain[frag] != NULL) @@ -4058,9 +4054,9 @@ doit: f->bge_flags = csum_flags; f->bge_vlan_tag = 0; #if NVLAN > 0 - if (m_head->m_flags & M_VLANTAG) { + if (m->m_flags & M_VLANTAG) { f->bge_flags |= BGE_TXBDFLAG_VLAN_TAG; - f->bge_vlan_tag = m_head->m_pkthdr.ether_vtag; + f->bge_vlan_tag = m->m_pkthdr.ether_vtag; } #endif cur = frag; @@ -4077,7 +4073,7 @@ doit: goto fail_unload; sc->bge_rdata->bge_tx_ring[cur].bge_flags |= BGE_TXBDFLAG_END; - sc->bge_cdata.bge_tx_chain[cur] = m_head; + sc->bge_cdata.bge_tx_chain[cur] = m; sc->bge_cdata.bge_tx_map[cur] = dmamap; *txinc += dmamap->dm_nsegs; @@ -4098,7 +4094,7 @@ void bge_start(struct ifnet *ifp) { struct bge_softc *sc; - struct mbuf *m_head; + struct mbuf *m; int txinc; sc = ifp->if_softc; @@ -4110,25 +4106,28 @@ bge_start(struct ifnet *ifp) txinc = 0; while (1) { - IFQ_POLL(&ifp->if_snd, m_head); - if (m_head == NULL) + /* Check if we have enough free send BDs. 
*/ + if (sc->bge_txcnt + txinc + BGE_NTXSEG >= BGE_TX_RING_CNT) { + ifp->if_flags |= IFF_OACTIVE; break; + } - if (bge_encap(sc, m_head, &txinc)) + IFQ_DEQUEUE(&ifp->if_snd, m); + if (m == NULL) break; - /* now we are committed to transmit the packet */ - IFQ_DEQUEUE(&ifp->if_snd, m_head); + if (bge_encap(sc, m, &txinc) != 0) { + m_freem(m); + continue; + } #if NBPFILTER > 0 if (ifp->if_bpf) - bpf_mtap_ether(ifp->if_bpf, m_head, BPF_DIRECTION_OUT); + bpf_mtap_ether(ifp->if_bpf, m, BPF_DIRECTION_OUT); #endif } if (txinc != 0) { - int txcnt; - /* Transmit */ sc->bge_tx_prodidx = (sc->bge_tx_prodidx + txinc) % BGE_TX_RING_CNT; @@ -4137,9 +4136,7 @@ bge_start(struct ifnet *ifp) bge_writembx(sc, BGE_MBX_TX_HOST_PROD0_LO, sc->bge_tx_prodidx); - txcnt = atomic_add_int_nv(&sc->bge_txcnt, txinc); - if (txcnt > BGE_TX_RING_CNT - 16) - ifp->if_flags |= IFF_OACTIVE; + atomic_add_int(&sc->bge_txcnt, txinc); /* * Set a timeout in case the chip goes out to lunch. Index: sys/dev/pci/if_bnx.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_bnx.c,v retrieving revision 1.115 diff -u -p -r1.115 if_bnx.c --- sys/dev/pci/if_bnx.c 25 Oct 2015 13:04:28 -0000 1.115 +++ sys/dev/pci/if_bnx.c 12 Nov 2015 05:50:14 -0000 @@ -5006,7 +5006,7 @@ bnx_start(struct ifnet *ifp) */ while (sc->used_tx_bd < sc->max_tx_bd) { /* Check for any frames to send. */ - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; @@ -5016,6 +5016,7 @@ bnx_start(struct ifnet *ifp) * for the NIC to drain the chain. */ if (bnx_tx_encap(sc, m_head)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; DBPRINT(sc, BNX_INFO_SEND, "TX chain is closed for " "business! 
Total tx_bd used = %d\n", @@ -5023,7 +5024,7 @@ bnx_start(struct ifnet *ifp) break; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); count++; #if NBPFILTER > 0 Index: sys/dev/pci/if_cas.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_cas.c,v retrieving revision 1.43 diff -u -p -r1.43 if_cas.c --- sys/dev/pci/if_cas.c 25 Oct 2015 13:04:28 -0000 1.43 +++ sys/dev/pci/if_cas.c 12 Nov 2015 05:50:14 -0000 @@ -1863,7 +1863,7 @@ cas_start(struct ifnet *ifp) bix = sc->sc_tx_prod; while (sc->sc_txd[bix].sd_mbuf == NULL) { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; @@ -1881,11 +1881,12 @@ cas_start(struct ifnet *ifp) * or fail... */ if (cas_encap(sc, m, &bix)) { + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); ifp->if_timer = 5; } Index: sys/dev/pci/if_de.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_de.c,v retrieving revision 1.126 diff -u -p -r1.126 if_de.c --- sys/dev/pci/if_de.c 4 Nov 2015 00:10:50 -0000 1.126 +++ sys/dev/pci/if_de.c 12 Nov 2015 05:50:14 -0000 @@ -3800,12 +3800,7 @@ tulip_txput(tulip_softc_t * const sc, st int segcnt, freedescs; u_int32_t d_status; bus_dmamap_t map; - int error; struct ifnet *ifp = &sc->tulip_if; -#ifdef DIAGNOSTIC - struct mbuf *ombuf = m; -#endif - int compressed = 0; #if defined(TULIP_DEBUG) if ((sc->tulip_cmdmode & TULIP_CMD_TXRUN) == 0) { @@ -3858,47 +3853,24 @@ tulip_txput(tulip_softc_t * const sc, st #endif goto finish; } - error = bus_dmamap_load_mbuf(sc->tulip_dmatag, map, m, BUS_DMA_NOWAIT); - if (error != 0) { - if (error == EFBIG) { - /* - * The packet exceeds the number of transmit buffer - * entries that we can use for one packet, so we have - * to recopy it into one mbuf and then try again. 
- */ - struct mbuf *tmp; - if (!notonqueue) { -#ifdef DIAGNOSTIC - if (IFQ_IS_EMPTY(&ifp->if_snd)) - panic("%s: if_snd queue empty", ifp->if_xname); -#endif - IFQ_DEQUEUE(&ifp->if_snd, tmp); -#ifdef DIAGNOSTIC - if (tmp != ombuf) - panic("tulip_txput: different mbuf dequeued!"); -#endif - } - compressed = 1; - m = tulip_mbuf_compress(m); - if (m == NULL) { -#if defined(TULIP_DEBUG) - sc->tulip_dbg.dbg_txput_finishes[2]++; -#endif - tulip_free_txmap(sc, map); - goto finish; - } - error = bus_dmamap_load_mbuf(sc->tulip_dmatag, map, m, BUS_DMA_NOWAIT); - } - if (error != 0) { - printf(TULIP_PRINTF_FMT ": unable to load tx map, " - "error = %d\n", TULIP_PRINTF_ARGS, error); -#if defined(TULIP_DEBUG) - sc->tulip_dbg.dbg_txput_finishes[3]++; -#endif - tulip_free_txmap(sc, map); - goto finish; - } + switch (bus_dmamap_load_mbuf(sc->tulip_dmatag, map, m, BUS_DMA_NOWAIT)) { + case 0: + break; + case EFBIG: + /* + * The packet exceeds the number of transmit buffer + * entries that we can use for one packet, so we have + * to recopy it into one mbuf and then try again. + */ + if (m_defrag(m, M_DONTWAIT) == 0 && + bus_dmamap_load_mbuf(sc->tulip_dmatag, map, m, BUS_DMA_NOWAIT) == 0) + break; + /* FALLTHROUGH */ + default: + tulip_free_txmap(sc, map); + goto finish; } + if ((freedescs -= (map->dm_nsegs + 1) / 2) <= 0 /* * See if there's any unclaimed space in the transmit ring. @@ -3949,19 +3921,8 @@ tulip_txput(tulip_softc_t * const sc, st * The descriptors have been filled in. Now get ready * to transmit. 
*/ - if (!compressed && !notonqueue) { - /* remove the mbuf from the queue */ - struct mbuf *tmp; -#ifdef DIAGNOSTIC - if (IFQ_IS_EMPTY(&ifp->if_snd)) - panic("%s: if_snd queue empty", ifp->if_xname); -#endif - IFQ_DEQUEUE(&ifp->if_snd, tmp); -#ifdef DIAGNOSTIC - if (tmp != ombuf) - panic("tulip_txput: different mbuf dequeued!"); -#endif - } + if (!notonqueue) + ifq_deq_commit(&ifp->if_snd, m); ml_enqueue(&sc->tulip_txq, m); m = NULL; @@ -4198,21 +4159,21 @@ tulip_ifstart(struct ifnet * const ifp) { TULIP_PERFSTART(ifstart) tulip_softc_t * const sc = TULIP_IFP_TO_SOFTC(ifp); + struct mbuf *m, *m0; if (sc->tulip_if.if_flags & IFF_RUNNING) { if ((sc->tulip_flags & (TULIP_WANTSETUP|TULIP_TXPROBE_ACTIVE)) == TULIP_WANTSETUP) tulip_txput_setup(sc); - while (!IFQ_IS_EMPTY(&sc->tulip_if.if_snd)) { - struct mbuf *m, *m0; - IFQ_POLL(&sc->tulip_if.if_snd, m); + for (;;) { + m = ifq_deq_begin(&sc->tulip_if.if_snd); if (m == NULL) break; - if ((m0 = tulip_txput(sc, m, 0)) != NULL) { - if (m0 != m) - /* should not happen */ - printf("tulip_if_start: txput failed!\n"); + m0 = tulip_txput(sc, m, 0); + if (m0 != NULL) { + KASSERT(m == m0); + ifq_deq_rollback(&sc->tulip_if.if_snd, m); break; } } Index: sys/dev/pci/if_em.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_em.c,v retrieving revision 1.310 diff -u -p -r1.310 if_em.c --- sys/dev/pci/if_em.c 29 Oct 2015 03:19:42 -0000 1.310 +++ sys/dev/pci/if_em.c 12 Nov 2015 05:50:14 -0000 @@ -605,16 +605,17 @@ em_start(struct ifnet *ifp) } for (;;) { - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; if (em_encap(sc, m_head)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 /* Send a copy of the frame to the BPF listener */ Index: sys/dev/pci/if_ipw.c 
=================================================================== RCS file: /cvs/src/sys/dev/pci/if_ipw.c,v retrieving revision 1.110 diff -u -p -r1.110 if_ipw.c --- sys/dev/pci/if_ipw.c 25 Oct 2015 13:04:28 -0000 1.110 +++ sys/dev/pci/if_ipw.c 12 Nov 2015 05:50:14 -0000 @@ -1299,19 +1299,20 @@ ipw_start(struct ifnet *ifp) return; for (;;) { - IFQ_POLL(&ifp->if_snd, m); - if (m == NULL) - break; - if (sc->txfree < 1 + IPW_MAX_NSEG) { ifp->if_flags |= IFF_OACTIVE; break; } + IFQ_DEQUEUE(&ifp->if_snd, m); + if (m == NULL) + break; + #if NBPFILTER > 0 if (ifp->if_bpf != NULL) bpf_mtap(ifp->if_bpf, m, BPF_DIRECTION_OUT); #endif + m = ieee80211_encap(ifp, m, &ni); if (m == NULL) continue; Index: sys/dev/pci/if_iwi.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_iwi.c,v retrieving revision 1.127 diff -u -p -r1.127 if_iwi.c --- sys/dev/pci/if_iwi.c 25 Oct 2015 13:04:28 -0000 1.127 +++ sys/dev/pci/if_iwi.c 12 Nov 2015 05:50:14 -0000 @@ -1389,15 +1389,15 @@ iwi_start(struct ifnet *ifp) return; for (;;) { - IFQ_POLL(&ifp->if_snd, m0); - if (m0 == NULL) - break; - - if (sc->txq[0].queued >= IWI_TX_RING_COUNT - 8) { + if (sc->txq[0].queued + IWI_MAX_NSEG + 2 >= IWI_TX_RING_COUNT) { ifp->if_flags |= IFF_OACTIVE; break; } + IFQ_DEQUEUE(&ifp->if_snd, m0); + if (m0 == NULL) + break; + #if NBPFILTER > 0 if (ifp->if_bpf != NULL) bpf_mtap(ifp->if_bpf, m0, BPF_DIRECTION_OUT); Index: sys/dev/pci/if_ix.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_ix.c,v retrieving revision 1.127 diff -u -p -r1.127 if_ix.c --- sys/dev/pci/if_ix.c 4 Nov 2015 00:20:35 -0000 1.127 +++ sys/dev/pci/if_ix.c 12 Nov 2015 05:50:14 -0000 @@ -378,16 +378,17 @@ ixgbe_start(struct ifnet * ifp) BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); for (;;) { - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; if (ixgbe_encap(txr, m_head)) { + 
ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 if (ifp->if_bpf) Index: sys/dev/pci/if_ixgb.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_ixgb.c,v retrieving revision 1.66 diff -u -p -r1.66 if_ixgb.c --- sys/dev/pci/if_ixgb.c 25 Oct 2015 13:04:28 -0000 1.66 +++ sys/dev/pci/if_ixgb.c 12 Nov 2015 05:50:14 -0000 @@ -283,16 +283,17 @@ ixgb_start(struct ifnet *ifp) BUS_DMASYNC_POSTREAD | BUS_DMASYNC_POSTWRITE); for (;;) { - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; if (ixgb_encap(sc, m_head)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 /* Send a copy of the frame to the BPF listener */ Index: sys/dev/pci/if_lge.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_lge.c,v retrieving revision 1.68 diff -u -p -r1.68 if_lge.c --- sys/dev/pci/if_lge.c 25 Oct 2015 13:04:28 -0000 1.68 +++ sys/dev/pci/if_lge.c 12 Nov 2015 05:50:14 -0000 @@ -956,17 +956,18 @@ lge_start(struct ifnet *ifp) if (CSR_READ_1(sc, LGE_TXCMDFREE_8BIT) == 0) break; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; if (lge_encap(sc, m_head, &idx)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; break; } /* now we are committed to transmit the packet */ - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); pkts++; #if NBPFILTER > 0 Index: sys/dev/pci/if_lii.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_lii.c,v retrieving revision 1.38 diff -u -p -r1.38 if_lii.c --- sys/dev/pci/if_lii.c 25 Oct 2015 13:04:28 -0000 1.38 +++ 
sys/dev/pci/if_lii.c 12 Nov 2015 05:50:14 -0000 @@ -798,12 +798,13 @@ lii_start(struct ifnet *ifp) return; for (;;) { - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; if (!sc->sc_free_tx_slots || lii_free_tx_space(sc) < m0->m_pkthdr.len) { + ifq_deq_rollback(&ifp->if_snd, m0); ifp->if_flags |= IFF_OACTIVE; break; } @@ -819,7 +820,7 @@ lii_start(struct ifnet *ifp) LII_WRITE_2(sc, LII_MB_TXD_WR_IDX, sc->sc_txd_cur/4); - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); #if NBPFILTER > 0 if (ifp->if_bpf != NULL) Index: sys/dev/pci/if_msk.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_msk.c,v retrieving revision 1.117 diff -u -p -r1.117 if_msk.c --- sys/dev/pci/if_msk.c 25 Oct 2015 13:04:28 -0000 1.117 +++ sys/dev/pci/if_msk.c 12 Nov 2015 05:50:14 -0000 @@ -1545,7 +1545,7 @@ msk_start(struct ifnet *ifp) DPRINTFN(2, ("msk_start\n")); while (sc_if->sk_cdata.sk_tx_chain[idx].sk_mbuf == NULL) { - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; @@ -1555,12 +1555,13 @@ msk_start(struct ifnet *ifp) * for the NIC to drain the ring. 
*/ if (msk_encap(sc_if, m_head, &idx)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; break; } /* now we are committed to transmit the packet */ - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); pkts++; /* Index: sys/dev/pci/if_nep.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_nep.c,v retrieving revision 1.20 diff -u -p -r1.20 if_nep.c --- sys/dev/pci/if_nep.c 25 Oct 2015 13:04:28 -0000 1.20 +++ sys/dev/pci/if_nep.c 12 Nov 2015 05:50:14 -0000 @@ -1877,17 +1877,18 @@ nep_start(struct ifnet *ifp) idx = sc->sc_tx_prod; for (;;) { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; if (sc->sc_tx_cnt >= (NEP_NTXDESC - NEP_NTXSEGS)) { + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } /* Now we are committed to transmit the packet. */ - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); if (nep_encap(sc, &m, &idx)) break; Index: sys/dev/pci/if_nfe.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_nfe.c,v retrieving revision 1.112 diff -u -p -r1.112 if_nfe.c --- sys/dev/pci/if_nfe.c 25 Oct 2015 13:04:28 -0000 1.112 +++ sys/dev/pci/if_nfe.c 12 Nov 2015 05:50:14 -0000 @@ -979,17 +979,18 @@ nfe_start(struct ifnet *ifp) return; for (;;) { - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; if (nfe_encap(sc, m0) != 0) { + ifq_deq_rollback(&ifp->if_snd, m0); ifp->if_flags |= IFF_OACTIVE; break; } /* packet put in h/w queue, remove from s/w queue */ - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); #if NBPFILTER > 0 if (ifp->if_bpf != NULL) Index: sys/dev/pci/if_nge.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_nge.c,v retrieving revision 1.86 diff -u -p -r1.86 if_nge.c --- sys/dev/pci/if_nge.c 25 Oct 2015 13:04:28 -0000 1.86 +++ 
sys/dev/pci/if_nge.c 12 Nov 2015 05:50:14 -0000 @@ -1413,17 +1413,18 @@ nge_start(struct ifnet *ifp) return; while(sc->nge_ldata->nge_tx_list[idx].nge_mbuf == NULL) { - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; if (nge_encap(sc, m_head, &idx)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; break; } /* now we are committed to transmit the packet */ - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); pkts++; #if NBPFILTER > 0 Index: sys/dev/pci/if_nxe.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_nxe.c,v retrieving revision 1.68 diff -u -p -r1.68 if_nxe.c --- sys/dev/pci/if_nxe.c 25 Oct 2015 13:04:28 -0000 1.68 +++ sys/dev/pci/if_nxe.c 12 Nov 2015 05:50:14 -0000 @@ -1322,17 +1322,18 @@ nxe_start(struct ifnet *ifp) bzero(txd, sizeof(struct nxe_tx_desc)); do { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; pkt = nxe_pkt_get(sc->sc_tx_pkts); if (pkt == NULL) { + ifq_deq_rollback(&ifp->if_snd, m); SET(ifp->if_flags, IFF_OACTIVE); break; } - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); dmap = pkt->pkt_dmap; m = nxe_load_pkt(sc, dmap, m); Index: sys/dev/pci/if_pcn.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_pcn.c,v retrieving revision 1.38 diff -u -p -r1.38 if_pcn.c --- sys/dev/pci/if_pcn.c 25 Oct 2015 13:04:28 -0000 1.38 +++ sys/dev/pci/if_pcn.c 12 Nov 2015 05:50:14 -0000 @@ -833,14 +833,16 @@ pcn_start(struct ifnet *ifp) */ for (;;) { /* Grab a packet off the queue. */ - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; m = NULL; /* Get a work queue entry.
*/ - if (sc->sc_txsfree == 0) + if (sc->sc_txsfree == 0) { + ifq_deq_rollback(&ifp->if_snd, m0); break; + } txs = &sc->sc_txsoft[sc->sc_txsnext]; dmamap = txs->txs_dmamap; @@ -854,11 +856,14 @@ pcn_start(struct ifnet *ifp) if (bus_dmamap_load_mbuf(sc->sc_dmat, dmamap, m0, BUS_DMA_WRITE|BUS_DMA_NOWAIT) != 0) { MGETHDR(m, M_DONTWAIT, MT_DATA); - if (m == NULL) + if (m == NULL) { + ifq_deq_rollback(&ifp->if_snd, m0); break; + } if (m0->m_pkthdr.len > MHLEN) { MCLGET(m, M_DONTWAIT); if ((m->m_flags & M_EXT) == 0) { + ifq_deq_rollback(&ifp->if_snd, m0); m_freem(m); break; } @@ -867,8 +872,10 @@ pcn_start(struct ifnet *ifp) m->m_pkthdr.len = m->m_len = m0->m_pkthdr.len; error = bus_dmamap_load_mbuf(sc->sc_dmat, dmamap, m, BUS_DMA_WRITE|BUS_DMA_NOWAIT); - if (error) + if (error) { + ifq_deq_rollback(&ifp->if_snd, m0); break; + } } /* @@ -892,10 +899,11 @@ pcn_start(struct ifnet *ifp) bus_dmamap_unload(sc->sc_dmat, dmamap); if (m != NULL) m_freem(m); + ifq_deq_rollback(&ifp->if_snd, m0); break; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); if (m != NULL) { m_freem(m0); m0 = m; Index: sys/dev/pci/if_se.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_se.c,v retrieving revision 1.14 diff -u -p -r1.14 if_se.c --- sys/dev/pci/if_se.c 25 Oct 2015 13:04:28 -0000 1.14 +++ sys/dev/pci/if_se.c 12 Nov 2015 05:50:14 -0000 @@ -1217,17 +1217,18 @@ se_start(struct ifnet *ifp) i = cd->se_tx_prod; while (cd->se_tx_mbuf[i] == NULL) { - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; if (se_encap(sc, m_head, &i) != 0) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; break; } /* now we are committed to transmit the packet */ - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); queued++; /* Index: sys/dev/pci/if_sis.c =================================================================== RCS file: 
/cvs/src/sys/dev/pci/if_sis.c,v retrieving revision 1.128 diff -u -p -r1.128 if_sis.c --- sys/dev/pci/if_sis.c 25 Oct 2015 13:04:28 -0000 1.128 +++ sys/dev/pci/if_sis.c 12 Nov 2015 05:50:14 -0000 @@ -1677,17 +1677,18 @@ sis_start(struct ifnet *ifp) return; while(sc->sis_ldata->sis_tx_list[idx].sis_mbuf == NULL) { - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; if (sis_encap(sc, m_head, &idx)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; break; } /* now we are committed to transmit the packet */ - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); queued++; Index: sys/dev/pci/if_sk.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_sk.c,v retrieving revision 1.178 diff -u -p -r1.178 if_sk.c --- sys/dev/pci/if_sk.c 25 Oct 2015 13:04:28 -0000 1.178 +++ sys/dev/pci/if_sk.c 12 Nov 2015 05:50:14 -0000 @@ -1499,7 +1499,7 @@ sk_start(struct ifnet *ifp) DPRINTFN(2, ("sk_start\n")); while (sc_if->sk_cdata.sk_tx_chain[idx].sk_mbuf == NULL) { - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) break; @@ -1509,12 +1509,13 @@ sk_start(struct ifnet *ifp) * for the NIC to drain the ring. */ if (sk_encap(sc_if, m_head, &idx)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; break; } /* now we are committed to transmit the packet */ - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); pkts++; /* Index: sys/dev/pci/if_stge.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_stge.c,v retrieving revision 1.62 diff -u -p -r1.62 if_stge.c --- sys/dev/pci/if_stge.c 25 Oct 2015 13:04:28 -0000 1.62 +++ sys/dev/pci/if_stge.c 12 Nov 2015 05:50:15 -0000 @@ -487,7 +487,7 @@ stge_start(struct ifnet *ifp) /* * Grab a packet off the queue. 
*/ - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; @@ -495,8 +495,10 @@ stge_start(struct ifnet *ifp) * Leave one unused descriptor at the end of the * list to prevent wrapping completely around. */ - if (sc->sc_txpending == (STGE_NTXDESC - 1)) + if (sc->sc_txpending == (STGE_NTXDESC - 1)) { + ifq_deq_rollback(&ifp->if_snd, m0); break; + } /* * Get the last and next available transmit descriptor. @@ -522,17 +524,18 @@ stge_start(struct ifnet *ifp) printf("%s: Tx packet consumes too many " "DMA segments (%u), dropping...\n", sc->sc_dev.dv_xname, dmamap->dm_nsegs); - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); m_freem(m0); continue; } /* * Short on resources, just stop for now. */ + ifq_deq_rollback(&ifp->if_snd, m0); break; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); /* * WE ARE NOW COMMITTED TO TRANSMITTING THE PACKET. Index: sys/dev/pci/if_tht.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_tht.c,v retrieving revision 1.134 diff -u -p -r1.134 if_tht.c --- sys/dev/pci/if_tht.c 25 Oct 2015 13:04:28 -0000 1.134 +++ sys/dev/pci/if_tht.c 12 Nov 2015 05:50:15 -0000 @@ -1111,17 +1111,18 @@ tht_start(struct ifnet *ifp) tht_fifo_pre(sc, &sc->sc_txt); do { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; pkt = tht_pkt_get(&sc->sc_tx_list); if (pkt == NULL) { + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); if (tht_load_pkt(sc, pkt, m) != 0) { m_freem(m); tht_pkt_put(&sc->sc_tx_list, pkt); Index: sys/dev/pci/if_txp.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_txp.c,v retrieving revision 1.117 diff -u -p -r1.117 if_txp.c --- sys/dev/pci/if_txp.c 25 Oct 2015 13:04:28 -0000 1.117 +++ sys/dev/pci/if_txp.c 12 Nov 2015 05:50:15 -0000 @@ -1286,7 
+1286,7 @@ txp_start(struct ifnet *ifp) cnt = r->r_cnt; while (1) { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; mnew = NULL; @@ -1311,7 +1311,7 @@ txp_start(struct ifnet *ifp) } m_copydata(m, 0, m->m_pkthdr.len, mtod(mnew, caddr_t)); mnew->m_pkthdr.len = mnew->m_len = m->m_pkthdr.len; - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); m_freem(m); m = mnew; if (bus_dmamap_load_mbuf(sc->sc_dmat, sd->sd_map, m, @@ -1398,7 +1398,7 @@ txp_start(struct ifnet *ifp) * the packet. */ if (mnew == NULL) - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); ifp->if_timer = 5; @@ -1440,6 +1440,7 @@ txp_start(struct ifnet *ifp) oactive: bus_dmamap_unload(sc->sc_dmat, sd->sd_map); oactive1: + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; r->r_prod = firstprod; r->r_cnt = firstcnt; Index: sys/dev/pci/if_vge.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_vge.c,v retrieving revision 1.65 diff -u -p -r1.65 if_vge.c --- sys/dev/pci/if_vge.c 25 Oct 2015 13:04:28 -0000 1.65 +++ sys/dev/pci/if_vge.c 12 Nov 2015 05:50:15 -0000 @@ -620,8 +620,8 @@ vge_allocmem(struct vge_softc *sc) /* Create DMA maps for TX buffers */ for (i = 0; i < VGE_TX_DESC_CNT; i++) { - error = bus_dmamap_create(sc->sc_dmat, MCLBYTES * nseg, nseg, - MCLBYTES, 0, BUS_DMA_ALLOCNOW, + error = bus_dmamap_create(sc->sc_dmat, MCLBYTES * nseg, + VGE_TX_FRAGS, MCLBYTES, 0, BUS_DMA_ALLOCNOW, &sc->vge_ldata.vge_tx_dmamap[i]); if (error) { printf("%s: can't create DMA map for TX\n", @@ -1321,13 +1321,12 @@ vge_intr(void *arg) int vge_encap(struct vge_softc *sc, struct mbuf *m_head, int idx) { - struct ifnet *ifp = &sc->arpcom.ac_if; bus_dmamap_t txmap; struct vge_tx_desc *d = NULL; struct vge_tx_frag *f; - struct mbuf *mnew = NULL; int error, frag; u_int32_t vge_flags; + unsigned int len; vge_flags = 0; @@ -1339,14 +1338,19 @@ vge_encap(struct vge_softc *sc, struct m vge_flags |= 
VGE_TDCTL_UDPCSUM; txmap = sc->vge_ldata.vge_tx_dmamap[idx]; -repack: error = bus_dmamap_load_mbuf(sc->sc_dmat, txmap, m_head, BUS_DMA_NOWAIT); - if (error) { - printf("%s: can't map mbuf (error %d)\n", - sc->vge_dev.dv_xname, error); - return (ENOBUFS); - } + switch (error) { + case 0: + break; + case EFBIG: /* mbuf chain is too fragmented */ + if ((error = m_defrag(m_head, M_DONTWAIT)) == 0 && + (error = bus_dmamap_load_mbuf(sc->sc_dmat, txmap, m_head, + BUS_DMA_NOWAIT)) == 0) + break; + default: + return (error); + } d = &sc->vge_ldata.vge_tx_list[idx]; /* If owned by chip, fail */ @@ -1354,40 +1358,12 @@ repack: return (ENOBUFS); for (frag = 0; frag < txmap->dm_nsegs; frag++) { - /* Check if we have used all 7 fragments. */ - if (frag == VGE_TX_FRAGS) - break; f = &d->vge_frag[frag]; f->vge_buflen = htole16(VGE_BUFLEN(txmap->dm_segs[frag].ds_len)); f->vge_addrlo = htole32(VGE_ADDR_LO(txmap->dm_segs[frag].ds_addr)); f->vge_addrhi = htole16(VGE_ADDR_HI(txmap->dm_segs[frag].ds_addr) & 0xFFFF); } - /* - * We used up all 7 fragments! Now what we have to do is - * copy the data into a mbuf cluster and map that. 
- */ - if (frag == VGE_TX_FRAGS) { - MGETHDR(mnew, M_DONTWAIT, MT_DATA); - if (mnew == NULL) - return (ENOBUFS); - - if (m_head->m_pkthdr.len > MHLEN) { - MCLGET(mnew, M_DONTWAIT); - if (!(mnew->m_flags & M_EXT)) { - m_freem(mnew); - return (ENOBUFS); - } - } - m_copydata(m_head, 0, m_head->m_pkthdr.len, - mtod(mnew, caddr_t)); - mnew->m_pkthdr.len = mnew->m_len = m_head->m_pkthdr.len; - IFQ_DEQUEUE(&ifp->if_snd, m_head); - m_freem(m_head); - m_head = mnew; - goto repack; - } - /* This chip does not do auto-padding */ if (m_head->m_pkthdr.len < VGE_MIN_FRAMELEN) { f = &d->vge_frag[frag]; @@ -1396,19 +1372,21 @@ repack: m_head->m_pkthdr.len)); f->vge_addrlo = htole32(VGE_ADDR_LO(txmap->dm_segs[0].ds_addr)); f->vge_addrhi = htole16(VGE_ADDR_HI(txmap->dm_segs[0].ds_addr) & 0xFFFF); - m_head->m_pkthdr.len = VGE_MIN_FRAMELEN; + len = VGE_MIN_FRAMELEN; frag++; - } + } else + len = m_head->m_pkthdr.len; + /* For some reason, we need to tell the card fragment + 1 */ frag++; bus_dmamap_sync(sc->sc_dmat, txmap, 0, txmap->dm_mapsize, BUS_DMASYNC_PREWRITE); - d->vge_sts = htole32(m_head->m_pkthdr.len << 16); + d->vge_sts = htole32(len << 16); d->vge_ctl = htole32(vge_flags|(frag << 28) | VGE_TD_LS_NORM); - if (m_head->m_pkthdr.len > ETHERMTU + ETHER_HDR_LEN) + if (len > ETHERMTU + ETHER_HDR_LEN) d->vge_ctl |= htole32(VGE_TDCTL_JUMBO); #if NVLAN > 0 @@ -1425,10 +1403,6 @@ repack: sc->vge_ldata.vge_tx_list[idx].vge_sts |= htole32(VGE_TDSTS_OWN); idx++; - if (mnew == NULL) { - /* if mbuf is coalesced, it is already dequeued */ - IFQ_DEQUEUE(&ifp->if_snd, m_head); - } return (0); } @@ -1456,11 +1430,21 @@ vge_start(struct ifnet *ifp) if (pidx < 0) pidx = VGE_TX_DESC_CNT - 1; - while (sc->vge_ldata.vge_tx_mbuf[idx] == NULL) { - IFQ_POLL(&ifp->if_snd, m_head); + for (;;) { + if (sc->vge_ldata.vge_tx_mbuf[idx] != NULL) { + ifp->if_flags |= IFF_OACTIVE; + break; + } + + IFQ_DEQUEUE(&ifp->if_snd, m_head); if (m_head == NULL) break; + if (vge_encap(sc, m_head, idx)) { + m_freem(m_head); 
+ continue; + } + /* * If there's a BPF listener, bounce a copy of this frame * to him. @@ -1469,11 +1453,6 @@ vge_start(struct ifnet *ifp) if (ifp->if_bpf) bpf_mtap_ether(ifp->if_bpf, m_head, BPF_DIRECTION_OUT); #endif - - if (vge_encap(sc, m_head, idx)) { - ifp->if_flags |= IFF_OACTIVE; - break; - } sc->vge_ldata.vge_tx_list[pidx].vge_frag[0].vge_buflen |= htole16(VGE_TXDESC_Q); Index: sys/dev/pci/if_vic.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_vic.c,v retrieving revision 1.92 diff -u -p -r1.92 if_vic.c --- sys/dev/pci/if_vic.c 25 Oct 2015 13:04:28 -0000 1.92 +++ sys/dev/pci/if_vic.c 12 Nov 2015 05:50:15 -0000 @@ -1053,12 +1053,13 @@ vic_start(struct ifnet *ifp) break; } - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; idx = sc->sc_data->vd_tx_nextidx; if (idx >= sc->sc_data->vd_tx_length) { + ifq_deq_rollback(&ifp->if_snd, m); printf("%s: tx idx is corrupt\n", DEVNAME(sc)); ifp->if_oerrors++; break; @@ -1068,6 +1069,7 @@ vic_start(struct ifnet *ifp) txb = &sc->sc_txbuf[idx]; if (txb->txb_m != NULL) { + ifq_deq_rollback(&ifp->if_snd, m); printf("%s: tx ring is corrupt\n", DEVNAME(sc)); sc->sc_data->vd_tx_stopped = 1; ifp->if_oerrors++; @@ -1078,7 +1080,7 @@ vic_start(struct ifnet *ifp) * we're committed to sending it now. if we cant map it into * dma memory then we drop it. 
*/ - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); if (vic_load_txb(sc, txb, m) != 0) { m_freem(m); ifp->if_oerrors++; Index: sys/dev/pci/if_vio.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_vio.c,v retrieving revision 1.34 diff -u -p -r1.34 if_vio.c --- sys/dev/pci/if_vio.c 25 Oct 2015 13:04:28 -0000 1.34 +++ sys/dev/pci/if_vio.c 12 Nov 2015 05:50:15 -0000 @@ -738,12 +738,13 @@ again: int slot, r; struct virtio_net_hdr *hdr; - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; r = virtio_enqueue_prep(vq, &slot); if (r == EAGAIN) { + ifq_deq_rollback(&ifp->if_snd, m); ifp->if_flags |= IFF_OACTIVE; break; } @@ -780,7 +781,7 @@ again: r = vio_encap(sc, slot, m); if (r != 0) { virtio_enqueue_abort(vq, slot); - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); m_freem(m); ifp->if_oerrors++; continue; @@ -790,11 +791,12 @@ again: if (r != 0) { bus_dmamap_unload(vsc->sc_dmat, sc->sc_tx_dmamaps[slot]); + ifq_deq_rollback(&ifp->if_snd, m); sc->sc_tx_mbufs[slot] = NULL; ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); bus_dmamap_sync(vsc->sc_dmat, sc->sc_tx_dmamaps[slot], 0, sc->sc_tx_dmamaps[slot]->dm_mapsize, BUS_DMASYNC_PREWRITE); Index: sys/dev/pci/if_xge.c =================================================================== RCS file: /cvs/src/sys/dev/pci/if_xge.c,v retrieving revision 1.63 diff -u -p -r1.63 if_xge.c --- sys/dev/pci/if_xge.c 25 Oct 2015 13:04:28 -0000 1.63 +++ sys/dev/pci/if_xge.c 12 Nov 2015 05:50:15 -0000 @@ -1073,23 +1073,26 @@ xge_start(struct ifnet *ifp) par = lcr = 0; for (;;) { - IFQ_POLL(&ifp->if_snd, m); + m = ifq_deq_begin(&ifp->if_snd); if (m == NULL) break; /* out of packets */ - if (sc->sc_nexttx == sc->sc_lasttx) + if (sc->sc_nexttx == sc->sc_lasttx) { + ifq_deq_rollback(&ifp->if_snd, m); break; /* No more space */ + } nexttx = sc->sc_nexttx; dmp = 
sc->sc_txm[nexttx]; if ((error = bus_dmamap_load_mbuf(sc->sc_dmat, dmp, m, BUS_DMA_WRITE|BUS_DMA_NOWAIT)) != 0) { + ifq_deq_rollback(&ifp->if_snd, m); printf("%s: bus_dmamap_load_mbuf error %d\n", XNAME, error); break; } - IFQ_DEQUEUE(&ifp->if_snd, m); + ifq_deq_commit(&ifp->if_snd, m); bus_dmamap_sync(sc->sc_dmat, dmp, 0, dmp->dm_mapsize, BUS_DMASYNC_PREWRITE); Index: sys/dev/pcmcia/if_malo.c =================================================================== RCS file: /cvs/src/sys/dev/pcmcia/if_malo.c,v retrieving revision 1.87 diff -u -p -r1.87 if_malo.c --- sys/dev/pcmcia/if_malo.c 11 Nov 2015 10:07:25 -0000 1.87 +++ sys/dev/pcmcia/if_malo.c 12 Nov 2015 05:50:15 -0000 @@ -996,7 +996,6 @@ cmalo_start(struct ifnet *ifp) if (m == NULL) return; - #if NBPFILTER > 0 if (ifp->if_bpf) bpf_mtap(ifp->if_bpf, m, BPF_DIRECTION_OUT); Index: sys/dev/pcmcia/if_xe.c =================================================================== RCS file: /cvs/src/sys/dev/pcmcia/if_xe.c,v retrieving revision 1.53 diff -u -p -r1.53 if_xe.c --- sys/dev/pcmcia/if_xe.c 25 Oct 2015 13:13:06 -0000 1.53 +++ sys/dev/pcmcia/if_xe.c 12 Nov 2015 05:50:15 -0000 @@ -1090,7 +1090,7 @@ xe_start(ifp) return; /* Peek at the next packet. 
*/ - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) return; @@ -1107,13 +1107,14 @@ xe_start(ifp) PAGE(sc, 0); space = bus_space_read_2(bst, bsh, offset + TSO0) & 0x7fff; if (len + pad + 2 > space) { + ifq_deq_rollback(&ifp->if_snd, m0); DPRINTF(XED_FIFO, ("%s: not enough space in output FIFO (%d > %d)\n", sc->sc_dev.dv_xname, len + pad + 2, space)); return; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); #if NBPFILTER > 0 if (ifp->if_bpf) Index: sys/dev/usb/if_aue.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_aue.c,v retrieving revision 1.101 diff -u -p -r1.101 if_aue.c --- sys/dev/usb/if_aue.c 25 Oct 2015 12:11:56 -0000 1.101 +++ sys/dev/usb/if_aue.c 12 Nov 2015 05:50:15 -0000 @@ -1244,16 +1244,17 @@ aue_start(struct ifnet *ifp) if (ifp->if_flags & IFF_OACTIVE) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (aue_send(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 /* Index: sys/dev/usb/if_axe.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_axe.c,v retrieving revision 1.133 diff -u -p -r1.133 if_axe.c --- sys/dev/usb/if_axe.c 25 Oct 2015 12:11:56 -0000 1.133 +++ sys/dev/usb/if_axe.c 12 Nov 2015 05:50:15 -0000 @@ -1253,15 +1253,16 @@ axe_start(struct ifnet *ifp) if (ifp->if_flags & IFF_OACTIVE) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (axe_encap(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); /* * If there's a BPF listener, bounce a copy of this frame Index: sys/dev/usb/if_axen.c 
=================================================================== RCS file: /cvs/src/sys/dev/usb/if_axen.c,v retrieving revision 1.17 diff -u -p -r1.17 if_axen.c --- sys/dev/usb/if_axen.c 25 Oct 2015 12:11:56 -0000 1.17 +++ sys/dev/usb/if_axen.c 12 Nov 2015 05:50:15 -0000 @@ -1273,15 +1273,16 @@ axen_start(struct ifnet *ifp) if (ifp->if_flags & IFF_OACTIVE) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (axen_encap(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); /* * If there's a BPF listener, bounce a copy of this frame Index: sys/dev/usb/if_cdce.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_cdce.c,v retrieving revision 1.66 diff -u -p -r1.66 if_cdce.c --- sys/dev/usb/if_cdce.c 25 Oct 2015 12:11:56 -0000 1.66 +++ sys/dev/usb/if_cdce.c 12 Nov 2015 05:50:15 -0000 @@ -389,16 +389,17 @@ cdce_start(struct ifnet *ifp) if (usbd_is_dying(sc->cdce_udev) || (ifp->if_flags & IFF_OACTIVE)) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (cdce_encap(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 if (ifp->if_bpf) Index: sys/dev/usb/if_cdcef.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_cdcef.c,v retrieving revision 1.38 diff -u -p -r1.38 if_cdcef.c --- sys/dev/usb/if_cdcef.c 25 Oct 2015 12:11:56 -0000 1.38 +++ sys/dev/usb/if_cdcef.c 12 Nov 2015 05:50:15 -0000 @@ -277,7 +277,7 @@ cdcef_start(struct ifnet *ifp) if(ifp->if_flags & IFF_OACTIVE) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) { return; }
@@ -287,17 +287,18 @@ cdcef_start(struct ifnet *ifp) * drop packet because receiver is not listening, * or if packet is larger than xmit buffer */ - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); m_freem(m_head); return; } if (cdcef_encap(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 if (ifp->if_bpf) Index: sys/dev/usb/if_cue.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_cue.c,v retrieving revision 1.72 diff -u -p -r1.72 if_cue.c --- sys/dev/usb/if_cue.c 25 Oct 2015 12:11:56 -0000 1.72 +++ sys/dev/usb/if_cue.c 12 Nov 2015 05:50:15 -0000 @@ -887,16 +887,17 @@ cue_start(struct ifnet *ifp) if (ifp->if_flags & IFF_OACTIVE) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (cue_send(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 /* Index: sys/dev/usb/if_kue.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_kue.c,v retrieving revision 1.81 diff -u -p -r1.81 if_kue.c --- sys/dev/usb/if_kue.c 25 Oct 2015 12:11:56 -0000 1.81 +++ sys/dev/usb/if_kue.c 12 Nov 2015 05:50:15 -0000 @@ -858,16 +858,17 @@ kue_start(struct ifnet *ifp) if (ifp->if_flags & IFF_OACTIVE) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (kue_send(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 /* Index: sys/dev/usb/if_mos.c =================================================================== RCS file: 
/cvs/src/sys/dev/usb/if_mos.c,v retrieving revision 1.32 diff -u -p -r1.32 if_mos.c --- sys/dev/usb/if_mos.c 25 Oct 2015 12:11:56 -0000 1.32 +++ sys/dev/usb/if_mos.c 12 Nov 2015 05:50:15 -0000 @@ -1138,15 +1138,16 @@ mos_start(struct ifnet *ifp) if (ifp->if_flags & IFF_OACTIVE) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (mos_encap(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); /* * If there's a BPF listener, bounce a copy of this frame Index: sys/dev/usb/if_ral.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_ral.c,v retrieving revision 1.134 diff -u -p -r1.134 if_ral.c --- sys/dev/usb/if_ral.c 4 Nov 2015 12:12:00 -0000 1.134 +++ sys/dev/usb/if_ral.c 12 Nov 2015 05:50:15 -0000 @@ -1257,14 +1257,15 @@ ural_start(struct ifnet *ifp) } else { if (ic->ic_state != IEEE80211_S_RUN) break; - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; if (sc->tx_queued >= RAL_TX_LIST_COUNT - 1) { + ifq_deq_rollback(&ifp->if_snd, m0); ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); #if NBPFILTER > 0 if (ifp->if_bpf != NULL) bpf_mtap(ifp->if_bpf, m0, BPF_DIRECTION_OUT); Index: sys/dev/usb/if_rum.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_rum.c,v retrieving revision 1.113 diff -u -p -r1.113 if_rum.c --- sys/dev/usb/if_rum.c 4 Nov 2015 12:12:00 -0000 1.113 +++ sys/dev/usb/if_rum.c 12 Nov 2015 05:50:15 -0000 @@ -1261,14 +1261,15 @@ rum_start(struct ifnet *ifp) } else { if (ic->ic_state != IEEE80211_S_RUN) break; - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; if (sc->tx_queued >= RUM_TX_LIST_COUNT - 1) { + ifq_deq_rollback(&ifp->if_snd, m0); 
ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); #if NBPFILTER > 0 if (ifp->if_bpf != NULL) bpf_mtap(ifp->if_bpf, m0, BPF_DIRECTION_OUT); Index: sys/dev/usb/if_smsc.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_smsc.c,v retrieving revision 1.21 diff -u -p -r1.21 if_smsc.c --- sys/dev/usb/if_smsc.c 25 Oct 2015 12:11:56 -0000 1.21 +++ sys/dev/usb/if_smsc.c 12 Nov 2015 05:50:15 -0000 @@ -610,15 +610,16 @@ smsc_start(struct ifnet *ifp) return; } - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (smsc_encap(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 if (ifp->if_bpf) Index: sys/dev/usb/if_uath.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_uath.c,v retrieving revision 1.71 diff -u -p -r1.71 if_uath.c --- sys/dev/usb/if_uath.c 4 Nov 2015 12:12:00 -0000 1.71 +++ sys/dev/usb/if_uath.c 12 Nov 2015 05:50:15 -0000 @@ -1495,14 +1495,15 @@ uath_start(struct ifnet *ifp) } else { if (ic->ic_state != IEEE80211_S_RUN) break; - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; if (sc->tx_queued >= UATH_TX_DATA_LIST_COUNT) { + ifq_deq_rollback(&ifp->if_snd, m0); ifp->if_flags |= IFF_OACTIVE; break; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); #if NBPFILTER > 0 if (ifp->if_bpf != NULL) bpf_mtap(ifp->if_bpf, m0, BPF_DIRECTION_OUT); Index: sys/dev/usb/if_udav.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_udav.c,v retrieving revision 1.73 diff -u -p -r1.73 if_udav.c --- sys/dev/usb/if_udav.c 25 Oct 2015 12:11:56 -0000 1.73 +++ sys/dev/usb/if_udav.c 12 Nov 2015 05:50:15 -0000 @@ -918,16 +918,17 @@ 
udav_start(struct ifnet *ifp) if (ifp->if_flags & IFF_OACTIVE) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (udav_send(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 if (ifp->if_bpf) Index: sys/dev/usb/if_ugl.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_ugl.c,v retrieving revision 1.14 diff -u -p -r1.14 if_ugl.c --- sys/dev/usb/if_ugl.c 25 Oct 2015 12:11:56 -0000 1.14 +++ sys/dev/usb/if_ugl.c 12 Nov 2015 05:50:15 -0000 @@ -604,16 +604,17 @@ ugl_start(struct ifnet *ifp) if (ifp->if_flags & IFF_OACTIVE) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (ugl_send(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 /* Index: sys/dev/usb/if_upl.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_upl.c,v retrieving revision 1.67 diff -u -p -r1.67 if_upl.c --- sys/dev/usb/if_upl.c 30 Jun 2015 13:54:42 -0000 1.67 +++ sys/dev/usb/if_upl.c 12 Nov 2015 05:50:15 -0000 @@ -580,16 +580,17 @@ upl_start(struct ifnet *ifp) if (ifp->if_flags & IFF_OACTIVE) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (upl_send(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 /* Index: sys/dev/usb/if_url.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_url.c,v retrieving revision 1.76 diff -u -p -r1.76 if_url.c ---
sys/dev/usb/if_url.c 25 Oct 2015 12:11:56 -0000 1.76 +++ sys/dev/usb/if_url.c 12 Nov 2015 05:50:15 -0000 @@ -791,16 +791,17 @@ url_start(struct ifnet *ifp) if (ifp->if_flags & IFF_OACTIVE) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (url_send(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); #if NBPFILTER > 0 if (ifp->if_bpf) Index: sys/dev/usb/if_urndis.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_urndis.c,v retrieving revision 1.56 diff -u -p -r1.56 if_urndis.c --- sys/dev/usb/if_urndis.c 25 Oct 2015 12:11:56 -0000 1.56 +++ sys/dev/usb/if_urndis.c 12 Nov 2015 05:50:15 -0000 @@ -1148,15 +1148,16 @@ urndis_start(struct ifnet *ifp) if (usbd_is_dying(sc->sc_udev) || (ifp->if_flags & IFF_OACTIVE)) return; - IFQ_POLL(&ifp->if_snd, m_head); + m_head = ifq_deq_begin(&ifp->if_snd); if (m_head == NULL) return; if (urndis_encap(sc, m_head, 0)) { + ifq_deq_rollback(&ifp->if_snd, m_head); ifp->if_flags |= IFF_OACTIVE; return; } - IFQ_DEQUEUE(&ifp->if_snd, m_head); + ifq_deq_commit(&ifp->if_snd, m_head); /* * If there's a BPF listener, bounce a copy of this frame Index: sys/dev/usb/if_urtw.c =================================================================== RCS file: /cvs/src/sys/dev/usb/if_urtw.c,v retrieving revision 1.56 diff -u -p -r1.56 if_urtw.c --- sys/dev/usb/if_urtw.c 4 Nov 2015 12:12:00 -0000 1.56 +++ sys/dev/usb/if_urtw.c 12 Nov 2015 05:50:15 -0000 @@ -2449,16 +2449,17 @@ urtw_start(struct ifnet *ifp) } else { if (ic->ic_state != IEEE80211_S_RUN) break; - IFQ_POLL(&ifp->if_snd, m0); + m0 = ifq_deq_begin(&ifp->if_snd); if (m0 == NULL) break; if (sc->sc_tx_low_queued >= URTW_TX_DATA_LIST_COUNT || sc->sc_tx_normal_queued >= URTW_TX_DATA_LIST_COUNT) { + ifq_deq_rollback(&ifp->if_snd, m0); ifp->if_flags |= IFF_OACTIVE; 
break; } - IFQ_DEQUEUE(&ifp->if_snd, m0); + ifq_deq_commit(&ifp->if_snd, m0); #if NBPFILTER > 0 if (ifp->if_bpf != NULL) bpf_mtap(ifp->if_bpf, m0, BPF_DIRECTION_OUT); Index: sys/net/hfsc.c =================================================================== RCS file: /cvs/src/sys/net/hfsc.c,v retrieving revision 1.30 diff -u -p -r1.30 hfsc.c --- sys/net/hfsc.c 9 Nov 2015 01:06:31 -0000 1.30 +++ sys/net/hfsc.c 12 Nov 2015 05:50:15 -0000 @@ -1,4 +1,4 @@ -/* $OpenBSD: hfsc.c,v 1.30 2015/11/09 01:06:31 dlg Exp $ */ +/* $OpenBSD: hfsc.c,v 1.29 2015/10/23 02:29:24 dlg Exp $ */ /* * Copyright (c) 2012-2013 Henning Brauer <henn...@openbsd.org> @@ -181,11 +181,11 @@ struct hfsc_class { */ struct hfsc_if { struct hfsc_if *hif_next; /* interface state list */ - struct ifqueue *hif_ifq; /* backpointer to ifq */ struct hfsc_class *hif_rootclass; /* root class */ struct hfsc_class *hif_defaultclass; /* default class */ struct hfsc_class **hif_class_tbl; - struct hfsc_class *hif_pollcache; /* cache for poll operation */ + + u_int64_t hif_microtime; /* time at deq_begin */ u_int hif_allocated; /* # of slots in hif_class_tbl */ u_int hif_classes; /* # of classes in the tree */ @@ -206,9 +206,8 @@ int hfsc_class_destroy(struct hfsc_if struct hfsc_class *); struct hfsc_class *hfsc_nextclass(struct hfsc_class *); -struct mbuf *hfsc_cl_dequeue(struct hfsc_class *); -struct mbuf *hfsc_cl_poll(struct hfsc_class *); -void hfsc_cl_purge(struct hfsc_if *, struct hfsc_class *); +void hfsc_cl_purge(struct hfsc_if *, struct hfsc_class *, + struct mbuf_list *); void hfsc_deferred(void *); void hfsc_update_cfmin(struct hfsc_class *); @@ -256,6 +255,30 @@ struct hfsc_class *hfsc_clh2cph(struct h struct pool hfsc_class_pl, hfsc_internal_sc_pl; +/* + * ifqueue glue. 
+ */ + +void *hfsc_alloc(void *); +void hfsc_free(void *); +int hfsc_enq(struct ifqueue *, struct mbuf *); +struct mbuf *hfsc_deq_begin(struct ifqueue *, void **); +void hfsc_deq_commit(struct ifqueue *, struct mbuf *, void *); +void hfsc_deq_rollback(struct ifqueue *, struct mbuf *, void *); +void hfsc_purge(struct ifqueue *, struct mbuf_list *); + +const struct ifq_ops hfsc_ops = { + hfsc_alloc, + hfsc_free, + hfsc_enq, + hfsc_deq_begin, + hfsc_deq_commit, + hfsc_deq_rollback, + hfsc_purge, +}; + +const struct ifq_ops * const ifq_hfsc_ops = &hfsc_ops; + u_int64_t hfsc_microuptime(void) { @@ -296,64 +319,37 @@ hfsc_initialize(void) { pool_init(&hfsc_class_pl, sizeof(struct hfsc_class), 0, 0, PR_WAITOK, "hfscclass", NULL); + pool_setipl(&hfsc_class_pl, IPL_NONE); pool_init(&hfsc_internal_sc_pl, sizeof(struct hfsc_internal_sc), 0, 0, PR_WAITOK, "hfscintsc", NULL); + pool_setipl(&hfsc_internal_sc_pl, IPL_NONE); } -int -hfsc_attach(struct ifnet *ifp) +struct hfsc_if * +hfsc_pf_alloc(struct ifnet *ifp) { struct hfsc_if *hif; - if (ifp == NULL || ifp->if_snd.ifq_hfsc != NULL) - return (0); + KASSERT(ifp != NULL); - hif = malloc(sizeof(struct hfsc_if), M_DEVBUF, M_WAITOK | M_ZERO); + hif = malloc(sizeof(*hif), M_DEVBUF, M_WAITOK | M_ZERO); TAILQ_INIT(&hif->hif_eligible); hif->hif_class_tbl = mallocarray(HFSC_DEFAULT_CLASSES, sizeof(void *), M_DEVBUF, M_WAITOK | M_ZERO); hif->hif_allocated = HFSC_DEFAULT_CLASSES; - hif->hif_ifq = &ifp->if_snd; - ifp->if_snd.ifq_hfsc = hif; - timeout_set(&hif->hif_defer, hfsc_deferred, ifp); - /* XXX HRTIMER don't schedule it yet, only when some packets wait. 
*/ - timeout_add(&hif->hif_defer, 1); - return (0); + return (hif); } int -hfsc_detach(struct ifnet *ifp) +hfsc_pf_addqueue(struct hfsc_if *hif, struct pf_queuespec *q) { - struct hfsc_if *hif; - - if (ifp == NULL) - return (0); - - hif = ifp->if_snd.ifq_hfsc; - timeout_del(&hif->hif_defer); - ifp->if_snd.ifq_hfsc = NULL; - - free(hif->hif_class_tbl, M_DEVBUF, hif->hif_allocated * sizeof(void *)); - free(hif, M_DEVBUF, sizeof(struct hfsc_if)); - - return (0); -} - -int -hfsc_addqueue(struct pf_queuespec *q) -{ - struct hfsc_if *hif; struct hfsc_class *cl, *parent; struct hfsc_sc rtsc, lssc, ulsc; - if (q->kif->pfik_ifp == NULL) - return (0); - - if ((hif = q->kif->pfik_ifp->if_snd.ifq_hfsc) == NULL) - return (EINVAL); + KASSERT(hif != NULL); if (q->parent_qid == HFSC_NULLCLASS_HANDLE && hif->hif_rootclass == NULL) @@ -386,61 +382,82 @@ hfsc_addqueue(struct pf_queuespec *q) } int -hfsc_delqueue(struct pf_queuespec *q) -{ - struct hfsc_if *hif; - struct hfsc_class *cl; - - if (q->kif->pfik_ifp == NULL) - return (0); - - if ((hif = q->kif->pfik_ifp->if_snd.ifq_hfsc) == NULL) - return (EINVAL); - - if ((cl = hfsc_clh2cph(hif, q->qid)) == NULL) - return (EINVAL); - - return (hfsc_class_destroy(hif, cl)); -} - -int -hfsc_qstats(struct pf_queuespec *q, void *ubuf, int *nbytes) +hfsc_pf_qstats(struct pf_queuespec *q, void *ubuf, int *nbytes) { + struct ifnet *ifp = q->kif->pfik_ifp; struct hfsc_if *hif; struct hfsc_class *cl; struct hfsc_class_stats stats; int error = 0; - if (q->kif->pfik_ifp == NULL) - return (EBADF); - - if ((hif = q->kif->pfik_ifp->if_snd.ifq_hfsc) == NULL) + if (ifp == NULL) return (EBADF); - if ((cl = hfsc_clh2cph(hif, q->qid)) == NULL) + if (*nbytes < sizeof(stats)) return (EINVAL); - if (*nbytes < sizeof(stats)) + hif = ifq_q_enter(&ifp->if_snd, ifq_hfsc_ops); + if (hif == NULL) + return (EBADF); + + if ((cl = hfsc_clh2cph(hif, q->qid)) == NULL) { + ifq_q_leave(&ifp->if_snd, hif); return (EINVAL); + } hfsc_getclstats(&stats, cl); + 
ifq_q_leave(&ifp->if_snd, hif); if ((error = copyout((caddr_t)&stats, ubuf, sizeof(stats))) != 0) return (error); + *nbytes = sizeof(stats); return (0); } void -hfsc_purge(struct ifqueue *ifq) +hfsc_pf_free(struct hfsc_if *hif) +{ + hfsc_free(hif); +} + +void * +hfsc_alloc(void *q) +{ + struct hfsc_if *hif = q; + KASSERT(hif != NULL); + + timeout_add(&hif->hif_defer, 1); + return (hif); +} + +void +hfsc_free(void *q) { - struct hfsc_if *hif = ifq->ifq_hfsc; + struct hfsc_if *hif = q; + int i; + + KERNEL_ASSERT_LOCKED(); + + timeout_del(&hif->hif_defer); + + i = hif->hif_allocated; + do + hfsc_class_destroy(hif, hif->hif_class_tbl[--i]); + while (i > 0); + + free(hif->hif_class_tbl, M_DEVBUF, hif->hif_allocated * sizeof(void *)); + free(hif, M_DEVBUF, sizeof(*hif)); +} + +void +hfsc_purge(struct ifqueue *ifq, struct mbuf_list *ml) +{ + struct hfsc_if *hif = ifq->ifq_q; struct hfsc_class *cl; for (cl = hif->hif_rootclass; cl != NULL; cl = hfsc_nextclass(cl)) - if (ml_len(&cl->cl_q.q) > 0) - hfsc_cl_purge(hif, cl); - hif->hif_ifq->ifq_len = 0; + hfsc_cl_purge(hif, cl, ml); } struct hfsc_class * @@ -555,9 +572,7 @@ hfsc_class_destroy(struct hfsc_if *hif, return (EBUSY); s = splnet(); - - if (ml_len(&cl->cl_q.q) > 0) - hfsc_cl_purge(hif, cl); + KASSERT(ml_empty(&cl->cl_q.q)); if (cl->cl_parent != NULL) { struct hfsc_class *p = cl->cl_parent->cl_children; @@ -624,9 +639,9 @@ hfsc_nextclass(struct hfsc_class *cl) } int -hfsc_enqueue(struct ifqueue *ifq, struct mbuf *m) +hfsc_enq(struct ifqueue *ifq, struct mbuf *m) { - struct hfsc_if *hif = ifq->ifq_hfsc; + struct hfsc_if *hif = ifq->ifq_q; struct hfsc_class *cl; if ((cl = hfsc_clh2cph(hif, m->m_pkthdr.pf.qid)) == NULL || @@ -638,12 +653,12 @@ hfsc_enqueue(struct ifqueue *ifq, struct } if (ml_len(&cl->cl_q.q) >= cl->cl_q.qlimit) { - /* drop. mbuf needs to be freed */ + /* drop occurred. 
mbuf needs to be freed */ PKTCNTR_INC(&cl->cl_stats.drop_cnt, m->m_pkthdr.len); return (ENOBUFS); } + ml_enqueue(&cl->cl_q.q, m); - ifq->ifq_len++; m->m_pkthdr.pf.prio = IFQ_MAXPRIO; /* successfully queued. */ @@ -654,71 +669,68 @@ hfsc_enqueue(struct ifqueue *ifq, struct } struct mbuf * -hfsc_dequeue(struct ifqueue *ifq, int remove) +hfsc_deq_begin(struct ifqueue *ifq, void **cookiep) { - struct hfsc_if *hif = ifq->ifq_hfsc; + struct hfsc_if *hif = ifq->ifq_q; struct hfsc_class *cl, *tcl; struct mbuf *m; - int next_len, realtime = 0; u_int64_t cur_time; - if (IFQ_LEN(ifq) == 0) - return (NULL); - cur_time = hfsc_microuptime(); - if (remove && hif->hif_pollcache != NULL) { - cl = hif->hif_pollcache; - hif->hif_pollcache = NULL; - /* check if the class was scheduled by real-time criteria */ - if (cl->cl_rsc != NULL) - realtime = (cl->cl_e <= cur_time); - } else { + /* + * if there are eligible classes, use real-time criteria. + * find the class with the minimum deadline among + * the eligible classes. + */ + cl = hfsc_ellist_get_mindl(hif, cur_time); + if (cl == NULL) { /* - * if there are eligible classes, use real-time criteria. - * find the class with the minimum deadline among - * the eligible classes. + * use link-sharing criteria + * get the class with the minimum vt in the hierarchy */ - if ((cl = hfsc_ellist_get_mindl(hif, cur_time)) != NULL) { - realtime = 1; - } else { - /* - * use link-sharing criteria - * get the class with the minimum vt in the hierarchy - */ - cl = NULL; - tcl = hif->hif_rootclass; + cl = NULL; + tcl = hif->hif_rootclass; - while (tcl != NULL && tcl->cl_children != NULL) { - tcl = hfsc_actlist_firstfit(tcl, cur_time); - if (tcl == NULL) - continue; - - /* - * update parent's cl_cvtmin. - * don't update if the new vt is smaller. 
- */ - if (tcl->cl_parent->cl_cvtmin < tcl->cl_vt) - tcl->cl_parent->cl_cvtmin = tcl->cl_vt; + while (tcl != NULL && tcl->cl_children != NULL) { + tcl = hfsc_actlist_firstfit(tcl, cur_time); + if (tcl == NULL) + continue; - cl = tcl; - } - /* XXX HRTIMER plan hfsc_deferred precisely here. */ - if (cl == NULL) - return (NULL); - } + /* + * update parent's cl_cvtmin. + * don't update if the new vt is smaller. + */ + if (tcl->cl_parent->cl_cvtmin < tcl->cl_vt) + tcl->cl_parent->cl_cvtmin = tcl->cl_vt; - if (!remove) { - hif->hif_pollcache = cl; - m = hfsc_cl_poll(cl); - return (m); + cl = tcl; } + /* XXX HRTIMER plan hfsc_deferred precisely here. */ + if (cl == NULL) + return (NULL); } - if ((m = hfsc_cl_dequeue(cl)) == NULL) - panic("hfsc_dequeue"); + m = ml_dequeue(&cl->cl_q.q); + KASSERT(m != NULL); + + hif->hif_microtime = cur_time; + *cookiep = cl; + return (m); +} + +void +hfsc_deq_commit(struct ifqueue *ifq, struct mbuf *m, void *cookie) +{ + struct hfsc_if *hif = ifq->ifq_q; + struct hfsc_class *cl = cookie; + int next_len, realtime = 0; + u_int64_t cur_time = hif->hif_microtime; + + /* check if the class was scheduled by real-time criteria */ + if (cl->cl_rsc != NULL) + realtime = (cl->cl_e <= cur_time); - ifq->ifq_len--; PKTCNTR_INC(&cl->cl_stats.xmit_cnt, m->m_pkthdr.len); hfsc_update_vf(cl, m->m_pkthdr.len, cur_time); @@ -739,51 +751,49 @@ hfsc_dequeue(struct ifqueue *ifq, int re /* the class becomes passive */ hfsc_set_passive(hif, cl); } +} - return (m); +void +hfsc_deq_rollback(struct ifqueue *ifq, struct mbuf *m, void *cookie) +{ + struct hfsc_class *cl = cookie; + + ml_requeue(&cl->cl_q.q, m); } void hfsc_deferred(void *arg) { struct ifnet *ifp = arg; + struct hfsc_if *hif; int s; + KERNEL_ASSERT_LOCKED(); + KASSERT(HFSC_ENABLED(&ifp->if_snd)); + s = splnet(); - if (HFSC_ENABLED(&ifp->if_snd) && !IFQ_IS_EMPTY(&ifp->if_snd)) + if (!IFQ_IS_EMPTY(&ifp->if_snd)) if_start(ifp); splx(s); - /* XXX HRTIMER nearest virtual/fit time is likely less than 1/HZ. 
*/ - timeout_add(&ifp->if_snd.ifq_hfsc->hif_defer, 1); -} - -struct mbuf * -hfsc_cl_dequeue(struct hfsc_class *cl) -{ - return (ml_dequeue(&cl->cl_q.q)); -} + hif = ifp->if_snd.ifq_q; -struct mbuf * -hfsc_cl_poll(struct hfsc_class *cl) -{ - /* XXX */ - return (cl->cl_q.q.ml_head); + /* XXX HRTIMER nearest virtual/fit time is likely less than 1/HZ. */ + timeout_add(&hif->hif_defer, 1); } void -hfsc_cl_purge(struct hfsc_if *hif, struct hfsc_class *cl) +hfsc_cl_purge(struct hfsc_if *hif, struct hfsc_class *cl, struct mbuf_list *ml) { struct mbuf *m; if (ml_empty(&cl->cl_q.q)) return; - while ((m = hfsc_cl_dequeue(cl)) != NULL) { + MBUF_LIST_FOREACH(&cl->cl_q.q, m) PKTCNTR_INC(&cl->cl_stats.drop_cnt, m->m_pkthdr.len); - m_freem(m); - hif->hif_ifq->ifq_len--; - } + + ml_enlist(ml, &cl->cl_q.q); hfsc_update_vf(cl, 0, 0); /* remove cl from the actlist */ hfsc_set_passive(hif, cl); Index: sys/net/hfsc.h =================================================================== RCS file: /cvs/src/sys/net/hfsc.h,v retrieving revision 1.10 diff -u -p -r1.10 hfsc.h --- sys/net/hfsc.h 9 Nov 2015 01:06:31 -0000 1.10 +++ sys/net/hfsc.h 12 Nov 2015 05:50:15 -0000 @@ -1,4 +1,4 @@ -/* $OpenBSD: hfsc.h,v 1.10 2015/11/09 01:06:31 dlg Exp $ */ +/* $OpenBSD: hfsc.h,v 1.9 2015/09/30 11:36:20 dlg Exp $ */ /* * Copyright (c) 2012-2013 Henning Brauer <henn...@openbsd.org> @@ -112,19 +112,18 @@ struct ifqueue; struct pf_queuespec; struct hfsc_if; -#define HFSC_ENABLED(ifq) ((ifq)->ifq_hfsc != NULL) +extern const struct ifq_ops * const ifq_hfsc_ops; + +#define HFSC_ENABLED(ifq) ((ifq)->ifq_ops == ifq_hfsc_ops) #define HFSC_DEFAULT_QLIMIT 50 +struct hfsc_if *hfsc_pf_alloc(struct ifnet *); +int hfsc_pf_addqueue(struct hfsc_if *, struct pf_queuespec *); +void hfsc_pf_free(struct hfsc_if *); +int hfsc_pf_qstats(struct pf_queuespec *, void *, int *); + void hfsc_initialize(void); -int hfsc_attach(struct ifnet *); -int hfsc_detach(struct ifnet *); -void hfsc_purge(struct ifqueue *); -int 
hfsc_enqueue(struct ifqueue *, struct mbuf *); -struct mbuf *hfsc_dequeue(struct ifqueue *, int); u_int64_t hfsc_microuptime(void); -int hfsc_addqueue(struct pf_queuespec *); -int hfsc_delqueue(struct pf_queuespec *); -int hfsc_qstats(struct pf_queuespec *, void *, int *); #endif /* _KERNEL */ #endif /* _HFSC_H_ */ Index: sys/net/if.c =================================================================== RCS file: /cvs/src/sys/net/if.c,v retrieving revision 1.405 diff -u -p -r1.405 if.c --- sys/net/if.c 11 Nov 2015 10:23:23 -0000 1.405 +++ sys/net/if.c 12 Nov 2015 05:50:15 -0000 @@ -397,9 +397,6 @@ if_attachsetup(struct ifnet *ifp) if_addgroup(ifp, IFG_ALL); - if (ifp->if_snd.ifq_maxlen == 0) - IFQ_SET_MAXLEN(&ifp->if_snd, IFQ_MAXLEN); - if_attachdomain(ifp); #if NPF > 0 pfi_attach_ifnet(ifp); @@ -510,6 +507,8 @@ if_attach_common(struct ifnet *ifp) TAILQ_INIT(&ifp->if_addrlist); TAILQ_INIT(&ifp->if_maddrlist); + ifq_init(&ifp->if_snd); + ifp->if_addrhooks = malloc(sizeof(*ifp->if_addrhooks), M_TEMP, M_WAITOK); TAILQ_INIT(ifp->if_addrhooks); @@ -538,7 +537,7 @@ if_start(struct ifnet *ifp) splassert(IPL_NET); - if (ifp->if_snd.ifq_len >= min(8, ifp->if_snd.ifq_maxlen) && + if (ifq_len(&ifp->if_snd) >= min(8, ifp->if_snd.ifq_maxlen) && !ISSET(ifp->if_flags, IFF_OACTIVE)) { if (ISSET(ifp->if_xflags, IFXF_TXREADY)) { TAILQ_REMOVE(&iftxlist, ifp, if_txlist); @@ -783,8 +782,6 @@ if_input_process(void *xmq) s = splnet(); while ((m = ml_dequeue(&ml)) != NULL) { - sched_pause(); - ifp = if_get(m->m_pkthdr.ph_ifidx); if (ifp == NULL) { m_freem(m); @@ -942,6 +939,8 @@ if_detach(struct ifnet *ifp) if_idxmap_remove(ifp); splx(s); + + ifq_destroy(&ifp->if_snd); } /* @@ -2693,6 +2692,325 @@ niq_enlist(struct niqueue *niq, struct m if_congestion(); return (rv); +} + +/* + * send queues. 
+ */ + +void *priq_alloc(void *); +void priq_free(void *); +int priq_enq(struct ifqueue *, struct mbuf *); +struct mbuf *priq_deq_begin(struct ifqueue *, void **); +void priq_deq_commit(struct ifqueue *, struct mbuf *, void *); +void priq_deq_rollback(struct ifqueue *, struct mbuf *, void *); +void priq_purge(struct ifqueue *, struct mbuf_list *); + +const struct ifq_ops priq_ops = { + priq_alloc, + priq_free, + priq_enq, + priq_deq_begin, + priq_deq_commit, + priq_deq_rollback, + priq_purge, +}; + +const struct ifq_ops * const ifq_priq_ops = &priq_ops; + +struct priq_list { + struct mbuf *head; + struct mbuf *tail; +}; + +struct priq { + struct priq_list pq_lists[IFQ_NQUEUES]; +}; + +void * +priq_alloc(void *null) +{ + return (malloc(sizeof(struct priq), M_DEVBUF, M_WAITOK | M_ZERO)); +} + +void +priq_free(void *pq) +{ + free(pq, M_DEVBUF, sizeof(struct priq)); +} + +int +priq_enq(struct ifqueue *ifq, struct mbuf *m) +{ + struct priq *pq; + struct priq_list *pl; + + if (ifq_len(ifq) >= ifq->ifq_maxlen) + return (ENOBUFS); + + pq = ifq->ifq_q; + KASSERT(m->m_pkthdr.pf.prio <= IFQ_MAXPRIO); + pl = &pq->pq_lists[m->m_pkthdr.pf.prio]; + + m->m_nextpkt = NULL; + if (pl->tail == NULL) + pl->head = m; + else + pl->tail->m_nextpkt = m; + pl->tail = m; + + return (0); +} + +struct mbuf * +priq_deq_begin(struct ifqueue *ifq, void **cookiep) +{ + struct priq *pq = ifq->ifq_q; + struct priq_list *pl; + unsigned int prio = nitems(pq->pq_lists); + struct mbuf *m; + + do { + pl = &pq->pq_lists[--prio]; + m = pl->head; + if (m != NULL) { + *cookiep = pl; + return (m); + } + } while (prio > 0); + + return (NULL); +} + +void +priq_deq_commit(struct ifqueue *ifq, struct mbuf *m, void *cookie) +{ + struct priq_list *pl = cookie; + + KASSERT(pl->head == m); + + pl->head = m->m_nextpkt; + m->m_nextpkt = NULL; + + if (pl->head == NULL) + pl->tail = NULL; +} + +void +priq_deq_rollback(struct ifqueue *ifq, struct mbuf *m, void *cookie) +{ + struct priq_list *pl = cookie; + +
KASSERT(pl->head == m); +} + +void +priq_purge(struct ifqueue *ifq, struct mbuf_list *ml) +{ + struct priq *pq = ifq->ifq_q; + struct priq_list *pl; + unsigned int prio = nitems(pq->pq_lists); + struct mbuf *m, *n; + + do { + pl = &pq->pq_lists[--prio]; + + for (m = pl->head; m != NULL; m = n) { + n = m->m_nextpkt; + ml_enqueue(ml, m); + } + + pl->head = pl->tail = NULL; + } while (prio > 0); +} + +int +ifq_enqueue_try(struct ifqueue *ifq, struct mbuf *m) +{ + int rv; + + mtx_enter(&ifq->ifq_mtx); + rv = ifq->ifq_ops->ifqop_enq(ifq, m); + if (rv == 0) + ifq->ifq_len++; + else + ifq->ifq_drops++; + mtx_leave(&ifq->ifq_mtx); + + return (rv); +} + +int +ifq_enq(struct ifqueue *ifq, struct mbuf *m) +{ + int err; + + err = ifq_enqueue_try(ifq, m); + if (err != 0) + m_freem(m); + + return (err); +} + +struct mbuf * +ifq_deq_begin(struct ifqueue *ifq) +{ + struct mbuf *m = NULL; + void *cookie; + + mtx_enter(&ifq->ifq_mtx); + if (ifq->ifq_len == 0 || + (m = ifq->ifq_ops->ifqop_deq_begin(ifq, &cookie)) == NULL) { + mtx_leave(&ifq->ifq_mtx); + return (NULL); + } + + m->m_pkthdr.ph_cookie = cookie; + + return (m); +} + +void +ifq_deq_commit(struct ifqueue *ifq, struct mbuf *m) +{ + void *cookie; + + KASSERT(m != NULL); + cookie = m->m_pkthdr.ph_cookie; + + ifq->ifq_ops->ifqop_deq_commit(ifq, m, cookie); + ifq->ifq_len--; + mtx_leave(&ifq->ifq_mtx); +} + +void +ifq_deq_rollback(struct ifqueue *ifq, struct mbuf *m) +{ + void *cookie; + + KASSERT(m != NULL); + cookie = m->m_pkthdr.ph_cookie; + + ifq->ifq_ops->ifqop_deq_rollback(ifq, m, cookie); + mtx_leave(&ifq->ifq_mtx); +} + +struct mbuf * +ifq_deq(struct ifqueue *ifq) +{ + struct mbuf *m; + + m = ifq_deq_begin(ifq); + if (m == NULL) + return (NULL); + + ifq_deq_commit(ifq, m); + + return (m); +} + +unsigned int +ifq_purge(struct ifqueue *ifq) +{ + struct mbuf_list ml = MBUF_LIST_INITIALIZER(); + unsigned int rv; + + mtx_enter(&ifq->ifq_mtx); + ifq->ifq_ops->ifqop_purge(ifq, &ml); + rv = ifq->ifq_len; + ifq->ifq_len = 0; + 
ifq->ifq_drops += rv; + mtx_leave(&ifq->ifq_mtx); + + KASSERT(rv == ml_len(&ml)); + + ml_purge(&ml); + + return (rv); +} + +void +ifq_init(struct ifqueue *ifq) +{ + mtx_init(&ifq->ifq_mtx, IPL_NET); + ifq->ifq_drops = 0; + + /* default to priq */ + ifq->ifq_ops = &priq_ops; + ifq->ifq_q = priq_ops.ifqop_alloc(NULL); + + ifq->ifq_serializer = 0; + ifq->ifq_len = 0; + + if (ifq->ifq_maxlen == 0) + ifq_set_maxlen(ifq, IFQ_MAXLEN); +} + +void +ifq_attach(struct ifqueue *ifq, const struct ifq_ops *newops, void *opsarg) +{ + struct mbuf_list ml = MBUF_LIST_INITIALIZER(); + struct mbuf_list free_ml = MBUF_LIST_INITIALIZER(); + struct mbuf *m; + const struct ifq_ops *oldops; + void *newq, *oldq; + + newq = newops->ifqop_alloc(opsarg); + + mtx_enter(&ifq->ifq_mtx); + ifq->ifq_ops->ifqop_purge(ifq, &ml); + ifq->ifq_len = 0; + + oldops = ifq->ifq_ops; + oldq = ifq->ifq_q; + + ifq->ifq_ops = newops; + ifq->ifq_q = newq; + + while ((m = ml_dequeue(&ml)) != NULL) { + if (ifq->ifq_ops->ifqop_enq(ifq, m) != 0) { + ifq->ifq_drops++; + ml_enqueue(&free_ml, m); + } else + ifq->ifq_len++; + } + mtx_leave(&ifq->ifq_mtx); + + oldops->ifqop_free(oldq); + + ml_purge(&free_ml); +} + +void * +ifq_q_enter(struct ifqueue *ifq, const struct ifq_ops *ops) +{ + mtx_enter(&ifq->ifq_mtx); + if (ifq->ifq_ops == ops) + return (ifq->ifq_q); + + mtx_leave(&ifq->ifq_mtx); + + return (NULL); +} + +void +ifq_q_leave(struct ifqueue *ifq, void *q) +{ + KASSERT(q == ifq->ifq_q); + mtx_leave(&ifq->ifq_mtx); +} + +void +ifq_destroy(struct ifqueue *ifq) +{ + struct mbuf_list ml = MBUF_LIST_INITIALIZER(); + + /* don't need to lock because this is the last use of the ifq */ + + ifq->ifq_ops->ifqop_purge(ifq, &ml); + ifq->ifq_ops->ifqop_free(ifq->ifq_q); + + ml_purge(&ml); } __dead void Index: sys/net/if_tun.c =================================================================== RCS file: /cvs/src/sys/net/if_tun.c,v retrieving revision 1.159 diff -u -p -r1.159 if_tun.c --- sys/net/if_tun.c 25 Oct 2015 12:05:40 
-0000 1.159 +++ sys/net/if_tun.c 12 Nov 2015 05:50:16 -0000 @@ -685,10 +685,11 @@ tun_dev_ioctl(struct tun_softc *tp, u_lo tp->tun_flags &= ~TUN_ASYNC; break; case FIONREAD: - IFQ_POLL(&tp->tun_if.if_snd, m); - if (m != NULL) + m = ifq_deq_begin(&tp->tun_if.if_snd); + if (m != NULL) { *(int *)data = m->m_pkthdr.len; - else + ifq_deq_rollback(&tp->tun_if.if_snd, m); + } else *(int *)data = 0; break; case TIOCSPGRP: @@ -810,6 +811,14 @@ tun_dev_read(struct tun_softc *tp, struc } while (m0 == NULL); splx(s); + if (tp->tun_flags & TUN_LAYER2) { +#if NBPFILTER > 0 + if (ifp->if_bpf) + bpf_mtap(ifp->if_bpf, m0, BPF_DIRECTION_OUT); +#endif + ifp->if_opackets++; + } + while (m0 != NULL && uio->uio_resid > 0 && error == 0) { len = min(uio->uio_resid, m0->m_len); if (len != 0) @@ -1007,7 +1016,7 @@ tun_dev_poll(struct tun_softc *tp, int e { int revents, s; struct ifnet *ifp; - struct mbuf *m; + unsigned int len; ifp = &tp->tun_if; revents = 0; @@ -1015,10 +1024,9 @@ tun_dev_poll(struct tun_softc *tp, int e TUNDEBUG(("%s: tunpoll\n", ifp->if_xname)); if (events & (POLLIN | POLLRDNORM)) { - IFQ_POLL(&ifp->if_snd, m); - if (m != NULL) { - TUNDEBUG(("%s: tunselect q=%d\n", ifp->if_xname, - IFQ_LEN(ifp->if_snd))); + len = IFQ_LEN(&ifp->if_snd); + if (len > 0) { + TUNDEBUG(("%s: tunselect q=%d\n", ifp->if_xname, len)); revents |= events & (POLLIN | POLLRDNORM); } else { TUNDEBUG(("%s: tunpoll waiting\n", ifp->if_xname)); @@ -1114,7 +1122,7 @@ filt_tunread(struct knote *kn, long hint int s; struct tun_softc *tp; struct ifnet *ifp; - struct mbuf *m; + unsigned int len; if (kn->kn_status & KN_DETACHED) { kn->kn_data = 0; @@ -1125,10 +1133,10 @@ filt_tunread(struct knote *kn, long hint ifp = &tp->tun_if; s = splnet(); - IFQ_POLL(&ifp->if_snd, m); - if (m != NULL) { + len = IFQ_LEN(&ifp->if_snd); + if (len > 0) { splx(s); - kn->kn_data = IFQ_LEN(&ifp->if_snd); + kn->kn_data = len; TUNDEBUG(("%s: tunkqread q=%d\n", ifp->if_xname, IFQ_LEN(&ifp->if_snd))); @@ -1175,21 +1183,11 @@ void
tun_start(struct ifnet *ifp) { struct tun_softc *tp = ifp->if_softc; - struct mbuf *m; splassert(IPL_NET); - IFQ_POLL(&ifp->if_snd, m); - if (m != NULL) { - if (tp->tun_flags & TUN_LAYER2) { -#if NBPFILTER > 0 - if (ifp->if_bpf) - bpf_mtap(ifp->if_bpf, m, BPF_DIRECTION_OUT); -#endif - ifp->if_opackets++; - } + if (IFQ_LEN(&ifp->if_snd)) tun_wakeup(tp); - } } void Index: sys/net/if_var.h =================================================================== RCS file: /cvs/src/sys/net/if_var.h,v retrieving revision 1.52 diff -u -p -r1.52 if_var.h --- sys/net/if_var.h 11 Nov 2015 10:23:23 -0000 1.52 +++ sys/net/if_var.h 12 Nov 2015 05:50:16 -0000 @@ -98,17 +98,33 @@ struct if_clone { { { 0 }, name, sizeof(name) - 1, create, destroy } /* - * Structure defining a queue for a network interface. + * Structure defining the send queue for a network interface. */ -struct ifqueue { - struct { - struct mbuf *head; - struct mbuf *tail; - } ifq_q[IFQ_NQUEUES]; - int ifq_len; - int ifq_maxlen; - int ifq_drops; - struct hfsc_if *ifq_hfsc; + +struct ifqueue; + +struct ifq_ops { + void *(*ifqop_alloc)(void *); + void (*ifqop_free)(void *); + int (*ifqop_enq)(struct ifqueue *, struct mbuf *); + struct mbuf *(*ifqop_deq_begin)(struct ifqueue *, void **); + void (*ifqop_deq_commit)(struct ifqueue *, + struct mbuf *, void *); + void (*ifqop_deq_rollback)(struct ifqueue *, + struct mbuf *, void *); + void (*ifqop_purge)(struct ifqueue *, + struct mbuf_list *); +}; + +struct ifqueue { + struct mutex ifq_mtx; + uint64_t ifq_drops; + const struct ifq_ops *ifq_ops; + void *ifq_q; + unsigned int ifq_len; + unsigned int ifq_serializer; + + unsigned int ifq_maxlen; }; /* @@ -256,121 +272,55 @@ struct ifg_list { }; #ifdef _KERNEL -#define IFQ_MAXLEN 256 -#define IFNET_SLOWHZ 1 /* granularity is 1 second */ - /* - * Output queues (ifp->if_snd) and internetwork datagram level (pup level 1) - * input routines have queues of messages stored on ifqueue structures - * (defined above). 
Entries are added to and deleted from these structures - * by these macros, which should be called with ipl raised to splnet(). + * Interface send queues. */ -#define IF_QFULL(ifq) ((ifq)->ifq_len >= (ifq)->ifq_maxlen) -#define IF_DROP(ifq) ((ifq)->ifq_drops++) -#define IF_ENQUEUE(ifq, m) \ -do { \ - (m)->m_nextpkt = NULL; \ - if ((ifq)->ifq_q[(m)->m_pkthdr.pf.prio].tail == NULL) \ - (ifq)->ifq_q[(m)->m_pkthdr.pf.prio].head = m; \ - else \ - (ifq)->ifq_q[(m)->m_pkthdr.pf.prio].tail->m_nextpkt = m; \ - (ifq)->ifq_q[(m)->m_pkthdr.pf.prio].tail = m; \ - (ifq)->ifq_len++; \ -} while (/* CONSTCOND */0) -#define IF_PREPEND(ifq, m) \ -do { \ - (m)->m_nextpkt = (ifq)->ifq_q[(m)->m_pkthdr.pf.prio].head; \ - if ((ifq)->ifq_q[(m)->m_pkthdr.pf.prio].tail == NULL) \ - (ifq)->ifq_q[(m)->m_pkthdr.pf.prio].tail = (m); \ - (ifq)->ifq_q[(m)->m_pkthdr.pf.prio].head = (m); \ - (ifq)->ifq_len++; \ -} while (/* CONSTCOND */0) -#define IF_POLL(ifq, m) \ -do { \ - int if_dequeue_prio = IFQ_MAXPRIO; \ - do { \ - (m) = (ifq)->ifq_q[if_dequeue_prio].head; \ - } while (!(m) && --if_dequeue_prio >= 0); \ -} while (/* CONSTCOND */0) +void ifq_init(struct ifqueue *); +void ifq_attach(struct ifqueue *, const struct ifq_ops *, void *); +void ifq_destroy(struct ifqueue *); +int ifq_enq_try(struct ifqueue *, struct mbuf *); +int ifq_enq(struct ifqueue *, struct mbuf *); +struct mbuf *ifq_deq_begin(struct ifqueue *); +void ifq_deq_commit(struct ifqueue *, struct mbuf *); +void ifq_deq_rollback(struct ifqueue *, struct mbuf *); +struct mbuf *ifq_deq(struct ifqueue *); +unsigned int ifq_purge(struct ifqueue *); +void *ifq_q_enter(struct ifqueue *, const struct ifq_ops *); +void ifq_q_leave(struct ifqueue *, void *); + +#define ifq_len(_ifq) ((_ifq)->ifq_len) +#define ifq_empty(_ifq) (ifq_len(_ifq) == 0) +#define ifq_set_maxlen(_ifq, _l) ((_ifq)->ifq_maxlen = (_l)) -#define IF_DEQUEUE(ifq, m) \ -do { \ - int if_dequeue_prio = IFQ_MAXPRIO; \ - do { \ - (m) = (ifq)->ifq_q[if_dequeue_prio].head; \ - if (m) 
{ \ - if (((ifq)->ifq_q[if_dequeue_prio].head = \ - (m)->m_nextpkt) == NULL) \ - (ifq)->ifq_q[if_dequeue_prio].tail = NULL; \ - (m)->m_nextpkt = NULL; \ - (ifq)->ifq_len--; \ - } \ - } while (!(m) && --if_dequeue_prio >= 0); \ -} while (/* CONSTCOND */0) +extern const struct ifq_ops * const ifq_priq_ops; -#define IF_PURGE(ifq) \ -do { \ - struct mbuf *__m0; \ - \ - for (;;) { \ - IF_DEQUEUE((ifq), __m0); \ - if (__m0 == NULL) \ - break; \ - else \ - m_freem(__m0); \ - } \ -} while (/* CONSTCOND */0) -#define IF_LEN(ifq) ((ifq)->ifq_len) -#define IF_IS_EMPTY(ifq) ((ifq)->ifq_len == 0) +#define IFQ_MAXLEN 256 +#define IFNET_SLOWHZ 1 /* granularity is 1 second */ + +/* + * IFQ compat on ifq API + */ #define IFQ_ENQUEUE(ifq, m, err) \ do { \ - if (HFSC_ENABLED(ifq)) \ - (err) = hfsc_enqueue(((struct ifqueue *)(ifq)), m); \ - else { \ - if (IF_QFULL((ifq))) { \ - (err) = ENOBUFS; \ - } else { \ - IF_ENQUEUE((ifq), (m)); \ - (err) = 0; \ - } \ - } \ - if ((err)) { \ - m_freem((m)); \ - (ifq)->ifq_drops++; \ - } \ + (err) = ifq_enq((ifq), (m)); \ } while (/* CONSTCOND */0) #define IFQ_DEQUEUE(ifq, m) \ do { \ - if (HFSC_ENABLED((ifq))) \ - (m) = hfsc_dequeue(((struct ifqueue *)(ifq)), 1); \ - else \ - IF_DEQUEUE((ifq), (m)); \ -} while (/* CONSTCOND */0) - -#define IFQ_POLL(ifq, m) \ -do { \ - if (HFSC_ENABLED((ifq))) \ - (m) = hfsc_dequeue(((struct ifqueue *)(ifq)), 0); \ - else \ - IF_POLL((ifq), (m)); \ + (m) = ifq_deq(ifq); \ } while (/* CONSTCOND */0) #define IFQ_PURGE(ifq) \ do { \ - if (HFSC_ENABLED((ifq))) \ - hfsc_purge(((struct ifqueue *)(ifq))); \ - else \ - IF_PURGE((ifq)); \ + (void)ifq_purge(ifq); \ } while (/* CONSTCOND */0) -#define IFQ_SET_READY(ifq) /* nothing */ - -#define IFQ_LEN(ifq) IF_LEN(ifq) -#define IFQ_IS_EMPTY(ifq) ((ifq)->ifq_len == 0) -#define IFQ_SET_MAXLEN(ifq, len) ((ifq)->ifq_maxlen = (len)) +#define IFQ_LEN(ifq) ifq_len(ifq) +#define IFQ_IS_EMPTY(ifq) ifq_empty(ifq) +#define IFQ_SET_MAXLEN(ifq, len) ifq_set_maxlen(ifq, len) +#define 
IFQ_SET_READY(ifq) do { } while (0) /* default interface priorities */ #define IF_WIRED_DEFAULT_PRIORITY 0 @@ -405,6 +355,7 @@ extern struct ifnet_head ifnet; extern unsigned int lo0ifidx; void if_start(struct ifnet *); +int if_enqueue_try(struct ifnet *, struct mbuf *); int if_enqueue(struct ifnet *, struct mbuf *); void if_input(struct ifnet *, struct mbuf_list *); int if_input_local(struct ifnet *, struct mbuf *, sa_family_t); Index: sys/net/pf_ioctl.c =================================================================== RCS file: /cvs/src/sys/net/pf_ioctl.c,v retrieving revision 1.291 diff -u -p -r1.291 pf_ioctl.c --- sys/net/pf_ioctl.c 13 Oct 2015 19:32:31 -0000 1.291 +++ sys/net/pf_ioctl.c 12 Nov 2015 05:50:16 -0000 @@ -85,8 +85,10 @@ int pfclose(dev_t, int, int, struct p int pfioctl(dev_t, u_long, caddr_t, int, struct proc *); int pf_begin_rules(u_int32_t *, const char *); int pf_rollback_rules(u_int32_t, char *); -int pf_create_queues(void); +int pf_enable_queues(void); +void pf_remove_queues(void); int pf_commit_queues(void); +void pf_free_queues(struct pf_queuehead *); int pf_setup_pfsync_matching(struct pf_ruleset *); void pf_hash_rule(MD5_CTX *, struct pf_rule *); void pf_hash_rule_addr(MD5_CTX *, struct pf_rule_addr *); @@ -517,68 +519,144 @@ pf_rollback_rules(u_int32_t ticket, char /* queue defs only in the main ruleset */ if (anchor[0]) return (0); - return (pf_free_queues(pf_queues_inactive, NULL)); + + pf_free_queues(pf_queues_inactive); + + return (0); } -int -pf_free_queues(struct pf_queuehead *where, struct ifnet *ifp) +void +pf_free_queues(struct pf_queuehead *where) { struct pf_queuespec *q, *qtmp; TAILQ_FOREACH_SAFE(q, where, entries, qtmp) { - if (ifp && q->kif->pfik_ifp != ifp) - continue; TAILQ_REMOVE(where, q, entries); pfi_kif_unref(q->kif, PFI_KIF_REF_RULE); pool_put(&pf_queue_pl, q); } - return (0); } -int -pf_remove_queues(struct ifnet *ifp) +void +pf_remove_queues(void) { struct pf_queuespec *q; - int error = 0; - - /* remove queues */ 
- TAILQ_FOREACH_REVERSE(q, pf_queues_active, pf_queuehead, entries) { - if (ifp && q->kif->pfik_ifp != ifp) - continue; - if ((error = hfsc_delqueue(q)) != 0) - return (error); - } + struct ifnet *ifp; /* put back interfaces in normal queueing mode */ TAILQ_FOREACH(q, pf_queues_active, entries) { - if (ifp && q->kif->pfik_ifp != ifp) + if (q->parent_qid != 0) + continue; + + ifp = q->kif->pfik_ifp; + if (ifp == NULL) continue; - if (q->parent_qid == 0) - if ((error = hfsc_detach(q->kif->pfik_ifp)) != 0) - return (error); + + KASSERT(HFSC_ENABLED(&ifp->if_snd)); + + ifq_attach(&ifp->if_snd, ifq_priq_ops, NULL); } +} - return (0); +struct pf_hfsc_queue { + struct ifnet *ifp; + struct hfsc_if *hif; + struct pf_hfsc_queue *next; +}; + +static inline struct pf_hfsc_queue * +pf_hfsc_ifp2q(struct pf_hfsc_queue *list, struct ifnet *ifp) +{ + struct pf_hfsc_queue *phq = list; + + while (phq != NULL) { + if (phq->ifp == ifp) + return (phq); + + phq = phq->next; + } + + return (phq); } int pf_create_queues(void) { struct pf_queuespec *q; - int error = 0; + struct ifnet *ifp; + struct pf_hfsc_queue *list = NULL, *phq; + int error; + + /* find root queues and alloc hfsc for these interfaces */ + TAILQ_FOREACH(q, pf_queues_active, entries) { + if (q->parent_qid != 0) + continue; - /* find root queues and attach hfsc to these interfaces */ - TAILQ_FOREACH(q, pf_queues_active, entries) - if (q->parent_qid == 0) - if ((error = hfsc_attach(q->kif->pfik_ifp)) != 0) - return (error); + ifp = q->kif->pfik_ifp; + if (ifp == NULL) + continue; + + phq = malloc(sizeof(*phq), M_TEMP, M_WAITOK); + phq->ifp = ifp; + phq->hif = hfsc_pf_alloc(ifp); + + phq->next = list; + list = phq; + } /* and now everything */ - TAILQ_FOREACH(q, pf_queues_active, entries) - if ((error = hfsc_addqueue(q)) != 0) - return (error); + TAILQ_FOREACH(q, pf_queues_active, entries) { + ifp = q->kif->pfik_ifp; + if (ifp == NULL) + continue; + + phq = pf_hfsc_ifp2q(list, ifp); + KASSERT(phq != NULL); + + error = 
hfsc_pf_addqueue(phq->hif, q); + if (error != 0) + goto error; + } + + /* find root queues in old list to disable them if necessary */ + TAILQ_FOREACH(q, pf_queues_inactive, entries) { + if (q->parent_qid != 0) + continue; + + ifp = q->kif->pfik_ifp; + if (ifp == NULL) + continue; + + phq = pf_hfsc_ifp2q(list, ifp); + if (phq != NULL) + continue; + + ifq_attach(&ifp->if_snd, ifq_priq_ops, NULL); + } + + /* commit the new queues */ + while (list != NULL) { + phq = list; + list = phq->next; + + ifp = phq->ifp; + + ifq_attach(&ifp->if_snd, ifq_hfsc_ops, phq->hif); + free(phq, M_TEMP, sizeof(*phq)); + } return (0); + +error: + while (list != NULL) { + phq = list; + list = phq->next; + + hfsc_pf_free(phq->hif); + free(phq, M_TEMP, sizeof(*phq)); + } + + return (error); } int @@ -587,16 +665,21 @@ pf_commit_queues(void) struct pf_queuehead *qswap; int error; - if ((error = pf_remove_queues(NULL)) != 0) + /* swap */ + qswap = pf_queues_active; + pf_queues_active = pf_queues_inactive; + pf_queues_inactive = qswap; + + error = pf_create_queues(); + if (error != 0) { + pf_queues_inactive = pf_queues_active; + pf_queues_active = qswap; return (error); + } - /* swap */ - qswap = pf_queues_active; - pf_queues_active = pf_queues_inactive; - pf_queues_inactive = qswap; - pf_free_queues(pf_queues_inactive, NULL); + pf_free_queues(pf_queues_inactive); - return (pf_create_queues()); + return (0); } #define PF_MD5_UPD(st, elm) \ @@ -935,7 +1018,7 @@ pfioctl(dev_t dev, u_long cmd, caddr_t a else { pf_status.running = 0; pf_status.since = time_second; - pf_remove_queues(NULL); + pf_remove_queues(); DPFPRINTF(LOG_NOTICE, "pf: stopped"); } break; @@ -1001,7 +1084,7 @@ pfioctl(dev_t dev, u_long cmd, caddr_t a break; } bcopy(qs, &pq->queue, sizeof(pq->queue)); - error = hfsc_qstats(qs, pq->buf, &nbytes); + error = hfsc_pf_qstats(qs, pq->buf, &nbytes); if (error == 0) pq->nbytes = nbytes; break; Index: sys/net/pfvar.h =================================================================== RCS 
file: /cvs/src/sys/net/pfvar.h,v retrieving revision 1.422 diff -u -p -r1.422 pfvar.h --- sys/net/pfvar.h 30 Oct 2015 11:33:55 -0000 1.422 +++ sys/net/pfvar.h 12 Nov 2015 05:50:16 -0000 @@ -1657,9 +1657,6 @@ extern struct pf_queuehead pf_queues[ extern struct pf_queuehead *pf_queues_active, *pf_queues_inactive; extern u_int32_t ticket_pabuf; -extern int pf_free_queues(struct pf_queuehead *, - struct ifnet *); -extern int pf_remove_queues(struct ifnet *); extern int pf_tbladdr_setup(struct pf_ruleset *, struct pf_addr_wrap *); extern void pf_tbladdr_remove(struct pf_addr_wrap *); Index: sys/sys/mbuf.h =================================================================== RCS file: /cvs/src/sys/sys/mbuf.h,v retrieving revision 1.200 diff -u -p -r1.200 mbuf.h --- sys/sys/mbuf.h 2 Nov 2015 09:21:48 -0000 1.200 +++ sys/sys/mbuf.h 12 Nov 2015 05:50:16 -0000 @@ -392,6 +392,21 @@ struct mbstat { u_short m_mtypes[256]; /* type specific mbuf allocations */ }; +#include <sys/mutex.h> + +struct mbuf_list { + struct mbuf *ml_head; + struct mbuf *ml_tail; + u_int ml_len; +}; + +struct mbuf_queue { + struct mutex mq_mtx; + struct mbuf_list mq_list; + u_int mq_maxlen; + u_int mq_drops; +}; + #ifdef _KERNEL extern struct mbstat mbstat; @@ -474,14 +489,6 @@ struct m_tag *m_tag_next(struct mbuf *, * mbuf lists */ -#include <sys/mutex.h> - -struct mbuf_list { - struct mbuf *ml_head; - struct mbuf *ml_tail; - u_int ml_len; -}; - #define MBUF_LIST_INITIALIZER() { NULL, NULL, 0 } void ml_init(struct mbuf_list *); @@ -503,13 +510,6 @@ unsigned int ml_purge(struct mbuf_list /* * mbuf queues */ - -struct mbuf_queue { - struct mutex mq_mtx; - struct mbuf_list mq_list; - u_int mq_maxlen; - u_int mq_drops; -}; #define MBUF_QUEUE_INITIALIZER(_maxlen, _ipl) \ { MUTEX_INITIALIZER(_ipl), MBUF_LIST_INITIALIZER(), (_maxlen), 0 }