So the UDP transmit path is udp_usrreqs->pru_send() == udp_send() ->
udp_output() -> ip_output()
udp_output() does do a M_PREPEND() which can return ENOBUFS. ip_output
can also return ENOBUFS.
it doesn't look like the socket code (eg sosend_dgram()) is doing any
buffering - it's just copying the frame and stuffing it up to the
driver. No queuing involved before the NIC.
> On Jul 18, 2014, at 23:34, Adrian Chadd wrote:
>
> It upsets the ALTQ people too.
I'm an ALTQ person (pfSense, so maybe one of the biggest) and I'm not upset.
That cr*p needs to die in a fire.
freebsd-net@freebsd.org mailing list
Hi!
So the UDP transmit path is udp_usrreqs->pru_send() == udp_send() ->
udp_output() -> ip_output()
udp_output() does do a M_PREPEND() which can return ENOBUFS. ip_output
can also return ENOBUFS.
it doesn't look like the socket code (eg sosend_dgram()) is doing any
buffering - it's just copying the frame and stuffing it up to the
driver. No queuing involved before the NIC.
Return values in the sendto() manpage say:
[ENOBUFS]  The system was unable to allocate an internal buffer.
           The operation may succeed when buffers become available.
[ENOBUFS]  The output queue for a network interface was full.
           This generally indicates that the interface has stopped
           sending, but may be caused by transient congestion.
> If I were to tweak the sysctl net.inet.ip.intr_queue_maxlen from its
> default of 50 up, would that possibly help named?
No, it will not have any effect on your problem. The IP input queue
is only on receive, and your problem is on transmit.
The only thing that could possibly help you
Hello, "%s"!
I've read in this list from a couple years ago, several discussions about
ENOBUFS being returned to UDP-using
Hi,
On Wed, 10 Dec 2003, Giovanni P. Tirloni wrote:
> common:
> set bundle disable multilink
> set bundle enable compression
> set bundle yes encryption
^^^ please remove this line
You don't need ECP for MPPE (Microsoft Point to Point Encryption)
Maybe this
ping to the
first IP returns ENOBUFS (probably because the link is being dropped).
Anything related to the PPTP output window?
Here is the log entries after both links are established (they show as
connected in the win2k and winxp boxes and pptp0 was answering the LCP
echos):
Dec 10 11:
In bge_rxeof(), there can end up being a condition which causes
the driver to endlessly interrupt.
if (bge_newbuf_std(sc, sc->bge_std, NULL) == ENOBUFS) {
        ifp->if_ierrors++;
        bge_newbuf_std(sc, sc->bge_std, m);
        continue;
}
happens. Now, bge_newbuf_std returns ENOBUFS. 'm
On Fri, 18 Oct 2002, Kelly Yancey wrote:
> Hmm. Might that explain the abysmal performance of the em driver with
> packets smaller than 333 bytes?
>
> Kelly
>
This is just a follow-up to report that thanks to Luigi and Prafulla we
were able to track down the cause of the problems I was see
> In special cases, the error induced by having interrupts blocked
> causes errors which are much larger than polling alone.
Which conditions block interrupts for longer than, say, a millisecond?
Disk errors / wakeups? Anything occurring in "normal" conditions?
Pete
On Fri, 18 Oct 2002, Kelly Yancey wrote:
> > You should definitely clarify how fast the smartbits unit is pushing
> > out traffic, and whether its speed depends on the measured RTT.
> >
>
> It doesn't sound like the box is that smart. As it was explained to me, the
> test setup includes a desir
On Fri, 18 Oct 2002, Luigi Rizzo wrote:
> Oh, I *thought* the numbers you reported were pps but now i see that
> nowhere you mentioned that.
>
Sorry. I just checked with our tester. Those are the total number of
packets sent during the test. Each test lasted 10 seconds, so divide by 10 to
get
On Fri, 18 Oct 2002, Prafulla Deuskar wrote:
> FYI. 82543 doesn't support PCI-X protocol.
> For PCI-X support use 82544, 82545 or 82546 based cards.
>
> -Prafulla
>
That is alright, we aren't expecting PCI-X speeds. It is just that our only
PCI slot on the motherboard (1U rack-mount system) is
Oh, I *thought* the numbers you reported were pps but now i see that
nowhere you mentioned that.
But if things are as you say, i am seriously puzzled on what you
are trying to measure -- the output interface (fxp) is a 100Mbit/s
card which cannot possibly support the load you are trying to offer
t
On Fri, 18 Oct 2002, Luigi Rizzo wrote:
> How is the measurement done, does the box under test act as a router
> with the smartbit pushing traffic in and expecting it back ?
>
The box has 2 interfaces, a fxp and a em (or bge). The GigE interface is
configured with 7 VLANs. The SmartBit produce
FYI. 82543 doesn't support PCI-X protocol.
For PCI-X support use 82544, 82545 or 82546 based cards.
-Prafulla
Kelly Yancey [[EMAIL PROTECTED]] wrote:
> On Fri, 18 Oct 2002, Luigi Rizzo wrote:
>
> > On Fri, Oct 18, 2002 at 10:27:04AM -0700, Kelly Yancey wrote:
> > ...
> > > Hmm. Might that ex
How is the measurement done, does the box under test act as a router
with the smartbit pushing traffic in and expecting it back ?
The numbers are strange, anyways.
A frame of N bytes takes (N*8+160) bit times on the wire (160 bit times
for preamble plus inter-frame gap), i.e. (N*8+160)*10 ns at 100Mbit/s,
which for 330-byte frames should amount to 100e6/(330*8+160) ~= 35.7kpps
On Fri, 18 Oct 2002, Luigi Rizzo wrote:
> On Fri, Oct 18, 2002 at 10:27:04AM -0700, Kelly Yancey wrote:
> ...
> > Hmm. Might that explain the abysmal performance of the em driver with
> > packets smaller than 333 bytes?
>
> what do you mean ? it works great for me. even on -current i
> can push
On Fri, Oct 18, 2002 at 06:21:37PM +0300, Petri Helenius wrote:
...
> Luigi's polling work would be useful here. That would lead to incorrect
> timestamps
> on the packets, though?
polling introduces an extra uncertainty which might be as large as
an entire clock tick, yes.
But even with interrupt
On Fri, Oct 18, 2002 at 10:27:04AM -0700, Kelly Yancey wrote:
...
> Hmm. Might that explain the abysmal performance of the em driver with
> packets smaller than 333 bytes?
what do you mean ? it works great for me. even on -current i
can push out over 400kpps (64byte frames) on a 2.4GHz box.
On Fri, 18 Oct 2002, Petri Helenius wrote:
> >
> > just reading the source code, yes, it appears that the card has
> > support for delayed rx/tx interrupts -- see RIDV and TIDV definitions
> > and usage in sys/dev/em/* . I don't know in what units are the values
> (28 and 128, respectively), but it does appear that tx interrupts are
> delayed a bit
Transmit/Receive Interrupt Delay values are in units of 1.024 microseconds.
The em driver currently uses these to enable interrupt coalescing on the cards.
Thanks,
Prafulla
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-net" in the body of the message
----- Original Message -----
> > From: "Jim McGrath" <[EMAIL PROTECTED]>
> > To: "Luigi Rizzo" <[EMAIL PROTECTED]>; "Petri Helenius" <[EMAIL PROTECTED]>
> > Cc: "Lars Eggert" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
>
> The chips I have are 82546. Is your recommendation to steer away
> from Intel
> Gigabit Ethernet chips? What would be more optimal alternative?
>
The 82543/82544 chips worked well in vanilla configurations. I never played
with an 82546. The em driver is supported by Intel, so any chip features
M
> Subject: Re: ENOBUFS
>
> On Fri, Oct 18, 2002 at 12:49:04AM -0400, Jim McGrath wrote:
> > Careful here. Read the errata sheet!! I do not believe the em driver uses
> > these parameters, and possibly for a good reason.
>
> as if i had access to t
On Thu, Oct 17, 2002 at 11:55:24PM +0300, Petri Helenius wrote:
...
> I seem to get about 5-6 packets on an interrupt. Is this tunable? At
just reading the source code, yes, it appears that the card has
support for delayed rx/tx interrupts -- see RIDV and TIDV definitions
and usage in sys/dev/em/*
>
> Less :-) Let me tell you tomorrow, don't have the numbers here right now.
I seem to get about 5-6 packets on an interrupt. Is this tunable? At
50kpps the card generates 10k interrupts a second. Sending generates
way less. This is about 300Mbps so with the average packet size of
750 there shoul
Sam Leffler wrote:
> Try my port of the netbsd kttcp kernel module. You can find it at
>
> http://www.freebsd.org/~sam
this seems to use some things from netbsd like
so_rcv.sb_lastrecord and SBLASTRECORDCHK/SBLASTMBUFCHK.
Is there something else I need to apply to build it on
FreeBSD-STABLE?
> > The 900Mbps are similar to what I see here on similar hardware.
>
> What kind of receive performance do you observe? I haven't got that
> far yet.
> >
> > For your two-interface setup, are the 600Mbps aggregate send rate on
> > both interfaces, or do you see 600Mbps per interface? In the latte
On Wed, Oct 16, 2002 at 08:57:19AM +0300, Petri Helenius wrote:
> >
> > how large are the packets and how fast is the box ?
>
> Packets go out at an average size of 1024 bytes. The box is dual
> P4 Xeon 2400/400 so I think it should qualify as "fast" ? I disabled
yes, it qualifies as fast. With
Petri Helenius wrote:
>>The 900Mbps are similar to what I see here on similar hardware.
>
> What kind of receive performance do you observe? I haven't got that
> far yet.
Less :-) Let me tell you tomorrow, don't have the numbers here right now.
> 600Mbps per interface. I'm going to try this out
> The 900Mbps are similar to what I see here on similar hardware.
What kind of receive performance do you observe? I haven't got that
far yet.
>
> For your two-interface setup, are the 600Mbps aggregate send rate on
> both interfaces, or do you see 600Mbps per interface? In the latter
600Mbps pe
>
> how large are the packets and how fast is the box ?
Packets go out at an average size of 1024 bytes. The box is dual
P4 Xeon 2400/400 so I think it should qualify as "fast" ? I disabled
hyperthreading to figure out if it was causing problems. I seem to
be able to send packets at a rate in the
Petri Helenius wrote:
>>Probably means that your outgoing interface queue is filling up.
>>ENOBUFS is the only way the kernel has to tell you ``slow down!''.
>>
>
> How much should I be able to send to two em interfaces on one
> 66/64 PCI ?
I've seen
> Probably means that your outgoing interface queue is filling up.
> ENOBUFS is the only way the kernel has to tell you ``slow down!''.
>
How much should I be able to send to two em interfaces on one
66/64 PCI ?
Pete
>
> What rate are you sending these packets at? A standard interface queue
> length is 50 packets, you get ENOBUFS when it's full.
>
This might explain the phenomenon. (packets are going out bursty, with average
hovering at ~500Mbps:ish) I recompiled the kernel with IFQ_MAXLEN of 500
> My processes writing to SOCK_DGRAM sockets are getting ENOBUFS
Probably means that your outgoing interface queue is filling up.
ENOBUFS is the only way the kernel has to tell you ``slow down!''.
-GAWollman
On Wed, 16 Oct 2002, Petri Helenius wrote:
>
> My processes writing to SOCK_DGRAM sockets are getting ENOBUFS
> while netstat -s counter under the heading of "ip" is incrementing:
> 7565828 output packets dropped due to no bufs, etc.
> but netstat -m sho
Petri Helenius wrote:
> My processes writing to SOCK_DGRAM sockets are getting ENOBUFS
> while netstat -s counter under the heading of "ip" is incrementing:
> 7565828 output packets dropped due to no bufs, etc.
What rate are you sending these packets at? A stand
My processes writing to SOCK_DGRAM sockets are getting ENOBUFS
while netstat -s counter under the heading of "ip" is incrementing:
7565828 output packets dropped due to no bufs, etc.
but netstat -m shows:
> netstat -m
579/1440/131072 mbufs in use (current/peak/max):
On Wed, 27 Mar 2002, Andrew Gallatin wrote:
>
> Archie Cobbs writes:
> > Luigi Rizzo writes:
> > > > Is if_tx_rdy() something that can be used generally or does it only
> > > > work with dummynet ?
> > >
> > > well, the function is dummynet-specific, but I would certainly like
> > > a g
On Wed, 27 Mar 2002, Archie Cobbs wrote:
> Luigi Rizzo writes:
> > > Is if_tx_rdy() something that can be used generally or does it only
> > > work with dummynet ?
> >
> > well, the function is dummynet-specific, but I would certainly like
> > a generic callback list to be implemented in ifnet
On Wed, 27 Mar 2002, Luigi Rizzo wrote:
> On Wed, Mar 27, 2002 at 09:53:00AM -0800, Archie Cobbs wrote:
> ...
> > > managed and can be extended to point to additional ifnet state that
> > > does not fit in the immutable one...
> >
> > Why is it important to avoid changing 'struct ifnet' ?
>
>
On Wed, Mar 27, 2002 at 09:53:00AM -0800, Archie Cobbs wrote:
...
> > managed and can be extended to point to additional ifnet state that
> > does not fit in the immutable one...
>
> Why is it important to avoid changing 'struct ifnet' ?
backward compatibility with binary-only drivers ...
Not th
Luigi Rizzo writes:
> > Is if_tx_rdy() something that can be used generally or does it only
> > work with dummynet ?
>
> well, the function is dummynet-specific, but I would certainly like
> a generic callback list to be implemented in ifnet which is
> invoked on tx_empty events.
Me too :-)
> T
clock
> > packets out of the dummynet pipes. A patch for if_tun.c is below,
>
> So if_tx_rdy() sounds like my if_get_next().. guess you already did that :-)
yes, but it does not solve the problem of the original poster who
wanted to block/wakeup processes getting enobufs. Signal just
Luigi Rizzo writes:
> > Along those lines, this might be a handy thing to add...
> >
> > int if_get_next(struct ifnet *ifp); /* runs at splimp() */
> >
> > This function tries to "get" the next packet scheduled to go
> > out interface 'ifp' and, if successful, puts it on &ifp->if
Luigi Rizzo writes:
> As a matter of fact, i even implemented a similar thing in dummynet,
> and if device drivers call if_tx_rdy() when they complete a
> transmission, then the tx interrupt can be used to clock
> packets out of the dummynet pipes. A patch for if_tun.c is below,
So if_tx_rdy() so
the ENOBUFS is very typical with UDP applications that try to
send as fast as possible (e.g. the various network test utilities
in ports), and as i said in a previous message, putting up a mechanism to
pass around queue full/queue not full events is expensive because
it might trigger on every
On Tue, Mar 26, 2002 at 10:10:05PM -0800, Archie Cobbs wrote:
> Luigi Rizzo writes:
...
> Along those lines, this might be a handy thing to add...
>
> int if_get_next(struct ifnet *ifp); /* runs at splimp() */
>
> This function tries to "get" the next packet scheduled to go
> o
Luigi Rizzo writes:
> > >if you could suggest a few modifications that would be required, i'd like
> > >to pursue this further.
> >
> > Look at tsleep/wakeup on ifnet of if_snd.
>
> I am under the impression that implementing this mechanism would
> not be so trivial. It is not immediate to tell
Matthew Luckie wrote:
> hmm, we looked at how other protocols handled the ENOBUFS case from
> ip_output.
>
> tcp_output calls tcp_quench on this error.
>
> while the interface may not be able to send any more packets than it
> does currently, closing the congestion win
> I am under the impression that implementing this mechanism would
> not be so trivial.
hmm, we looked at how other protocols handled the ENOBUFS case from
ip_output.
tcp_output calls tcp_quench on this error.
while the interface may not be able to send any more packets than it does
cur
On Mon, Mar 25, 2002 at 02:06:19PM -0800, Lars Eggert wrote:
> Matthew Luckie wrote:
> >>>Is there a mechanism to tell when ip_output should be called again?
...
> >if you could suggest a few modifications that would be required, i'd like
> >to pursue this further.
>
> Look at tsleep/wakeup on if
On Mon, 25 Mar 2002, Matthew Luckie wrote:
> Hi
>
>
> Is there a mechanism to tell when ip_output should be called again?
> Ideally, I would block until such time as i could send it via ip_output
no, there is no such mechanism that I know of..
>
> (please CC: me on any responses)
>
> Mat
Lars Eggert wrote:
> Matthew Luckie wrote:
>
>>>> Is there a mechanism to tell when ip_output should be called again?
>>>> Ideally, I would block until such time as i could send it via ip_output
>>>
>>>
>>> You probably get that because the outbound interface queue gets full,
>>> so you want to
Matthew Luckie wrote:
>>>Is there a mechanism to tell when ip_output should be called again?
>>>Ideally, I would block until such time as i could send it via ip_output
>>
>>You probably get that because the outbound interface queue gets full, so
>>you want to block your caller until space becomes
> > Is there a mechanism to tell when ip_output should be called again?
> > Ideally, I would block until such time as i could send it via ip_output
>
> You probably get that because the outbound interface queue gets full, so
> you want to block your caller until space becomes available there. Th
Matthew Luckie wrote:
> I have written a syscall that creates a packet in kernel-space,
> timestamps it, and then sends it via ip_output
>
> If the user-space application uses this system call faster than the
> packets can be sent, ip_output will return ENOBUFS.
>
> Is ther
Hi
I have written a syscall that creates a packet in kernel-space,
timestamps it, and then sends it via ip_output
If the user-space application uses this system call faster than the
packets can be sent, ip_output will return ENOBUFS.
Is there a mechanism to tell when ip_output should be called
On Tue, 25 Sep 2001, Jeff Behl wrote:
> Any other guidelines to help tune a FreeBSD box for this sort of use
> would be greatly appreciated. Currently, the only change we make is
> increasing MAXUSERS to 128, though I'm not sure this is the preferred
> approach.
That's the simplest approach, a
I have 4.3, and soon to be 4.4, boxes dedicated to a single app which
basically 'bounces' traffic between two incoming TCP connections. After
around 240 sessions (each session consisting of two incoming connections
with traffic being passed between them), I started getting ENOBU