Re: UDP sendto() returning ENOBUFS - No buffer space available

2014-07-19 Thread Bruce Evans
() -> udp_output() -> ip_output(). udp_output() does do a M_PREPEND() which can return ENOBUFS. ip_output can also return ENOBUFS. it doesn't look like the socket code (eg sosend_dgram()) is doing any buffering - it's just copying the frame and stuffing it up to the driver. No queuing involved before
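
A minimal sketch of the failure pattern the thread describes, in FreeBSD-10-era udp_output() style (abridged, not the verbatim source): M_PREPEND() leaves the mbuf pointer NULL when it cannot allocate room for the header, and the caller maps that to ENOBUFS.

    /* udp_output()-style header prepend; m is the outgoing mbuf chain */
    M_PREPEND(m, sizeof(struct udpiphdr), M_NOWAIT);
    if (m == NULL) {
            error = ENOBUFS;        /* no mbuf for the UDP/IP header */
            goto release;
    }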

Re: UDP sendto() returning ENOBUFS - No buffer space available

2014-07-18 Thread hiren panchasara
On Wed, Jul 16, 2014 at 11:00 AM, Adrian Chadd adr...@freebsd.org wrote: Hi! So the UDP transmit path is udp_usrreqs->pru_send() == udp_send() -> udp_output() -> ip_output(). udp_output() does do a M_PREPEND() which can return ENOBUFS. ip_output can also return ENOBUFS. it doesn't look like

Re: UDP sendto() returning ENOBUFS - No buffer space available

2014-07-18 Thread Bruce Evans
On Fri, 18 Jul 2014, hiren panchasara wrote: On Wed, Jul 16, 2014 at 11:00 AM, Adrian Chadd adr...@freebsd.org wrote: Hi! So the UDP transmit path is udp_usrreqs->pru_send() == udp_send() -> udp_output() -> ip_output(). udp_output() does do a M_PREPEND() which can return ENOBUFS. ip_output can

Re: UDP sendto() returning ENOBUFS - No buffer space available

2014-07-18 Thread Adrian Chadd
() udp_output() does do a M_PREPEND() which can return ENOBUFS. ip_output can also return ENOBUFS. it doesn't look like the socket code (eg sosend_dgram()) is doing any buffering - it's just copying the frame and stuffing it up to the driver. No queuing involved before the NIC. Right. Thanks

Re: UDP sendto() returning ENOBUFS - No buffer space available

2014-07-18 Thread Jim Thompson
On Jul 18, 2014, at 23:34, Adrian Chadd adr...@freebsd.org wrote: It upsets the ALTQ people too. I'm an ALTQ person (pfSense, so maybe one of the biggest) and I'm not upset. That cr*p needs to die in a fire.

UDP sendto() returning ENOBUFS - No buffer space available

2014-07-16 Thread hiren panchasara
Return values in sendto() manpage says: [ENOBUFS] The system was unable to allocate an internal buffer. The operation may succeed when buffers become available. [ENOBUFS] The output queue for a network interface
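
Since both manpage cases are transient, an application can treat ENOBUFS as a back-off signal. A minimal sketch (the helper name and the 1 ms pacing interval are illustrative, not from the thread):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <errno.h>
    #include <unistd.h>

    /* Retry a datagram send while the kernel reports ENOBUFS. */
    ssize_t
    send_with_retry(int s, const void *buf, size_t len,
        const struct sockaddr *to, socklen_t tolen)
    {
            ssize_t n;

            while ((n = sendto(s, buf, len, 0, to, tolen)) == -1 &&
                errno == ENOBUFS)
                    usleep(1000);   /* crude pacing; a real app would tune this */
            return (n);
    }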

Re: UDP sendto() returning ENOBUFS - No buffer space available

2014-07-16 Thread Adrian Chadd
Hi! So the UDP transmit path is udp_usrreqs->pru_send() == udp_send() -> udp_output() -> ip_output(). udp_output() does do a M_PREPEND() which can return ENOBUFS. ip_output can also return ENOBUFS. it doesn't look like the socket code (eg sosend_dgram()) is doing any buffering - it's just copying

ENOBUFS and DNS...

2003-12-15 Thread Barry Bouwsma
[Drop hostname part of IPv6-only address above to obtain IPv4-capable e-mail, or just drop me from the recipients and I'll catch up from the archives] Hello, %s! I've read in this list from a couple years ago, several discussions about ENOBUFS being returned to UDP-using applications

ENOBUFS and DNS...

2003-12-15 Thread Garrett Wollman
On Mon, 15 Dec 2003 22:17:53 +0100 (CET), Barry Bouwsma [EMAIL PROTECTED] said: If I were to tweak the sysctl net.inet.ip.intr_queue_maxlen from its default of 50 up, would that possibly help named? No, it will not have any effect on your problem. The IP input queue is only on receive, and

mpd: two links make one disconnect (ENOBUFS, LCP no reply)

2003-12-10 Thread Giovanni P. Tirloni
to the first IP returns ENOBUFS (probably because the link is being dropped). Anything related to the PPTP output window? Here is the log entries after both links are established (they show as connected in the win2k and winxp boxes and pptp0 was answering the LCP echos): Dec 10 11:02:22

Re: mpd: two links make one disconnect (ENOBUFS, LCP no reply)

2003-12-10 Thread Michael Bretterklieber
Hi, On Wed, 10 Dec 2003, Giovanni P. Tirloni wrote: common: set bundle disable multilink set bundle enable compression set bundle yes encryption ^^^ please remove this line You don't need ECP for MPPE (Microsoft Point to Point Encryption) Maybe this

RE: bug in bge driver with ENOBUFS on 4.7

2002-11-12 Thread Don Bowman
From: Don Bowman [mailto:don@sandvine.com] In bge_rxeof(), there can end up being a condition which causes the driver to endlessly interrupt. if (bge_newbuf_std(sc, sc->bge_std, NULL) == ENOBUFS) { ifp->if_ierrors++; bge_newbuf_std(sc, sc->bge_std, m); continue; } happens

bug in bge driver with ENOBUFS on 4.7

2002-11-09 Thread Don Bowman
In bge_rxeof(), there can end up being a condition which causes the driver to endlessly interrupt. if (bge_newbuf_std(sc, sc->bge_std, NULL) == ENOBUFS) { ifp->if_ierrors++; bge_newbuf_std(sc, sc->bge_std, m); continue; } happens. Now, bge_newbuf_std returns ENOBUFS. 'm' is also NULL
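
The conventional fix for this class of rx-refill bug is to recycle the old mbuf when a replacement cannot be allocated, so the ring slot is never left empty. A sketch of the intended pattern (abridged; the report's point is that 'm' can be NULL here, which defeats the recycle):

    /* rx refill: on ENOBUFS, drop the frame but put the old mbuf back
     * so the descriptor stays populated; requires m != NULL */
    if (bge_newbuf_std(sc, sc->bge_std, NULL) == ENOBUFS) {
            ifp->if_ierrors++;
            bge_newbuf_std(sc, sc->bge_std, m);
            continue;
    }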

Performance of em driver (Was: ENOBUFS)

2002-10-30 Thread Kelly Yancey
On Fri, 18 Oct 2002, Kelly Yancey wrote: Hmm. Might that explain the abysmal performance of the em driver with packets smaller than 333 bytes? Kelly This is just a follow-up to report that thanks to Luigi and Prafulla we were able to track down the cause of the problems I was seeing

Re: ENOBUFS

2002-10-18 Thread Petri Helenius
- From: Jim McGrath [EMAIL PROTECTED] To: Luigi Rizzo [EMAIL PROTECTED]; Petri Helenius [EMAIL PROTECTED] Cc: Lars Eggert [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Friday, October 18, 2002 7:49 AM Subject: RE: ENOBUFS Careful here. Read the errata sheet!! I do not believe the em driver uses

Re: ENOBUFS

2002-10-18 Thread Petri Helenius
just reading the source code, yes, it appears that the card has support for delayed rx/tx interrupts -- see RIDV and TIDV definitions and usage in sys/dev/em/* . I don't know what units the values (28 and 128, respectively) are in, but it does appear that tx interrupts are delayed a bit more

RE: ENOBUFS

2002-10-18 Thread Jim McGrath
, to process receive descriptors under low load. Jim -Original Message- From: [EMAIL PROTECTED] [mailto:owner-freebsd-net@FreeBSD.ORG] On Behalf Of Luigi Rizzo Sent: Friday, October 18, 2002 12:56 AM To: Jim McGrath Cc: Petri Helenius; Lars Eggert; [EMAIL PROTECTED] Subject: Re: ENOBUFS

Re: ENOBUFS

2002-10-18 Thread Petri Helenius
: ENOBUFS On Fri, Oct 18, 2002 at 12:49:04AM -0400, Jim McGrath wrote: Careful here. Read the errata sheet!! I do not believe the em driver uses these parameters, and possibly for a good reason. as if i had access to the data sheets :) cheers luigi Jim -Original

RE: ENOBUFS

2002-10-18 Thread Jim McGrath
The chips I have are 82546. Is your recommendation to steer away from Intel Gigabit Ethernet chips? What would be a more optimal alternative? The 82543/82544 chips worked well in vanilla configurations. I never played with an 82546. The em driver is supported by Intel, so any chip features it

Re: ENOBUFS

2002-10-18 Thread Eli Dart
PROTECTED] Sent: Friday, October 18, 2002 7:49 AM Subject: RE: ENOBUFS Careful here. Read the errata sheet!! I do not believe the em driver uses these parameters, and possibly for a good reason. Jim -Original Message- From: [EMAIL PROTECTED] [mailto:owner

Re: ENOBUFS

2002-10-18 Thread Prafulla Deuskar
Transmit/Receive Interrupt Delay values are in units of 1.024 microseconds. The em driver currently uses these to enable interrupt coalescing on the cards. Thanks, Prafulla
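
Given the 1.024 us unit, the RIDV/TIDV tick values quoted earlier in the thread (28 and 128) work out to roughly 29 us and 131 us. A hypothetical helper macro (not from the em driver):

    /* Convert em(4) interrupt-delay ticks (1.024 us each) to microseconds */
    #define EM_TICKS_TO_USEC(t)     (((t) * 1024) / 1000)
    /* EM_TICKS_TO_USEC(28) == 28 (~29 us rx); EM_TICKS_TO_USEC(128) == 131 (tx) */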

Re: ENOBUFS

2002-10-18 Thread Kelly Yancey
On Fri, 18 Oct 2002, Petri Helenius wrote: just reading the source code, yes, it appears that the card has support for delayed rx/tx interrupts -- see RIDV and TIDV definitions and usage in sys/dev/em/* . I don't know what units the values (28 and 128, respectively) are in, but it does

Re: ENOBUFS

2002-10-18 Thread Luigi Rizzo
On Fri, Oct 18, 2002 at 10:27:04AM -0700, Kelly Yancey wrote: ... Hmm. Might that explain the abysmal performance of the em driver with packets smaller than 333 bytes? what do you mean ? it works great for me. even on -current i can push out over 400kpps (64byte frames) on a 2.4GHz box.

Re: ENOBUFS

2002-10-18 Thread Luigi Rizzo
On Fri, Oct 18, 2002 at 06:21:37PM +0300, Petri Helenius wrote: ... Luigi's polling work would be useful here. That would lead to incorrect timestamps on the packets, though? polling introduces an extra uncertainty which might be as large as an entire clock tick, yes. But even with interrupts,

Re: ENOBUFS

2002-10-18 Thread Kelly Yancey
On Fri, 18 Oct 2002, Luigi Rizzo wrote: On Fri, Oct 18, 2002 at 10:27:04AM -0700, Kelly Yancey wrote: ... Hmm. Might that explain the abysmal performance of the em driver with packets smaller than 333 bytes? what do you mean ? it works great for me. even on -current i can push out

Re: ENOBUFS

2002-10-18 Thread Luigi Rizzo
How is the measurement done, does the box under test act as a router with the smartbit pushing traffic in and expecting it back ? The numbers are strange, anyways. A frame of N bytes takes (N*8+160) nanoseconds on the wire, which for 330-byte frames should amount to 10^9/(330*8+160) ~= 357 kpps
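
Making that arithmetic concrete (assumes gigabit Ethernet: 1 ns per bit, plus 160 ns for the 8-byte preamble and 12-byte inter-frame gap):

    #include <stdio.h>

    /* Theoretical max packet rate on GigE from the N*8+160 ns frame time */
    static double
    gige_max_pps(int frame_bytes)
    {
            return (1e9 / (frame_bytes * 8 + 160));
    }

    int
    main(void)
    {
            printf("330-byte frames: %.0f pps\n", gige_max_pps(330)); /* ~357143 */
            printf(" 64-byte frames: %.0f pps\n", gige_max_pps(64));  /* ~1488095 */
            return (0);
    }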

Re: ENOBUFS

2002-10-18 Thread Prafulla Deuskar
FYI. 82543 doesn't support PCI-X protocol. For PCI-X support use 82544, 82545 or 82546 based cards. -Prafulla Kelly Yancey [[EMAIL PROTECTED]] wrote: On Fri, 18 Oct 2002, Luigi Rizzo wrote: On Fri, Oct 18, 2002 at 10:27:04AM -0700, Kelly Yancey wrote: ... Hmm. Might that explain

Re: ENOBUFS

2002-10-18 Thread Kelly Yancey
On Fri, 18 Oct 2002, Luigi Rizzo wrote: How is the measurement done, does the box under test act as a router with the smartbit pushing traffic in and expecting it back ? The box has 2 interfaces, a fxp and a em (or bge). The GigE interface is configured with 7 VLANs. The SmartBit produces

Re: ENOBUFS

2002-10-18 Thread Luigi Rizzo
Oh, I *thought* the numbers you reported were pps but now i see that nowhere you mentioned that. But if things are as you say, i am seriously puzzled on what you are trying to measure -- the output interface (fxp) is a 100Mbit/s card which cannot possibly support the load you are trying to offer

Re: ENOBUFS

2002-10-18 Thread Kelly Yancey
On Fri, 18 Oct 2002, Prafulla Deuskar wrote: FYI. 82543 doesn't support PCI-X protocol. For PCI-X support use 82544, 82545 or 82546 based cards. -Prafulla That is alright, we aren't expecting PCI-X speeds. It is just that our only PCI slot on the motherboard (1U rack-mount system) is a

Re: ENOBUFS

2002-10-18 Thread Kelly Yancey
On Fri, 18 Oct 2002, Luigi Rizzo wrote: Oh, I *thought* the numbers you reported were pps but now i see that nowhere you mentioned that. Sorry. I just checked with our tester. Those are the total number of packets sent during the test. Each test lasted 10 seconds, so divide by 10 to get

Re: ENOBUFS

2002-10-18 Thread Kelly Yancey
On Fri, 18 Oct 2002, Kelly Yancey wrote: You should definitely clarify how fast the smartbits unit is pushing out traffic, and whether its speed depends on the measured RTT. It doesn't sound like the box is that smart. As it was explained to me, the test setup includes a desired

Re: ENOBUFS

2002-10-18 Thread Petri Helenius
In special cases, the error induced by having interrupts blocked causes errors which are much larger than polling alone. Which conditions block interrupts for longer than, say, a millisecond? Disk errors / wakeups? Anything occurring in normal conditions? Pete

Re: ENOBUFS

2002-10-17 Thread Petri Helenius
Less :-) Let me tell you tomorrow, don't have the numbers here right now. I seem to get about 5-6 packets on an interrupt. Is this tunable? At 50kpps the card generates 10k interrupts a second. Sending generates way less. This is about 300Mbps so with the average packet size of 750 there should

Re: ENOBUFS

2002-10-17 Thread Luigi Rizzo
On Thu, Oct 17, 2002 at 11:55:24PM +0300, Petri Helenius wrote: ... I seem to get about 5-6 packets on an interrupt. Is this tunable? At just reading the source code, yes, it appears that the card has support for delayed rx/tx interrupts -- see RIDV and TIDV definitions and usage in sys/dev/em/*

RE: ENOBUFS

2002-10-17 Thread Jim McGrath
To: Petri Helenius Cc: Lars Eggert; [EMAIL PROTECTED] Subject: Re: ENOBUFS On Thu, Oct 17, 2002 at 11:55:24PM +0300, Petri Helenius wrote: ... I seem to get about 5-6 packets on an interrupt. Is this tunable? At just reading the source code, yes, it appears that the card has support

Re: ENOBUFS

2002-10-17 Thread Luigi Rizzo
- From: [EMAIL PROTECTED] [mailto:owner-freebsd-net;FreeBSD.ORG]On Behalf Of Luigi Rizzo Sent: Thursday, October 17, 2002 11:12 PM To: Petri Helenius Cc: Lars Eggert; [EMAIL PROTECTED] Subject: Re: ENOBUFS On Thu, Oct 17, 2002 at 11:55:24PM +0300, Petri Helenius wrote: ... I seem

Re: ENOBUFS

2002-10-16 Thread Lars Eggert
Petri Helenius wrote: The 900Mbps are similar to what I see here on similar hardware. What kind of receive performance do you observe? I haven't got that far yet. Less :-) Let me tell you tomorrow, don't have the numbers here right now. 600Mbps per interface. I'm going to try this out also

Re: ENOBUFS

2002-10-16 Thread Luigi Rizzo
On Wed, Oct 16, 2002 at 08:57:19AM +0300, Petri Helenius wrote: how large are the packets and how fast is the box ? Packets go out at an average size of 1024 bytes. The box is dual P4 Xeon 2400/400 so I think it should qualify as fast ? I disabled yes, it qualifies as fast. With this

RE: ENOBUFS

2002-10-16 Thread Don Bowman
Sam Leffler wrote: Try my port of the netbsd kttcp kernel module. You can find it at http://www.freebsd.org/~sam this seems to use some things from netbsd like so_rcv.sb_lastrecord and SBLASTRECORDCHK/SBLASTMBUFCHK. Is there something else I need to apply to build it on freebsd -STABLE?

Re: ENOBUFS

2002-10-16 Thread Sam Leffler
Sam Leffler wrote: Try my port of the netbsd kttcp kernel module. You can find it at http://www.freebsd.org/~sam this seems to use some things from netbsd like so_rcv.sb_lastrecord and SBLASTRECORDCHK/SBLASTMBUFCHK. Is there something else I need to apply to build it on freebsd

ENOBUFS

2002-10-15 Thread Garrett Wollman
On Wed, 16 Oct 2002 00:53:46 +0300, Petri Helenius [EMAIL PROTECTED] said: My processes writing to SOCK_DGRAM sockets are getting ENOBUFS Probably means that your outgoing interface queue is filling up. ENOBUFS is the only way the kernel has to tell you ``slow down!''. -GAWollman

Re: ENOBUFS

2002-10-15 Thread Petri Helenius
What rate are you sending these packets at? A standard interface queue length is 50 packets, you get ENOBUFS when it's full. This might explain the phenomenon. (packets are going out bursty, with average hovering at ~500Mbps:ish) I recompiled the kernel with IFQ_MAXLEN of 5000 but there seems
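
For context, the constant being raised here is the 4.x-era default depth of an interface's send queue, which drivers pick up at attach time. An illustrative sketch (not a specific driver):

    /* net/if.h: default interface queue depth; ENOBUFS once it is full */
    #define IFQ_MAXLEN      50

    /* typical driver attach code; the ifqmaxlen global defaults to IFQ_MAXLEN */
    ifp->if_snd.ifq_maxlen = ifqmaxlen;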

Re: ENOBUFS

2002-10-15 Thread Petri Helenius
Probably means that your outgoing interface queue is filling up. ENOBUFS is the only way the kernel has to tell you ``slow down!''. How much should I be able to send to two em interfaces on one 66/64 PCI ? Pete

Re: ENOBUFS

2002-10-15 Thread Luigi Rizzo
On Wed, Oct 16, 2002 at 02:04:11AM +0300, Petri Helenius wrote: What rate are you sending these packets at? A standard interface queue length is 50 packets, you get ENOBUFS when it's full. This might explain the phenomenon. (packets are going out bursty, with average hovering

Re: ENOBUFS

2002-10-15 Thread Lars Eggert
Petri Helenius wrote: Probably means that your outgoing interface queue is filling up. ENOBUFS is the only way the kernel has to tell you ``slow down!''. How much should I be able to send to two em interfaces on one 66/64 PCI ? I've seen netperf UDP throughputs of ~950Mbps with a fiber em

Re: ENOBUFS

2002-10-15 Thread Petri Helenius
how large are the packets and how fast is the box ? Packets go out at an average size of 1024 bytes. The box is dual P4 Xeon 2400/400 so I think it should qualify as fast ? I disabled hyperthreading to figure out if it was causing problems. I seem to be able to send packets at a rate in the

Re: ENOBUFS

2002-10-15 Thread Lars Eggert
Petri Helenius wrote: how large are the packets and how fast is the box ? Packets go out at an average size of 1024 bytes. The box is dual P4 Xeon 2400/400 so I think it should qualify as fast ? I disabled hyperthreading to figure out if it was causing problems. I seem to be able to send

Re: ENOBUFS

2002-10-15 Thread Petri Helenius
The 900Mbps are similar to what I see here on similar hardware. What kind of receive performance do you observe? I haven't got that far yet. For your two-interface setup, is the 600Mbps an aggregate send rate over both interfaces, or do you see 600Mbps per interface? In the latter 600Mbps per

Re: ip_output and ENOBUFS

2002-03-27 Thread Andrew Gallatin
Archie Cobbs writes: Luigi Rizzo writes: Is if_tx_rdy() something that can be used generally or does it only work with dummynet ? well, the function is dummynet-specific, but I would certainly like a generic callback list to be implemented in ifnet which is invoked on

Re: ip_output and ENOBUFS

2002-03-27 Thread Luigi Rizzo
On Wed, Mar 27, 2002 at 09:53:00AM -0800, Archie Cobbs wrote: ... managed and can be extended to point to additional ifnet state that does not fit in the immutable one... Why is it important to avoid changing 'struct ifnet' ? backward compatibility with binary-only drivers ... Not that i

Re: ip_output and ENOBUFS

2002-03-27 Thread Julian Elischer
On Wed, 27 Mar 2002, Luigi Rizzo wrote: On Wed, Mar 27, 2002 at 09:53:00AM -0800, Archie Cobbs wrote: ... managed and can be extended to point to additional ifnet state that does not fit in the immutable one... Why is it important to avoid changing 'struct ifnet' ? backward

Re: ip_output and ENOBUFS

2002-03-27 Thread Julian Elischer
On Wed, 27 Mar 2002, Archie Cobbs wrote: Luigi Rizzo writes: Is if_tx_rdy() something that can be used generally or does it only work with dummynet ? well, the function is dummynet-specific, but I would certainly like a generic callback list to be implemented in ifnet which is

Re: ip_output and ENOBUFS

2002-03-26 Thread Matthew Luckie
I am under the impression that implementing this mechanism would not be so trivial. hmm, we looked at how other protocols handled the ENOBUFS case from ip_output. tcp_output calls tcp_quench on this error. while the interface may not be able to send any more packets than it does currently

Re: ip_output and ENOBUFS

2002-03-26 Thread Lars Eggert
Matthew Luckie wrote: hmm, we looked at how other protocols handled the ENOBUFS case from ip_output. tcp_output calls tcp_quench on this error. while the interface may not be able to send any more packets than it does currently, closing the congestion window back to 1 segment
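
A sketch of the TCP-side handling being referenced, in 4.x-era tcp_output() style (abridged and simplified, not verbatim source):

    /* tcp_output(): a local ENOBUFS from ip_output() is treated as congestion */
    error = ip_output(m, tp->t_inpcb->inp_options,
        &tp->t_inpcb->inp_route, 0, NULL);
    if (error == ENOBUFS)
            tcp_quench(tp->t_inpcb, 0);     /* collapse cwnd to one segment */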

Re: ip_output and ENOBUFS

2002-03-26 Thread Archie Cobbs
Luigi Rizzo writes: if you could suggest a few modifications that would be required, i'd like to pursue this further. Look at tsleep/wakeup on ifnet of if_snd. I am under the impression that implementing this mechanism would not be so trivial. It is not immediate to tell back to the

Re: ip_output and ENOBUFS

2002-03-26 Thread Luigi Rizzo
the ENOBUFS is very typical with UDP applications that try to send as fast as possible (e.g. the various network test utilities in ports), and as i said in a previous message, putting up a mechanism to pass around queue full/queue not full events is expensive because it might trigger on every

Re: ip_output and ENOBUFS

2002-03-26 Thread Luigi Rizzo
On Tue, Mar 26, 2002 at 10:10:05PM -0800, Archie Cobbs wrote: Luigi Rizzo writes: ... Along those lines, this might be a handy thing to add... int if_get_next(struct ifnet *ifp); /* runs at splimp() */ This function tries to get the next packet scheduled to go out

Re: ip_output and ENOBUFS

2002-03-26 Thread Archie Cobbs
Luigi Rizzo writes: As a matter of fact, i even implemented a similar thing in dummynet, and if device drivers call if_tx_rdy() when they complete a transmission, then the tx interrupt can be used to clock packets out of the dummynet pipes. A patch for if_tun.c is below, So if_tx_rdy()
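
The hook amounts to a one-line call from a driver's transmit-complete path. A hypothetical placement (modeled on the if_tun.c patch mentioned, not an actual driver diff):

    /* driver tx-complete: tell dummynet the hardware queue has drained */
    if (ifp->if_snd.ifq_len == 0)
            if_tx_rdy(ifp);     /* clocks dummynet pipes off the tx interrupt */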

Re: ip_output and ENOBUFS

2002-03-26 Thread Luigi Rizzo
of the dummynet pipes. A patch for if_tun.c is below, So if_tx_rdy() sounds like my if_get_next().. guess you already did that :-) yes, but it does not solve the problem of the original poster who wanted to block/wakeup processes getting enobufs. Signals just do not propagate beyond the pipe

ip_output and ENOBUFS

2002-03-25 Thread Matthew Luckie
Hi I have written a syscall that creates a packet in kernel-space, timestamps it, and then sends it via ip_output. If the user-space application uses this system call faster than the packets can be sent, ip_output will return ENOBUFS. Is there a mechanism to tell when ip_output should be called

Re: ip_output and ENOBUFS

2002-03-25 Thread Lars Eggert
Matthew Luckie wrote: I have written a syscall that creates a packet in kernel-space, timestamps it, and then sends it via ip_output. If the user-space application uses this system call faster than the packets can be sent, ip_output will return ENOBUFS. Is there a mechanism to tell when

Re: ip_output and ENOBUFS

2002-03-25 Thread Matthew Luckie
Is there a mechanism to tell when ip_output should be called again? Ideally, I would block until such time as i could send it via ip_output You probably get that because the outbound interface queue gets full, so you want to block your caller until space becomes available there. There

Re: ip_output and ENOBUFS

2002-03-25 Thread Lars Eggert
Matthew Luckie wrote: Is there a mechanism to tell when ip_output should be called again? Ideally, I would block until such time as i could send it via ip_output You probably get that because the outbound interface queue gets full, so you want to block your caller until space becomes available

Re: ip_output and ENOBUFS

2002-03-25 Thread Lars Eggert
Lars Eggert wrote: Matthew Luckie wrote: Is there a mechanism to tell when ip_output should be called again? Ideally, I would block until such time as i could send it via ip_output You probably get that because the outbound interface queue gets full, so you want to block your caller

Re: ip_output and ENOBUFS

2002-03-25 Thread Luigi Rizzo
On Mon, Mar 25, 2002 at 02:06:19PM -0800, Lars Eggert wrote: Matthew Luckie wrote: Is there a mechanism to tell when ip_output should be called again? ... if you could suggest a few modifications that would be required, i'd like to pursue this further. Look at tsleep/wakeup on ifnet of
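
A minimal sketch of the tsleep/wakeup idea Luigi points at, in 4.x-era kernel style (placement and wait-channel choice are illustrative, not actual FreeBSD code):

    /* in the send path: block the caller until the interface queue has room */
    int s = splimp();
    while (IF_QFULL(&ifp->if_snd))
            tsleep(&ifp->if_snd, PSOCK, "ifqful", 0);
    splx(s);

    /* in the driver's transmit-complete interrupt handler */
    wakeup(&ifp->if_snd);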

ENOBUFS and network performance tuning

2001-09-25 Thread Jeff Behl
I have 4.3, and soon to be 4.4, boxes dedicated to a single app which basically 'bounces' traffic between two incoming TCP connections. After around 240 sessions (each session consisting of two incoming connections with traffic being passed between them), I started getting ENOBUFS errors