On Fri, 18 Jul 2014, hiren panchasara wrote:
On Wed, Jul 16, 2014 at 11:00 AM, Adrian Chadd adr...@freebsd.org wrote:
Hi!
So the UDP transmit path is udp_usrreqs->pru_send() == udp_send() ->
udp_output() -> ip_output()
udp_output() does do an M_PREPEND() which can return ENOBUFS. ip_output()
can also return ENOBUFS.
It doesn't look like the socket code (e.g. sosend_dgram()) is doing any
buffering - it's just copying the frame and stuffing it up to the
driver. No queuing involved before the NIC.
Right. Thanks.
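Given that path, with no socket-level buffering between sosend_dgram() and the driver, the application sees ENOBUFS directly and has to handle it itself. A minimal userland sketch (illustrative only, not code from this thread; the 1 ms backoff is an arbitrary choice):

	#include <sys/types.h>
	#include <sys/socket.h>
	#include <errno.h>
	#include <unistd.h>

	/* Retry a UDP send while the kernel reports transient buffer exhaustion. */
	ssize_t
	sendto_enobufs(int s, const void *buf, size_t len,
	    const struct sockaddr *to, socklen_t tolen)
	{
		ssize_t n;

		while ((n = sendto(s, buf, len, 0, to, tolen)) == -1 &&
		    errno == ENOBUFS)
			usleep(1000);	/* crude pacing; lets the interface queue drain */
		return (n);
	}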
On Jul 18, 2014, at 23:34, Adrian Chadd adr...@freebsd.org wrote:
It upsets the ALTQ people too.
I'm an ALTQ person (pfSense, so maybe one of the biggest) and I'm not upset.
That cr*p needs to die in a fire.
Return values in the sendto() manpage say:
[ENOBUFS]  The system was unable to allocate an internal buffer.
           The operation may succeed when buffers become available.
[ENOBUFS]  The output queue for a network interface was full.
           This generally indicates that the interface has stopped
           sending, but may be caused by transient congestion.
[Drop hostname part of IPv6-only address above to obtain IPv4-capable e-mail,
or just drop me from the recipients and I'll catch up from the archives]
Hello, %s!
I've read in this list, from a couple of years ago, several discussions about
ENOBUFS being returned to UDP-using applications
On Mon, 15 Dec 2003 22:17:53 +0100 (CET), Barry Bouwsma [EMAIL PROTECTED] said:
If I were to tweak the sysctl net.inet.ip.intr_queue_maxlen from its
default of 50 up, would that possibly help named?
No, it will not have any effect on your problem. The IP input queue
is only used on receive, and ENOBUFS on transmit comes from the
interface output queue, not from this one.
to the first IP returns ENOBUFS (probably because the link is being dropped).
Anything related to the PPTP output window?
Here are the log entries after both links are established (they show as
connected in the win2k and winxp boxes and pptp0 was answering the LCP
echos):
Dec 10 11:02:22
Hi,
On Wed, 10 Dec 2003, Giovanni P. Tirloni wrote:
common:
set bundle disable multilink
set bundle enable compression
set bundle yes encryption
^^^ please remove this line
You don't need ECP for MPPE (Microsoft Point to Point Encryption)
Maybe this
From: Don Bowman [mailto:don;sandvine.com]
In bge_rxeof(), there can end up being a condition which causes
the driver to endlessly interrupt.
/* Try to allocate a fresh rx mbuf for the ring slot ... */
if (bge_newbuf_std(sc, sc->bge_std, NULL) == ENOBUFS) {
	ifp->if_ierrors++;
	/* ... and on failure, recycle the old mbuf 'm' into the slot. */
	bge_newbuf_std(sc, sc->bge_std, m);
	continue;
}
happens. Now, bge_newbuf_std returns ENOBUFS. 'm' is also NULL
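For context, the usual way an rx-ring refill avoids this trap is to recycle the old buffer whenever a new one cannot be allocated, so a descriptor is never left empty. A generic sketch of that pattern (illustrative, with made-up types; not the actual bge(4) fix):

	#include <errno.h>
	#include <stddef.h>

	struct rxslot { void *buf; };	/* stand-in for a ring descriptor */

	/* Attach a fresh buffer if we got one; otherwise put the old buffer
	 * back and report ENOBUFS, dropping the packet but keeping the ring
	 * slot populated so the chip does not interrupt endlessly. */
	static int
	rx_refill(struct rxslot *slot, void *newbuf, void *oldbuf)
	{
		if (newbuf != NULL) {
			slot->buf = newbuf;
			return (0);
		}
		slot->buf = oldbuf;
		return (ENOBUFS);
	}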
On Fri, 18 Oct 2002, Kelly Yancey wrote:
Hmm. Might that explain the abysmal performance of the em driver with
packets smaller than 333 bytes?
Kelly
This is just a follow-up to report that thanks to Luigi and Prafulla we
were able to track down the cause of the problems I was seeing.
-
From: Jim McGrath [EMAIL PROTECTED]
To: Luigi Rizzo [EMAIL PROTECTED]; Petri Helenius [EMAIL PROTECTED]
Cc: Lars Eggert [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Friday, October 18, 2002 7:49 AM
Subject: RE: ENOBUFS
Careful here. Read the errata sheet!! I do not believe the em driver uses
these parameters, and possibly for a good reason.
Jim
-Original Message-
From: [EMAIL PROTECTED]
[mailto:owner-freebsd-net;FreeBSD.ORG] On Behalf Of Luigi Rizzo
Sent: Friday, October 18, 2002 12:56 AM
To: Jim McGrath
Cc: Petri Helenius; Lars Eggert; [EMAIL PROTECTED]
Subject: Re: ENOBUFS
just reading the source code, yes, it appears that the card has
support for delayed rx/tx interrupts -- see RIDV and TIDV definitions
and usage in sys/dev/em/*. I don't know what units the values
(28 and 128, respectively) are in, but it does appear that tx interrupts
are delayed a bit more, to process receive descriptors under low load.
On Fri, Oct 18, 2002 at 12:49:04AM -0400, Jim McGrath wrote:
Careful here. Read the errata sheet!! I do not believe the em
driver uses these parameters, and possibly for a good reason.
as if i had access to the data sheets :)
cheers
luigi
The chips I have are 82546. Is your recommendation to steer away
from Intel Gigabit Ethernet chips? What would be a more optimal alternative?
The 82543/82544 chips worked well in vanilla configurations. I never played
with an 82546. The em driver is supported by Intel, so any chip features it
Transmit/Receive Interrupt Delay values are in units of 1.024 microseconds.
The em driver currently uses these to enable interrupt coalescing on the cards.
Thanks,
Prafulla
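Plugging in the RIDV/TIDV values quoted earlier in the thread (28 and 128, assuming those are what the driver programs), the actual delays work out as follows:

	#include <stdio.h>

	int
	main(void)
	{
		const double unit_us = 1.024;	/* one delay-timer tick, in microseconds */

		printf("RIDV = 28  -> rx interrupt delayed ~%.1f us\n", 28 * unit_us);	/* ~28.7 us */
		printf("TIDV = 128 -> tx interrupt delayed ~%.1f us\n", 128 * unit_us);	/* ~131.1 us */
		return (0);
	}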
On Fri, Oct 18, 2002 at 10:27:04AM -0700, Kelly Yancey wrote:
...
Hmm. Might that explain the abysmal performance of the em driver with
packets smaller than 333 bytes?
what do you mean ? it works great for me. even on -current i
can push out over 400kpps (64byte frames) on a 2.4GHz box.
On Fri, Oct 18, 2002 at 06:21:37PM +0300, Petri Helenius wrote:
...
Luigi's polling work would be useful here. That would lead to incorrect
timestamps on the packets, though?
polling introduces an extra uncertainty which might be as large as
an entire clock tick, yes.
But even with interrupts,
How is the measurement done, does the box under test act as a router
with the smartbit pushing traffic in and expecting it back ?
The numbers are strange, anyways.
A frame of N bytes takes (N*8+160) nanoseconds on the wire, which
for 330-byte frames should amount to 10^9/(330*8+160) ~= 357kpps.
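Using that same per-frame cost (8 ns per byte at gigabit speed plus 160 ns of preamble and inter-frame gap), a quick check of the theoretical maxima for a few frame sizes:

	#include <stdio.h>

	int
	main(void)
	{
		int sizes[] = { 64, 330, 1024, 1518 };

		for (int i = 0; i < 4; i++) {
			double ns = sizes[i] * 8.0 + 160.0;	/* time on the wire per frame */
			printf("%4d-byte frames: %5.0f ns each, max %4.0f kpps\n",
			    sizes[i], ns, 1e6 / ns);	/* 64 -> 1488, 330 -> 357, 1518 -> 81 */
		}
		return (0);
	}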
FYI. 82543 doesn't support PCI-X protocol.
For PCI-X support use 82544, 82545 or 82546 based cards.
-Prafulla
On Fri, 18 Oct 2002, Luigi Rizzo wrote:
How is the measurement done, does the box under test act as a router
with the smartbit pushing traffic in and expecting it back ?
The box has 2 interfaces, an fxp and an em (or bge). The GigE interface is
configured with 7 VLANs. The SmartBit produces
Oh, I *thought* the numbers you reported were pps but now i see that
nowhere you mentioned that.
But if things are as you say, i am seriously puzzled on what you
are trying to measure -- the output interface (fxp) is a 100Mbit/s
card which cannot possibly support the load you are trying to offer
On Fri, 18 Oct 2002, Prafulla Deuskar wrote:
FYI. 82543 doesn't support PCI-X protocol.
For PCI-X support use 82544, 82545 or 82546 based cards.
-Prafulla
That is alright, we aren't expecting PCI-X speeds. It is just that our only
PCI slot on the motherboard (1U rack-mount system) is a
On Fri, 18 Oct 2002, Luigi Rizzo wrote:
Oh, I *thought* the numbers you reported were pps but now i see that
nowhere you mentioned that.
Sorry. I just checked with our tester. Those are the total number of
packets sent during the test. Each test lasted 10 seconds, so divide by 10 to
get pps.
On Fri, 18 Oct 2002, Kelly Yancey wrote:
You should definitely clarify how fast the smartbits unit is pushing
out traffic, and whether its speed depends on the measured RTT.
It doesn't sound like the box is that smart. As it was explained to me, the
test setup includes a desired
In special cases, having interrupts blocked induces errors which are
much larger than those of polling alone.
Which conditions block interrupts for longer than, say, a millisecond?
Disk errors / wakeups? Anything occurring in normal conditions?
Pete
Less :-) Let me tell you tomorrow, don't have the numbers here right now.
I seem to get about 5-6 packets on an interrupt. Is this tunable? At
50kpps the card generates 10k interrupts a second. Sending generates
way less. This is about 300Mbps so with the average packet size of
750 there should
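Those figures are self-consistent, as a quick check shows:

	#include <stdio.h>

	int
	main(void)
	{
		double pps = 50e3, intrs = 10e3, mbps = 300.0;

		printf("packets per interrupt: %.0f\n", pps / intrs);		/* 5 */
		printf("average packet size: %.0f bytes\n",
		    mbps * 1e6 / 8.0 / pps);				/* 750 */
		return (0);
	}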
Petri Helenius wrote:
The 900Mbps are similar to what I see here on similar hardware.
What kind of receive performance do you observe? I haven't got that
far yet.
Less :-) Let me tell you tomorrow, don't have the numbers here right now.
600Mbps per interface. I'm going to try this out also
On Wed, Oct 16, 2002 at 08:57:19AM +0300, Petri Helenius wrote:
how large are the packets and how fast is the box ?
Packets go out at an average size of 1024 bytes. The box is dual
P4 Xeon 2400/400 so I think it should qualify as fast ? I disabled
hyperthreading to figure out if it was causing problems.
yes, it qualifies as fast. With this
Sam Leffler wrote:
Try my port of the netbsd kttcp kernel module. You can find it at
http://www.freebsd.org/~sam
this seems to use some things from netbsd like
so_rcv.sb_lastrecord and SBLASTRECORDCHK/SBLASTMBUFCHK.
Is there something else I need to apply to build it on
freebsd -STABLE?
On Wed, 16 Oct 2002 00:53:46 +0300, Petri Helenius [EMAIL PROTECTED] said:
My processes writing to SOCK_DGRAM sockets are getting ENOBUFS
Probably means that your outgoing interface queue is filling up.
ENOBUFS is the only way the kernel has to tell you ``slow down!''.
-GAWollman
What rate are you sending these packets at? A standard interface queue
length is 50 packets, you get ENOBUFS when it's full.
This might explain the phenomenon. (Packets are going out bursty, with the average
hovering at ~500Mbps-ish.) I recompiled the kernel with IFQ_MAXLEN of 5000
but there seems
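The mechanics behind that: the 4.x-era ethernet output path enqueued onto ifp->if_snd with a pattern like the following (a paraphrase from memory of the classic net/if.h macros, not verbatim kernel code), where IFQ_MAXLEN, default 50, is the ifq_maxlen that IF_QFULL tests:

	/* Paraphrased 4.x-style enqueue; splimp()/splx() is the old
	 * interrupt masking scheme of that era. */
	static int
	if_enqueue_sketch(struct ifnet *ifp, struct mbuf *m)
	{
		int s = splimp();

		if (IF_QFULL(&ifp->if_snd)) {	/* ifq_len >= ifq_maxlen? */
			IF_DROP(&ifp->if_snd);	/* just counts the drop */
			splx(s);
			m_freem(m);
			return (ENOBUFS);	/* what the app's sendto() sees */
		}
		IF_ENQUEUE(&ifp->if_snd, m);
		splx(s);
		return (0);
	}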
Probably means that your outgoing interface queue is filling up.
ENOBUFS is the only way the kernel has to tell you ``slow down!''.
How much should I be able to send to two em interfaces on one
66/64 PCI ?
Pete
Petri Helenius wrote:
Probably means that your outgoing interface queue is filling up.
ENOBUFS is the only way the kernel has to tell you ``slow down!''.
How much should I be able to send to two em interfaces on one
66/64 PCI ?
I've seen netperf UDP throughputs of ~950Mbps with a fiber em
The 900Mbps are similar to what I see here on similar hardware.
What kind of receive performance do you observe? I haven't got that
far yet.
For your two-interface setup, are the 600Mbps an aggregate send rate across
both interfaces, or do you see 600Mbps per interface? In the latter
Archie Cobbs writes:
Luigi Rizzo writes:
Is if_tx_rdy() something that can be used generally or does it only
work with dummynet ?
well, the function is dummynet-specific, but I would certainly like
a generic callback list to be implemented in ifnet which is
invoked on
On Wed, Mar 27, 2002 at 09:53:00AM -0800, Archie Cobbs wrote:
...
managed and can be extended to point to additional ifnet state that
does not fit in the immutable one...
Why is it important to avoid changing 'struct ifnet' ?
backward compatibility with binary-only drivers ...
Not that i
On Wed, 27 Mar 2002, Luigi Rizzo wrote:
On Wed, Mar 27, 2002 at 09:53:00AM -0800, Archie Cobbs wrote:
...
managed and can be extended to point to additional ifnet state that
does not fit in the immutable one...
Why is it important to avoid changing 'struct ifnet' ?
backward
On Wed, 27 Mar 2002, Archie Cobbs wrote:
Luigi Rizzo writes:
Is if_tx_rdy() something that can be used generally or does it only
work with dummynet ?
well, the function is dummynet-specific, but I would certainly like
a generic callback list to be implemented in ifnet which is
Matthew Luckie wrote:
hmm, we looked at how other protocols handled the ENOBUFS case from
ip_output.
tcp_output calls tcp_quench on this error.
while the interface may not be able to send any more packets than it
does currently, closing the congestion window back to 1 segment
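For reference, tcp_quench() in the stacks of that era amounted to collapsing the congestion window to a single segment (sketched below with stand-in types; the real ones live in netinet/tcp_var.h):

	/* Minimal stand-ins for illustration only. */
	struct tcpcb {
		unsigned long	snd_cwnd;	/* congestion window, bytes */
		unsigned long	t_maxseg;	/* maximum segment size */
	};

	/* On ENOBUFS (or ICMP source quench), drop back to one segment so
	 * the sender probes its way up again instead of refilling the
	 * already-full interface queue. */
	static void
	tcp_quench_sketch(struct tcpcb *tp)
	{
		tp->snd_cwnd = tp->t_maxseg;
	}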
Luigi Rizzo writes:
if you could suggest a few modifications that would be required, i'd like
to pursue this further.
Look at tsleep/wakeup on ifnet or if_snd.
I am under the impression that implementing this mechanism would
not be so trivial. It is not immediate to tell back to the
the ENOBUFS is very typical with UDP applications that try to
send as fast as possible (e.g. the various network test utilities
in ports), and as i said in a previous message, putting up a mechanism to
pass around queue full/queue not full events is expensive because
it might trigger on every
On Tue, Mar 26, 2002 at 10:10:05PM -0800, Archie Cobbs wrote:
Luigi Rizzo writes:
...
Along those lines, this might be a handy thing to add...
int if_get_next(struct ifnet *ifp); /* runs at splimp() */
This function tries to get the next packet scheduled to go
out
Luigi Rizzo writes:
As a matter of fact, i even implemented a similar thing in dummynet,
and if device drivers call if_tx_rdy() when they complete a
transmission, then the tx interrupt can be used to clock
packets out of the dummynet pipes. A patch for if_tun.c is below.
So if_tx_rdy() sounds like my if_get_next().. guess you already did that :-)
yes, but it does not solve the problem of the original poster who
wanted to block/wakeup processes getting enobufs. Signals just
do not propagate beyond the pipe
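The driver side of that hook would look roughly like this (hypothetical xx(4) driver names and softc layout; the if_tx_rdy() call is the dummynet mechanism Luigi describes, guarded so non-dummynet kernels build unchanged):

	/* Hypothetical tx-completion handler in some driver xx(4). */
	static void
	xx_txeof(struct xx_softc *sc)
	{
		struct ifnet *ifp = &sc->xx_ifp;	/* made-up softc field */

		/* ... reclaim finished tx descriptors here ... */

		ifp->if_flags &= ~IFF_OACTIVE;	/* the send queue has room again */
	#ifdef DUMMYNET
		if_tx_rdy(ifp);		/* let dummynet clock out the next packet */
	#endif
	}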
Hi
I have written a syscall that creates a packet in kernel-space,
timestamps it, and then sends it via ip_output
If the user-space application uses this system call faster than the
packets can be sent, ip_output will return ENOBUFS.
Is there a mechanism to tell when ip_output should be called again?
Is there a mechanism to tell when ip_output should be called again?
Ideally, I would block until such time as i could send it via ip_output
You probably get that because the outbound interface queue gets full, so
you want to block your caller until space becomes available there. There
On Mon, Mar 25, 2002 at 02:06:19PM -0800, Lars Eggert wrote:
Matthew Luckie wrote:
Is there a mechanism to tell when ip_output should be called again?
...
if you could suggest a few modifications that would be required, i'd like
to pursue this further.
Look at tsleep/wakeup on ifnet or if_snd.
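Concretely, the suggestion amounts to something like the sketch below (hypothetical and untested; build_timestamped_packet() stands in for the syscall's own packet construction, and since ip_output() has already freed the mbuf when it returns ENOBUFS, the packet must be rebuilt on each try):

	/* In the packet-sending syscall: */
	for (;;) {
		m = build_timestamped_packet();		/* hypothetical helper */
		error = ip_output(m, NULL, &ro, 0, NULL);
		if (error != ENOBUFS)
			break;
		/* Queue full: sleep on the send queue until the driver
		 * signals free space.  PCATCH lets signals abort; the hz
		 * timeout guards against a missed wakeup. */
		error = tsleep(&ifp->if_snd, PSOCK | PCATCH, "ifqful", hz);
		if (error != 0 && error != EWOULDBLOCK)
			break;			/* interrupted by a signal */
	}

	/* In the driver's tx-completion path, after freeing descriptors: */
	wakeup(&ifp->if_snd);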
I have 4.3, and soon to be 4.4, boxes dedicated to a single app which
basically 'bounces' traffic between two incoming TCP connections. After
around 240 sessions (each session consisting of two incoming connections
with traffic being passed between them), I started getting ENOBUFS
errors