Quick followup. What I meant by "not sending much" is the adapter, not
the network. The network is very busy. However, there is hardly any
outgoing traffic from the box.
On 6/5/13, Peter LaDow wrote:
> On 6/5/13, Ronciak, John wrote:
>> So I have a couple of questions. Does this happen with a non-preemptive kernel?
On 6/5/13, Ronciak, John wrote:
> So I have a couple of questions. Does this happen with a non-preemptive
> kernel? I understand that you probably need to use a preemptive kernel but
> for testing purposes it would be good to know. We don't always test with
> preemptive kernels.
Hmmm... If you
> -Original Message-
> From: Allan, Bruce W [mailto:bruce.w.al...@intel.com]
> Sent: Monday, June 03, 2013 4:28 PM
> To: Hrvoje Habjanić; e1000-devel@lists.sourceforge.net
> Subject: Re: [E1000-devel] [PATCH] Packet drops/loss with 82579LM - fixed
>
> > -Original Message-
> > From:
Hi Peter,
So I have a couple of questions. Does this happen with a non-preemptive
kernel? I understand that you probably need to use a preemptive kernel but for
testing purposes it would be good to know. We don't always test with
preemptive kernels.
When doing the up/down transitions is t
On Wed, Jun 5, 2013 at 3:01 PM, Peter LaDow wrote:
> After some more digging, I'm wondering if this is indeed a timing
> issue. Is there a problem with bringing up an interface too soon
> after taking it down? If I change my loop to use a 30 second delay
> between interface bringup/teardown, I don't get the panic.
After some more digging, I'm wondering if this is indeed a timing
issue. Is there a problem with bringing up an interface too soon
after taking it down? If I change my loop to use a 30 second delay
between interface bringup/teardown, I don't get the panic.
It appears that upon a change in adapte
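The report above describes a scripted loop that repeatedly tears the interface down and brings it back up under heavy traffic. For reference, here is a minimal userspace sketch of that kind of stress loop; the interface name, the need for root privileges, and the use of ioctl() rather than ifconfig/ip scripts are assumptions, since the original test script was not posted.

/* Hypothetical bring-down/bring-up stress loop.  The report says a 30 s
 * delay between transitions avoids the panic; shorter delays reproduce it. */
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	const char *ifname = "eth0";   /* placeholder interface name */
	unsigned int delay = 30;       /* seconds between transitions */
	int fd = socket(AF_INET, SOCK_DGRAM, 0);
	struct ifreq ifr;

	if (fd < 0) {
		perror("socket");
		return 1;
	}

	for (;;) {
		memset(&ifr, 0, sizeof(ifr));
		strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
		if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0) {
			perror("SIOCGIFFLAGS");
			break;
		}

		ifr.ifr_flags &= ~IFF_UP;               /* take the interface down */
		if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0) {
			perror("down");
			break;
		}
		sleep(delay);

		ifr.ifr_flags |= IFF_UP;                /* bring it back up */
		if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0) {
			perror("up");
			break;
		}
		sleep(delay);
	}
	close(fd);
	return 0;
}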
Nothing jumped out at me when I checked, but here's the documentation check I
did to see if there was something obvious. First, download the specification
update for your part (go to intel.com and search for "82571EB spec update").
Then, check the release date for your driver (I usually just che
We are running a PPC system with an 82540EP that is causing kernel
panics when there is heavy traffic and the interface is brought up
and/or down (we aren't sure which yet).
We are running 3.0.57-rt82, but we can re-create this issue reliably
with 3.0.80 and 3.0.80-rt109 with the base version inc
On Wed, 2013-06-05 at 18:46 +0300, Eliezer Tamir wrote:
> On 05/06/2013 18:39, Eric Dumazet wrote:
> > On Wed, 2013-06-05 at 18:30 +0300, Eliezer Tamir wrote:
> >> On 05/06/2013 18:21, Eric Dumazet wrote:
>
> >>> It would also make sense to give end_time as a parameter, so that the
> >>> polling() code could really give an end_time for the whole duration of poll().
On 05/06/2013 18:39, Eric Dumazet wrote:
> On Wed, 2013-06-05 at 18:30 +0300, Eliezer Tamir wrote:
>> On 05/06/2013 18:21, Eric Dumazet wrote:
>>> It would also make sense to give end_time as a parameter, so that the
>>> polling() code could really give an end_time for the whole duration of
>>> poll().
On 05/06/2013 18:20, Eric Dumazet wrote:
> On Wed, 2013-06-05 at 16:41 +0300, Eliezer Tamir wrote:
>> On 05/06/2013 16:30, Eric Dumazet wrote:
>
>>> I am a bit uneasy with this one, because an application polling() on one
>>> thousand file descriptors using select()/poll(), will call sk_poll_ll()
>>
On Wed, 2013-06-05 at 18:30 +0300, Eliezer Tamir wrote:
> On 05/06/2013 18:21, Eric Dumazet wrote:
> > On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
> >
> >
> > This is probably too big to be inlined, and nonblock should be a bool
>
>
> > It would also make sense to give end_time as a parameter, so that the
> > polling() code could really give an end_time for the whole duration of poll().
On 05/06/2013 18:28, Willem de Bruijn wrote:
> On Wed, Jun 5, 2013 at 9:23 AM, Eric Dumazet wrote:
>> On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
>>> Adds an ndo_ll_poll method and the code that supports it.
>>> This method can be used by low latency applications to busy-poll
>>> Ethernet device queues directly from the socket code.
On 05/06/2013 18:21, Eric Dumazet wrote:
> On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
>
>
> This is probably too big to be inlined, and nonblock should be a bool
> It would also make sense to give end_time as a parameter, so that the
> polling() code could really give an end_time for the whole duration of poll().
On Wed, Jun 5, 2013 at 9:23 AM, Eric Dumazet wrote:
> On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
>> Adds an ndo_ll_poll method and the code that supports it.
>> This method can be used by low latency applications to busy-poll
>> Ethernet device queues directly from the socket code.
>>
On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
This is probably too big to be inlined, and nonblock should be a bool
It would also make sense to give end_time as a parameter, so that the
polling() code could really give an end_time for the whole duration of
poll().
(You then should te
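Eric's three points (move the helper out of line, make nonblock a bool, and let the caller supply the deadline) roughly translate into the prototype below. This is only a sketch of the suggestion, not the code that ended up in the series; the helper name for computing the deadline and the nanosecond units are assumptions.

/* Sketch of the suggested interface change; names and units are assumed. */
#include <linux/sched.h>   /* sched_clock() */
#include <linux/time.h>    /* NSEC_PER_USEC */
#include <linux/types.h>

struct sock;

extern unsigned int sysctl_net_ll_poll;  /* busy-poll budget, microseconds */

/* out of line in net/core/ rather than a large static inline in a header;
 * nonblock is a bool; the deadline comes from the caller so that
 * poll()/select() can compute it once for the whole call instead of
 * restarting the budget for every socket it visits */
bool sk_poll_ll(struct sock *sk, bool nonblock, u64 end_time);

static inline u64 ll_poll_end_time(void)
{
	return sched_clock() + (u64)sysctl_net_ll_poll * NSEC_PER_USEC;
}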
On Wed, 2013-06-05 at 16:41 +0300, Eliezer Tamir wrote:
> On 05/06/2013 16:30, Eric Dumazet wrote:
> > I am a bit uneasy with this one, because an application polling() on one
> > thousand file descriptors using select()/poll(), will call sk_poll_ll()
> > one thousand times.
>
> But we call sk_pol
On 05/06/2013 17:17, Eric Dumazet wrote:
> On Wed, 2013-06-05 at 06:56 -0700, Eric Dumazet wrote:
>
>> This looks quite easy, by adding in include/uapi/asm-generic/poll.h
>>
>> #define POLL_LL 0x8000
>>
>> And do the sk_poll_ll() call only if flag is set.
>>
>> I do not think we have to support select(), as it's a legacy interface.
On Wed, 2013-06-05 at 06:56 -0700, Eric Dumazet wrote:
> This looks quite easy, by adding in include/uapi/asm-generic/poll.h
>
> #define POLL_LL 0x8000
>
> And do the sk_poll_ll() call only if flag is set.
>
> I do not think we have to support select(), as it's a legacy interface, and
> people wan
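For illustration, this is how an application might opt a descriptor into busy-polling if the flag suggested above were added; the flag name and value are taken from Eric's message, but whether poll() would accept it in exactly this form is an assumption.

#include <poll.h>

/* proposed bit for include/uapi/asm-generic/poll.h; defined here only so
 * the example is self-contained */
#ifndef POLL_LL
#define POLL_LL 0x8000
#endif

/* wait for data on sockfd, asking the kernel to busy-poll the device
 * queue before sleeping; a kernel that ignores the bit degrades this to
 * a plain poll() */
static int wait_readable_ll(int sockfd, int timeout_ms)
{
	struct pollfd pfd = {
		.fd     = sockfd,
		.events = POLLIN | POLL_LL,
	};

	return poll(&pfd, 1, timeout_ms);
}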
On Wed, 2013-06-05 at 14:49 +0100, David Laight wrote:
> > I am a bit uneasy with this one, because an application polling() on one
> > thousand file descriptors using select()/poll(), will call sk_poll_ll()
> > one thousand times.
>
> Anything calling poll() on 1000 fds probably has performance
> issues already! Which is why kevent schemes have been added.
On Wed, 2013-06-05 at 16:41 +0300, Eliezer Tamir wrote:
> On 05/06/2013 16:30, Eric Dumazet wrote:
> > On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
> >> A very naive select/poll busy-poll support.
> >> Add busy-polling to sock_poll().
> >> When poll/select have nothing to report, call the low-level
> >> sock_poll() again until we are out of time or we find something.
> I am a bit uneasy with this one, because an application polling() on one
> thousand file descriptors using select()/poll(), will call sk_poll_ll()
> one thousand times.
Anything calling poll() on 1000 fds probably has performance
issues already! Which is why kevent schemes have been added.
At le
On 05/06/2013 16:30, Eric Dumazet wrote:
> On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
>> A very naive select/poll busy-poll support.
>> Add busy-polling to sock_poll().
>> When poll/select have nothing to report, call the low-level
>> sock_poll() again until we are out of time or we find something.
On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
> A very naive select/poll busy-poll support.
> Add busy-polling to sock_poll().
> When poll/select have nothing to report, call the low-level
> sock_poll() again until we are out of time or we find something.
> Right now we poll every socket
On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
> Adds low latency socket poll support for TCP.
> In tcp_v[46]_rcv() add a call to sk_mark_ll() to copy the napi_id
> from the skb to the sk.
> In tcp_recvmsg(), when there is no data in the socket we busy-poll.
> This is a good example of how to add busy-poll support to more protocols.
On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
> Add support for busy-polling on UDP sockets.
> In __udp[46]_lib_rcv add a call to sk_mark_ll() to copy the napi_id
> from the skb into the sk.
> This is done at the earliest possible moment, right after we identify
> which socket this skb is for.
On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
> Adds an ndo_ll_poll method and the code that supports it.
> This method can be used by low latency applications to busy-poll
> Ethernet device queues directly from the socket code.
> sysctl_net_ll_poll controls how many microseconds to poll.
On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
> Adds a napi_id and a hashing mechanism to lookup a napi by id.
> This will be used by subsequent patches to implement low latency
> Ethernet device polling.
> Based on a code sample by Eric Dumazet.
>
> Signed-off-by: Eliezer Tamir
> ---
Add additional statistics to the ixgbe driver for ndo_ll_poll
Defined under LL_EXTENDED_STATS
Signed-off-by: Alexander Duyck
Signed-off-by: Jesse Brandeburg
Tested-by: Willem de Bruijn
Signed-off-by: Eliezer Tamir
---
drivers/net/ethernet/intel/ixgbe/ixgbe.h | 14
drivers/
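As a rough illustration of what per-queue statistics under LL_EXTENDED_STATS might look like, the struct below uses hypothetical counter names; the actual fields the patch adds to the ixgbe ring structures and exposes through ethtool are not reproduced here.

#include <linux/types.h>

#ifdef LL_EXTENDED_STATS
/* hypothetical per-queue busy-poll counters (names are illustrative) */
struct ll_queue_stats {
	u64 busy_poll_yields;   /* napi or busy-poll had to yield the queue */
	u64 busy_poll_misses;   /* busy-poll found the queue already owned */
	u64 busy_poll_cleaned;  /* packets processed from busy-poll context */
};
#endif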
Add the ixgbe driver code implementing ndo_ll_poll.
Adds an ndo_ll_poll method and locking between it and the napi poll.
When receiving a packet we use skb_mark_ll to record the napi it came from.
Add each napi to the napi_hash right after netif_napi_add().
Signed-off-by: Alexander Duyck
Signed-off-
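A condensed sketch of the two points in that description: the napi is added to the hash right after netif_napi_add(), and received skbs are tagged with the napi they arrived on. The poll weight and the skb_mark_ll() argument order are assumptions; the real ixgbe code is considerably larger.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static int example_poll(struct napi_struct *napi, int budget);

/* queue-vector setup: register the napi as usual, then make it findable
 * by id for the socket-side busy-poll code */
static void example_setup_queue_napi(struct net_device *netdev,
				     struct napi_struct *napi)
{
	netif_napi_add(netdev, napi, example_poll, 64);
	napi_hash_add(napi);            /* added by this series */
}

/* receive path: record which napi the packet came from before handing it
 * to the stack */
static void example_receive(struct napi_struct *napi, struct sk_buff *skb)
{
	skb_mark_ll(skb, napi);         /* argument order assumed */
	napi_gro_receive(napi, skb);
}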
Adds a napi_id and a hashing mechanism to lookup a napi by id.
This will be used by subsequent patches to implement low latency
Ethernet device polling.
Based on a code sample by Eric Dumazet.
Signed-off-by: Eliezer Tamir
---
include/linux/netdevice.h | 29 ++
net/core/dev
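The mechanism being described reads roughly like the sketch below: hand out an id per napi, hash it, and let socket code look the napi up by id under RCU. The napi_id and napi_hash_node field names follow the patch description; the actual net/core/dev.c code handles id wraparound and concurrent unhashing, which this sketch omits.

#include <linux/hashtable.h>
#include <linux/netdevice.h>
#include <linux/rculist.h>

#define EXAMPLE_NAPI_HASH_BITS 8

static DEFINE_HASHTABLE(example_napi_hash, EXAMPLE_NAPI_HASH_BITS);
static unsigned int example_napi_gen_id;

/* give the napi an id and publish it; id allocation is simplified here */
static void example_napi_hash_add(struct napi_struct *napi)
{
	napi->napi_id = ++example_napi_gen_id;
	hash_add_rcu(example_napi_hash, &napi->napi_hash_node, napi->napi_id);
}

/* lookup used by the busy-poll path; must run under rcu_read_lock() */
static struct napi_struct *example_napi_by_id(unsigned int napi_id)
{
	struct napi_struct *napi;

	hash_for_each_possible_rcu(example_napi_hash, napi,
				   napi_hash_node, napi_id)
		if (napi->napi_id == napi_id)
			return napi;
	return NULL;
}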
Adds low latency socket poll support for TCP.
In tcp_v[46]_rcv() add a call to sk_mark_ll() to copy the napi_id
from the skb to the sk.
In tcp_recvmsg(), when there is no data in the socket we busy-poll.
This is a good example of how to add busy-poll support to more protocols.
Signed-off-by: Alexa
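Condensed from the description above, the two TCP touch points look roughly like this. The sk_valid_ll() guard is an assumed helper name, and the real tcp_recvmsg() logic around the wait path is far more involved than shown.

#include <net/sock.h>
#include <linux/skbuff.h>

/* (1) tcp_v4_rcv()/tcp_v6_rcv(): once the owning socket is known, copy
 * the napi id recorded on the skb into the socket */
static inline void example_tcp_rcv_mark(struct sock *sk, struct sk_buff *skb)
{
	sk_mark_ll(sk, skb);            /* argument order assumed */
}

/* (2) tcp_recvmsg(): when the receive queue is empty and we are about to
 * wait, spend the configured budget busy-polling the device queue first;
 * a true return means the queue is worth re-checking */
static inline bool example_tcp_try_busy_poll(struct sock *sk, bool nonblock)
{
	if (!skb_queue_empty(&sk->sk_receive_queue))
		return false;           /* data already queued, no need to poll */
	return sk_valid_ll(sk) && sk_poll_ll(sk, nonblock);
}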
And here is v9.
Except for typo fixes in comments/description, only 2/7 and 5/7 were changed.
Thanks to everyone for their input.
-Eliezer
Change log:
v9
- correct sysctl proc_handler, reported by Eric Dumazet and Amir Vadai.
- more int -> bool changes, reported by Eric Dumazet.
- better mask te
Adds an ndo_ll_poll method and the code that supports it.
This method can be used by low latency applications to busy-poll
Ethernet device queues directly from the socket code.
sysctl_net_ll_poll controls how many microseconds to poll.
Default is zero (disabled).
Individual protocol support will be
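To make the hook concrete, here is a condensed sketch of how socket code could drive it. It assumes the ndo_ll_poll member this series adds to net_device_ops, treats a positive return as "packets were processed", and simplifies the budget and locking handling.

#include <linux/netdevice.h>
#include <linux/sched.h>    /* sched_clock(), need_resched() */

/* busy-poll napi's device queue until something arrives or the deadline
 * passes; end_time would be derived from sysctl_net_ll_poll (microseconds) */
static bool example_busy_poll(struct napi_struct *napi, u64 end_time)
{
	const struct net_device_ops *ops = napi->dev->netdev_ops;
	bool found = false;

	if (!ops->ndo_ll_poll)          /* driver does not support busy-polling */
		return false;

	do {
		if (ops->ndo_ll_poll(napi) > 0) {   /* return convention assumed */
			found = true;
			break;
		}
		cpu_relax();
	} while (sched_clock() < end_time && !need_resched());

	return found;
}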
Add support for busy-polling on UDP sockets.
In __udp[46]_lib_rcv add a call to sk_mark_ll() to copy the napi_id
from the skb into the sk.
This is done at the earliest possible moment, right after we identify
which socket this skb is for.
In __skb_recv_datagram, when there is no data and the user tries to wait,
we busy-poll.
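The UDP hooks mirror the TCP sketch earlier in this thread; again this is condensed, with the surrounding code in net/ipv4/udp.c and net/core/datagram.c elided.

#include <net/sock.h>
#include <linux/skbuff.h>

/* (1) __udp4_lib_rcv()/__udp6_lib_rcv(): tag the socket with the napi id
 * from the skb as soon as the socket lookup succeeds */
static inline void example_udp_rcv_mark(struct sock *sk, struct sk_buff *skb)
{
	sk_mark_ll(sk, skb);            /* argument order assumed */
}

/* (2) __skb_recv_datagram(): if the queue turned up empty and the caller
 * is willing to wait, busy-poll before sleeping */
static inline bool example_udp_try_busy_poll(struct sock *sk, bool noblock)
{
	if (noblock)
		return false;           /* caller does not want to wait */
	return sk_poll_ll(sk, false);
}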
A very naive select/poll busy-poll support.
Add busy-polling to sock_poll().
When poll/select have nothing to report, call the low-level
sock_poll() again until we are out of time or we find something.
Right now we poll every socket once; this is suboptimal
but improves latency when the number of s
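A rough shape of the per-socket piece of the "naive" approach described: once the normal poll handler reports nothing, keep busy-polling that socket until the budget expires or an event appears. The sk_valid_ll() and ll_poll_end_time() helper names are assumptions, and the actual patch splits the work between fs/select.c and sock_poll() so that each socket is polled once per pass.

#include <linux/net.h>
#include <linux/poll.h>
#include <linux/sched.h>

static unsigned int example_sock_poll(struct file *file, struct socket *sock,
				      struct poll_table_struct *wait)
{
	unsigned int mask = sock->ops->poll(file, sock, wait);

	/* nothing to report yet: burn the configured budget busy-polling this
	 * socket's napi before poll()/select() is allowed to sleep */
	if (!mask && sk_valid_ll(sock->sk)) {
		u64 end_time = ll_poll_end_time();

		do {
			if (sk_poll_ll(sock->sk, true))
				mask = sock->ops->poll(file, sock, wait);
		} while (!mask && sched_clock() < end_time);
	}
	return mask;
}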
On Mon, 03 Jun 2013 at 08:02 GMT, Eliezer Tamir wrote:
> +/* called from the device poll routine to get ownership of a q_vector */
> +static inline bool ixgbe_qv_lock_napi(struct ixgbe_q_vector *q_vector)
> +{
> + int rc = true;
bool rc = true;
> + spin_lock(&q_vector->lock);
> +
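For context, a possible completed form of the helper under review, with the suggested bool applied. The state flags and the yield bookkeeping are illustrative (names assumed); the full function in the patch does more than this fragment, and the ixgbe-internal definitions it relies on are omitted.

static inline bool ixgbe_qv_lock_napi(struct ixgbe_q_vector *q_vector)
{
	bool rc = true;                 /* reviewer's fix: bool, not int */

	spin_lock(&q_vector->lock);
	if (q_vector->state & IXGBE_QV_LOCKED) {
		/* a socket busy-poller already owns the queue; note that napi
		 * wanted it so it can be rescheduled when the poller is done */
		q_vector->state |= IXGBE_QV_STATE_NAPI_YIELD;
		rc = false;
	} else {
		q_vector->state = IXGBE_QV_STATE_NAPI;  /* napi owns the queue */
	}
	spin_unlock(&q_vector->lock);
	return rc;
}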