On Wed, 2017-02-01 at 18:29 -0500, Boris Ostrovsky wrote:
>
> I could not convince myself that napi_synchronize() is sufficient here
> (mostly because I am not familiar with napi flow). At the same time I
> would rather not make changes in anticipation of possible disappearance
> of
On Mon, 2017-01-30 at 13:23 -0500, Boris Ostrovsky wrote:
> We do netif_carrier_off() first thing in xennet_disconnect_backend() and
> the only place where the timer is rearmed is xennet_alloc_rx_buffers(),
> which is guarded by netif_carrier_ok() check.
Oh well, testing netif_carrier_ok() in
On Mon, 2017-01-30 at 12:45 -0500, Boris Ostrovsky wrote:
> rx_refill_timer should be deleted as soon as we disconnect from the
> backend since otherwise it is possible for the timer to go off before
> we get to xennet_destroy_queues(). If this happens we may dereference
> queue->rx.sring which is
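The ordering Boris describes can be sketched as a user-space analog. All names below are hypothetical stand-ins, not the actual xen-netfront code (which uses netif_carrier_off()/netif_carrier_ok() and queue->rx_refill_timer): the refill path only rearms under the carrier check, and disconnect drops the carrier before tearing down the ring.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* User-space analog of the race discussed above (hypothetical names). */
struct fake_queue {
    bool carrier_ok;   /* stands in for netif_carrier_ok() */
    int *rx_sring;     /* stands in for queue->rx.sring */
    bool timer_armed;  /* stands in for rx_refill_timer being pending */
};

/* Analog of the refill path: only touches the ring and rearms the
 * timer while the carrier is still up. */
static void refill(struct fake_queue *q)
{
    if (!q->carrier_ok)
        return;            /* guard: never rearm after carrier_off */
    *q->rx_sring = 1;      /* would be a stale deref if unguarded */
    q->timer_armed = true;
}

/* Analog of disconnect: carrier off first, then the ring goes away,
 * so a late timer firing can no longer reach freed memory. */
static void disconnect(struct fake_queue *q)
{
    q->carrier_ok = false;
    q->rx_sring = NULL;
}
```

The open question in the thread is whether this guard alone is enough, or whether the timer should also be deleted explicitly before the queues are destroyed.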
On Wed, 2017-01-25 at 16:26 +0000, Paul Durrant wrote:
> Sowmini points out two vulnerabilities in xen-netfront:
>
> a) The code assumes that skb->len is at least ETH_HLEN.
> b) The code assumes that at least ETH_HLEN octets are in the linear
> part of the socket buffer.
>
> This patch adds
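The two assumptions (a) and (b) can be sketched as a simplified stand-in check. This is not the actual patch; the kernel version operates on skb->len and typically uses pskb_may_pull(skb, ETH_HLEN) for the linear-area condition.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define ETH_HLEN 14  /* Ethernet header: dst(6) + src(6) + ethertype(2) */

/* Reject frames that violate either assumption: (a) the frame is
 * shorter than an Ethernet header, or (b) the header is not fully
 * contained in the linear area of the buffer. */
static bool frame_header_ok(size_t total_len, size_t linear_len)
{
    if (total_len < ETH_HLEN)
        return false;      /* (a) skb->len too small */
    if (linear_len < ETH_HLEN)
        return false;      /* (b) header spills out of the linear part */
    return true;
}
```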
On Tue, 2016-12-20 at 12:51 -0500, Konrad Rzeszutek Wilk wrote:
> On Tue, Dec 20, 2016 at 05:44:06PM +0000, Roger Pau Monné wrote:
> > On Tue, Dec 20, 2016 at 11:47:03AM -0500, Konrad Rzeszutek Wilk wrote:
> > > On Tue, Dec 20, 2016 at 10:02:19PM +0800, Geliang Tang wrote:
> > > > To make the code
On Mon, 2015-08-17 at 11:09 +0200, Sander Eikelenboom wrote:
Saturday, August 15, 2015, 12:39:25 AM, you wrote:
On Sat, 2015-08-15 at 00:09 +0200, Sander Eikelenboom wrote:
On 2015-08-13 00:41, Eric Dumazet wrote:
On Wed, 2015-08-12 at 23:46 +0200, Sander Eikelenboom wrote:
Thanks
On Mon, 2015-08-17 at 09:02 -0500, Jon Christopherson wrote:
This is very similar to the behavior I am seeing in this bug:
https://bugzilla.kernel.org/show_bug.cgi?id=102911
OK, but have you applied the fix ?
From: Eric Dumazet eduma...@google.com
On Mon, 2015-08-17 at 16:25 +0200, Sander Eikelenboom wrote:
Monday, August 17, 2015, 4:21:47 PM, you wrote:
On Mon, 2015-08-17 at 09:02 -0500, Jon Christopherson wrote:
This is very similar to the behavior I am seeing in this bug:
https
On Sat, 2015-08-15 at 00:09 +0200, Sander Eikelenboom wrote:
On 2015-08-13 00:41, Eric Dumazet wrote:
On Wed, 2015-08-12 at 23:46 +0200, Sander Eikelenboom wrote:
Thanks for the reminder, but luckily I was aware of that,
seen enough of your replies asking for patches to be resubmitted
On Wed, 2015-07-15 at 12:52 +0300, Konstantin Khlebnikov wrote:
These functions check should_resched() before unlocking spinlock/bh-enable:
preempt_count always non-zero => should_resched() always returns false.
cond_resched_lock() worked iff spin_needbreak is set.
Interesting, this definitely
On Mon, 2015-07-06 at 11:35 +0100, Julien Grall wrote:
__in6_dev_get requires to hold rcu_read_lock or RTNL. My knowledge on
this code is very limited. Are we sure that one of these locks is held? At
first glance, I wasn't able to find one.
You could play it safe ;)
diff --git
On Mon, 2015-07-06 at 16:26 +0800, Bob Liu wrote:
Hi,
I tried to run the latest kernel v4.2-rc1, but often got below panic during
system boot.
[ 42.118983] BUG: unable to handle kernel paging request at 003f
[ 42.119008] IP: [8161cfd0] __netdev_pick_tx+0x70/0x120
On Mon, 2015-07-06 at 19:13 +0800, Bob Liu wrote:
Thank you for the quick fix!
Tested by rebooting several times and didn't hit this panic any more.
Thanks Bob, I will submit an official patch then ;)
On Tue, 2015-06-02 at 10:52 +0100, Wei Liu wrote:
Hi Eric
Sorry for coming late to the discussion.
On Thu, Apr 16, 2015 at 05:42:16AM -0700, Eric Dumazet wrote:
On Thu, 2015-04-16 at 11:01 +0100, George Dunlap wrote:
He suggested that after he'd been prodded by 4 more e-mails
On Mon, 2015-05-11 at 18:34 -0700, Venkat Venkatsubra wrote:
In ed1f50c3a (net: add skb_checksum_setup) some checksum functions
were introduced in core. Subsequent change b5cf66cd1 (xen-netfront:
use new skb_checksum_setup function) made use of those functions to
replace its own
On Thu, 2015-04-16 at 12:39 +0100, George Dunlap wrote:
On 04/15/2015 07:17 PM, Eric Dumazet wrote:
Do not expect me to fight bufferbloat alone. Be part of the challenge,
instead of trying to get back to proven bad solutions.
I tried that. I wrote a description of what I thought
On Thu, 2015-04-16 at 11:01 +0100, George Dunlap wrote:
He suggested that after he'd been prodded by 4 more e-mails in which two
of us guessed what he was trying to get at. That's what I was
complaining about.
My big complain is that I suggested to test to double the sysctl, which
gave good
On Wed, 2015-04-15 at 14:43 +0100, George Dunlap wrote:
On Mon, Apr 13, 2015 at 2:49 PM, Eric Dumazet eric.duma...@gmail.com wrote:
On Mon, 2015-04-13 at 11:56 +0100, George Dunlap wrote:
Is the problem perhaps that netback/netfront delays TX completion?
Would it be better to see
On Wed, 2015-04-15 at 15:36 +0100, Ian Campbell wrote:
On Wed, 2015-04-15 at 15:19 +0100, George Dunlap wrote:
On Mon, Apr 13, 2015 at 4:03 PM, Malcolm Crossley
[...]
From a networking point of view, the backend is a switch. Is it OK to
consider the packet to have been transmitted from
On Wed, 2015-04-15 at 18:23 +0100, George Dunlap wrote:
On 04/15/2015 05:38 PM, Eric Dumazet wrote:
My thoughts that instead of these long talks you should guys read the
code :
/* TCP Small Queues :
* Control number of packets in qdisc/devices to two
On Wed, 2015-04-15 at 11:19 -0700, Rick Jones wrote:
Well, I'm not sure that it is George and Jonathan themselves who don't
want to change a sysctl, but the customers who would have to tweak that
in their VMs?
Keep in mind some VM users install custom qdisc, or even custom TCP
sysctls.
On Wed, 2015-04-15 at 18:23 +0100, George Dunlap wrote:
Which means that max(2*skb->truesize, sk->sk_pacing_rate >> 10) is
*already* larger for Xen; that calculation mentioned in the comment is
*already* doing the right thing.
Sigh.
1ms of traffic at 40Gbit is 5 MBytes
The reason for the cap to
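The calculation being argued about can be sketched as a simplified model of the autosizing logic from 605ad7f18 (ignoring the tcp_limit_output_bytes cap): the per-socket limit is at least two skbs' worth of truesize, or roughly 1 ms of bytes at the current pacing rate, since a right shift by 10 divides bytes-per-second by 1024, which approximates bytes-per-millisecond.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the dynamic TSQ limit:
 * max(2 * skb->truesize, sk->sk_pacing_rate >> 10). */
static uint64_t tsq_limit(uint64_t truesize, uint64_t pacing_rate_Bps)
{
    uint64_t one_ms = pacing_rate_Bps >> 10;  /* ~1 ms of traffic */
    uint64_t two_skbs = 2 * truesize;
    return two_skbs > one_ms ? two_skbs : one_ms;
}
```

At 40 Gbit/s the pacing rate is 5e9 bytes/s, and 5e9 >> 10 is about 4.88 MB, which is the "1ms of traffic at 40Gbit is 5 MBytes" figure quoted above.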
On Wed, 2015-04-15 at 10:55 -0700, Rick Jones wrote:
Have you tested this patch on a NIC without GSO/TSO ?
This would allow more than 500 packets for a single flow.
Hello bufferbloat.
Wouldn't the fq_codel qdisc on that interface address that problem?
Last time I checked, default
On Wed, 2015-04-15 at 18:41 +0100, George Dunlap wrote:
So you'd be OK with a patch like this? (With perhaps a better changelog?)
-George
---
TSQ: Raise default static TSQ limit
A new dynamic TSQ limit was introduced in c/s 605ad7f18 based on the
size of actual packets and the
On Wed, 2015-04-15 at 19:04 +0100, George Dunlap wrote:
Maybe you should stop wasting all of our time and just tell us what
you're thinking.
I think you make me wasting my time.
I already gave all the hints in prior discussions.
Rome was not built in one day.
On Thu, 2015-04-16 at 12:20 +0800, Herbert Xu wrote:
Eric Dumazet eric.duma...@gmail.com wrote:
We already have netdev->gso_max_size and netdev->gso_max_segs
which are cached into sk->sk_gso_max_size / sk->sk_gso_max_segs
It is quite dangerous to attempt tricks like this because a
tc
On Mon, 2015-04-13 at 14:46 +0100, David Vrabel wrote:
And the proof-of-concept patch for idea (b) I used was:
@@ -552,6 +552,8 @@ static int xennet_start_xmit(struct sk_buff *skb,
struct net_device *dev)
goto drop;
}
+skb_orphan(skb);
+
page =
On Mon, 2015-04-13 at 11:56 +0100, George Dunlap wrote:
Is the problem perhaps that netback/netfront delays TX completion?
Would it be better to see if that can be addressed properly, so that
the original purpose of the patch (fighting bufferbloat) can be
achieved while not degrading
that the perf regression is caused by the
presence of the following commit in the guest kernel:
commit 605ad7f184b60cfaacbc038aa6c55ee68dee3c89
Author: Eric Dumazet eduma...@google.com
Date: Sun Dec 7 12:22:18 2014 -0800
tcp: refine TSO autosizing
A simple revert would fix
On Thu, 2015-04-09 at 17:36 +0100, Stefano Stabellini wrote:
A very big difference:
echo 262144 > /proc/sys/net/ipv4/tcp_limit_output_bytes
brings us much closer to the original performance, the slowdown is just
8%
Cool.
echo 1048576 > /proc/sys/net/ipv4/tcp_limit_output_bytes
fills the
On Thu, 2015-03-26 at 11:13 +, Jonathan Davies wrote:
xen-netfront limits transmitted skbs to be at most 44 segments in size.
However,
GSO permits up to 65536 bytes, which means a maximum of 45 segments of 1448
bytes each. This slight reduction in the size of packets means a slight loss
On Thu, 2015-03-26 at 16:46 +, Jonathan Davies wrote:
Network drivers with slow TX completion can experience poor network transmit
throughput, limited by hitting the sk_wmem_alloc limit check in
tcp_write_xmit.
The limit is 128 KB (by default), which means we are limited to two 64 KB skbs
On Sat, 2014-12-20 at 17:55 +1100, Herbert Xu wrote:
-- >8 --
The commit d75b1ade567ffab085e8adbbdacf0092d10cd09c (net: less
interrupt masking in NAPI) required drivers to leave poll_list
empty if the entire budget is consumed.
We have already had two broken drivers so let's add a check for
On Sat, 2014-12-20 at 11:36 +1100, Herbert Xu wrote:
On Sat, Dec 20, 2014 at 11:23:27AM +1100, Herbert Xu wrote:
A similar bug exists in virtio_net.
In order to detect other drivers doing this we should add something
like this.
-- >8 --
The commit
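The rule from "net: less interrupt masking in NAPI" can be illustrated with a user-space analog (hypothetical names throughout): a poll handler may take itself off the poll list only when it did not consume its full budget; returning work == budget means "call me again", and the driver must leave poll_list alone in that case.

```c
#include <assert.h>
#include <stdbool.h>

/* User-space analog (hypothetical names) of a NAPI poll handler. */
struct fake_napi {
    int pending;        /* packets waiting to be processed */
    bool on_poll_list;  /* stands in for napi->poll_list membership */
};

/* Processes up to `budget` packets. Completes (leaves the poll list)
 * only when the budget was NOT exhausted, mirroring the rule the
 * two broken drivers violated. */
static int fake_poll(struct fake_napi *n, int budget)
{
    int work = 0;

    while (work < budget && n->pending > 0) {
        n->pending--;
        work++;
    }
    if (work < budget)
        n->on_poll_list = false;  /* analog of napi_complete() */
    return work;
}
```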