On Thu, Mar 23, 2017 at 06:42:01PM +0100, Daniel Lezcano wrote:
> diff --git a/drivers/clocksource/timer-nps.c b/drivers/clocksource/timer-nps.c
> index da1f798..dbdb622 100644
> --- a/drivers/clocksource/timer-nps.c
> +++ b/drivers/clocksource/timer-nps.c
> @@ -256,7 +256,7 @@ static int __init np
On Tue, Mar 21, 2017 at 08:25:39PM +0100, Thomas Gleixner wrote:
> > I just hit this while fuzzing..
> >
> > general protection fault: [#1] PREEMPT SMP DEBUG_PAGEALLOC
> > CPU: 2 PID: 0 Comm: swapper/2 Not tainted 4.11.0-rc2-think+ #1
> > task: 88017f0ed440 task.stack: c90
On Tue, Mar 14, 2017 at 11:35:33AM +0800, Xin Long wrote:
> >> > [  245.416594] (sk_lock-AF_INET){+.+.+.}, at: [] sctp_sendmsg+0x330/0xfe0 [sctp]
> >> > [  245.450167]
> >> > stack backtrace:
> >> >
[  244.251557] ===============================
[  244.263321] [ ERR: suspicious RCU usage. ]
[  244.274982] 4.10.0-think+ #7 Not tainted
[  244.286511] -------------------------------
[  244.298008] ./include/linux/rhashtable.h:602 suspicious rcu_dereference_check() usage!
[ 244.309665]
As one approaches the wire limit for bitrate, a metric such as netperf's
service demand can be used to demonstrate the performance change - though
there isn't an easy way to do that for parallel flows.
happy benchmarking,
rick jones
performance improved?
happy benchmarking,
rick jones
sane defaults. For example, we have seen traffic sent by VMs get
reordered when the driver took it upon itself to enable XPS.
rick jones
On 02/03/2017 10:22 AM, Benjamin Serebrin wrote:
Thanks, Michael, I'll put this text in the commit log:
XPS settings aren't writable from userspace, so the only way I know
to fix XPS is in the driver.
??
root@np-cp1-c0-m1-mgmt:/home/stack# cat
/sys/devices/pci:00/:00:02.0/:04:0
RSI looks kinda like slab poison here, so re-using a free'd ptr ?
general protection fault: [#1] PREEMPT SMP DEBUG_PAGEALLOC
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.10.0-rc4-think+ #2
task: 81e16500 task.stack: 81e0
RIP: 0010:prb_retire_rx_blk_timer_expired+0x42/0x130
On 01/17/2017 11:13 AM, Eric Dumazet wrote:
On Tue, Jan 17, 2017 at 11:04 AM, Rick Jones wrote:
Drifting a bit, and it doesn't change the value of dealing with it, but
out of curiosity: when you say mostly in CLOSE_WAIT, why aren't the
server-side applications reacting to the read return of zero triggered
by the arrival of the FIN?
happy benchmarking,
rick jones
rrors.
Straight-up defaults with netperf, or do you use specific -s/S or -m/M
options?
happy benchmarking,
rick jones
np is already assigned in the variable declaration of ping_v6_sendmsg.
At this point, we have already dereferenced np several times, so the
NULL check is also redundant.
Suggested-by: Eric Dumazet
Signed-off-by: Dave Jones
diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c
index e1f8b34d7a2e
Just noticed this on 4.9. Will try and repro on 4.10rc1 later, but hitting
unrelated boot problems on that machine right now.
===============================
[ INFO: suspicious RCU usage. ]
4.9.0-backup-debug+ #1 Not tainted
-------------------------------
./include/linux/rcupdate.h:557 Illegal co
o, 4);
setsockopt(fd, SOL_IPV6, IPV6_DSTOPTS, &buf, LEN);
sendto(fd, buf, 1, 0, (struct sockaddr *) buf, 110);
}
Signed-off-by: Dave Jones
diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
index 291ebc260e70..ea89073c8247 100644
--- a/net/ipv6/raw.c
+++ b/net/ipv6/raw.c
@@ -591,7 +591,11 @@ st
On Wed, Dec 21, 2016 at 10:33:20PM +0100, Hannes Frederic Sowa wrote:
> > Given all of this, I think the best thing to do is validate the offset
> > after the queue walks, which is pretty much what Dave Jones's original
> > patch was doing.
>
> I think both approaches protect against the bug
On Tue, Dec 20, 2016 at 11:31:38AM -0800, Cong Wang wrote:
> On Tue, Dec 20, 2016 at 10:17 AM, Dave Jones wrote:
> > On Mon, Dec 19, 2016 at 08:36:23PM -0500, David Miller wrote:
> > > From: Dave Jones
> > > Date: Mon, 19 Dec 2016 19:40:13 -0500
> > >
On Tue, Dec 20, 2016 at 01:28:13PM -0500, David Miller wrote:
> This has to do with the SKB buffer layout and geometry, not whether
> the packet is "fragmented" in the protocol sense.
>
> So no, this isn't a criteria for packets being filtered out by this
> point.
>
> Can you try to capt
On Mon, Dec 19, 2016 at 08:36:23PM -0500, David Miller wrote:
> From: Dave Jones
> Date: Mon, 19 Dec 2016 19:40:13 -0500
>
> > On Mon, Dec 19, 2016 at 07:31:44PM -0500, Dave Jones wrote:
> >
> > > Unfortunately, this made no difference. I spent some time
On 2016-12-20 17:16, Geoff Lansberry wrote:
> From: Geoff Lansberry
>
> The TRF7970A has configuration options to support hardware designs
> which use a 27.12MHz clock. This commit adds a device tree option
> 'clock-frequency' to support configuring this chip for the default
> 13.56MHz clock or t
On Mon, Dec 19, 2016 at 07:31:44PM -0500, Dave Jones wrote:
> Unfortunately, this made no difference. I spent some time today trying
> to make a better reproducer, but failed. I'll revisit again tomorrow.
>
> Maybe I need >1 process/thread to trigger this. That would
On Mon, Dec 19, 2016 at 02:48:48PM -0500, David Miller wrote:
> One thing that's interesting is that if the user picks "IPPROTO_RAW"
> as the value of 'protocol' we set inet->hdrincl to 1.
>
> The user can also set inet->hdrincl to 1 or 0 via setsockopt().
>
> I think this is part of the p
50
> > [] SYSC_sendto+0xef/0x170
> > [] SyS_sendto+0xe/0x10
> > [] do_syscall_64+0x50/0xa0
> > [] entry_SYSCALL64_slow_path+0x25/0x25
> >
> > Handle this in rawv6_push_pending_frames and jump to the failure path.
> >
> > Signed-off-by:
On Sat, Dec 17, 2016 at 10:41:20AM -0500, David Miller wrote:
> From: Dave Jones
> Date: Wed, 14 Dec 2016 10:47:29 -0500
>
> > It seems to be possible to craft a packet for sendmsg that triggers
> > the -EFAULT path in skb_copy_bits resulting in a BUG_ON that looks
+0x693/0x830
[] inet_sendmsg+0x67/0xa0
[] sock_sendmsg+0x38/0x50
[] SYSC_sendto+0xef/0x170
[] SyS_sendto+0xe/0x10
[] do_syscall_64+0x50/0xa0
[] entry_SYSCALL64_slow_path+0x25/0x25
Handle this in rawv6_push_pending_frames and jump to the failure path.
Signed-off-by: Dave Jones
diff --git a/net
I think this has been around for a while, but for some reason I'm running into
it a lot today.
BUG: sleeping function called from invalid context at kernel/irq/manage.c:110
in_atomic(): 1, irqs_disabled(): 1, pid: 1839, name: modprobe
no locks held by modprobe/1839.
Preemption disabled at:
[] wri
tionally, even under no stress at
all, you really should complain then.
Isn't that behaviour based (in part?) on the observation/belief that it
is fewer cycles to copy the small packet into a small buffer than to
send the larger buffer up the stack and have to allocate and map a
replacement?
rick jones
- (2 * VLAN_HLEN) which this patch is
doing. It will be useful in the next patch which allows
XDP program to extend the packet by adding new header(s).
Is mlx4 the only driver doing page-per-packet?
rick jones
On 12/01/2016 02:12 PM, Tom Herbert wrote:
We have to consider both request size and response size in RPC.
Presumably, something like a memcache server is mostly serving data as
opposed to reading it, so it will be receiving much smaller packets
than it sends. Requests are going to be quite smal
On 12/01/2016 12:18 PM, Tom Herbert wrote:
On Thu, Dec 1, 2016 at 11:48 AM, Rick Jones wrote:
Just how much per-packet path-length are you thinking will go away under the
likes of TXDP? It is admittedly "just" netperf but losing TSO/GSO does some
non-trivial things to effectiv
even if one does have the CPU cycles to burn so to speak, the effect
on power consumption needs to be included in the calculus.
happy benchmarking,
rick jones
On 11/30/2016 02:43 AM, Jesper Dangaard Brouer wrote:
Notice the "fib_lookup" cost is still present, even when I use
option "-- -n -N" to create a connected socket. As Eric taught us,
this is because we should use syscalls "send" or "write" on a connected
socket.
In theory, once the data socke
On 11/28/2016 10:33 AM, Rick Jones wrote:
On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote:
time to try IP_MTU_DISCOVER ;)
To Rick, maybe you can find a good solution or option with Eric's hint,
to send appropriate sized UDP packets with Don't Fragment (DF).
Jesper -
Top of t
On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote:
time to try IP_MTU_DISCOVER ;)
To Rick, maybe you can find a good solution or option with Eric's hint,
to send appropriate sized UDP packets with Don't Fragment (DF).
Jesper -
Top of trunk has a change adding an omni, test-specific -f opt
On 11/17/2016 04:37 PM, Julian Anastasov wrote:
On Thu, 17 Nov 2016, Rick Jones wrote:
raj@tardy:~/netperf2_trunk$ strace -v -o /tmp/netperf.strace src/netperf -F
src/nettest_omni.c -t UDP_STREAM -l 1 -- -m 1472
...
socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4
getsockopt(4, SOL_SOCKET
tf(where,\n\t\ttput_fmt_1_l"..., 1472, 0,
{sa_family=AF_INET, sin_port=htons(58088),
sin_addr=inet_addr("127.0.0.1")}, 16) = 1472
Of course, it will continue to send the same messages from the send_ring
over and over instead of putting different data into the buffers each
time, but if one has a sufficiently large -W option specified...
happy benchmarking,
rick jones
t creation
wouldn't be too difficult, along with another command-line option to
cause it to happen.
Could we leave things as "make sure you don't need fragmentation when
you use this" or would netperf have to start processing ICMP messages?
happy benchmarking,
rick jones
On 11/16/2016 02:40 PM, Jesper Dangaard Brouer wrote:
On Wed, 16 Nov 2016 09:46:37 -0800
Rick Jones wrote:
It is a wild guess, but does setting SO_DONTROUTE affect whether or not
a connect() would have the desired effect? That is there to protect
people from themselves (long story about
tperf users on
Windows and there wasn't (at the time) support for git under Windows.
But I am not against the idea in principle.
happy benchmarking,
rick jones
PS - rick.jo...@hp.com no longer works. rick.jon...@hpe.com should be
used instead.
ms
with a large PAGE_SIZE?
/* avoid msg truncation on > 4096 byte PAGE_SIZE platforms */
or something like that.
rick jones
the
can, while "back in the day" (when some of the first ethtool changes to
report speeds other than the "normal" ones went in) the speed of a
flexnic was fixed, today, it can actually operate in a range. From a
minimum guarantee to an "if there is bandwidth available" cap.
rick jones
On 10/25/2016 08:31 AM, Paul Menzel wrote:
To my knowledge, the firmware files haven’t changed since years [1].
Indeed - it looks like I read "bnx2" and thought "bnx2x". Must remember
to hold off on replying until after the morning orange juice is consumed :)
rick
version of
the firmware. Usually, finding a package "out there" with the newer
version of the firmware, and installing it onto the system is sufficient.
happy benchmarking,
rick jones
On 10/10/2016 09:08 AM, Rick Jones wrote:
On 10/09/2016 03:33 PM, Eric Dumazet wrote:
OK, I am adding/CC Rick Jones, netperf author, since it seems a netperf
bug, not a kernel one.
I believe I already mentioned fact that "UDP_STREAM -- -N" was not doing
a connect() on the receiver
On 10/09/2016 03:33 PM, Eric Dumazet wrote:
OK, I am adding/CC Rick Jones, netperf author, since it seems a netperf
bug, not a kernel one.
I believe I already mentioned fact that "UDP_STREAM -- -N" was not doing
a connect() on the receiver side.
I can confirm that the receive s
currently
selecting different TXQ.
Just for completeness, in my testing, the VMs were single-vCPU.
rick jones
ConnectX-3 Pro, E5-2670v3   12421   12612
BE3,            E5-2640      8178    8484
82599,          E5-2640      8499    8549
BCM57840,       E5-2640      8544    8560
Skyhawk,        E5-2640      8537    8701
happy benchmarking,
Drew Balliet
Jeurg Haefliger
rick jones
true long-term bw estimate variable?
We could do that.
We used to have variables (aka module params) while BBR was cooking in
our kernels ;)
Are there better than epsilon odds of someone perhaps wanting to poke
those values as it gets exposure beyond Google?
happy benchmarking,
rick jones
conn-tracking work.
What is that first sentence trying to say? It appears to be incomplete,
and is that supposed to be "L3-symmetric?"
happy benchmarking,
rick jones
with one doorbell.
With small packets and the "default" ring size for this NIC/driver
combination, is the BQL large enough that the ring fills before one hits
the BQL?
rick jones
On Tue, Sep 06, 2016 at 10:52:43AM -0700, Eric Dumazet wrote:
> > > @@ -126,8 +126,10 @@ static int ping_v6_sendmsg(struct sock *sk, struct
> > > msghdr *msg, size_t len)
> > > rt = (struct rt6_info *) dst;
> > >
> > > np = inet6_sk(sk);
> > > -if (!np)
> > > -
had been fixed post 3.10, but
it seems at least one case wasn't, where I've seen this triggered
a lot from machines doing unprivileged icmp sockets.
Cc: Martin Lau
Signed-off-by: Dave Jones
diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c
index 0900352c924c..0e983b694ee8 100644
--- a/
On 08/31/2016 04:11 PM, Eric Dumazet wrote:
On Wed, 2016-08-31 at 15:47 -0700, Rick Jones wrote:
With regard to drops, are both of you sure you're using the same socket
buffer sizes?
Does it really matter ?
At least at points in the past I have seen different drop counts at the
SO_R
With regard to drops, are both of you sure you're using the same socket
buffer sizes?
In the meantime, is anything interesting happening with TCP_RR or
TCP_STREAM?
happy benchmarking,
rick jones
kinda feel the same way about this situation.
I'm working on XFS (as the transmit analogue to RFS). We'll track
flows enough so that we should know when it's safe to move them.
Is the XFS you are working on going to subsume XPS or will the two
continue to exist in parallel a la RPS and RFS?
rick jones
From: Rick Jones
Since XPS was first introduced two things have happened. Some drivers
have started enabling XPS on their own initiative, and it has been
found that when a VM is sending data through a host interface with XPS
enabled, that traffic can end-up seriously out of order.
Signed-off
On 08/25/2016 02:08 PM, Eric Dumazet wrote:
When XPS was submitted, it was _not_ enabled by default and 'magic'
Some NIC vendors decided it was a good thing, you should complain to
them ;)
I kindasorta am with the emails I've been sending to netdev :) And also
hopefully precluding others goi
steps to pin VMs can enable XPS in that case. It isn't clear that
one should always pin VMs - for example if a (public) cloud needed to
oversubscribe the cores.
happy benchmarking,
rick jones
On 08/25/2016 12:19 PM, Alexander Duyck wrote:
The problem is that there is no socket associated with the guest from
the host's perspective. This is resulting in the traffic bouncing
between queues because there is no saved socket to lock the interface
onto.
I was looking into this recently as
hmarking,
rick jones
On 08/24/2016 10:23 AM, Eric Dumazet wrote:
From: Eric Dumazet
per_cpu_inc() is faster (at least on x86) than per_cpu_ptr(xxx)++;
Is it possible it is non-trivially slower on other architectures?
rick jones
Signed-off-by: Eric Dumazet
---
include/net/sch_generic.h |2 +-
1 file
8695
Average 4108 8940 8859 8885 8671
happy benchmarking,
rick jones
The sample counts below may not fully support the additional statistics
but for the curious:
raj@tardy:/tmp$ ~/netperf2_trunk/doc/examples/parse_single_stream.py -r
6 waxon_performance.log
MY NFS server running 4.8-rc1 is getting flooded with this message:
e1000e :00:19.0 eth0: __pskb_pull_tail failed.
Never saw it happen with 4.7 or earlier.
That device is this onboard NIC:
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection (2) I218-V
Dave
trigger an interrupt. Presumably setting
rx_max_coalesced_frames to 1 to disable interrupt coalescing.
happy benchmarking,
rick jones
resently? I believe Phil
posted something several messages back in the thread.
happy benchmarking,
rick jones
On 07/07/2016 09:34 AM, Eric W. Biederman wrote:
Rick Jones writes:
300 routers is far from the upper limit/goal. Back in HP Public
Cloud, we were running as many as 700 routers per network node (*),
and more than four network nodes. (back then it was just the one
namespace per router and network). Mileage will of course vary based on
the "oomph" of one's network node(s).
happy benchmarking,
rick jones
* Didn't want to go much higher than that because each router had a port
on a common linux bridge and getting to > 1024 would be an unpleasant day.
problematic
since it takes up server resources for sockets sitting in TCP_CLOSE_WAIT.
Isn't the server application expected to act on the read return of zero
(which is supposed to be) triggered by the receipt of the FIN segment?
rick jones
We are also in the process of contacting Appl
onnection
which has been reset? Is it limited to those errno values listed in the
read() manpage, or does it end-up getting an errno value from those
listed in the recv() manpage? Or, perhaps even one not (presently)
listed in either?
rick jones
and
so could indeed productively use TCP FastOpen.
"Overall, very good success-rate"
though tempered by
"But... middleboxes were a big issue in some ISPs..."
Though it doesn't get into how big (some connections, many, most, all?)
and how many ISPs.
rick jones
Just an anecdote.
On 06/24/2016 02:46 PM, Tom Herbert wrote:
On Fri, Jun 24, 2016 at 2:36 PM, Rick Jones wrote:
How would you define "severely?" Has it actually been more severe than for
say ECN? Or it was for say SACK or PAWS?
ECN is probably even a bigger disappointment in terms of seeing
YN packets with data have together
severely hindered what otherwise should have been straightforward and
useful feature to deploy.
How would you define "severely?" Has it actually been more severe than
for say ECN? Or it was for say SACK or PAWS?
rick jones
Found this logs after a Trinity run.
kernel BUG at net/ipv6/raw.c:592!
[ cut here ]
invalid opcode: [#1] SMP
Modules linked in: udp_diag dccp_ipv6 dccp_ipv4 dccp sctp af_key tcp_diag
inet_diag ip6table_filter xt_NFLOG nfnetlink_log xt_comment xt_statistic
iptable_
On 06/22/2016 04:10 PM, Rick Jones wrote:
My systems are presently in the midst of an install but I should be able
to demonstrate it in the morning (US Pacific time, modulo the shuttle
service of a car repair place)
The installs finished sooner than I thought. So, receiver:
root@np-cp1
On 06/22/2016 03:56 PM, Alexander Duyck wrote:
On Wed, Jun 22, 2016 at 3:47 PM, Eric Dumazet wrote:
On Wed, 2016-06-22 at 14:52 -0700, Rick Jones wrote:
Had the bnx2x-driven NICs' firmware not had that rather unfortunate
assumption about MSSes I probably would never have noticed.
It
On 06/22/2016 03:47 PM, Eric Dumazet wrote:
On Wed, 2016-06-22 at 14:52 -0700, Rick Jones wrote:
On 06/22/2016 11:22 AM, Yuval Mintz wrote:
But seriously, this isn't really anything new but rather a step forward in
the direction we've already taken - bnx2x/qede are already performin
I probably would never have noticed.
happy benchmarking,
rick jones
as would a comparison of the service demands of the different
single-stream results.
CPU and NIC models would provide excellent context for the numbers.
happy benchmarking,
rick jones
jones
On 06/02/2016 10:06 AM, Aaron Conole wrote:
Rick Jones writes:
One of the things I've been doing has been setting-up a cluster
(OpenStack) with JumboFrames, and then setting MTUs on instance vNICs
by hand to measure different MTU sizes. It would be a shame if such a
thing were not possib
aggregate small packet
performance.
happy benchmarking,
rick jones
On 05/04/2016 10:34 AM, Eric Dumazet wrote:
On Wed, 2016-05-04 at 10:24 -0700, Rick Jones wrote:
Dropping the connection attempt makes sense, but is entering/claiming
synflood really indicated in the case of a zero-length accept queue?
This is a one time message.
This is how people can
On 05/03/2016 05:25 PM, Eric Dumazet wrote:
On Tue, 2016-05-03 at 23:54 +0200, Peter Wu wrote:
When applications use listen() with a backlog of 0, the kernel would
set the maximum connection request queue to zero. This causes false
reports of SYN flooding (if tcp_syncookies is enabled) or packet
to the driver, which then either
queued them all, or none of them.
I don't recall seeing similar poor behaviour in Linux; I would have
assumed that the intra-stack flow-control "took care" of it. Perhaps
there is something specific to wpan which precludes that?
happy benchmarking,
rick jones
s in this setup
the 3.4.2 and 4.4.0 kernels perform identically - just as you would
expect.
Running in a VM will likely change things massively and could I suppose
mask other behaviour changes.
happy benchmarking,
rick jones
raj@tardy:~$ cat signatures/toppost
A: Because it fouls the order in which
default
request/response size of one byte) doesn't really care about stateless
offloads or MTUs and could show how much difference there is in basic
path length (or I suppose in interrupt coalescing behaviour if the NIC
in question has a mildly dodgy heuristic for such things).
happy benchmarking,
rick jones
Trinity and other fuzzers can hit this WARN on far too easily,
resulting in a tainted kernel that hinders automated fuzzing.
Replace it with a rate-limited printk.
Signed-off-by: Dave Jones
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index 1ecfa710ca98..f12c17f355d9 100644
races.
Yes, our team (including Van Jacobson ;) ) would be sad to not have
sequential IP ID (but then we don't have them for IPv6 ;) )
Your team would not be the only one sad to see that go away.
rick jones
Since the cost of generating them is pretty small (inet->inet_id
counter)
On 03/28/2016 01:01 PM, Eric Dumazet wrote:
Note : file structures got RCU freeing back in 2.6.14, and I do not
think named users ever complained about added cost ;)
Couldn't see the tree for the forest I guess :)
rick
On 03/28/2016 11:55 AM, Eric Dumazet wrote:
On Mon, 2016-03-28 at 11:44 -0700, Rick Jones wrote:
On 03/28/2016 10:00 AM, Eric Dumazet wrote:
If you mean that a busy DNS resolver spends _most_ of its time doing :
fd = socket()
bind(fd port=0)
< send and receive one frame >
close(fd)
On 03/28/2016 10:00 AM, Eric Dumazet wrote:
On Mon, 2016-03-28 at 09:15 -0700, Rick Jones wrote:
On 03/25/2016 03:29 PM, Eric Dumazet wrote:
UDP sockets are not short lived in the high usage case, so the added
cost of call_rcu() should not be a concern.
Even a busy DNS resolver?
If you
On 03/25/2016 03:29 PM, Eric Dumazet wrote:
UDP sockets are not short lived in the high usage case, so the added
cost of call_rcu() should not be a concern.
Even a busy DNS resolver?
rick jones
commit 911362c70d ("net: add dst_cache support") added a new
kconfig option that gets selected by other networking options.
It seems the intent wasn't to offer this as a user-selectable
option given the lack of help text, so this patch converts it
to a silent option.
Signed-off
nsaction inflight at one time.
And unless one uses the test-specific -e option to provide a very crude
retransmission mechanism based on a socket read timeout, neither does
UDP_RR recover from lost datagrams.
happy benchmarking,
rick jones
http://www.netperf.org/
may add more thorough
error handling.
How do you see this interacting with VMs getting MTU settings via DHCP?
rick jones
v2:
* Whitespace and code style cleanups from Sergei Shtylyov and Paolo Abeni
* Additional test before printing a warning
Aaron Conole (2):
virtio: Start feature MTU
should get some SNMP counters,
so that we get an idea of how many times a loss could be repaired.
And some idea of the duplication seen by receivers, assuming there isn't
already a counter for such a thing in Linux.
happy benchmarking,
rick jones
Ideally, if the path happens to be los
tting a non-zero IP ID on fragments with
DF set?
rick jones
We need to increment the IP identifier in UFO, but I only see one
device (neterion) that advertises NETIF_F_UFO -- honestly, removing
that feature might be another good simplification!
Tom
--
-Ed
e one can try to craft
things so there is no storage I/O of note, it would still be better to
use a network-specific tool such as netperf or iperf. Minimize the
number of variables.
happy benchmarking,
rick jones