. As one approaches the wire limit for
bitrate, the likes of a netperf service demand can be used to
demonstrate the performance change - though there isn't an easy way to
do that for parallel flows.
happy benchmarking,
rick jones
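(A sketch of the sort of invocation meant here - the -c/-C global options
enable local and remote CPU measurement, and the service demand columns then
report CPU consumed per unit of work; the hostname and output selectors are
just examples:)
netperf -H <remote> -t TCP_STREAM -c -C -l 30 -- -O throughput,local_sd,remote_sd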
performance improved?
happy benchmarking,
rick jones
sane defaults. For example, the issues
we've seen with VMs sending traffic getting reordered when the driver
took it upon itself to enable xps.
rick jones
On 02/03/2017 10:22 AM, Benjamin Serebrin wrote:
Thanks, Michael, I'll put this text in the commit log:
XPS settings aren't write-able from userspace, so the only way I know
to fix XPS is in the driver.
??
root@np-cp1-c0-m1-mgmt:/home/stack# cat
/sys/devices/pci:00/:00:02.0/:04:0
On 01/17/2017 11:13 AM, Eric Dumazet wrote:
On Tue, Jan 17, 2017 at 11:04 AM, Rick Jones wrote:
Drifting a bit, and it doesn't change the value of dealing with it, but out
of curiosity, when you say mostly in CLOSE_WAIT, why aren't the server-side
applications reacting to the read return of zero triggered by the arrival
of the FIN?
happy benchmarking,
rick jones
rrors.
Straight-up defaults with netperf, or do you use specific -s/S or -m/M
options?
happy benchmarking,
rick jones
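(For concreteness, a sketch of a run with those options spelled out - -s/-S
set the local/remote socket buffer sizes and -m/-M the send/receive message
sizes; the values are arbitrary examples:)
netperf -H <remote> -t TCP_STREAM -- -s 262144 -S 262144 -m 65536 -M 65536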
tionally, even under no stress at
all, you really should complain then.
Isn't that behaviour based (in part?) on the observation/belief that it
is fewer cycles to copy the small packet into a small buffer than to
send the larger buffer up the stack and have to allocate and map a
replacement?
rick jones
- (2 * VLAN_HLEN) which this patch is
doing. It will be useful in the next patch which allows
XDP program to extend the packet by adding new header(s).
Is mlx4 the only driver doing page-per-packet?
rick jones
On 12/01/2016 02:12 PM, Tom Herbert wrote:
We have to consider both request size and response size in RPC.
Presumably, something like a memcache server is mostly serving data as
opposed to reading it, so it will be receiving much smaller
packets than it sends. Requests are going to be quite small
On 12/01/2016 12:18 PM, Tom Herbert wrote:
On Thu, Dec 1, 2016 at 11:48 AM, Rick Jones wrote:
Just how much per-packet path-length are you thinking will go away under the
likes of TXDP? It is admittedly "just" netperf but losing TSO/GSO does some
non-trivial things to effectiv
even if one does have the CPU cycles to burn so to speak, the effect
on power consumption needs to be included in the calculus.
happy benchmarking,
rick jones
On 11/30/2016 02:43 AM, Jesper Dangaard Brouer wrote:
Notice the "fib_lookup" cost is still present, even when I use
option "-- -n -N" to create a connected socket. As Eric taught us,
this is because we should use syscalls "send" or "write" on a connected
socket.
In theory, once the data socke
On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote:
time to try IP_MTU_DISCOVER ;)
To Rick, maybe you can find a good solution or option with Eric's hint,
to send appropriate sized UDP packets with Don't Fragment (DF).
Jesper -
Top of trunk has a change adding an omni, test-specific -f opt
On 11/17/2016 04:37 PM, Julian Anastasov wrote:
On Thu, 17 Nov 2016, Rick Jones wrote:
raj@tardy:~/netperf2_trunk$ strace -v -o /tmp/netperf.strace src/netperf -F
src/nettest_omni.c -t UDP_STREAM -l 1 -- -m 1472
...
socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4
getsockopt(4, SOL_SOCKET
tf(where,\n\t\ttput_fmt_1_l"..., 1472, 0,
{sa_family=AF_INET, sin_port=htons(58088),
sin_addr=inet_addr("127.0.0.1")}, 16) = 1472
Of course, it will continue to send the same messages from the send_ring
over and over instead of putting different data into the buffers each
time, but if one has a sufficiently large -W option specified...
happy benchmarking,
rick jones
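(In other words, something along these lines - -W, if memory serves, takes
the send and receive ring widths, i.e. how many distinct buffers netperf
cycles through; the values are examples:)
netperf -H <remote> -t UDP_STREAM -W 128,128 -- -m 1472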
t creation
wouldn't be too difficult, along with another command-line option to
cause it to happen.
Could we leave things as "make sure you don't need fragmentation when
you use this" or would netperf have to start processing ICMP messages?
happy benchmarking,
rick jones
On 11/16/2016 02:40 PM, Jesper Dangaard Brouer wrote:
On Wed, 16 Nov 2016 09:46:37 -0800
Rick Jones wrote:
It is a wild guess, but does setting SO_DONTROUTE affect whether or not
a connect() would have the desired effect? That is there to protect
people from themselves (long story about
tperf users on
Windows and there wasn't (at the time) support for git under Windows.
But I am not against the idea in principle.
happy benchmarking,
rick jones
PS - rick.jo...@hp.com no longer works. rick.jon...@hpe.com should be
used instead.
ms
with a large PAGE_SIZE?
/* avoid msg truncation on > 4096 byte PAGE_SIZE platforms */
or something like that.
rick jones
the
can, while "back in the day" (when some of the first ethtool changes to
report speeds other than the "normal" ones went in) the speed of a
flexnic was fixed, today, it can actually operate in a range. From a
minimum guarantee to an "if there is bandwidth available" cap.
rick jones
On 10/25/2016 08:31 AM, Paul Menzel wrote:
To my knowledge, the firmware files haven’t changed since years [1].
Indeed - it looks like I read "bnx2" and thought "bnx2x". Must remember
to hold off on replying until after the morning orange juice is consumed :)
rick
version of
the firmware. Usually, finding a package "out there" with the newer
version of the firmware, and installing it onto the system is sufficient.
happy benchmarking,
rick jones
On 10/10/2016 09:08 AM, Rick Jones wrote:
On 10/09/2016 03:33 PM, Eric Dumazet wrote:
OK, I am adding/CC Rick Jones, netperf author, since it seems a netperf
bug, not a kernel one.
I believe I already mentioned the fact that "UDP_STREAM -- -N" was not doing
a connect() on the receiver
On 10/09/2016 03:33 PM, Eric Dumazet wrote:
OK, I am adding/CC Rick Jones, netperf author, since it seems a netperf
bug, not a kernel one.
I believe I already mentioned the fact that "UDP_STREAM -- -N" was not doing
a connect() on the receiver side.
I can confirm that the receive s
currently
selecting different TXQ.
Just for completeness, in my testing, the VMs were single-vCPU.
rick jones
ConnectX-3 Pro, E5-2670v3   12421   12612
BE3,            E5-2640      8178    8484
82599,          E5-2640      8499    8549
BCM57840,       E5-2640      8544    8560
Skyhawk,        E5-2640      8537    8701
happy benchmarking,
Drew Balliet
Jeurg Haefliger
rick jones
true long-term bw estimate variable?
We could do that.
We used to have variables (aka module params) while BBR was cooking in
our kernels ;)
Are there better than epsilon odds of someone perhaps wanting to poke
those values as it gets exposure beyond Google?
happy benchmarking,
rick jones
conn-tracking work.
What is that first sentence trying to say? It appears to be incomplete,
and is that supposed to be "L3-symmetric?"
happy benchmarking,
rick jones
with one doorbell.
With small packets and the "default" ring size for this NIC/driver
combination, is the BQL large enough that the ring fills before one hits
the BQL?
rick jones
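(For anyone wanting to check where BQL sits relative to the ring, the ring
size and the per-queue limit and in-flight byte counts are visible from
ethtool and sysfs - interface and queue number here are placeholders:)
ethtool -g eth2
cat /sys/class/net/eth2/queues/tx-0/byte_queue_limits/limit
cat /sys/class/net/eth2/queues/tx-0/byte_queue_limits/inflight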
On 08/31/2016 04:11 PM, Eric Dumazet wrote:
On Wed, 2016-08-31 at 15:47 -0700, Rick Jones wrote:
With regard to drops, are both of you sure you're using the same socket
buffer sizes?
Does it really matter ?
At least at points in the past I have seen different drop counts at the
SO_R
With regard to drops, are both of you sure you're using the same socket
buffer sizes?
In the meantime, is anything interesting happening with TCP_RR or
TCP_STREAM?
happy benchmarking,
rick jones
kinda feel the same way about this situation.
I'm working on XFS (as the transmit analogue to RFS). We'll track
flows enough so that we should know when it's safe to move them.
Is the XFS you are working on going to subsume XPS or will the two
continue to exist in parallel a la RPS and RFS?
rick jones
From: Rick Jones
Since XPS was first introduced two things have happened. Some drivers
have started enabling XPS on their own initiative, and it has been
found that when a VM is sending data through a host interface with XPS
enabled, that traffic can end-up seriously out of order.
Signed-off
On 08/25/2016 02:08 PM, Eric Dumazet wrote:
When XPS was submitted, it was _not_ enabled by default and 'magic'
Some NIC vendors decided it was a good thing, you should complain to
them ;)
I kindasorta am with the emails I've been sending to netdev :) And also
hopefully precluding others goi
steps to pin VMs can enable XPS in that case. It isn't clear that
one should always pin VMs - for example if a (public) cloud needed to
oversubscribe the cores.
happy benchmarking,
rick jones
On 08/25/2016 12:19 PM, Alexander Duyck wrote:
The problem is that there is no socket associated with the guest from
the host's perspective. This is resulting in the traffic bouncing
between queues because there is no saved socket to lock the interface
onto.
I was looking into this recently as
when the NIC at the sending end
is a BCM57840. It does not appear that the bnx2x driver in the 4.4
kernel is enabling XPS.
So, it would seem that there are three cases of enabling XPS resulting
in out-of-order traffic, two of which result in a non-trivial loss of
performance.
happy benc
On 08/24/2016 10:23 AM, Eric Dumazet wrote:
From: Eric Dumazet
per_cpu_inc() is faster (at least on x86) than per_cpu_ptr(xxx)++;
Is it possible it is non-trivially slower on other architectures?
rick jones
Signed-off-by: Eric Dumazet
---
include/net/sch_generic.h | 2 +-
1 file
8695
Average 4108 8940 8859 8885 8671
happy benchmarking,
rick jones
The sample counts below may not fully support the additional statistics
but for the curious:
raj@tardy:/tmp$ ~/netperf2_trunk/doc/examples/parse_single_stream.py -r 6 waxon_performance.log
trigger an interrupt. Presumably setting
rx_max_coalesced_frames to 1 to disable interrupt coalescing.
happy benchmarking,
rick jones
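(Via the ethtool command line that would presumably be something along the
lines of the following - whether a given NIC/driver accepts it, and whether
rx-usecs may be zero, varies by driver:)
ethtool -C eth2 rx-usecs 0 rx-frames 1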
resently? I believe Phil
posted something several messages back in the thread.
happy benchmarking,
rick jones
On 07/07/2016 09:34 AM, Eric W. Biederman wrote:
Rick Jones writes:
300 routers is far from the upper limit/goal. Back in HP Public
Cloud, we were running as many as 700 routers per network node (*),
and more than four network nodes. (back then it was just the one
namespace per router and network). Mileage will of course vary based on the
"oomph" of one's network node(s).
happy benchmarking,
rick jones
* Didn't want to go much higher than that because each router had a port
on a common linux bridge and getting to > 1024 would be an unpleasant day.
problematic
since it takes up server resources for sockets sitting in TCP_CLOSE_WAIT.
Isn't the server application expected to act on the read return of zero
(which is supposed to be) triggered by the receipt of the FIN segment?
rick jones
We are also in the process of contacting Appl
onnection
which has been reset? Is it limited to those errno values listed in the
read() manpage, or does it end-up getting an errno value from those
listed in the recv() manpage? Or, perhaps even one not (presently)
listed in either?
rick jones
and
so could indeed productively use TCP FastOpen.
"Overall, very good success-rate"
though tempered by
"But... middleboxes were a big issue in some ISPs..."
Though it doesn't get into how big (some connections, many, most, all?)
and how many ISPs.
rick jones
Just an anecdote.
On 06/24/2016 02:46 PM, Tom Herbert wrote:
On Fri, Jun 24, 2016 at 2:36 PM, Rick Jones wrote:
How would you define "severely?" Has it actually been more severe than for
say ECN? Or it was for say SACK or PAWS?
ECN is probably even a bigger disappointment in terms of seeing
SYN packets with data have together
severely hindered what otherwise should have been a straightforward and
useful feature to deploy.
How would you define "severely?" Has it actually been more severe than
for say ECN? Or it was for say SACK or PAWS?
rick jones
On 06/22/2016 04:10 PM, Rick Jones wrote:
My systems are presently in the midst of an install but I should be able
to demonstrate it in the morning (US Pacific time, modulo the shuttle
service of a car repair place)
The installs finished sooner than I thought. So, receiver:
root@np-cp1
On 06/22/2016 03:56 PM, Alexander Duyck wrote:
On Wed, Jun 22, 2016 at 3:47 PM, Eric Dumazet wrote:
On Wed, 2016-06-22 at 14:52 -0700, Rick Jones wrote:
Had the bnx2x-driven NICs' firmware not had that rather unfortunate
assumption about MSSes I probably would never have noticed.
It
On 06/22/2016 03:47 PM, Eric Dumazet wrote:
On Wed, 2016-06-22 at 14:52 -0700, Rick Jones wrote:
On 06/22/2016 11:22 AM, Yuval Mintz wrote:
But seriously, this isn't really anything new but rather a step forward in
the direction we've already taken - bnx2x/qede are already performin
I probably would never have noticed.
happy benchmarking,
rick jones
as would a comparison of the service demands of the different
single-stream results.
CPU and NIC models would provide excellent context for the numbers.
happy benchmarking,
rick jones
On 06/08/2016 09:30 PM, pravin shelar wrote:
On Wed, Jun 8, 2016 at 6:18 PM, William Tu wrote:
+struct ovs_action_trunc {
+ uint32_t max_len; /* Max packet size in bytes. */
This could be uint16_t, as it is related to packet len.
Is there something limiting MTUs to 65535 bytes?
rick
On 06/02/2016 10:06 AM, Aaron Conole wrote:
Rick Jones writes:
One of the things I've been doing has been setting-up a cluster
(OpenStack) with JumboFrames, and then setting MTUs on instance vNICs
by hand to measure different MTU sizes. It would be a shame if such a
thing were not possib
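(Where "by hand" means, within the instance, something along these lines -
device name and value being examples only:)
ip link set dev eth0 mtu 8950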
aggregate small packet
performance.
happy benchmarking,
rick jones
On 05/04/2016 10:34 AM, Eric Dumazet wrote:
On Wed, 2016-05-04 at 10:24 -0700, Rick Jones wrote:
Dropping the connection attempt makes sense, but is entering/claiming
synflood really indicated in the case of a zero-length accept queue?
This is a one time message.
This is how people can
On 05/03/2016 05:25 PM, Eric Dumazet wrote:
On Tue, 2016-05-03 at 23:54 +0200, Peter Wu wrote:
When applications use listen() with a backlog of 0, the kernel would
set the maximum connection request queue to zero. This causes false
reports of SYN flooding (if tcp_syncookies is enabled) or packet
to the driver, which then either
queued them all, or none of them.
I don't recall seeing similar poor behaviour in Linux; I would have
assumed that the intra-stack flow-control "took care" of it. Perhaps
there is something specific to wpan which precludes that?
happy benchmarking,
rick jones
s in this setup
the 3.4.2 and 4.4.0 kernels perform identically - just as you would
expect.
Running in a VM will likely change things massively and could I suppose
mask other behaviour changes.
happy benchmarking,
rick jones
raj@tardy:~$ cat signatures/toppost
A: Because it fouls the order in which
default
request/response size of one byte) doesn't really care about stateless
offloads or MTUs and could show how much difference there is in basic
path length (or I suppose in interrupt coalescing behaviour if the NIC
in question has a mildly dodgy heuristic for such things).
happy benchmarking,
rick jones
races.
Yes, our team (including Van Jacobson ;) ) would be sad to not have
sequential IP ID (but then we don't have them for IPv6 ;) )
Your team would not be the only one sad to see that go away.
rick jones
Since the cost of generating them is pretty small (inet->inet_id
counter)
On 03/28/2016 01:01 PM, Eric Dumazet wrote:
Note : file structures got RCU freeing back in 2.6.14, and I do not
think named users ever complained about added cost ;)
Couldn't see the tree for the forest I guess :)
rick
On 03/28/2016 11:55 AM, Eric Dumazet wrote:
On Mon, 2016-03-28 at 11:44 -0700, Rick Jones wrote:
On 03/28/2016 10:00 AM, Eric Dumazet wrote:
If you mean that a busy DNS resolver spends _most_ of its time doing :
fd = socket()
bind(fd port=0)
< send and receive one frame >
close(fd)
On 03/28/2016 10:00 AM, Eric Dumazet wrote:
On Mon, 2016-03-28 at 09:15 -0700, Rick Jones wrote:
On 03/25/2016 03:29 PM, Eric Dumazet wrote:
UDP sockets are not short lived in the high usage case, so the added
cost of call_rcu() should not be a concern.
Even a busy DNS resolver?
If you
On 03/25/2016 03:29 PM, Eric Dumazet wrote:
UDP sockets are not short lived in the high usage case, so the added
cost of call_rcu() should not be a concern.
Even a busy DNS resolver?
rick jones
nsaction inflight at one time.
And unless one uses the test-specific -e option to provide a very crude
retransmission mechanism based on a socket read timeout, neither does
UDP_RR recover from lost datagrams.
happy benchmarking,
rick jones
http://www.netperf.org/
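(A sketch of such an invocation, with the test-specific -e option enabling
that crude, socket-read-timeout-based retry mechanism - the value shown is
just an example:)
netperf -H <remote> -t UDP_RR -- -e 1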
may add more thorough
error handling.
How do you see this interacting with VMs getting MTU settings via DHCP?
rick jones
v2:
* Whitespace and code style cleanups from Sergei Shtylyov and Paolo Abeni
* Additional test before printing a warning
Aaron Conole (2):
virtio: Start feature MTU
should get some SNMP counters,
so that we get an idea of how many times a loss could be repaired.
And some idea of the duplication seen by receivers, assuming there isn't
already a counter for such a thing in Linux.
happy benchmarking,
rick jones
Ideally, if the path happens to be los
tting a non-zero IP ID on fragments with
DF set?
rick jones
We need to increment the IP identifier in UFO, but I only see one
device (neterion) that advertises NETIF_F_UFO-- honestly, removing
that feature might be another good simplification!
Tom
--
-Ed
e one can try to craft
things so there is no storage I/O of note, it would still be better to
use a network-specific tool such as netperf or iperf. Minimize the
number of variables.
happy benchmarking,
rick jones
/
#define BR_GROUPFWD_DEFAULT 0
/* Don't allow forwarding of control protocols like STP, MAC PAUSE and LACP */
If you are going to 9000, why not just go ahead and use the maximum size
of an IP datagram?
rick jones
accounting to show wrong results.
Fix that. Use it for rx_fifo_errors only.
Fixes: c27a02cd94d6 ('mlx4_en: Add driver for Mellanox ConnectX 10GbE NIC')
Signed-off-by: Amir Vadai
Signed-off-by: Eugenia Emantayev
Signed-off-by: Or Gerlitz
Reviewed-By: Rick Jones
rick
ors = 0;
stats->rx_fifo_errors = be32_to_cpu(mlx4_en_stats->RdropOvflw);
happy benchmarking,
rick jones
sd 20.5931
stack@fcperf-cp1-comp0001-mgmt:~$ grep "1 1" xps_tcp_rr_off_* | awk '{t+=$6;r+=$9;s+=$10}END{print "throughput",t/NR,"recv sd",r/NR,"send sd",s/NR}'
throughput 20883.6 recv sd 19.6255 send sd 20.0178
So that is 12% on TCP_RR throughput.
Looks like XPS shouldn't be enabled by default for ixgbe.
happy benchmarking,
rick jones
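(For reference, the comparison above is XPS enabled versus disabled; on
drivers that expose the standard per-queue sysfs knobs, XPS can be switched
off for a test by clearing each tx queue's CPU mask - interface name is a
placeholder:)
for q in /sys/class/net/eth2/queues/tx-*/xps_cpus; do echo 0 > $q; done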
sd 0.6543 send sd 0.3606
stack@fcperf-cp1-comp0001-mgmt:~$ grep TCPOFO xps_off_* | awk '{sum += $NF}END{print "sum",sum/NR}'
sum 173.9
happy benchmarking,
rick jones
raw results at ftp://ftp.netperf.org/xps_4.4.0-1_ixgbe.tgz
On 02/04/2016 12:13 PM, Tom Herbert wrote:
On Thu, Feb 4, 2016 at 11:57 AM, Rick Jones wrote:
On 02/04/2016 11:38 AM, Tom Herbert wrote:
XPS has OOO avoidance for TCP, that should not be a problem.
What/how much should I read into:
With XPS    TCPOFOQueue: 78206
Without XPS TCPOFOQueue
On 02/04/2016 11:38 AM, Tom Herbert wrote:
On Thu, Feb 4, 2016 at 11:13 AM, Rick Jones wrote:
The Intel folks suggested something about the process scheduler moving the
sender around and ultimately causing some packet re-ordering. That could I
suppose explain the TCP_STREAM difference, but not the TCP_RR
since that has just a single segment in flight at one time.
I can try to get perf/whatnot installed on the systems - suggestions as
to what metrics to look at are we
On 02/04/2016 04:47 AM, Michael S. Tsirkin wrote:
On Wed, Feb 03, 2016 at 03:49:04PM -0800, Rick Jones wrote:
And even for not-quite-virtual devices - such as a VC/FlexNIC in an HPE
blade server there can be just about any speed set. I think we went down a
path of patching some things to
On 02/03/2016 03:32 PM, Stephen Hemminger wrote:
But why check for valid value at all. At some point in the
future, there will be yet another speed adopted by some standard body
and the switch statement would need another value.
Why not accept any value? This is a virtual device.
And even fo
through an interface is significantly
greater than the reported link speed. I have to wonder how unique it is
in that regard.
Doesn't mean there can't be a default, but does suggest it should be
rather high.
rick jones
since it wasn't the same
per-core "horsepower" on either side and so why LRO on/off could have
also affected the TCP_STREAM results. (When LRO was off it was off on
both sides, and when on, it was on on both, yes?)
happy benchmarking,
rick jones
y doing is turning on LRO support via ethtool -k to see if that is the
issue you are seeing.
Hi Alex,
enabling LRO resolved the problem.
So you had the same NIC and CPUs and whatnot on both sides?
rick jones
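(The usual knob for that being along these lines, assuming the NIC/driver
supports LRO at all:)
ethtool -K eth2 lro on
ethtool -k eth2 | grep large-receive-offload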
socket was
created. If you want to see what they became by the end of the test,
you need to use the appropriate output selectors (or, IIRC invoking the
tests as "omni" rather than tcp_stream/tcp_maerts will report the end
values rather than the start ones.).
happy benchmarking,
ric
almost 80% on
the netserver side. That is pure "effective" path-length increase.
happy benchmarking,
rick jones
PS - the netperf commands were varations on this theme:
./netperf -P 0 -T 0 -H 10.12.49.1 -c -C -l 30 -i 30,3 -- -O
throughput,local_cpu_util,local_sd,local_cpu
On 12/01/2015 10:45 AM, Sowmini Varadhan wrote:
On (12/01/15 10:17), Rick Jones wrote:
What do the perf profiles show? Presumably, loss of TSO/GSO means
an increase in the per-packet costs, but if the ipsec path
significantly increases the per-byte costs...
For ESP-null, there's act
keeping the per-byte roughly the same.
You could also compare the likes of a single-byte netperf TCP_RR test
between ipsec enabled and not to get an idea of the basic path length
differences without TSO/GSO/whatnot muddying the waters.
happy benchmarking,
rick jones
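(A sketch of that comparison - a single-byte request/response TCP_RR run
with CPU measurement, once with ipsec enabled and once without; hostname
and output selectors are examples:)
netperf -H <remote> -t TCP_RR -c -C -l 30 -- -r 1,1 -O throughput,local_sd,remote_sd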
latency on the likes of netperf TCP_RR
with JumboFrames than you would with the standard 1500 byte MTU.
Something I saw on GbE links years back anyway. I chalked it up to
getting better parallelism between the NIC and the host.
Of course the service demands were lower with JumboFrames...
rick
R and even aggregate _RR/packets per second for many VMs on
the same system would be in order.
happy benchmarking,
rick jones
etns . At least that is what an strace of that
command suggests.
rick jones
On 08/31/2015 02:29 PM, David Ahern wrote:
On 8/31/15 1:48 PM, Rick Jones wrote:
My attempts to get a call-graph have been met with very limited success.
Even though I've installed the dbg package from "make deb-pkg" the
symbol resolution doesn't seem to be working.
Lo
Even though I've installed the dbg package from "make deb-pkg" the
symbol resolution doesn't seem to be working.
happy benchmarking,
rick jones
'm assuming the VM is using virtio_net) Does the behaviour
change if vhost-net is loaded into the host and used by the VM?
rick jones
For completeness, it would also be good to compare the likes of netperf
TCP_RR between VxLAN and without.
On 08/12/2015 04:46 PM, David Miller wrote:
From: r...@tardy.usa.hp.com (Rick Jones)
Date: Wed, 12 Aug 2015 10:23:14 -0700 (PDT)
From: Rick Jones
A few things have changed since the previous version of the vxlan
documentation was written, so update it and correct some grammar and
such while
From: Rick Jones
A few things have changed since the previous version of the vxlan
documentation was written, so update it and correct some grammar and
such while we are at it.
Signed-off-by: Rick Jones
---
v2: Stephen Hemminger feedback to include dstport 4789 in command line
example
On 08/11/2015 03:09 PM, Stephen Hemminger wrote:
On Tue, 11 Aug 2015 13:47:16 -0700 (PDT)
r...@tardy.usa.hp.com (Rick Jones) wrote:
+ # ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth1
+
+This creates a new device named vxlan0. The device uses the
+multicast group 239.1.1.1 over
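(Per the v2 note above, the same example with the IANA-assigned port spelled
out would presumably read:)
# ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth1 dstport 4789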