Re: FreeBSD LAG LACP timeout tunable through IOCTL

2015-07-24 Thread Pokala, Ravi
Hi LN,

You also need to teach `ifconfig' how to toggle this new setting. See

sbin/ifconfig/iflagg.c:lagg_cmds[]

and how the other LACP options are handled. (Thanks to Genesys on #bsdcode
for pointing that out.)
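
For reference, here's a rough sketch of what the ifconfig side might
look like, modeled on how the existing LACP options are wired up. The
option name `lacp_fast_timeout' and the reuse of setlaggsetting() are
my assumptions, not something from your patch:

/* sbin/ifconfig/iflagg.c: hypothetical additions to lagg_cmds[].
 * DEF_CMD maps an ifconfig keyword to a LAGG_OPT_* bit, and
 * setlaggsetting() sends it down via the SIOCSLAGGOPTS ioctl;
 * the '-' variant passes the negated bit to clear it. */
DEF_CMD("lacp_fast_timeout",	LAGG_OPT_LACP_TIMEOUT,	setlaggsetting),
DEF_CMD("-lacp_fast_timeout",	-LAGG_OPT_LACP_TIMEOUT,	setlaggsetting),

With that in place, users could toggle the feature with
`ifconfig lagg0 lacp_fast_timeout' / `ifconfig lagg0 -lacp_fast_timeout'
instead of a hand-rolled ioctl program.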

Also, please confirm that you don't need to do any locking to walk the
list or modify any of the list elements.
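
From the diff, the new cases sit inside the block that ends with
LAGG_WUNLOCK(sc), so the walk appears to run under the lagg write lock.
If so, the pattern in question (sketched below, not a complete
function) should be serialized against ports attaching or detaching
mid-walk:

/* Sketch of the locking pattern at issue, assuming the
 * SIOCSLAGGOPTS handler in lagg_ioctl() takes the softc write
 * lock before the option switch, as the existing code does. */
LAGG_WLOCK(sc);					/* exclusive softc lock */
LIST_FOREACH(lp, &lsc->lsc_ports, lp_next)
	lp->lp_state |= LACP_STATE_TIMEOUT;	/* advertise fast timeout */
lsc->lsc_fast_timeout = 1;
LAGG_WUNLOCK(sc);

Please double-check that nothing else walks lsc_ports without that lock.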

Thanks,

Ravi

-----Original Message-----
From: Lakshmi Narasimhan Sundararajan <lakshm...@msystechnologies.com>
Date: 2015-07-23, Thursday at 05:25
To: freebsd-net@freebsd.org <freebsd-net@freebsd.org>
Cc: panasas-netw...@msystechnologies.com
<panasas-netw...@msystechnologies.com>, "Lewis, Fred"
<fle...@panasas.com>, Ravi Pokala <rpok...@panasas.com>, "Tallam, Sreen"
<sr...@panasas.com>
Subject: FreeBSD LAG LACP timeout tunable through IOCTL

Hi FreeBSD team,
In FreeBSD-10 and in CURRENT, LACP supports only the long timeout by
default; FreeBSD does not provide a way to configure the LACP timeout
period. We made code changes on FreeBSD-11 to make the LACP fast
timeout tunable through an ioctl (both GET and SET).

We were able to successfully test the operation from userland using
ioctl calls.
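
For reference, a minimal userland sketch of the kind of test we ran
(assuming the LAGG_OPT_LACP_TIMEOUT bit from the diff below; error
handling kept short, and the program must run as root):

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_lagg.h>

#include <err.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	struct lagg_reqopts ro;
	int s;

	if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1)
		err(1, "socket");

	/* SET: turn on LACP fast timeout on lagg0. */
	memset(&ro, 0, sizeof(ro));
	strlcpy(ro.ro_ifname, "lagg0", sizeof(ro.ro_ifname));
	ro.ro_opts = LAGG_OPT_LACP_TIMEOUT;
	if (ioctl(s, SIOCSLAGGOPTS, &ro) == -1)
		err(1, "SIOCSLAGGOPTS");

	/* GET: read the option bitmap back and verify the bit stuck. */
	memset(&ro, 0, sizeof(ro));
	strlcpy(ro.ro_ifname, "lagg0", sizeof(ro.ro_ifname));
	if (ioctl(s, SIOCGLAGGOPTS, &ro) == -1)
		err(1, "SIOCGLAGGOPTS");
	printf("lacp fast timeout: %s\n",
	    (ro.ro_opts & LAGG_OPT_LACP_TIMEOUT) ? "on" : "off");

	close(s);
	return (0);
}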


We initially wanted to use a sysctl, but found in the FreeBSD revision
history that a sysctl in lagg(4) results in a lock-order reversal (LOR)
and has to be converted to an ioctl. Please let us know your comments so
we can take this forward.

Diffs inline:
Index: sys/net/ieee8023ad_lacp.h
===================================================================
--- sys/net/ieee8023ad_lacp.h (revision 285195)
+++ sys/net/ieee8023ad_lacp.h (working copy)
@@ -251,6 +251,7 @@
 		u_int32_t	lsc_tx_test;
 	} lsc_debug;
 	u_int32_t		lsc_strict_mode;
+	u_int32_t		lsc_fast_timeout; /* if set, fast / short timeout */
 };
 
 #define LACP_TYPE_ACTORINFO 1
Index: sys/net/if_lagg.c
===================================================================
--- sys/net/if_lagg.c (revision 285195)
+++ sys/net/if_lagg.c (working copy)
@@ -1257,6 +1257,8 @@
 			ro->ro_opts |= LAGG_OPT_LACP_RXTEST;
 		if (lsc->lsc_strict_mode != 0)
 			ro->ro_opts |= LAGG_OPT_LACP_STRICT;
+		if (lsc->lsc_fast_timeout != 0)
+			ro->ro_opts |= LAGG_OPT_LACP_TIMEOUT;
 
 		ro->ro_active = sc->sc_active;
 	} else {
@@ -1292,6 +1294,8 @@
 		case -LAGG_OPT_LACP_RXTEST:
 		case LAGG_OPT_LACP_STRICT:
 		case -LAGG_OPT_LACP_STRICT:
+		case LAGG_OPT_LACP_TIMEOUT:
+		case -LAGG_OPT_LACP_TIMEOUT:
 			valid = lacp = 1;
 			break;
 		default:
@@ -1320,6 +1324,7 @@
 			sc->sc_opts &= ~ro->ro_opts;
 		} else {
 			struct lacp_softc *lsc;
+			struct lacp_port *lp;
 
 			lsc = (struct lacp_softc *)sc->sc_psc;
@@ -1342,6 +1347,16 @@
 			case -LAGG_OPT_LACP_STRICT:
 				lsc->lsc_strict_mode = 0;
 				break;
+			case LAGG_OPT_LACP_TIMEOUT:
+				LIST_FOREACH(lp, &lsc->lsc_ports, lp_next)
+					lp->lp_state |= LACP_STATE_TIMEOUT;
+				lsc->lsc_fast_timeout = 1;
+				break;
+			case -LAGG_OPT_LACP_TIMEOUT:
+				LIST_FOREACH(lp, &lsc->lsc_ports, lp_next)
+					lp->lp_state &= ~LACP_STATE_TIMEOUT;
+				lsc->lsc_fast_timeout = 0;
+				break;
 			}
 		}
 		LAGG_WUNLOCK(sc);
Index: sys/net/if_lagg.h
===================================================================
--- sys/net/if_lagg.h (revision 285195)
+++ sys/net/if_lagg.h (working copy)
@@ -150,6 +150,7 @@
 #define	LAGG_OPT_LACP_STRICT	0x10	/* LACP strict mode */
 #define	LAGG_OPT_LACP_TXTEST	0x20	/* LACP debug: txtest */
 #define	LAGG_OPT_LACP_RXTEST	0x40	/* LACP debug: rxtest */
+#define	LAGG_OPT_LACP_TIMEOUT	0x80	/* LACP fast timeout */
 	u_int	ro_count;	/* number of ports */
 	u_int	ro_active;	/* active port count */
 	u_int	ro_flapping;	/* number of flapping */


Thanks,
LN


MSYS Technologies






Performance issues with Intel Fortville (XL710/ixl(4))

2015-05-19 Thread Pokala, Ravi
Hi folks,

At Panasas, we are working with the Intel XL710 40G NIC (aka Fortville),
and we're seeing some performance issues w/ 11-CURRENT (r282653).

Motherboard: Intel S2600KP (aka Kennedy Pass)
CPU: E5-2660 v3 @ 2.6GHz (aka Haswell Xeon)
(1 socket x 10 physical cores x 2 SMT threads) = 20 logical cores
NIC: Intel XL710, 2x40Gbps QSFP, configured in 4x10Gbps mode
RAM: 4x 16GB DDR4 DIMMs

What we've seen so far:

  - TX performance is pretty consistently lower than RX performance. All
numbers below are for unidirectional tests using `iperf':
10Gbps links    threads/link    TX Gbps    RX Gbps    TX/RX
     1               1           9.02       9.85      91.57%
     1               8           8.49       9.91      85.67%
     1              16           7.00       9.91      70.63%
     1              32           6.68       9.92      67.40%

  - With multiple active links, both TX and RX performance suffer greatly;
the aggregate bandwidth tops out at about a third of the theoretical
40Gbps implied by 4x 10Gbps.
10Gbps links    threads/link    TX Gbps    RX Gbps    % of 40Gbps
     4               1          13.39      13.38      33.4%

  - Multi-link bidirectional throughput is absolutely terrible; the
aggregate is less than a tenth of the theoretical 40Gbps.
10Gbps links    threads/link    TX Gbps    RX Gbps    % of 40Gbps
     4               1           3.83       2.96      9.6% / 7.4%

  - Occasional interrupt storm messages are seen from the IRQs associated
with the NICs. Since that can impact performance, those runs were not
included in the data listed above.

Our questions:

  - How stable is ixl(4) in -CURRENT? By that, we mean both how quickly
the driver is changing, and whether the driver causes any system
instability.

  - What type of performance have others been getting w/ Fortville? In
40Gbps mode? In 4x10Gbps mode?

  - Does anyone have any tuning parameters they can recommend for this
card?

  - We did our testing w/ 11-CURRENT, but we will initially ship
Fortville running on 10.1-RELEASE or 10.2-RELEASE. The presence of RSS,
even though it is disabled by default, makes back-porting the driver
non-trivial. Is there an estimate of when the 11-CURRENT version of the
driver (1.4.1) will be MFCed to 10-STABLE?

My colleagues Lakshmi and Fred (CCed) are working on this; please make
sure to include them if you have any comments.

Thanks,

Ravi
