Problem with ipfw, in-kernel NAT and port redirection to jails

2015-05-19 Thread Wojciech Wojtyniak
Hello,

I have a VPS on vultr.com running FreeBSD 10.1-p9 with a GENERIC kernel:

% uname -a
FreeBSD tzar 10.1-RELEASE-p9 FreeBSD 10.1-RELEASE-p9 #0: Tue Apr  7 01:09:46 
UTC 2015 r...@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  
amd64

My goal is to run multiple services in jails (hopefully using ezjail or another 
convenient manager) and make them accessible from the Internet only on 
arbitrary ports (e.g. 80 for an HTTP(S) server). So far my approach is as 
follows: I clone the lo0 interface, assign the jail an IP from the 127.0.0.0/8 
space, and redirect the port to that address in the ipfw nat definition 
(example in the configs below; I also tried other addresses on vtnet0, which is 
my base network interface, with similar issues). Unfortunately this 
configuration doesn’t work for me.
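
To make the approach concrete, the relevant pieces look roughly like this (the 
lo1 alias 127.0.1.1 and port 80 are illustrative placeholders, not my exact 
configs):

  # /etc/rc.conf (sketch)
  cloned_interfaces="lo1"
  ifconfig_lo1="inet 127.0.1.1/32"     # jail's loopback address (placeholder)
  firewall_enable="YES"
  firewall_nat_enable="YES"

  # ipfw rules (sketch): in-kernel NAT on vtnet0, redirecting port 80 to the jail
  ipfw nat 1 config if vtnet0 reset same_ports redirect_port tcp 127.0.1.1:80 80
  ipfw add 100 nat 1 ip from any to any via vtnet0
  ipfw add 200 allow ip from any to any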

I tested this with znc (an IRC bouncer) and nginx. If I run them on the main 
host (without NAT in front of them), everything works fine. However, if I run 
them in a jail, behind NAT, and send an HTTP(S) request to fetch a file, the 
connection gets dropped (znc has a web admin module, which is broken because of 
this). It works fine for small files but breaks for larger ones (I haven’t 
checked the threshold, but can do so if necessary). For example, with the 
following curl command against the znc service:

% curl http://$my_ip:6697/pub/jquery-1.11.2.min.js > /dev/null
# (stats cut out)
curl: (18) transfer closed with 58648 bytes remaining to read

If I tcpdump the connection, the transfer looks fine for a while and then ends 
with the following sequence (captured on the main host, the jail master):

% sudo tcpdump port 6697
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vtnet0, link-type EN10MB (Ethernet), capture size 65535 bytes
23:37:28.621409 IP $my_home_host.51256 > $my_vultr_host.vultr.com.6697: Flags 
[S], seq 3967654146, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 
601353780 ecr 0,sackOK,eol], length 0
23:37:28.621468 IP $my_vultr_host.vultr.com.6697 > $my_home_host.51256: Flags 
[S.], seq 517055725, ack 3967654147, win 65535, options [mss 1460,nop,wscale 
6,sackOK,TS val 2553669008 ecr 601353780], length 0
23:37:28.635788 IP $my_home_host.51256 > $my_vultr_host.vultr.com.6697: Flags 
[.], ack 1, win 4117, options [nop,nop,TS val 601353791 ecr 2553669008], length 0
23:37:28.635865 IP $my_home_host.51256 > $my_vultr_host.vultr.com.6697: Flags 
[P.], seq 1:109, ack 1, win 4117, options [nop,nop,TS val 601353791 ecr 
2553669008], length 108
23:37:28.636122 IP $my_vultr_host.vultr.com.6697 > $my_home_host.51256: Flags 
[P.], seq 1:18, ack 109, win 1040, options [nop,nop,TS val 2553669022 ecr 
601353791], length 17
23:37:28.650153 IP $my_home_host.51256 > $my_vultr_host.vultr.com.6697: Flags 
[.], ack 18, win 4117, options [nop,nop,TS val 601353805 ecr 2553669022], 
length 0
23:37:29.123244 IP $my_vultr_host.vultr.com.6697 > $my_home_host.51256: Flags 
[.], seq 18:1466, ack 109, win 1040, options [nop,nop,TS val 2553669510 ecr 
601353805], length 1448
(transfer goes normally)
23:37:35.519163 IP $my_vultr_host.vultr.com.6697 > $my_home_host.51256: Flags 
[F.], seq 37666, ack 109, win 1040, options [nop,nop,TS val 2553675906 ecr 
601360615], length 0
23:37:35.531004 IP $my_home_host.51256 > $my_vultr_host.vultr.com.6697: Flags 
[.], ack 33322, win 4096, options [nop,nop,TS val 601360640 ecr 2553675880], 
length 0
23:37:36.165352 IP $my_vultr_host.vultr.com.6697 > $my_home_host.51256: Flags 
[.], seq 33322:34770, ack 109, win 1040, options [nop,nop,TS val 2553676552 ecr 
601360640], length 1448
23:37:36.184582 IP $my_home_host.51256 > $my_vultr_host.vultr.com.6697: Flags 
[.], ack 34770, win 4050, options [nop,nop,TS val 601361283 ecr 2553676552], 
length 0
23:37:36.801437 IP $my_vultr_host.vultr.com.6697 > $my_home_host.51256: Flags 
[.], seq 34770:36218, ack 109, win 1040, options [nop,nop,TS val 2553677188 ecr 
601361283], length 1448
23:37:36.910742 IP $my_home_host.51256 > $my_vultr_host.vultr.com.6697: Flags 
[.], ack 36218, win 4050, options [nop,nop,TS val 601362012 ecr 2553677188], 
length 0
23:37:36.910796 IP $my_vultr_host.vultr.com.6697 > $my_home_host.51256: Flags 
[FP.], seq 36218:37666, ack 109, win 1040, options [nop,nop,TS val 2553677297 
ecr 601362012], length 1448
23:37:36.922685 IP $my_home_host.51256 > $my_vultr_host.vultr.com.6697: Flags 
[F.], seq 109, ack 37667, win 4096, options [nop,nop,TS val 601362025 ecr 
2553677297], length 0
23:37:36.922742 IP $my_vultr_host.vultr.com.6697 > $my_home_host.51256: Flags 
[.], ack 110, win 1040, options [nop,nop,TS val 2553677309 ecr 601362025], 
length 0

My ipfw log doesn’t show any rejected packets in this case. For comparison, 
when I run the service on the main host (without NAT and port redirection), the 
transfer is longer and the ending sequence looks as follows:

% sudo tcpdump port 6696
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vtnet0, link-type EN10MB (Ethernet), capture size 65535 bytes

[Differential] [Requested Changes To] D1944: PF and VIMAGE fixes

2015-05-19 Thread glebius (Gleb Smirnoff)
glebius requested changes to this revision.
glebius added a comment.
This revision now requires changes to proceed.

Thanks a lot, Nikos.

I've fixed the problem of sleeping in UMA on kldunload. It was out of the scope 
of the patch. I also committed the first part of the patch - the initialization 
of the mutexes.

Nikos, can you please svn update and then arc update, so that the updated patch 
is posted here?
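
That is, roughly (assuming the patch is still applied in your svn working copy):

  $ svn update
  $ arc diff --update D1944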


REVISION DETAIL
  https://reviews.freebsd.org/D1944

To: nvass-gmx.com, bz, zec, trociny, kristof, gnn, rodrigc, glebius
Cc: julian, robak, freebsd-virtualization, freebsd-pf, freebsd-net


[Bug 139387] [ipsec] Wrong length of PF_KEY messages in promiscuous mode

2015-05-19 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=139387

Andrey V. Elsukov a...@freebsd.org changed:

   What|Removed |Added

   Assignee|freebsd-net@FreeBSD.org |a...@freebsd.org
 CC||a...@freebsd.org



Performance issues with Intel Fortville (XL710/ixl(4))

2015-05-19 Thread Pokala, Ravi
Hi folks,

At Panasas, we are working with the Intel XL710 40G NIC (aka Fortville),
and we're seeing some performance issues w/ 11-CURRENT (r282653).

Motherboard: Intel S2600KP (aka Kennedy Pass)
CPU: E5-2660 v3 @ 2.6GHz (aka Haswell Xeon)
(1 socket x 10 physical cores x 2 SMT threads) = 20 logical cores
NIC: Intel XL710, 2x40Gbps QSFP, configured in 4x10Gbps mode
RAM: 4x 16GB DDR4 DIMMs

What we've seen so far:

  - TX performance is pretty consistently lower than RX performance. All
numbers below are for unidirectional tests using `iperf' (a sketch of the
style of invocation appears after this list):
10Gbps links    threads/link    TX Gbps    RX Gbps    TX/RX
1               1               9.02       9.85       91.57%
1               8               8.49       9.91       85.67%
1               16              7.00       9.91       70.63%
1               32              6.68       9.92       67.40%

  - With multiple active links, both TX and RX performance suffer greatly;
the aggregate bandwidth tops out at about a third of the theoretical
40Gbps implied by 4x 10Gbps.
10Gbps links    threads/link    TX Gbps    RX Gbps    % of 40Gbps
4               1               13.39      13.38      33.4%

  - Multi-link bidirectional throughput is absolutely terrible; the
aggregate is less than a tenth of the theoretical 40Gbps.
10Gbps links    threads/link    TX Gbps    RX Gbps    % of 40Gbps (TX / RX)
4               1               3.83       2.96       9.6% / 7.4%

  - Occasional interrupt storm messages are seen from the IRQs associated
with the NICs. Since that can impact performance, those runs were not
included in the data listed above.
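
For reference, the tests were run along these lines ($link_ip is a placeholder
for each 10G link's address; exact stream counts and durations varied per run,
so treat this as a sketch rather than the literal commands):

  # receiver side: one iperf server bound to each 10G link address
  % iperf -s -B $link_ip

  # sender side, unidirectional: N parallel streams per link
  % iperf -c $link_ip -P 8 -t 60 -i 10

  # bidirectional variant (simultaneous TX and RX)
  % iperf -c $link_ip -P 1 -t 60 -d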

Our questions:

  - How stable is ixl(4) in -CURRENT? By that we mean two things: how quickly
is the driver changing, and does the driver cause any system instability?

  - What type of performance have others been getting w/ Fortville? In
40Gbps mode? In 4x10Gbps mode?

  - Does anyone have any tuning parameters they can recommend for this
card?

  - We did our testing w/ 11-CURRENT, but we will initially ship Fortville
running on 10.1-RELEASE or 10.2-RELEASE. The presence of RSS - even though
it is disabled by default - makes the driver back-port non-trivial. Is
there an estimate on when the 11-CURRENT version of the driver (1.4.1)
will get MFCed to 10-STABLE?

My colleagues Lakshmi and Fred (CCed) are working on this; please make
sure to include them if you have any comments.

Thanks,

Ravi
