[Bug 241162] Panic in closefp() triggered by nginx (uwsgi with sendfile(2) enabled)
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=241162

--- Comment #7 from Dmitry Marakasov ---
12.1-RC1 also panics.

> Could you attach the nginx / uwsgi configurations, as an attachment

That would be quite heavy, as nginx here serves 10 sites with a wide tree of
included configs. I'll try to minimize it. Reproducing is not easy either:
the panic happens on production every several hours, and I'm not even sure
which request triggers it.
Re[2]: Network anomalies after update from 11.2 STABLE to 12.1 STABLE
Hi Michael,

Thank you for taking your time!

We use physical machines. We do not have any special `pf` rules.
Both sides ran `pfctl -d` before testing.

`nginx` config is primitive, no secrets there:

---
user www;
worker_processes auto;

error_log /var/log/nginx/error.log warn;

events {
    worker_connections 81920;
    kqueue_changes 4096;
    use kqueue;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile off;
    keepalive_timeout 65;
    tcp_nopush on;
    tcp_nodelay on;

    # Logging
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $request_length $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_real_ip" "$realip_remote_addr" "$request_completion" "$request_time" '
                    '"$request_body"';

    access_log /var/log/nginx/access.log main;

    server {
        listen 80 default;

        server_name localhost _;

        location / {
            return 404;
        }
    }
}
---

`wrk` is compiled with a default configuration. We test like this:

`wrk -c 10 --header "Connection: close" -d 10 -t 1 --latency http://10.10.10.92:80/missing`

Also, it seems that our issue and the one described in this thread are
identical:

    https://lists.freebsd.org/pipermail/freebsd-net/2019-June/053667.html

We both have Intel network cards, BTW. Our network cards are these:

em0 at pci0:10:0:0: class=0x02 card=0x15d9 chip=0x10d38086 rev=0x00 hdr=0x00
    vendor = 'Intel Corporation'
    device = '82574L Gigabit Network Connection'

ixl0 at pci0:4:0:0: class=0x02 card=0x00078086 chip=0x15728086 rev=0x01 hdr=0x00
    vendor = 'Intel Corporation'
    device = 'Ethernet Controller X710 for 10GbE SFP+'

==

Additional info:

During the tests, we have bonded two interfaces into a lagg:

ixl0: flags=8843 metric 0 mtu 1500
    options=c500b8
    ether 3c:fd:fe:aa:60:20
    media: Ethernet autoselect (10Gbase-SR )
    status: active
    nd6 options=29
ixl1: flags=8843 metric 0 mtu 1500
    options=c500b8
    ether 3c:fd:fe:aa:60:20
    hwaddr 3c:fd:fe:aa:60:21
    media: Ethernet autoselect (10Gbase-SR )
    status: active
    nd6 options=29

lagg0: flags=8843 metric 0 mtu 1500
    options=c500b8
    ether 3c:fd:fe:aa:60:20
    inet 10.10.10.92 netmask 0x broadcast 10.10.255.255
    laggproto failover lagghash l2,l3,l4
    laggport: ixl0 flags=5
    laggport: ixl1 flags=0<>
    groups: lagg
    media: Ethernet autoselect
    status: active
    nd6 options=29

using this config:

ifconfig_ixl0="up -lro -tso -rxcsum -txcsum" (tried different options - got the same outcome)
ifconfig_ixl1="up -lro -tso -rxcsum -txcsum"
ifconfig_lagg0="laggproto failover laggport ixl0 laggport ixl1 10.10.10.92/24"

We have randomly picked `ixl0` and restricted its number of RX/TX queues to 1
in /boot/loader.conf:

dev.ixl.0.iflib.override_ntxqs=1
dev.ixl.0.iflib.override_nrxqs=1

leaving `ixl1` with the default number, matching the number of cores (6).
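[Editor's note: the offload settings above can also be toggled at runtime for
quick A/B comparisons, without editing rc.conf or rebooting. A minimal
sketch, assuming the interface is named ixl0 and reusing the test command
from above:]

---
# runtime sketch: disable LRO/TSO/checksum offload, rerun the benchmark,
# then restore the offloads (flags as in ifconfig(8) on FreeBSD)
ifconfig ixl0 -lro -tso -rxcsum -txcsum
wrk -c 10 --header "Connection: close" -d 10 -t 1 --latency http://10.10.10.92:80/missing
ifconfig ixl0 lro tso rxcsum txcsum
---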
ixl0: mem 0xf880-0xf8ff,0xf9808000-0xf980 irq 40 at device 0.0 on pci4
ixl0: fw 5.0.40043 api 1.5 nvm 5.05 etid 80002927 oem 1.261.0
ixl0: PF-ID[0]: VFs 64, MSI-X 129, VF MSI-X 5, QPs 768, I2C
ixl0: Using 1024 TX descriptors and 1024 RX descriptors
ixl0: Using 1 RX queues 1 TX queues
ixl0: Using MSI-X interrupts with 2 vectors
ixl0: Ethernet address: 3c:fd:fe:aa:60:20
ixl0: Allocating 1 queues for PF LAN VSI; 1 queues active
ixl0: PCI Express Bus: Speed 8.0GT/s Width x4
ixl0: SR-IOV ready
ixl0: netmap queues/slots: TX 1/1024, RX 1/1024
ixl1: mem 0xf800-0xf87f,0xf980-0xf9807fff irq 40 at device 0.1 on pci4
ixl1: fw 5.0.40043 api 1.5 nvm 5.05 etid 80002927 oem 1.261.0
ixl1: PF-ID[1]: VFs 64, MSI-X 129, VF MSI-X 5, QPs 768, I2C
ixl1: Using 1024 TX descriptors and 1024 RX descriptors
ixl1: Using 6 RX queues 6 TX queues
ixl1: Using MSI-X interrupts with 7 vectors
ixl1: Ethernet address: 3c:fd:fe:aa:60:21
ixl1: Allocating 8 queues for PF LAN VSI; 6 queues active
ixl1: PCI Express Bus: Speed 8.0GT/s Width x4
ixl1: SR-IOV ready
ixl1: netmap queues/slots: TX 6/1024, RX 6/1024

This allowed us to switch easily between different configurations, without
the need to reboot, by simply shutting down one interface or the other:
`ifconfig XXX down`

When testing `ixl0`, which runs only a single queue:

ixl0: Using 1 RX queues 1 TX queues
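[Editor's note: a minimal sketch of the switch-over procedure described
above, assuming the iflib override tunables are exposed as read-only
sysctls under dev.ixl.N.iflib, as on 12.x:]

---
# confirm which queue overrides took effect after boot
sysctl dev.ixl.0.iflib.override_nrxqs dev.ixl.0.iflib.override_ntxqs
# route the test through the single-queue NIC...
ifconfig ixl1 down
# ...or through the six-queue NIC
ifconfig ixl1 up && ifconfig ixl0 down
---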
Re: Re[2]: Network anomalies after update from 11.2 STABLE to 12.1 STABLE
Btw, I once ran into a situation where "smart networking" was injecting
RSTs into a TCP stream. The packet captures at the client and server
machines were identical except for the RSTs, and the problem went away when
I connected the two machines with a cable, bypassing the network. Might be
worth a try, if you can do it?

Good luck with it, rick

From: owner-freebsd-...@freebsd.org on behalf of Paul
Sent: Saturday, October 19, 2019 12:09 PM
To: michael.tue...@lurchi.franken.de; freebsd-net@freebsd.org; freebsd-sta...@freebsd.org
Subject: Re[2]: Network anomalies after update from 11.2 STABLE to 12.1 STABLE

[Paul's message, quoted in full, trimmed; reproduced above in this digest.]
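[Editor's note: one way to act on Rick's suggestion is to capture the same
test run on both machines and compare only the RST segments. A sketch,
assuming the server-side interface is ixl0; the capture filename is
hypothetical:]

---
# on each machine, capture the benchmark traffic in full
tcpdump -i ixl0 -s 0 -w /tmp/side.pcap 'tcp port 80'
# afterwards, list only the RST segments from each capture and diff them
tcpdump -nn -r /tmp/side.pcap 'tcp[tcpflags] & tcp-rst != 0'
---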
Re: Network anomalies after update from 11.2 STABLE to 12.1 STABLE
> On 19. Oct 2019, at 18:09, Paul wrote:
>
> Hi Michael,
>
> Thank you for taking your time!
>
> We use physical machines. We do not have any special `pf` rules.
> Both sides ran `pfctl -d` before testing.

Hi Paul,

OK. How are the physical machines connected to each other?

What happens when you don't use a lagg interface, but the physical ones?

(Trying to localise the problem...)

Best regards
Michael

> [rest of Paul's message trimmed; reproduced above in this digest]
Re[2]: Network anomalies after update from 11.2 STABLE to 12.1 STABLE
19 October 2019, 19:35:24, by "Michael Tuexen":

> OK. How are the physical machines connected to each other?

We have tested different connections: the old copper ethernet cable as well
as an optical connection, with an identical outcome. The machines are
connected through a Juniper QFX5100.

> What happens when you don't use a lagg interface, but the physical ones?
>
> (Trying to localise the problem...)

Same thing, lagg does not change anything. Originally, the problem was
observed on a regular interface. We have also tested on different hardware.
Results are consistently stable on 11.2-STABLE and consistently unstable on
12.1-STABLE. The only unchanged thing is the network card vendor: it's
Intel.

> [rest of the quoted thread trimmed; reproduced above in this digest]
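[Editor's note: for a clean answer to the lagg question, the lagg can be
taken out of the path entirely. A hypothetical rc.conf fragment, reusing
the addressing and offload flags from Paul's earlier message:]

---
# rc.conf sketch: run the test on the bare NIC, bypassing lagg0
#ifconfig_lagg0="laggproto failover laggport ixl0 laggport ixl1 10.10.10.92/24"
ifconfig_ixl0="inet 10.10.10.92/24 -lro -tso -rxcsum -txcsum up"
---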
Re[2]: Re[2]: Network anomalies after update from 11.2 STABLE to 12.1 STABLE
Hi Rick,

RST is only one part of the syndrome. Apart from it, we have a ton of other
issues. For example: a lot (50+) of ACK and [FIN, ACK] retransmissions in
cases where they are definitely not needed, as seen in tcpdump - unless the
packets that we see in the dump are not actually processed by the
kernel(?), therefore leading to retransmissions?

It definitely has something to do with races, because the issue completely
disappears when only a single queue is enabled.

In other cases, we have observed that 12.1-STABLE sent a FIN, but then,
when sending the ACK, it didn't actually increment the SEQ, as if the FIN
and the ACK were sent concurrently, though the ACK was dispatched later.

Also, I want to focus on a weird behavior, as I wrote in the original post:
the issue also disappears if multiple TCP streams each use a different DST
port. It's as if it has something to do with sharing a port. (A sketch of
such a per-port test follows below.)

19 October 2019, 19:24:43, by "Rick Macklem":

> Btw, I once ran into a situation where "smart networking" was injecting
> RSTs into a TCP stream.
>
> [rest of the quoted thread trimmed; reproduced above in this digest]
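[Editor's note: a minimal sketch of the per-destination-port experiment
mentioned above, assuming nginx has been given extra listen directives for
the hypothetical ports 8001-8004:]

---
# drive one wrk stream per destination port, so no two streams share one
for port in 8001 8002 8003 8004; do
    wrk -c 1 --header "Connection: close" -d 10 -t 1 \
        http://10.10.10.92:${port}/missing &
done
wait
---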