Re: FreeBSD 10G forwarding performance @Intel
On 04.07.2012 01:29, Doug Barton wrote: On 07/03/2012 14:44, Luigi Rizzo wrote: On Tue, Jul 03, 2012 at 02:19:06PM -0700, Doug Barton wrote: Just curious ... what's the MTU on your FreeBSD box, and the Linux box? he is (correctly) using min-sized packets, and counting packets not bps. In this particular setup - 1500. You're probably meaning type of mbufs which are allocated by ixgbe driver? Yes, I know. That wasn't what I asked. ___ freebsd-net@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-net To unsubscribe, send any mail to freebsd-net-unsubscr...@freebsd.org
Re: FreeBSD 10G forwarding performance @Intel
On 07/03/2012 23:29, Alexander V. Chernikov wrote: On 04.07.2012 01:29, Doug Barton wrote: Just curious ... what's the MTU on your FreeBSD box, and the Linux box? In this particular setup - 1500. You're probably meaning type of mbufs which are allocated by ixgbe driver? 1500 for both? And no, I'm not thinking of the mbufs directly, although that may be a side effect. I've seen cases on FreeBSD with em where setting the MTU to 9000 had unexpected (albeit pleasant) side effects on throughput vs. system load. Since it was working better I didn't take the time to find out why. However since you're obviously interested in finding out the nitty-gritty details (and thank you for that) you might want to give it a look, and a few test runs. hth, Doug -- This .signature sanitized for your protection
Re: FreeBSD 10G forwarding performance @Intel
On 04.07.2012 12:13, Doug Barton wrote: On 07/03/2012 23:29, Alexander V. Chernikov wrote: On 04.07.2012 01:29, Doug Barton wrote: Just curious ... what's the MTU on your FreeBSD box, and the Linux box? In this particular setup - 1500. You're probably meaning the type of mbufs allocated by the ixgbe driver? 1500 for both? Well, AFAIR it was 1500. We did a variety of tests half a year ago with a similar server and Intel and Mellanox equipment. Test results varied from 4 to 6 Mpps in different setups (and Mellanox seems to behave better on Linux). If you're particularly interested in exact Linux performance on exactly the same box, I can try to do this, possibly next week. My point is actually the following: it is possible to do line-rate 10G (14.8 Mpps) forwarding with currently market-available hardware. Linux is going that way, and is much closer to it than we are. Even DragonFly performs _much_ better than we do in routing. http://shader.kaist.edu/packetshader/ (and the links there) is a good example of what is going on. And no, I'm not thinking of the mbufs directly, although that may be a side effect. I've seen cases on FreeBSD with em where setting the MTU to 9000 had unexpected (albeit pleasant) side effects on throughput vs. Yes. Stock drivers have this problem, especially with IPv6 addresses. We actually use our own versions of the em/igb/ixgbe drivers in production, which are free from several problems in the stock drivers. (The tests, however, were done using the stock driver.) system load. Since it was working better I didn't take the time to find out why. However since you're obviously interested in finding out the nitty-gritty details (and thank you for that) you might want to give it a look, and a few test runs. hth, Doug
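A side note on where the 14.8 Mpps figure above comes from: it is simply 10 Gbit/s divided by the on-wire size of a minimum 64-byte Ethernet frame (64 bytes plus preamble, SFD and inter-frame gap). A minimal sketch of the arithmetic in C; the helper name is mine:

```c
/* Line rate in packets per second: a minimum-sized (64-byte) Ethernet
 * frame occupies 84 bytes on the wire once the 7-byte preamble, 1-byte
 * SFD and 12-byte inter-frame gap are counted. */
static double linerate_pps(double bits_per_sec, int frame_bytes)
{
    const int wire_overhead = 7 + 1 + 12;   /* preamble + SFD + IFG */
    return bits_per_sec / ((frame_bytes + wire_overhead) * 8.0);
}
/* linerate_pps(10e9, 64) == 10e9 / 672, roughly 14.88 million pps */
```

The same formula gives about 1.488 Mpps for a 1G link, which is the usual benchmark target for gigabit forwarding tests.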
Re: FreeBSD 10G forwarding performance @Intel
On Wed, Jul 04, 2012 at 12:46:09PM +0400, Alexander V. Chernikov wrote: [...] My point actually is the following: It is possible to do linerate 10G (14.8mpps) forwarding with current market-available hardware. Linux is going that way and it is much more close than we do. Even dragonfly performs _much_ better than we do in routing. http://shader.kaist.edu/packetshader/ (and links there) are good example of what is going on. Alex, I am sure you are aware that in FreeBSD we have netmap too http://info.iet.unipi.it/~luigi/netmap/ which is probably a lot more usable than packetshader (hw independent, included in the OS, also works on Linux...) cheers luigi
Re: FreeBSD 10G forwarding performance @Intel
On 04.07.2012 13:12, Luigi Rizzo wrote: Alex, I am sure you are aware that in FreeBSD we have netmap too Yes, I'm aware of that :) which is probably a lot more usable than packetshader (hw independent, included in the OS, also works on linux...) I'm actually not talking about usability or comparison here :). They have a nice idea and nice performance graphs. And PacketShader is actually a _platform_, with fast packet delivery being one (and the only open) part of that platform. Their graphs show 40 Mpps (27G / 64-byte) CPU-only IPv4 packet forwarding on two four-core Intel Nehalem CPUs (2.66GHz), which illustrates software routing possibilities quite clearly.
lagg speed trouble
I have a server with two 1G links (em) aggregated by lagg0. Above 1700 Mbit/s I get collisions/errors on the lagg0 port, but not on em0 or em1. I'm using nginx in our own CDN, and the server isn't limited by mbufs, IRQs, or anything else.. only lagg0 errors :(

netstat -w 1 -I em0
            input          (em0)           output
   packets  errs idrops      bytes    packets  errs      bytes colls
     43871     0      0    2726437      38304     0  108063773     0
     43417     0      0    2700512      39084     0  109215143     0
     43474     0      0    2701344      39303     0  108373730     0
     43755     0      0    2717689      39023     0  108766820     0
     43960     0      0    2733462      39476     0  109030307     0
     44675     0      0    2776654      38708     0  107313936     0
     44082     0      0    2734293      39089     0  108897889     0

netstat -w 1 -I em1
            input          (em1)           output
   packets  errs idrops      bytes    packets  errs      bytes colls
     43754     0      0    2722677      38943     0  108504216     0
     44561     0      0    2778854      39107     0  108418763     0
     44773     0      0    2784606      39006     0  108799148     0
     45134     0      0    2814622      39137     0  108557494     0
     44604     0      0    2770745      38998     0  107942619     0
     44813     0      0    2789991      38901     0  108438247     0

netstat -w 1 -I lagg0
            input         (lagg0)          output
   packets  errs idrops      bytes    packets  errs      bytes colls
     87964     0      0    5474019      78172  1964      20549     0
     88842     0      0    5533987      78852  1811  222578109     0
     87687     0      0    5454717      77279  2416      86391     0
     87995     0      0    5471653      78090  2040  223488046     0
     88314     0      0    5493348      78495  1994  222548964     0
     88411     0      0    5502818      78228  1949      14374     0

How can I get full link speed on this server?
Re: lagg speed trouble
On 7/4/12 1:30 PM, Vyacheslav Kulikovskyy wrote: i have sever with two 1G links (em) aggregated by lagg0 after 1700Megabits i have collisions/errors on lagg0 port, but not on em0 or em1 [netstat output snipped] how i can get full link speed on this server? Do the ports on the switch report any layer 2 errors, by chance?
Re: lagg speed trouble
Do the ports on the switch report any layer 2 error, by chance ? I don't have access to the switch, but without lagg0 I get nearly 980 Mbit/s on a single em0 network link.
problem on ipfw using mac addresses
Hi all, I have a problem using the ipfw firewall. I have a topology connected as below: A(192.168.1.55) --- (192.168.1.1) my_sys (192.168.2.1) --- (192.168.2.12) B. I set the rule "ipfw add 1 deny icmp from any to any" on my_sys, which works correctly: I can't ping from A to B with the rule in place. Then I added the MAC part to the rule, in the form "ipfw add 1 deny icmp from any to any MAC any any", which seems the same as before, but after that I could ping B from A. What's the reason? I'm really confused by what I saw! Is it a bug? Any hints or suggestions are really appreciated.
setting up dns server
Hi all. I want to configure FreeBSD as a DNS server. I did the configuration below, but when I use the nslookup command it doesn't work. I also enabled the named service in rc.conf and put my IP as a nameserver in resolv.conf. What am I missing? Is there anything else I should do? Any help would be appreciated.

My named.conf file:
---
options {
        directory       "/etc/namedb";
        pid-file        "/var/run/named/pid";
        dump-file       "/var/dump/named_dump.db";
        statistics-file "/var/stats/named.stats";
};
zone "." { type hint; file "/etc/namedb/named.root"; };
zone "0.0.127.IN-ADDR.ARPA" { type master; file "master/localhost.rev"; };
zone "ictptk.net" { type master; file "/etc/namedb/master/db.domain"; };
zone "10.10.10.in-addr.arpa" { type master; file "/etc/named/master/db.ict"; };
---
my db.ict file:
---
$TTL 3600
@       IN SOA  ns.ictptk.net. root.ns.ictptk.net. (
                2001220200      ; Serial
                3600            ; Refresh
                900             ; Retry
                360             ; Expire
                3600 )          ; Minimum
        IN NS   ns.ictptk.net.
1       IN PTR  ictptk.net.
---
my db.domain file:
---
$TTL 3600
@       IN SOA  ns.ictptk.net. root.ns.ictptk.net. (
                2001220200      ; Serial
                3600            ; Refresh
                900             ; Retry
                360             ; Expire
                3600 )          ; Minimum
        IN NS   ns.ictptk.net.
ictptk.net      IN A     10.10.10.1
www.ictptk.net. IN CNAME ictptk.net.
---
Re: problem on ipfw using mac addresses
On 04.07.2012 17:04, h bagade wrote: Hi all, I have a problem using ipfw firewall. I have a topology connected as below: A(192.168.1.55) - (192.168.1.1)my_sys(192.168.2.1) ---(192.168.2.12)B I've set the rule ipfw add 1 deny icmp from any to any on my_sys, which works correctly. I can't ping from A to B by the rule. Then I've added mac part to the rule as the format of ipfw add 1 deny icmp from any to any MAC any any which seems the same as before but after that I could ping the B from A. What's the reason? I'm really confused with what I saw! Is it a bug? Any hints or suggestions are really appreciated. Please, read the ipfw(4) manual page about the sysctl variable net.link.ether.ipfw. -- WBR, Andrey V. Elsukov
Re: problem on ipfw using mac addresses
Have you set net.link.ether.ipfw? ~Paul On Wed, Jul 04, 2012 at 05:34:04PM +0430, h bagade wrote: [original message snipped]
Re: setting up dns server
- Is bind listening? (Can you see it with netstat?) - What port is it listening on? - What errors (if any) are in the error log? I'm afraid your question isn't really a FreeBSD-specific problem. You might have better luck on the BIND mailing list. ~Paul On Wed, Jul 04, 2012 at 06:43:00AM -0700, m s wrote: [original message and config snipped]
how to correctly distinguish broadcast udp packets vs unicast (socket, pcap or bpf)?
Good day to all. What is the correct way to distinguish UDP packets received by an application that were sent to the 255.255.255.255 IP address from those sent to a unicast IP? It seems to be impossible with read/recvfrom, so we've done it with libpcap. It could be done with the bpf API directly, without the pcap wrapper, but I'm not sure how big the pcap overhead is. The question is: if we have about 1 Gbit of incoming traffic and use a pcap filter for a specific port, how big is the impact of using pcap in such a situation? Is it possible to estimate? The target traffic is about 1 Mbit, and while testing, CPU usage is about 1-2%, but I'm not sure about all the conditions. recvfrom receives all the data without loss under these conditions; is it possible that pcap, because of its filtering nature (I don't know in detail how bpf is implemented deep in the kernel :( ), will add big overhead while listening?
Re: FreeBSD 10G forwarding performance @Intel
On Wed, Jul 04, 2012 at 01:54:01PM +0400, Alexander V. Chernikov wrote: On 04.07.2012 13:12, Luigi Rizzo wrote: Alex, I am sure you are aware that in FreeBSD we have netmap too Yes, I'm aware of that :) which is probably a lot more usable than packetshader (hw independent, included in the OS, also works on linux...) I'm actually not talking about usability and comparison here :). They have a nice idea and nice performance graphs. And packetshader is actually a _platform_, with fast packet delivery being one (and the only open) part of the platform. I am not sure if I should read the above as a feature or a limitation :) Their graphs show 40MPPS (27G/64byte) CPU-only IPv4 packet forwarding on two four-core Intel Nehalem CPUs (2.66GHz) which illustrates software routing possibilities quite clearly. I suggest being cautious about graphs in papers (including mine) and relying on numbers you can reproduce yourself. As your nice experiments showed (I especially liked when you moved from one /24 to four /28 routes), at these speeds a factor of 2 or more in throughput can easily arise from tiny changes in configuration, bus, memory and CPU speeds, and so on. cheers luigi
Re: how to correctly distinguish broadcast udp packets vs unicast (socket, pcap or bpf)?
On Jul 4, 2012, at 6:08 PM, Budnev Vladimir wrote: [original message snipped] If I'm understanding your question correctly, you can look up the ip(4) manual page: If the IP_RECVDSTADDR option is enabled on a SOCK_DGRAM socket, the recvmsg call will return the destination IP address for a UDP datagram. The msg_control field in the msghdr structure points to a buffer that contains a cmsghdr structure followed by the IP address. [...] You can use this in your application and get the destination address of the packets, be it a unicast IP or the broadcast address.
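For reference, a sketch of what that looks like in code: enable IP_RECVDSTADDR on the socket, then pull the destination address out of the control messages that recvmsg() fills in. This follows the FreeBSD ip(4) API described above; the helper names are mine, and the fallback #define is only there so the snippet compiles on systems lacking IP_RECVDSTADDR (Linux would use IP_PKTINFO instead):

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>

#ifndef IP_RECVDSTADDR
#define IP_RECVDSTADDR 7   /* FreeBSD's value; defined here only so this
                            * sketch compiles on systems without it */
#endif

/* True if the destination was the limited broadcast 255.255.255.255. */
static int is_limited_broadcast(struct in_addr dst)
{
    return dst.s_addr == htonl(INADDR_BROADCAST);
}

/* Scan the control messages filled in by recvmsg() and copy out the
 * datagram's destination address, if the kernel attached one.
 * Returns 1 if found, 0 otherwise. */
static int extract_dst_addr(struct msghdr *msg, struct in_addr *dst)
{
    struct cmsghdr *cm;

    for (cm = CMSG_FIRSTHDR(msg); cm != NULL; cm = CMSG_NXTHDR(msg, cm)) {
        if (cm->cmsg_level == IPPROTO_IP &&
            cm->cmsg_type == IP_RECVDSTADDR) {
            memcpy(dst, CMSG_DATA(cm), sizeof(*dst));
            return 1;
        }
    }
    return 0;
}

/* Receive one datagram on fd (which should have IP_RECVDSTADDR enabled
 * via setsockopt) and classify it as broadcast (1) or not (0). */
static ssize_t recv_classified(int fd, char *buf, size_t len, int *is_bcast)
{
    char cbuf[CMSG_SPACE(sizeof(struct in_addr))];
    struct iovec iov = { buf, len };
    struct msghdr msg;
    struct in_addr dst;
    ssize_t n;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    n = recvmsg(fd, &msg, 0);
    if (n >= 0 && is_bcast != NULL)
        *is_bcast = extract_dst_addr(&msg, &dst) &&
                    is_limited_broadcast(dst);
    return n;
}
```

This keeps everything on the normal socket path, so there is no bpf/pcap copy of the full 1 Gbit stream at all; the kernel only hands over the one destination address per datagram.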
Re: how to correctly distinguish broadcast udp packets vs unicast (socket, pcap or bpf)?
07/04/12 19:37, Nikolay Denev writes: On Jul 4, 2012, at 6:08 PM, Budnev Vladimir wrote: [earlier discussion snipped] If I'm understanding your question correctly you can lookup the ip(4) manual page: If the IP_RECVDSTADDR option is enabled on a SOCK_DGRAM socket, the recvmsg call will return the destination IP address for a UDP datagram. You can use this in your application and get the destination address of the packets, be it a unicast IP or the broadcast address. Thanks for the fast response! Hmm... it seems that if it works, it will help. I'll test it as soon as possible!
Re: lagg speed trouble
An ifconfig -v lagg0 might be useful here, as well as netstat -m, and maybe more that others can advise on. On Wed, Jul 04, 2012 at 02:58:30PM +0300, Vyacheslav Kulikovskyy wrote: Do the ports on the switch report any layer 2 error, by chance ? I don't have access to swith, but without lagg0 i have near 980Mbit's on one em0 network link. -- - (2^(N-1))
Re: FreeBSD 10G forwarding performance @Intel
Hello, Alexander. You wrote on July 4, 2012, at 12:46:09: AVC http://shader.kaist.edu/packetshader/ (and links there) are good example AVC of what is going on. But HOW?! A GPU has a very high preparation and data transfer cost; how can it be used for such small units of data as 1.5-9K datagrams?! -- // Black Lion AKA Lev Serebryakov l...@freebsd.org
Re: FreeBSD 10G forwarding performance @Intel
On 04.07.2012 23:37, Lev Serebryakov wrote: Hello, Alexander. You wrote on July 4, 2012, at 12:46:09: AVC http://shader.kaist.edu/packetshader/ (and links there) are good example AVC of what is going on. But HOW?! A GPU has a very high preparation and data transfer cost; how can it be used for such small units of data as 1.5-9K datagrams?! According to http://www.ndsl.kaist.edu/~kyoungsoo/papers/packetshader.pdf, the cumulative dispatch latency is between 3.8-4.1 microseconds (section 2.2). And the GPU is doing the routing lookup only (at least for IPv4/IPv6 forwarding), so we're always transferring/receiving a fixed amount of data. Btw, there are exact hardware specifications in this document.
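The apparent contradiction resolves through batching: the fixed dispatch latency quoted above is paid once per batch of lookups, not once per packet, so it amortizes to a few nanoseconds. A rough sketch of that arithmetic; the batch size here is an illustrative assumption, not a figure from the paper:

```c
/* Amortized per-packet cost of a fixed GPU dispatch latency, when
 * routing lookups are submitted to the GPU in batches.  With the
 * ~4 us latency from the PacketShader paper and a batch of 1024
 * packets, the overhead is roughly 4 ns per packet. */
static double dispatch_ns_per_packet(double dispatch_us, int batch_size)
{
    return dispatch_us * 1000.0 / batch_size;
}
```

At 14.88 Mpps a packet arrives every ~67 ns, so a few nanoseconds of amortized dispatch cost is easily affordable; only an unbatched (batch of 1) dispatch would be prohibitive.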
Re: lagg speed trouble
On 4 July 2012 23:30, Vyacheslav Kulikovskyy coolsy...@gmail.com wrote: i have sever with two 1G links (em) aggregated by lagg0 after 1700Megabits i have collisions/errors on lagg0 port, but not on em0 or em1 [netstat output for lagg0 snipped] how i can get full link speed on this server? This probably means the packets could not be queued on the lagg interface send queue. Please try this patch. Andrew [attachment: lagg_transmit.diff]