You do not have any errors on the NIC. Some other piece of hardware is probably losing the packets. How are you counting lost packets? Please describe your network.
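
For example, a counter-based check and a probe-based check would look something like this (a sketch; eth4 is the interface from this thread, and 192.0.2.1 stands in for a host on the far side of the path):

ip -s link show dev eth4    # kernel counters: drops/overruns here implicate this host,
                            # clean counters implicate a switch or cable along the path
ping -c 1000 192.0.2.1      # active probe: read the "packet loss" line at the end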

On 04.03.2012 2:24, Welisson wrote:
Sorry Bokhan,
Here are the results of the commands below.
ethtool -S eth4
NIC statistics:
     rx_packets: 7263913628
     tx_packets: 8747598287
     rx_bytes: 2464378673600
     tx_bytes: 8639557156341
     rx_broadcast: 44878
     tx_broadcast: 125857
     rx_multicast: 1806927
     tx_multicast: 1569245
     multicast: 1806927
     collisions: 0
     rx_crc_errors: 0
     rx_no_buffer_count: 0
     rx_missed_errors: 0
     tx_aborted_errors: 0
     tx_carrier_errors: 0
     tx_window_errors: 0
     tx_abort_late_coll: 0
     tx_deferred_ok: 0
     tx_single_coll_ok: 0
     tx_multi_coll_ok: 0
     tx_timeout_count: 0
     rx_long_length_errors: 0
     rx_short_length_errors: 0
     rx_align_errors: 0
     tx_tcp_seg_good: 0
     tx_tcp_seg_failed: 0
     rx_flow_control_xon: 0
     rx_flow_control_xoff: 0
     tx_flow_control_xon: 0
     tx_flow_control_xoff: 0
     rx_long_byte_count: 2464378673600
     tx_dma_out_of_sync: 0
     tx_smbus: 0
     rx_smbus: 0
     dropped_smbus: 0
     rx_errors: 0
     tx_errors: 0
     tx_dropped: 0
     rx_length_errors: 0
     rx_over_errors: 0
     rx_frame_errors: 0
     rx_fifo_errors: 0
     tx_fifo_errors: 0
     tx_heartbeat_errors: 0
     tx_queue_0_packets: 28643071
     tx_queue_0_bytes: 12868911573
     tx_queue_0_restart: 0
     tx_queue_1_packets: 1564502044
     tx_queue_1_bytes: 1572436942978
     tx_queue_1_restart: 0
     tx_queue_2_packets: 1250862870
     tx_queue_2_bytes: 1217175782241
     tx_queue_2_restart: 0
     tx_queue_3_packets: 1178214642
     tx_queue_3_bytes: 1152493173846
     tx_queue_3_restart: 0
     tx_queue_4_packets: 1191175728
     tx_queue_4_bytes: 1177069762738
     tx_queue_4_restart: 0
     tx_queue_5_packets: 1195290594
     tx_queue_5_bytes: 1173186363711
     tx_queue_5_restart: 0
     tx_queue_6_packets: 1166480030
     tx_queue_6_bytes: 1134686878270
     tx_queue_6_restart: 0
     tx_queue_7_packets: 1172429314
     tx_queue_7_bytes: 1159135953772
     tx_queue_7_restart: 0
     rx_queue_0_packets: 913225737
     rx_queue_0_bytes: 301962745937
     rx_queue_0_drops: 0
     rx_queue_0_csum_err: 0
     rx_queue_0_alloc_failed: 0
     rx_queue_1_packets: 914107009
     rx_queue_1_bytes: 305308109057
     rx_queue_1_drops: 0
     rx_queue_1_csum_err: 0
     rx_queue_1_alloc_failed: 0
     rx_queue_2_packets: 884392073
     rx_queue_2_bytes: 292775571534
     rx_queue_2_drops: 0
     rx_queue_2_csum_err: 0
     rx_queue_2_alloc_failed: 0
     rx_queue_3_packets: 905867155
     rx_queue_3_bytes: 304553264988
     rx_queue_3_drops: 0
     rx_queue_3_csum_err: 0
     rx_queue_3_alloc_failed: 0
     rx_queue_4_packets: 901414852
     rx_queue_4_bytes: 296216840175
     rx_queue_4_drops: 0
     rx_queue_4_csum_err: 0
     rx_queue_4_alloc_failed: 0
     rx_queue_5_packets: 906832178
     rx_queue_5_bytes: 306427939590
     rx_queue_5_drops: 0
     rx_queue_5_csum_err: 0
     rx_queue_5_alloc_failed: 0
     rx_queue_6_packets: 928393565
     rx_queue_6_bytes: 323459406099
     rx_queue_6_drops: 0
     rx_queue_6_csum_err: 0
     rx_queue_6_alloc_failed: 0
     rx_queue_7_packets: 909681064
     rx_queue_7_bytes: 304619142810
     rx_queue_7_drops: 0
     rx_queue_7_csum_err: 0
     rx_queue_7_alloc_failed: 0

ethtool -a eth4
Pause parameters for eth4:
Autonegotiate:  on
RX:             on
TX:             off
ethtool -k eth4
Offload parameters for eth4:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off
ntuple-filters: off
receive-hashing: off
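
All of the offloads above are off, which raises the per-packet CPU cost at high rates. If the hardware supports them, re-enabling would look something like this (a sketch; which flags actually stick depends on the NIC and driver):

ethtool -K eth4 rx on tx on sg on tso on gso on gro on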

ethtool -g eth4
Ring parameters for eth4:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096

ethtool -c eth4
all values = 0
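
With no coalescing configured, the interrupt rate climbs with the packet rate, which can saturate the CPUs at 500-700 Mbit/s. If the driver supports it, moderate coalescing can be set like this (a sketch; the value is illustrative, check what your ethtool -c reports as settable):

ethtool -C eth4 rx-usecs 100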

I still have packet loss of 1-2% with -s 1024, but at sizes of 512 or below the loss is 0%.
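
(Assuming the -s above is ping's payload-size option, the test that shows the size-dependent loss would be something like this, with 192.0.2.1 standing in for the real target:)

ping -s 1024 -c 1000 192.0.2.1    # reports the 1-2% loss
ping -s 512  -c 1000 192.0.2.1    # reports 0% loss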
2012/1/24 Bokhan Artem <a...@eml.ru>


    Btw, check ethtool -S ethX for errors.
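
    For example, to show only the non-zero counters:

    ethtool -S eth4 | grep -v ': 0$'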


    On 24.01.2012 22:22, Bokhan Artem wrote:

        On 24.01.2012 21:58, Welisson wrote:

            Hi Bokhan,

            I tried the latest version (3.2.10) of the driver, but it
            did not do the balancing across the cores, so I decided to
            stay with the default driver version from Ubuntu.

        For a reason unknown to me, it is sometimes necessary to force
        that manually.

        cat /etc/modprobe.d/igb.conf
        options igb RSS=4,4 QueuePairs=0,0

        Look "modinfo igb" for help. Dont forget to update initramfs.

        dmesg | grep igb
        [    1.641096] igb 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [    1.641117] igb 0000:01:00.0: setting latency timer to 64
        [    1.641331] igb: 0000:01:00.0: igb_validate_option: RSS - RSS multiqueue receive count set to 4
        [    1.641333] igb: 0000:01:00.0: igb_validate_option: QueuePairs - TX/RX queue pairs for interrupt handling Disabled
        [    1.641373] igb 0000:01:00.0: irq 34 for MSI/MSI-X
        [    1.641378] igb 0000:01:00.0: irq 35 for MSI/MSI-X
        [    1.641383] igb 0000:01:00.0: irq 36 for MSI/MSI-X
        [    1.641388] igb 0000:01:00.0: irq 37 for MSI/MSI-X
        [    1.641393] igb 0000:01:00.0: irq 38 for MSI/MSI-X
        [    1.641398] igb 0000:01:00.0: irq 39 for MSI/MSI-X
        [    1.641404] igb 0000:01:00.0: irq 40 for MSI/MSI-X
        [    1.641409] igb 0000:01:00.0: irq 41 for MSI/MSI-X
        [    1.641414] igb 0000:01:00.0: irq 42 for MSI/MSI-X
        [    1.886555] igb 0000:01:00.0: Intel(R) Gigabit Ethernet Network Connection
        [    1.886558] igb 0000:01:00.0: eth0: (PCIe:2.5GT/s:Width x4) 00:1b:21:8c:b9:40
        [    1.886636] igb 0000:01:00.0: eth0: PBA No: E43709-005
        [    1.886638] igb 0000:01:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
        [    1.886654] igb 0000:01:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
        [    1.886673] igb 0000:01:00.1: setting latency timer to 64
        [    1.886886] igb: 0000:01:00.1: igb_validate_option: RSS - RSS multiqueue receive count set to 4
        [    1.886888] igb: 0000:01:00.1: igb_validate_option: QueuePairs - TX/RX queue pairs for interrupt handling Disabled
        [    1.886925] igb 0000:01:00.1: irq 44 for MSI/MSI-X
        [    1.886930] igb 0000:01:00.1: irq 45 for MSI/MSI-X
        [    1.886935] igb 0000:01:00.1: irq 46 for MSI/MSI-X
        [    1.886940] igb 0000:01:00.1: irq 47 for MSI/MSI-X
        [    1.886945] igb 0000:01:00.1: irq 48 for MSI/MSI-X
        [    1.886950] igb 0000:01:00.1: irq 49 for MSI/MSI-X
        [    1.886954] igb 0000:01:00.1: irq 50 for MSI/MSI-X
        [    1.886959] igb 0000:01:00.1: irq 51 for MSI/MSI-X
        [    1.886964] igb 0000:01:00.1: irq 52 for MSI/MSI-X
        [    2.136175] igb 0000:01:00.1: Intel(R) Gigabit Ethernet Network Connection
        [    2.136178] igb 0000:01:00.1: eth1: (PCIe:2.5GT/s:Width x4) 00:1b:21:8c:b9:41
        [    2.136256] igb 0000:01:00.1: eth1: PBA No: E43709-005
        [    2.136259] igb 0000:01:00.1: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)

            ls -la /proc/acpi/processor/CPU0/
            total 0

        We have Ubuntu 10.04 LTS with the 2.6.32-server kernel. I do
        not know anything about "/proc/acpi/processor/CPU0/power" in
        other distros/kernels. You may google that, or just disable
        the power-saving options in the BIOS. I also have "power
        management:        no".
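
        On Linux, an alternative to the BIOS route is capping C-states
        from the kernel command line (a sketch; processor.max_cstate
        limits the ACPI idle driver, and the file path is Ubuntu's):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX="processor.max_cstate=1"
        # then: sudo update-grub && reboot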

        cat /proc/acpi/processor/CPU0/info
        processor id:            0
        acpi id:                 1
        bus mastering control:   yes
        power management:        no
        throttling control:      no
        limit interface:         no


            2012/1/24 Bokhan Artem <a...@eml.ru>

                On 24.01.2012 20:09, Welisson wrote:

                    The problem is that when the traffic goes up to
                    500 or 700 Mbit/s, I get 1% to 3% packet loss on
                    my other servers.

                This resolved a similar problem:

                - Update your driver to the latest version
                - Disable C-states other than C0-C1 in the BIOS
                - Check that your driver version supports multiple queues
                - Check that you have several rx/tx queues spread over
                  all CPU cores (see the affinity sketch below).
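
                For the last point, pinning each queue's IRQ to its
                own core looks something like this (a sketch; IRQs
                45-48 are the eth0-rx-* lines from /proc/interrupts
                below, and the values are per-CPU hex bitmasks; stop
                irqbalance first or it may rewrite the masks):

                echo 1 > /proc/irq/45/smp_affinity   # eth0-rx-0 -> CPU0
                echo 2 > /proc/irq/46/smp_affinity   # eth0-rx-1 -> CPU1
                echo 4 > /proc/irq/47/smp_affinity   # eth0-rx-2 -> CPU2
                echo 8 > /proc/irq/48/smp_affinity   # eth0-rx-3 -> CPU3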


                *cat /proc/acpi/processor/CPU0/power*

                active state:            C0
                max_cstate:              C8
                maximum allowed latency: 2000000000 usec
                states:
                    C1: type[C1] promotion[--] demotion[--] latency[000] usage[00000000] duration[00000000000000000000]

                *modinfo igb*

                filename:       /lib/modules/2.6.32-32-server/kernel/igb/igb.ko
                version:        3.0.22

                *cat /proc/interrupts  | grep eth0*

                 44:         0          0          1          0   PCI-MSI-edge   eth0
                 45: 105862872          0          2          0   PCI-MSI-edge   eth0-rx-0
                 46:         0  108392651          0          2   PCI-MSI-edge   eth0-rx-1
                 47:         0          0  106273251          2   PCI-MSI-edge   eth0-rx-2
                 48:         2          0          0  107610662   PCI-MSI-edge   eth0-rx-3
                 49:  86453979          0          0          0   PCI-MSI-edge   eth0-tx-0
                 50:         0   86477489          0          0   PCI-MSI-edge   eth0-tx-1
                 51:         0          2   95241899          0   PCI-MSI-edge   eth0-tx-2
                 52:         0          0          2   88540674   PCI-MSI-edge   eth0-tx-3



                    Load average is 0.10.

                    Has anyone else had this problem? It does not
                    happen with my Broadcom NIC.






        



