Let me rephrase my first question below. What I would like to know is how well the LVS code can scale across multiple CPUs. I do know that balancing IRQs across the NICs works pretty well with the Linux 2.6.18 kernel. Certain Broadcom NICs seem to have problems with MSI turned on, but Intel NICs work fine. So, *assuming* the kernel can scale linearly across SMP with the NICs' interrupts, can the LVS code do the same?
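For reference, the manual IRQ pinning this refers to can be sketched roughly as below. The IRQ numbers (24, 25) and the CPU assignments are hypothetical; the real numbers come from /proc/interrupts on your director. The script only echoes the commands so it is safe to run anywhere; drop the outer "echo" (and run as root) on a real system.

```shell
#!/bin/sh
# Sketch: pin each NIC's IRQ to its own core so softirq load spreads
# across CPUs instead of piling onto CPU 0.

# Hex affinity mask for a single CPU: cpu 0 -> 1, cpu 1 -> 2, cpu 3 -> 8, ...
cpu_mask() {
    printf '%x\n' $((1 << $1))
}

# Hypothetical example: IRQ 24 (external NIC) -> CPU 0,
# IRQ 25 (internal NIC) -> CPU 1. Echoed, not executed.
echo "echo $(cpu_mask 0) > /proc/irq/24/smp_affinity"
echo "echo $(cpu_mask 1) > /proc/irq/25/smp_affinity"
```

Alternatively, running the irqbalance daemon does the same spreading automatically, at the cost of less predictable placement.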
Jason, handling of NIC interrupts does balance over multiple CPUs when you either manually set the SMP affinity or use irqbalance.

Graeme, it's a good thing the CPU is being used, but we expect our traffic to increase dramatically, so there is not much room left for growth. That's why we are considering something much "beefier" to handle the load. The load average (1/5/15) on the machine is almost nothing even when CPU usage is at 160%; the time is mostly taken up by soft IRQ processing.

----- Original Message ----
From: New User <[email protected]>
To: [email protected]
Sent: Tuesday, March 3, 2009 1:45:12 PM
Subject: LVS performance on SMP

Hi,

We are currently running a dedicated LVS-NAT director (with the exception of iptables):

Intel Xeon 3060 @ 2.4GHz (2 cores)
2G RAM
2 Gb/s NIC (one external, one internal)
LVS-NAT
Linux 2.6.18 kernel

It appears we are running out of CPU (usage reached 160%, with 40% idle, across 2 cores) when we reach:

~10K CPS
~100K InPPS
~120K OutPPS
~1 million+ active connections (the number of ip_vs_conn objects reported in /proc/slabinfo)

We are considering upgrading the setup to the following:

2 x Intel Xeon 5430 (2 CPUs, 4 cores each)
8G RAM
8 Gb/s NIC (4x1Gb bonded for external, 4x1Gb bonded for internal)
LVS-NAT
Linux 2.6.18 (possibly newer if it offers a performance increase)

The question is how well the LVS code can take advantage of multiple CPUs. There are some conflicting answers to this question in the FAQ and on the mailing list (in general, it helps the system as a whole). In our actual usage experience, SMP appears to help, since CPU usage did reach 160%. However, I do not know if this is real or just an illusion: is LVS actually doing real work that pushed usage above 100%, or is it SMP lock contention that caused it? Also, in our limited testing of the new setup, we find SMP helps because with 8 NICs, just processing the interrupts can overwhelm a single CPU (core).
Can anyone share their knowledge or experience on the following:

1. Does LVS capacity increase linearly when we go from the setup we currently have (2 cores with 2 NICs) to what we are considering (8 cores with 8 NICs)?
2. Has anyone else used a similar setup and can share their experience? There was a posting to the mailing list in Dec. 2007 that indicated a performance limit of ~450K pps (I don't know if that is the sum of InPPS and OutPPS). There was no conclusive answer as to whether this is a limit of LVS or not.
3. Does anyone have a setup that can process 1+ million pps (LVS-NAT) on a single LVS machine (we already use multiple LVS directors with keepalived in a live-live configuration)?

Thanks

_______________________________________________
Please read the documentation before posting - it's available at:
http://www.linuxvirtualserver.org/

LinuxVirtualServer.org mailing list - [email protected]
Send requests to [email protected]
or go to http://lists.graemef.net/mailman/listinfo/lvs-users
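For anyone comparing their own numbers against the InPPS/OutPPS figures quoted in this thread, one quick way to measure packet rate is to sample the packet counters in /proc/net/dev twice and divide by the interval. This is a minimal sketch, not a definitive tool; the interface name "eth0" and the 1-second interval are assumptions, and the column positions follow the standard Linux /proc/net/dev layout.

```shell
#!/bin/sh
# Sketch: estimate InPPS/OutPPS from two samples of /proc/net/dev.

# Print "<rx_packets> <tx_packets>" for interface $1.
# Field layout after splitting off the colon: $1=iface, $3=rx packets,
# $11=tx packets (standard /proc/net/dev column order).
sample() {
    awk -v ifc="$1" '{ sub(":", " "); if ($1 == ifc) print $3, $11 }' /proc/net/dev
}

# pps <rx0> <tx0> <rx1> <tx1> <seconds> -- rate from two counter samples.
pps() {
    echo "InPPS=$(( ($3 - $1) / $5 )) OutPPS=$(( ($4 - $2) / $5 ))"
}

# Usage on a live director (uncomment; eth0 is an assumed name):
# set -- $(sample eth0); sleep 1; pps "$@" $(sample eth0) 1

# Self-contained demonstration with made-up counters:
pps 1000 2000 101000 122000 1   # -> InPPS=100000 OutPPS=120000
```

The demonstration numbers are chosen to match the ~100K in / ~120K out rates described in the original message; `ipvsadm -L -n --stats` gives per-service counters if you want the LVS view rather than the NIC view.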
