On Thu, 19 Oct 2017 17:23:45 -0400, Becky Ligon <[email protected]> wrote:
> Since you are a flash specialist, I have a question for you. We are
> running tests on a box that has 6 NVMe PCIe cards. Each of the six
> cards has its own filesystem (XFS). If we run a dd to each
> filesystem at the same time, the I/O performance is horrible.
> Is there a way to tune this environment, or is the PCIe bus simply
> unable to handle such a load?
>
Hi Becky, this depends heavily on the motherboard/CPU combination.
Most recent systems have at most 48 PCIe lanes available, and they can
be unevenly distributed across the PCIe slots on the motherboard, and
across NUMA nodes if this is a dual-CPU machine.
An NVMe drive normally needs 4 PCIe lanes, and most motherboards
should provide at least that to each slot. Still, it would be
interesting to see the output of "lspci -tv" to find out which CPU
drives which card, then check the CPU affinity of the dd processes.
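For instance, something along these lines (a sketch only; the PCI
address, device names and NUMA node numbers below are assumptions,
adjust them to what your box actually reports):

    lspci -tv                            # full PCIe tree: locate the six NVMe controllers
    lspci -vv -s 3b:00.0 | grep LnkSta   # assumed address: check negotiated link width (want x4)
    cat /sys/class/nvme/nvme0/device/numa_node   # which NUMA node owns this controller

    # pin a dd near the node that owns the drive (node 0 here is an assumption)
    numactl --cpunodebind=0 --membind=0 \
        dd if=/dev/zero of=/mnt/nvme0/test bs=1M count=20000 oflag=direct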
I'm curious, what does the output of "iostat -mx 5" look like while
you're running the six dd processes in parallel?
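Roughly like this (the mount points are placeholders for your six XFS
filesystems):

    for i in 0 1 2 3 4 5; do
        dd if=/dev/zero of=/mnt/nvme$i/ddtest bs=1M count=20000 oflag=direct &
    done
    iostat -mx 5     # watch per-device MB/s, await and %util while they run
    wait

If the aggregate throughput is far below six times the single-drive
figure while %util sits at 100%, the drives are waiting on something
upstream of the flash itself.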
Another possibility is interrupt overload. Look at /proc/interrupts
before and after running the parallel dd's, and see what happens on
the NVMe/PCIe lines.
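Something like this (the /tmp file names are just placeholders):

    cat /proc/interrupts > /tmp/irq.before
    # ... run the six parallel dd's ...
    cat /proc/interrupts > /tmp/irq.after
    diff /tmp/irq.before /tmp/irq.after | grep -i nvme

If one CPU turns out to be soaking up all the NVMe interrupts,
spreading the IRQ affinity (or checking that irqbalance is running)
may help.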
--
------------------------------------------------------------------------
Emmanuel Florac | Technical management
| Intellique
| <[email protected]>
| +33 1 78 94 84 02
------------------------------------------------------------------------
