Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon wrote:
>
> Hello,
>
> I have a server with very high load using four NVMe SSDs and therefore no
> HW RAID. Instead I used SW RAID with the mdadm tool. Using one RAID5
> volume does not work well since the driver can only utilize one CPU core,
> which spikes at 100% and harms performance. Therefore I created 8
> partitions on each disk, and 8 RAID5s across the four disks.
>
> Now I want to bring them together with LVM. If I do not use a striped
> volume I get high performance (of the expected magnitude according to
> disk specs). But when I use a striped volume, performance drops by an
> order of magnitude. The reason I am looking for a striped setup is to
> ensure that data is spread well over the drives to guarantee good
> worst-case performance. With linear allocation rather than striped, if
> load is directed to files on the first PV (a SW RAID) the system is again
> exposed to the 1-core limitation.
>
> I tried "--stripes 8 --stripesize 512", and would appreciate any ideas of
> other things to try. I guess the performance hit could be in the file
> system as well. I tried XFS and EXT4 with default settings.

Daniel, can you tell us a bit more about your system, such as kernel version and I/O scheduler? Have you tried the multi-queue (MQ) schedulers (e.g. none, mq-deadline) available in recent kernels?

___
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
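A quick way to gather the details asked for above (kernel version and the active I/O scheduler per block device); the sysfs paths are standard, but which devices appear will of course vary per system:

```shell
# Print the running kernel version.
uname -r

# Print the scheduler line for every block device; the scheduler in use
# is shown in [brackets], e.g. "[mq-deadline] kyber bfq none".
grep . /sys/block/*/queue/scheduler 2>/dev/null || true
```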
Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
Did you think about RAID 50?

Original message
From: mator...@gmail.com
Sent: 7 December 2019, 17:17
To: linux-lvm@redhat.com
Reply-To: linux-lvm@redhat.com
Subject: Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?

> On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon wrote:
> > I have a server with very high load using four NVMe SSDs and therefore
> > no HW RAID. Instead I used SW RAID with the mdadm tool. Using one RAID5
> > volume does not work well since the driver can only utilize one CPU
> > core, which spikes at 100% and harms performance. Therefore I created 8
> > partitions on each disk, and 8 RAID5s across the four disks.
> > [...]
>
> Daniel, a bit more about your system? Like kernel version, I/O scheduler,
> etc. Have you tried the MQ (multi-queue) schedulers in recent kernels?
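For reference, RAID 50 here would mean striping a RAID0 over the existing RAID5 arrays in md itself rather than in LVM. A minimal sketch, assuming the eight RAID5 arrays are /dev/md0 through /dev/md7 (placeholder names; requires root and destroys any data on the devices):

```shell
# Hypothetical RAID 50: a plain md RAID0 striped across the eight
# existing RAID5 arrays. /dev/md10 is an arbitrary name for the new array.
mdadm --create /dev/md10 --level=0 --raid-devices=8 \
      /dev/md0 /dev/md1 /dev/md2 /dev/md3 \
      /dev/md4 /dev/md5 /dev/md6 /dev/md7

# Put a filesystem directly on the stripe (or use it as an LVM PV).
mkfs.xfs /dev/md10
```

RAID0 in md is cheap CPU-wise, so this keeps the striping without the single-core RAID5 thread becoming the bottleneck at the stripe layer.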
Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon wrote:

> I have a server with very high load using four NVMe SSDs and therefore no
> HW RAID. Instead I used SW RAID with the mdadm tool. Using one RAID5
> volume does not work well since the driver can only utilize one CPU core,
> which spikes at 100% and harms performance. Therefore I created 8
> partitions on each disk, and 8 RAID5s across the four disks.
>
> Now I want to bring them together with LVM. If I do not use a striped
> volume I get high performance (of the expected magnitude according to
> disk specs). But when I use a striped volume, performance drops by an
> order of magnitude. The reason I am looking for a striped setup is to

The mdadm layer already does the striping. Doing it again in the LVM layer completely screws it up. You want plain JBOD (Just a Bunch Of Disks).

--
Stuart D. Gathman
"Confutatis maledictis, flammis acribus addictis" - background song for a
Microsoft sponsored "Where do you want to go from here?" commercial.
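The JBOD-style setup suggested here is just a linear (non-striped) LVM volume over the md devices. A sketch, assuming the eight RAID5 arrays are /dev/md0 through /dev/md7 and using placeholder VG/LV names (requires root, destructive):

```shell
# Register each md RAID5 array as an LVM physical volume.
pvcreate /dev/md{0..7}

# One volume group spanning all eight arrays.
vgcreate datavg /dev/md{0..7}

# Linear allocation is the default; no --stripes, so LVM does not
# re-stripe on top of md's own striping.
lvcreate -n datalv -l 100%FREE datavg

mkfs.xfs /dev/datavg/datalv
```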
Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
> "Stuart" == Stuart D Gathman writes: Stuart> On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon wrote: >> I have a server with very high load using four NVMe SSDs and >> therefore no HW RAID. Instead I used SW RAID with the mdadm tool. >> Using one RAID5 volume does not work well since the driver can only >> utilize one CPU core which spikes at 100% and harms performance. >> Therefore I created 8 partitions on each disk, and 8 RAID5s across >> the four disks. >> Now I want to bring them together with LVM. If I do not use a striped >> volume I get high performance (in expected magnitude according to disk >> specs). But when I use a striped volume, performance drops to a >> magnitude below. The reason I am looking for a striped setup is to Stuart> The mdadm layer already does the striping. So doing it again Stuart> in the LVM layer completely screws it up. You want plain JBOD Stuart> (Just a Bunch Of Disks). Umm... not really. The problem here is more the MD layer not being able to run RAID5 across multiple cores at the same time, which is why he split things the way he did. But we don't know the Kernel version, the LVM version, or the OS release so as to give better ideas of what to do. The biggest harm to performance here is really the RAID5, and if you can instead move to RAID 10 (mirror then stripe across mirrors) then you should be a performance boost. As Daniel says, he's got lots of disk load, but plenty of CPU, so the single thread for RAID5 is a big bottleneck. I assume he wants to use LVM so he can create volume(s) larger than individual RAID5 volumes, so in that case, I'd probably just build a regular non-striped LVM VG holding all your RAID5 disks. Hopefully the Parity disk is spread across all the partitions, though NVMe drives should have enough IOPs capacity to mask the RMW cost of RAID5 to a degree. In any case, I'd just build it like: pvcreate /dev/md# (do for each of 8 RAID5 MD devices) vgcreate datavg /dev/md[#-#] (give all 8 RAID5 MD devices here. 
lvcreate -n "name" -L datavg And then test your performance. Since you only have four disks, the 8 RAID5 volumes in your VG are all going to suck for small writes, but NVMe SSDs will mask that to an extent. If you can, I'd get more SSDs and move to RAID1+0 (RAID10) instead, though you do have the problem where a double disk failure could kill your data if it happens to both halves of a mirror. But, numbers talk, BS walks. So if the original poster can provide some details and numbers... then maybe we can help more. John ___ linux-lvm mailing list linux-lvm@redhat.com https://www.redhat.com/mailman/listinfo/linux-lvm read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
On Sat, 7 Dec 2019, John Stoffel wrote:

> The biggest harm to performance here is really the RAID5, and if you can
> instead move to RAID 10 (mirror then stripe across mirrors) then you
> should see a performance boost.

Yeah, that's what I do: RAID10, and use LVM to join them together as JBOD. I forgot about the RAID5 bottleneck part, sorry.

> As Daniel says, he's got lots of disk load, but plenty of CPU, so the
> single thread for RAID5 is a big bottleneck. I assume he wants to use LVM
> so he can create volume(s) larger than individual RAID5 volumes, so in
> that case, I'd probably just build a regular non-striped LVM VG holding
> all your RAID5 disks. Hopefully

Wait, that's what I suggested!

> If you can, I'd get more SSDs and move to RAID1+0 (RAID10) instead,
> though you do have the problem where a double disk failure could kill
> your data if it happens to both halves of a mirror.

No worse than RAID5. In fact, better: the second fault always kills the RAID5, but has only a 33% or less chance of killing the RAID10. (And in either case, it is usually just specific sectors, not the entire drive, and other manual recovery techniques can come into play.)

--
Stuart D. Gathman
"Confutatis maledictis, flammis acribus addictis" - background song for a
Microsoft sponsored "Where do you want to go from here?" commercial.
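The "33% or less" figure follows from a simple count: in a RAID10 built from 2-way mirrors over N disks, after one disk dies only its mirror partner among the N-1 survivors is fatal, so the chance an independent second failure kills the array is 1/(N-1). A quick check:

```shell
# P(fatal second failure) for RAID10 over N disks = 1/(N-1).
# N=4 gives 33.3% (Stuart's figure); more disks only lower it.
# RAID5, by contrast, is 100%: any second failure kills the array.
for n in 4 6 8; do
    awk -v n="$n" 'BEGIN { printf "RAID10, %d disks: %.1f%% fatal\n", n, 100/(n-1) }'
done
# prints 33.3%, 20.0%, 14.3%
```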