On Mon, 2011-03-14 at 23:50 -0700, Matthew Marlowe wrote:
> > My problem is that LVM2 is not supported in parted, which is the
> > recommended tool to deal with this.
> > 
> > I suspect I only need to map the individual PE to a particular start
> > sector on each drive, not btrfs, but then there are stripe/block sizes to
> > consider as well ... WD also recommends 1MB sector boundaries for
> > best performance - I can see a reinstall coming up :)
> >
>  
> I have on my workstation:
>     2 WD 2TB Black Drives
>     5 WD 2TB RE4 Drives
> 
> Some notes:
> - The black drives have horrible reliability, poor sector remapping, and 
> certain standard drive features that make them unusable in RAID.  I would not 
> buy them again. I'm not sure how similar the green drives are.
> - Many of the recent WD drives have a tendency to power down/up frequently, 
> which can reduce drive lifetime (research it and ensure it is set 
> appropriately for your needs - see the commands just after this list).
> - Due to reliability concerns, you may need to run smartd to get adequate 
> pre-failure warnings.
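> 
> A quick way to keep an eye on both of those, assuming smartmontools and 
> idle3-tools are installed (idle3ctl is a third-party WD-specific tool, so 
> treat this as a rough sketch):
> 
>      smartctl -A /dev/sdd | grep -E 'Load_Cycle_Count|Power_Cycle_Count'
>      idle3ctl -g /dev/sdd   (read the current idle/park timer)
>      idle3ctl -d /dev/sdd   (disable it; takes effect after a power cycle)
>      smartctl -s on /dev/sdd   (make sure SMART is enabled so smartd can poll it)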
> 
> Anyhow, in my config I have:
> 
> 1 RE4 Drive as Server Boot Disk
> 4 RE4 Drives in SW RAID10 (extremely good performance and reliability)
> 2 Black Drives in LVM RAID0 for disk-to-disk backups (that's about all I trust 
> them with).
> 
> When I setup the LVM RAID0, I used the following commands to get good 
> performance:
>      fdisk (remove all partitions, you don't need them for lvm)
>      pvcreate --dataalignmentoffset 7s /dev/sdd
>      pvcreate --dataalignmentoffset 7s /dev/sdf
>      vgcreate -s 64M -M 2 vgArchive /dev/sdd /dev/sdf
>      lvcreate -i 2 -l 100%FREE -I 256 -n lvArchive -r auto vgArchive
>      mkfs.ext4 -c -b 4096 -E stride=64,stripe_width=128 -j -i 1048576 -L /archive /dev/vgArchive/lvArchive
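> 
> To sanity-check the resulting alignment and striping afterwards (read-only, 
> nothing destructive):
> 
>      pvs -o +pe_start --units s /dev/sdd /dev/sdf   (where the data area starts, in sectors)
>      lvs --segments vgArchive                       (stripe count and stripe size)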
> 
> I may have the ext4 stride/stripe settings wrong above, as I didn't have my 
> normal notes when I selected them - but the rest of the config I scrounged 
> from other blogs and it seemed to make sense; the --dataalignmentoffset 7s 
> seems to be the key.
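> 
> For what it's worth, the usual rule of thumb is stride = stripe size / block 
> size and stripe_width = stride * number of data disks, so with a 256K LVM 
> stripe and 4K ext4 blocks that works out to stride = 256/4 = 64 and 
> stripe_width = 64 * 2 = 128, which matches the values above.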
> 
> My RAID10 drives are configured slightly differently, with one partition that 
> starts on sector 2048 (if I remember correctly) and extends to the end of the 
> drive.
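> 
> For reference, creating a partition aligned like that with parted looks 
> something like this (GPT example, wipes the disk; the device name is just a 
> placeholder):
> 
>      parted /dev/sdX mklabel gpt
>      parted -a optimal /dev/sdX mkpart primary 1MiB 100%
>      parted /dev/sdX align-check optimal 1   (confirms partition 1 is optimally aligned)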
> 
> The 4 Disk SW RAID10 array gives me 255MB/s reads, 135MB/s block writes, and 
> 98MB/s rewrites (old test, may need to rerun for latest changes/etc).
> 
> LVM 2 Disk RAID0 gives 303MB/s reads, 190MB/s block writes, and 102MB/s 
> rewrites (test ran last week).  
> 
> Regards,
> Matt

Thanks Matthew,
 some good ideas here.  I have other partitions on the disks, such as
swap and rescue, so LVM doesn't get all the space.  I have steered away
from striping because I have lost the occasional disk over the years and worry
that a stripe will take out a larger block of data than a linear JBOD, but
your performance numbers look ... great!

As the stripe size is hard to change after creation, it looks like I'll
have to migrate the data and recreate from scratch to get the best out
of the hardware.

In the short term, I'll just do some shuffling, then delete and re-add the
LVM partition on the green drive to the volume group, which should
improve the performance a lot.  If I am reading it right, I have to get
the disk partitioning right first, then make sure the PV is also created
at the right boundaries within LVM (rough sequence sketched below).  Then
I will see how to tune btrfs, which I am becoming quite sold on - solid,
and an online fsck is better than reiserfs, which is just as solid but
has to be taken offline to check - not that either corrupts often.
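
Something like the following is what I have in mind for the green drive - the
volume group and partition names are just placeholders for my setup:

     pvmove /dev/sdX4               (migrate extents off the old PV)
     vgreduce vg /dev/sdX4
     pvremove /dev/sdX4
     (repartition so the LVM partition starts on a 1MiB / 2048-sector boundary)
     pvcreate /dev/sdX4
     vgextend vg /dev/sdX4
     pvs -o +pe_start /dev/sdX4     (check the data area alignment)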

BillK



