Re: [gentoo-user] terrible performance with btrfs on LVM2 using a WD 2TB green drive

2011-03-16 Thread Florian Philipp
On 15.03.2011 07:50, Matthew Marlowe wrote:
 
 My problem is that LVM2 is not supported in parted, which is the
 recommended tool to deal with this.

 I suspect I only need to map the individual PE to a particular start
 sector on each drive, not btrfs, but then there are stripe/block sizes to
 consider as well ... WD also recommends 1 MiB sector boundaries for
 best performance - I can see a reinstall coming up :)

  
 I have on my workstation:
 2 WD 2TB Black Drives
 5 WD 2TB RE4 Drives
 
 Some notes:
 - The black drives have horrible reliability, poor sector remapping, and
 have certain standard drive features that make them unusable in RAID. I
 would not buy them again. I'm not sure how similar the green drives are.

Green drives also seem to be affected:
http://doug.warner.fm/d/blog/2009/11/Western-Digital-15TB-Green-Drives-Not-your-Linux-Software-RAID





Re: [gentoo-user] terrible performance with btrfs on LVM2 using a WD 2TB green drive

2011-03-16 Thread Joost Roeleveld
On Wednesday 16 March 2011 09:53:37 Florian Philipp wrote:
 On 15.03.2011 07:50, Matthew Marlowe wrote:
  My problem is that LVM2 is not supported in parted, which is the
  recommended tool to deal with this.
  
  I suspect I only need to map the individual PE to a particular start
  sector on each drive, not btrfs, but then there are stripe/block sizes to
  consider as well ... WD also recommends 1 MiB sector boundaries for
  best performance - I can see a reinstall coming up :)
  
  I have on my workstation:
  2 WD 2TB Black Drives
  5 WD 2TB RE4 Drives
  
  Some notes:
  - The black drives have horrible reliability, poor sector remapping, and
  have certain standard drive features that make them unusable in RAID. I
  would not buy them again. I'm not sure how similar the green drives are.
 
 Green drives also seem to be affected:
 http://doug.warner.fm/d/blog/2009/11/Western-Digital-15TB-Green-Drives-Not-your-Linux-Software-RAID

I have 6 Green drives (WDC WD15EARS) in a RAID5 and have not seen any real
issues. The only problem was one drive with reallocated sectors, as also
mentioned by one of the commenters on that page. I replaced that drive and
have had no further problems so far.

I did, however, spend time aligning the sectors correctly for the RAID
partitions, the striping, the LVM block size, and the mkfs statements.
Without that, performance was really bad.
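
Roughly the kind of sequence I mean (a sketch only - the device names,
chunk size, and VG/LV names here are assumptions, not my exact setup):

  # start each partition on a 1 MiB boundary
  parted -s -a optimal /dev/sdb mklabel gpt mkpart primary 1MiB 100%
  # 6-disk RAID5 with a 64 KiB chunk (5 data disks per stripe)
  mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=64 /dev/sd[b-g]1
  # align the PV data area to a full stripe (5 x 64 KiB = 320 KiB)
  pvcreate --dataalignment 320k /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -l 100%FREE -n lv0 vg0
  # stride = chunk / block = 64 KiB / 4 KiB = 16; stripe_width = 16 x 5 = 80
  mkfs.ext4 -b 4096 -E stride=16,stripe_width=80 /dev/vg0/lv0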

I would prefer to see proper support for 4K-sector-size drives.

--
Joost



Re: [gentoo-user] terrible performance with btrfs on LVM2 using a WD 2TB green drive

2011-03-15 Thread Matthew Marlowe

 My problem is that LVM2 is not supported in parted, which is the
 recommended tool to deal with this.
 
 I suspect I only need to map the individual PE to a particular start
 sector on each drive, not btrfs, but then there are stripe/block sizes to
 consider as well ... WD also recommends 1 MiB sector boundaries for
 best performance - I can see a reinstall coming up :)

 
I have on my workstation:
2 WD 2TB Black Drives
5 WD 2TB RE4 Drives

Some notes:
- The black drives have horrible reliability, poor sector remapping, and
have certain standard drive features that make them unusable in RAID. I
would not buy them again. I'm not sure how similar the green drives are.
- Many of the recent WD drives have a tendency to power down/up frequently,
which can reduce drive lifetime (research the power-management setting and
make sure it is set appropriately for your needs).
- Due to reliability concerns, you may need to run smartd to give adequate
pre-failure warnings.
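
A typical smartd.conf line, for illustration (the schedule and notification
address are just examples, not what I actually run):

  /dev/sdd -a -o on -S on -s (S/../.././02|L/../../6/03) -m root

That enables all monitoring plus a nightly short self-test and a long test
on Saturdays, mailing root on trouble.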

Anyhow, in my config I have:

1 RE4 Drive as Server Boot Disk
4 RE4 Drives in SW RAID10 (extremely good performance and reliability)
2 Black Drives in LVM RAID0 for disk-to-disk backups (that's about all I
trust them with).

When I set up the LVM RAID0, I used the following commands to get good
performance:

  fdisk               # remove all partitions; you don't need them for LVM
  pvcreate --dataalignmentoffset 7s /dev/sdd
  pvcreate --dataalignmentoffset 7s /dev/sdf
  vgcreate -s 64M -M 2 vgArchive /dev/sdd /dev/sdf
  lvcreate -i 2 -l 100%FREE -I 256 -n lvArchive -r auto vgArchive
  mkfs.ext4 -c -b 4096 -E stride=64,stripe_width=128 -j -i 1048576 \
    -L /archive /dev/vgArchive/lvArchive

I may have the ext4 stride/stripe settings wrong above; I didn't have my
normal notes when I selected them. The rest of the config I scrounged from
other blogs and it seemed to make sense (the --dataalignmentoffset 7s seems
to be the key).
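
For what it's worth, those settings are at least self-consistent with the
256 KiB stripe size above: stride = 256 KiB / 4 KiB blocks = 64, and
stripe_width = stride x 2 stripes = 128. You can also check where the PV
data area actually starts with:

  pvs -o +pe_start /dev/sdd /dev/sdf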

My RAID10 drives are configured slightly differently, with one partition that
starts on sector 2048 (if I remember correctly) and extends to the end of the
drive.

The 4 Disk SW RAID10 array gives me 255MB/s reads, 135MB/s block writes, and 
98MB/s rewrites (old test, may need to rerun for latest changes/etc).

LVM 2 Disk RAID0 gives 303MB/s reads, 190MB/s block writes, and 102MB/s 
rewrites (test ran last week).  

Regards,
Matt
-- 
Matthew Marlowe/  858-400-7430  /DeployLinux Consulting, Inc
  Professional Linux Hosting and Systems Administration Services
  www.deploylinux.net   *   m...@deploylinux.net
 'MattM' @ irc.freenode.net
   



Re: [gentoo-user] terrible performance with btrfs on LVM2 using a WD 2TB green drive

2011-03-15 Thread William Kenworthy
On Mon, 2011-03-14 at 23:50 -0700, Matthew Marlowe wrote:
  My problem is that LVM2 is not supported in parted, which is the
  recommended tool to deal with this.
  
  I suspect I only need to map the individual PE to a particular start
  sector on each drive, not btrfs, but then there are stripe/block sizes to
  consider as well ... WD also recommends 1 MiB sector boundaries for
  best performance - I can see a reinstall coming up :)
 
  
 I have on my workstation:
 2 WD 2TB Black Drives
 5 WD 2TB RE4 Drives
 
 Some notes:
 - The black drives have horrible reliability, poor sector remapping, and
 have certain standard drive features that make them unusable in RAID. I
 would not buy them again. I'm not sure how similar the green drives are.
 - Many of the recent WD drives have a tendency to power down/up frequently,
 which can reduce drive lifetime (research the power-management setting and
 make sure it is set appropriately for your needs).
 - Due to reliability concerns, you may need to run smartd to give adequate
 pre-failure warnings.
 
 Anyhow, in my config I have:
 
 1 RE4 Drive as Server Boot Disk
 4 RE4 Drives in SW RAID10 (extremely good performance and reliability)
 2 Black Drives in LVM RAID0 for disk-to-disk backups (that's about all I
 trust them with).
 
 When I set up the LVM RAID0, I used the following commands to get good
 performance:
 
   fdisk               # remove all partitions; you don't need them for LVM
   pvcreate --dataalignmentoffset 7s /dev/sdd
   pvcreate --dataalignmentoffset 7s /dev/sdf
   vgcreate -s 64M -M 2 vgArchive /dev/sdd /dev/sdf
   lvcreate -i 2 -l 100%FREE -I 256 -n lvArchive -r auto vgArchive
   mkfs.ext4 -c -b 4096 -E stride=64,stripe_width=128 -j -i 1048576 \
     -L /archive /dev/vgArchive/lvArchive
 
 I may have the ext4 stride/stripe settings wrong above; I didn't have my
 normal notes when I selected them. The rest of the config I scrounged from
 other blogs and it seemed to make sense (the --dataalignmentoffset 7s seems
 to be the key).
 
 My RAID10 drives are configured slightly differently, with one partition that
 starts on sector 2048 (if I remember correctly) and extends to the end of the
 drive.
 
 The 4 Disk SW RAID10 array gives me 255MB/s reads, 135MB/s block writes, and 
 98MB/s rewrites (old test, may need to rerun for latest changes/etc).
 
 LVM 2 Disk RAID0 gives 303MB/s reads, 190MB/s block writes, and 102MB/s 
 rewrites (test ran last week).  
 
 Regards,
 Matt

Thanks Matthew,
 some good ideas here. I have other partitions on the disks, such as
swap and rescue, so LVM doesn't get all the space. I have steered away
from striping because I have lost an occasional disk over the years and
worry that a stripe will take out a larger block of data than a linear
JBOD, but your performance numbers look ... great!

As the stripe size is hard to change after creation, it looks like I'll
have to migrate the data and recreate everything from scratch to get the
best out of the hardware.

In the short term, I'll just do some shuffling, then delete and re-add the
LVM partition on the green drive to the volume group, which should improve
the performance a lot (roughly the sequence sketched below). If I am reading
it right, I have to get the disk partitioning right first, then make sure
the PV is also created at the right boundaries in LVM. Then I will see how
to tune btrfs, which I am becoming quite sold on - it is solid, and online
fsck is better than reiserfs, which is just as solid but has to be taken
offline to check - not that either corrupts often.
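
Something like this, I suppose (sdX1 and vg are placeholders for the green
drive's LVM partition and my volume group - an untested sketch, not a
recipe):

  pvmove /dev/sdX1                 # migrate extents off the green drive
  vgreduce vg /dev/sdX1
  pvremove /dev/sdX1
  # re-partition so the partition starts on a 1 MiB boundary (sector 2048),
  # then recreate the PV with an aligned data area and put it back:
  pvcreate --dataalignment 1m /dev/sdX1
  vgextend vg /dev/sdX1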

BillK






Re: [gentoo-user] terrible performance with btrfs on LVM2 using a WD 2TB green drive

2011-03-15 Thread Volker Armin Hemmann
On Tuesday 15 March 2011 13:37:46 Bill Kenworthy wrote:
 I have recently added a WD 2TB green drive to two systems and am finding
 terrible performance with btrfs on an LVM using these drives.
 
 I just saw on the mythtv list a mention of the sector-size problem these
 drives have: they perform poorly unless you can map the partitions onto
 certain sector boundaries.
 
 My problem is that LVM2 is not supported in parted, which is the
 recommended tool to deal with this.
 
Use Google. fdisk is fine.
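
For example: switch fdisk to sector units and start each partition at a
multiple of 8 sectors (2048 = a 1 MiB boundary), and the 4K-sector penalty
goes away; recent fdisk versions align this way by default. A sketch,
assuming your drive is /dev/sdX:

  fdisk -u /dev/sdX   # -u = sector units; start the first partition at 2048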