On Thu, 26 Nov 2015 01:08:21 +0700 Robert Elz <k...@munnari.oz.au> wrote:
>     Date:        Wed, 25 Nov 2015 10:52:30 -0600
>     From:        Greg Oster <os...@netbsd.org>
>     Message-ID:  <20151125105230.209c5...@mickey.usask.ca>
>
>   | Just to recap: You have a RAID set that is not 4K aligned with the
>   | underlying disks.
>
> Actually, I think it is - though we haven't seen all the data yet to
> prove it.  The unaligned assumption was based on misreading the fdisk
> output.  That was incorrect - but it doesn't actually mean that the
> set is properly aligned; it probably is OK, but we have not yet seen
> the BSD disklabel for the data disks to be sure (we know the BSD fdisk
> partition is OK, but that's then divided into pieces, and we have
> not - not recently anyway - been told that breakup).  We also haven't
> seen the filesystem layout within the raid.
>
> Everything else makes sense, and William, that does come from a raid
> expert; you can believe what Greg says, and take what I say as more
> of a semi-literate guess.
>
> One question though: Greg, you suggest a 32 block stripe and a 64K
> filesystem block size.  Would a 16 block stripe and a 32K block size
> work as well?

Yes, though because the IO chunks are smaller the performance would
likely be slightly poorer... (and a 64K write of data would end up
spanning two stripes instead of one, meaning twice the locking, twice
the number of read requests, etc., given that those things don't get
combined at the component level by RAIDframe...)

> So, William, two more command outputs are needed ...
>
>     disklabel wd0
>
> (or any of the drives that are all identical; if they are not all
> identical, then disklabel for each different layout).  We need to
> see where wd0f (and wd1f etc.) start...
>
> And, for the raid set itself,
>
>     gpt show raid2

Yes.. the results of this, and of 'disklabel wd0', will get to the
heart of the issue..

Later...

Greg Oster
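
P.S. For William, the arithmetic behind the stripe/block-size numbers
(a sketch only - I'm assuming a 5-component RAID-5 set here, i.e. 4
data components, and 512-byte sectors; substitute the real values for
your set):

    # stripe width = sectPerSU * sector size * number of data components
    # (the 4 data components and 512-byte sectors are assumptions)
    echo $((32 * 512 * 4))    # 65536 = 64K: one 64K fs block per stripe
    echo $((16 * 512 * 4))    # 32768 = 32K: a 64K fs block spans two stripes

With 16-sector stripe units, every 64K filesystem block write touches
two stripes, hence the doubled locking and read requests mentioned
above.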
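
P.P.S. Once you have those outputs, a quick alignment sanity check
(again just a sketch - 512-byte sectors assumed, so an offset divisible
by 8 starts on a 4K boundary):

    disklabel wd0 | grep ' f:'    # note the offset column for wd0f
    gpt show raid2                # note the start sector of each entry
    # any offset/start where (value % 8) == 0 is 4K-aligned

If wd0f's offset and the partition starts within raid2 all divide
evenly by 8, the 4K alignment is fine.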