Re: [zfs-discuss] Sudden drop in disk performance - WD20EURS & 4k sectors to blame?

2011-08-16 Thread krad
On 15 August 2011 15:55, Andrew Gabriel  wrote:

> David Wragg wrote:
>
>> I've not done anything different this time from when I created the
>> original (512b) pool. How would I check ashift?
>>
>>
>
> For a zpool called "export"...
>
> # zdb export | grep ashift
> ashift: 12
> ^C
> #
>
> As far as I know (although I don't have any WDs), all the current
> 4k-sector hard drives claim a 512b sector size, so if you didn't do
> anything special, you'll probably have ashift=9.
>
> I would look at zpool iostat -v to see what the IOPS rate is (you may
> have bottomed out on that), and I would also work out the average transfer
> size (although that alone doesn't necessarily tell you much; a dtrace
> quantize aggregation would be better). Also check the service times on the
> disks (iostat) to see whether one is significantly worse than the others and
> might be going bad.
>
> --
> Andrew Gabriel
>


From what I have read, you really do need to 4k-align your partitions and
use ashift=12 on these Western Digitals. Unfortunately, that probably means
you have to rebuild your pool. 4k-aligning is fairly easy: when you
partition a disk, just make sure the starting sector and the size of each
partition are divisible by 8 (eight 512b sectors per 4k sector). I.e. don't
start the first partition at sector 34 as normally happens; start it at,
say, 40. E.g. here are my 4k-aligned drives from a FreeBSD system:

# gpart show ada0
=>34  3907029101  ada0  GPT  (1.8T)
  34   6- free -  (3.0k)
  40 128 1  freebsd-boot  (64k)
 168 6291456 2  freebsd-swap  (3.0G)
 6291624  3900213229 3  freebsd-zfs  (1.8T)
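
If you're starting from a blank disk, something like the following should
reproduce that layout (a sketch only - ada1 and the partition sizes are
hypothetical, so adjust them for your disks; gpart's -b and -s take sector
counts, and every start and size here is divisible by 8):

# gpart create -s gpt ada1
# gpart add -b 40 -s 128 -t freebsd-boot ada1
# gpart add -b 168 -s 6291456 -t freebsd-swap ada1
# gpart add -b 6291624 -s 3900213224 -t freebsd-zfs ada1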


Making ashift=12 is a little more tricky. I have seen a patch posted on the
mailing lists for zpool which forces it. Alternatively, you could boot into a
FreeBSD live CD and create the pool with the 'gnop -S 4096' trick, sketched
below. It's possible there is another way to do it now on OpenSolaris that I
haven't come across yet.
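
For reference, the gnop trick looks roughly like this (hypothetical device
and pool names; the .nop device advertises 4k sectors, so zpool create picks
ashift=12, and because ashift is stored in the pool it survives a re-import
on the plain devices):

# gnop create -S 4096 ada0p3
# zpool create export raidz2 ada0p3.nop ada1p3 ada2p3 ada3p3 ada4p3 ada5p3
# zpool export export
# gnop destroy ada0p3.nop
# zpool import export

One .nop per top-level vdev should be enough, as zpool sizes ashift to the
largest sector size it sees among that vdev's members.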


Re: [zfs-discuss] Sudden drop in disk performance - WD20EURS & 4k sectors to blame?

2011-08-15 Thread Andrew Gabriel

David Wragg wrote:

I've not done anything different this time from when I created the original
(512b) pool. How would I check ashift?


For a zpool called "export"...

# zdb export | grep ashift
ashift: 12
^C
#
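
Plain zdb keeps walking pool metadata, hence the ^C above. If your zdb
build supports -C, grepping the cached config should exit on its own:

# zdb -C export | grep ashift
ashift: 12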

As far as I know (although I don't have any WDs), all the current 4k-sector
hard drives claim a 512b sector size, so if you didn't do anything special,
you'll probably have ashift=9 (allocations aligned to 2^9 = 512 bytes,
rather than the 2^12 = 4096 the drive actually uses).
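
You can check what the drive itself is claiming with prtvtoc (the device
name here is just an example); a 4k drive doing 512b emulation will still
report 512:

# prtvtoc /dev/rdsk/c0t1d0s2 | grep "bytes/sector"
*     512 bytes/sector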


I would look at zpool iostat -v to see what the IOPS rate is (you may
have bottomed out on that), and I would also work out the average transfer
size (although that alone doesn't necessarily tell you much; a dtrace
quantize aggregation would be better). Also check the service times on the
disks (iostat) to see whether one is significantly worse than the others and
might be going bad.
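
For example (assuming the stock io provider and Solaris iostat; the
quantize aggregation buckets I/O sizes into powers of two, and asvc_t in
the iostat output is the per-disk active service time):

# dtrace -n 'io:::start { @bytes = quantize(args[0]->b_bcount); }'
# iostat -xn 5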


--
Andrew Gabriel


Re: [zfs-discuss] Sudden drop in disk performance - WD20EURS & 4k sectors to blame?

2011-08-15 Thread David Wragg
I've not done anything different this time from when I created the original
(512b) pool. How would I check ashift?


Re: [zfs-discuss] Sudden drop in disk performance - WD20EURS & 4k sectors to blame?

2011-08-15 Thread chris scott
Did you 4k align your partition table and is ashift=12?


[zfs-discuss] Sudden drop in disk performance - WD20EURS & 4k sectors to blame?

2011-08-15 Thread David Wragg
Hi all, first post to this mailing list so please forgive me if I miss 
something obvious. Earlier this year I went over 80% disk utilisation on my 
home server and saw performance start to degrade. I migrated from the old pool 
of 4 x 1TB WD RE2-GPs (raidz1) to a new pool made of 6 x 2TB WD EURS (raidz2). 
My original plan of swapping one disk at a time was swiftly banjaxed when I 
found out about the change in sector size, but I got there in the end.

Things have been fine for a couple of months - files transferred between 
filesystems on the pool at 60-80MiB/sec. In the meantime I read about appalling 
zfs performance on disks with 4k sectors, and breathed a sigh of relief as it 
seemed I'd somehow managed to avoid it. However, for the last week the average 
transfer rate has dropped to 8.6MiB/sec with no obvious changes to the system, 
other than free space ticking down a little (3.76TB free from 8TB usable). 
zpool status reports no errors, a scrub took around 8.5 hours (and repaired 0),
and generally the rest of the system seems normal. I'm running oi148, zfs
v28, and 4GiB of ECC RAM on an old Athlon BE-4050. The rpool is on a separate
SSD, and there are no recent hardware or software changes I can think of.

I'd be really interested to hear of potential causes (and, with any luck, 
remedies!) for this behaviour, as all of a sudden moving ISOs around has become 
something I have to plan. I'm not especially experienced with this sort of 
thing (or zfs in general beyond following setup guides), but I'm keen to learn 
from the best. 

Thanks very much for your time,
Dave