Re: [zfs-discuss] scrub differs in execute time?

2009-11-15 Thread Orvar Korvar
Yes that might be the cause. Thanks for identifying that. So I would gain 
bandwidth if I tucked some drives on the mobo SATA and some drives on the AOC 
card, instead of having all drives on the AOC card.


Re: [zfs-discuss] scrub differs in execute time?

2009-11-15 Thread Brandon High
On Sun, Nov 15, 2009 at 10:39 AM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
> Yes that might be the cause. Thanks for identifying that. So I would
> gain bandwidth if I tucked some drives on the mobo SATA and some drives
> on the AOC card, instead of having all drives on the AOC card.

Yup! The ICH10 is connected to the northbridge at 10Gb/sec (over the DMI
link), so it shouldn't have bandwidth issues.

Two modern drives will be able to fully saturate the PCI bus. You could
get away with more, however, since most activity isn't large sequential
reads. Things like scrubs (which are lots of sequential reads) will take
a more noticeable performance hit than everyday use.
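
To put rough numbers on that, here is a minimal sketch (Python). The
~80 MB/s sustained sequential rate per drive is an assumption for
2009-era disks, and the 133MB/s shared-PCI figure comes up later in
this thread:

    # Sketch: when does a shared 32-bit/33MHz PCI bus become the bottleneck?
    # Assumed figures: ~80 MB/s sequential per disk, ~133 MB/s for the bus.
    PCI_BUS_MBS = 133
    DRIVE_SEQ_MBS = 80

    for n in range(1, 7):
        demand = n * DRIVE_SEQ_MBS                       # what the disks could stream
        per_drive = min(DRIVE_SEQ_MBS, PCI_BUS_MBS / n)  # what each one actually gets
        tag = " (bus saturated)" if demand > PCI_BUS_MBS else ""
        print(f"{n} disks: want {demand} MB/s, get {per_drive:.0f} MB/s each{tag}")

By this estimate the second drive already pushes demand past the bus
limit, which is why sequential workloads like scrubs feel it first.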

-B

-- 
Brandon High : bh...@freaks.com
God is big, so don't fuck with him.


Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Orvar Korvar
I use Intel Q9450 + P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI
slot, not PCI-X. About the HBA, I have no idea.

So I had half of the drives on the AOC card, and the other half on the mobo
SATA ports. Now I have all drives on the AOC card, and suddenly a scrub takes
15h instead of 8h. Same data. This is weird. I don't get it. I don't care too
much about it, but just wanted to tell you this. Thanks for your attention.


Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan

> P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI slot

I'm not sure how many disks half of yours comes to, or how your vdevs
are configured, but the ICH10 has 6 SATA ports at 300MB/s each and one
PCI port at 266MB/s (which is also shared with the IT8213 IDE chip),

so in an ideal world your scrub bandwidth would be:

300*6 MB/s with 6 disks on ICH10, in a stripe
300*1 MB/s with 6 disks on ICH10, in a raidz
300*3+(266/3) MB/s with 3 disks on ICH10 and 3 on shared PCI, in a stripe
266/3 MB/s with 3 disks on ICH10 and 3 on shared PCI, in a raidz
266/6 MB/s with 6 disks on shared PCI, in a stripe
266/6 MB/s with 6 disks on shared PCI, in a raidz

We know disks don't go that fast anyway, but going from an 8h to a 15h
scrub is entirely plausible, depending on vdev config.
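
For what it's worth, a minimal sketch (Python) that just evaluates the
figures above, taking the 300/266 numbers at face value:

    # Evaluate the ideal-world scrub bandwidth figures listed above (MB/s).
    SATA, PCI = 300, 266

    configs = {
        "6 on ICH10, stripe":            SATA * 6,
        "6 on ICH10, raidz":             SATA * 1,
        "3 on ICH10 + 3 on PCI, stripe": SATA * 3 + PCI / 3,
        "3 on ICH10 + 3 on PCI, raidz":  PCI / 3,
        "6 on shared PCI, stripe":       PCI / 6,
        "6 on shared PCI, raidz":        PCI / 6,
    }
    for name, mbs in configs.items():
        print(f"{name:32s} {mbs:7.1f} MB/s")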

Rob



Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Eric D. Mudama

On Sat, Nov 14 at 11:23, Rob Logan wrote:
> > P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI slot
>
> I'm not sure how many disks half of yours comes to, or how your vdevs
> are configured, but the ICH10 has 6 SATA ports at 300MB/s each and one
> PCI port at 266MB/s (which is also shared with the IT8213 IDE chip),
>
> so in an ideal world your scrub bandwidth would be:
>
> 300*6 MB/s with 6 disks on ICH10, in a stripe
> 300*1 MB/s with 6 disks on ICH10, in a raidz
> 300*3+(266/3) MB/s with 3 disks on ICH10 and 3 on shared PCI, in a stripe
> 266/3 MB/s with 3 disks on ICH10 and 3 on shared PCI, in a raidz
> 266/6 MB/s with 6 disks on shared PCI, in a stripe
> 266/6 MB/s with 6 disks on shared PCI, in a raidz
>
> We know disks don't go that fast anyway, but going from an 8h to a 15h
> scrub is entirely plausible, depending on vdev config.
>
> Rob


Agreed, sounds like you're saturating the PCI port.

I'm pretty sure that when Thumper uses that controller, it has six of
them attached via PCI-X, which of course wouldn't have the same
bandwidth limitation.

--eric


--
Eric D. Mudama
edmud...@mail.bounceswoosh.org



Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Brandon High
On Sat, Nov 14, 2009 at 7:00 AM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
> I use Intel Q9450 + P45 Gigabyte EP45-DS3P. I put the AOC card into a
> PCI slot, not PCI-X. About the HBA, I have no idea.

It sounds like you're saturating the PCI port. The ICH10 has a
32-bit/33MHz PCI bus, which provides 133MB/s at half duplex. This is
much less than the aggregate bandwidth of the drives you have on the
AOC card.

Getting a mobo with a PCI-X slot, getting a PCIe controller, or
leaving as many drives as you can on the ICH will help performance.

-B

-- 
Brandon High : bh...@freaks.com
War is Peace. Slavery is Freedom. AOL is the Internet.


Re: [zfs-discuss] scrub differs in execute time?

2009-11-14 Thread Rob Logan

> The ICH10 has a 32-bit/33MHz PCI bus which provides 133MB/s at half duplex.

You are correct; I thought the ICH10 used a 66MHz bus, when in fact it's
33MHz. The AOC card works fine in a PCI-X 64-bit/133MHz slot, good for
1,067 MB/s, even if the motherboard attaches the slot through a PXH chip
via 8-lane PCIe.
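
As a sanity check, both figures fall straight out of bus width times
clock; a quick sketch (Python):

    # Peak PCI-family bus bandwidth: (width in bits / 8) * clock in MHz.
    def bus_mbs(bits, mhz):
        return bits / 8 * mhz

    print(f"PCI   32-bit/33MHz : {bus_mbs(32, 33.33):5.0f} MB/s (shared, half duplex)")
    print(f"PCI-X 64-bit/133MHz: {bus_mbs(64, 133.33):5.0f} MB/s")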

Rob



[zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Orvar Korvar
I have a raidz2 pool and ran a scrub; it took 8h. Then I reconnected some
drives to other SATA ports, and now a scrub takes 15h??

Why is that?


Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Henrik Johansson

How do you do,

On 13 Nov 2009, at 11:07, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:

> I have a raidz2 pool and ran a scrub; it took 8h. Then I reconnected
> some drives to other SATA ports, and now a scrub takes 15h??
>
> Why is that?


Could you perhaps provide some more info?

Which OSOL release? Are the new disks utilized? Has the pool data
changed? Is there a difference in how much data is read from the disks?
Is the system otherwise idle? Which SATA controller? Does iostat show
any errors?
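
If it helps, here is a rough way to capture most of that in one pass (a
sketch in Python; it assumes the usual Solaris zpool/iostat commands,
and the pool name 'tank' is a stand-in for yours):

    # Gather the basics: pool state and scrub status, per-vdev throughput,
    # and per-device service times.
    import subprocess

    POOL = "tank"  # hypothetical pool name; substitute your own

    for cmd in (
        ["zpool", "status", "-v", POOL],            # layout, scrub progress, errors
        ["zpool", "iostat", "-v", POOL, "5", "3"],  # per-vdev bandwidth, 3 samples
        ["iostat", "-xn", "5", "3"],                # per-device busy %, service times
    ):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=False)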


Regards

Henrik
http://sparcv9.blogspot.com


Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Orvar Korvar
Yes I do fine. How do you do-be-do-be-do?

I have OpenSolaris b125 and filled a zpool with data. I ran a scrub on it,
which took 8 hours. Some of the drives were connected to the mobo, and some
were connected to the AOC-MV8... (Marvell 88SX) card, the one used in
Thumper. Then I connected all drives to the AOC-MV8... card and did a scrub
again, with the same data, nothing changed. And suddenly it took 15 hours.
zpool iostat doesn't show any errors or anything unusual. The system is
otherwise idle. When I issue a scrub, there is a completion forecast; in the
first case it said 8h, and the scrub finished in 8h. Then I reconnected the
cables and the scrub forecast said 15h, which it did.

So, if nothing has changed on the zpool, why does a scrub finish in 8h, and
then, after rearranging the SATA cables, take 15h - with the same data?


Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Tim Cook
On Fri, Nov 13, 2009 at 2:48 PM, Orvar Korvar 
knatte_fnatte_tja...@yahoo.com wrote:

> Yes I do fine. How do you do-be-do-be-do?
>
> I have OpenSolaris b125 and filled a zpool with data. I ran a scrub on
> it, which took 8 hours. Some of the drives were connected to the mobo,
> and some were connected to the AOC-MV8... (Marvell 88SX) card, the one
> used in Thumper. Then I connected all drives to the AOC-MV8... card and
> did a scrub again, with the same data, nothing changed. And suddenly it
> took 15 hours. zpool iostat doesn't show any errors or anything unusual.
> The system is otherwise idle. When I issue a scrub, there is a completion
> forecast; in the first case it said 8h, and the scrub finished in 8h.
> Then I reconnected the cables and the scrub forecast said 15h, which it
> did.
>
> So, if nothing has changed on the zpool, why does a scrub finish in 8h,
> and then, after rearranging the SATA cables, take 15h - with the same
> data?




What's the motherboard model?

--Tim


Re: [zfs-discuss] scrub differs in execute time?

2009-11-13 Thread Eric D. Mudama

On Fri, Nov 13 at 15:58, Tim Cook wrote:
> On Fri, Nov 13, 2009 at 2:48 PM, Orvar Korvar
> knatte_fnatte_tja...@yahoo.com wrote:
>
> > Yes I do fine. How do you do-be-do-be-do?
> >
> > I have OpenSolaris b125 and filled a zpool with data. I ran a scrub
> > on it, which took 8 hours. Some of the drives were connected to the
> > mobo, and some were connected to the AOC-MV8... (Marvell 88SX) card,
> > the one used in Thumper. Then I connected all drives to the
> > AOC-MV8... card and did a scrub again, with the same data, nothing
> > changed. And suddenly it took 15 hours. zpool iostat doesn't show any
> > errors or anything unusual. The system is otherwise idle. When I
> > issue a scrub, there is a completion forecast; in the first case it
> > said 8h, and the scrub finished in 8h. Then I reconnected the cables
> > and the scrub forecast said 15h, which it did.
> >
> > So, if nothing has changed on the zpool, why does a scrub finish in
> > 8h, and then, after rearranging the SATA cables, take 15h - with the
> > same data?
>
> What's the motherboard model?


Is the AOC-MV8 plugged into a PCI or PCI-X slot? Is the HBA saturated?

When you had half the drives on this card and half on the motherboard,
you were using multiple IO paths that you aren't using now that all the
IO goes through the card.
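
A toy model of that difference (Python; the ~80 MB/s per-disk and
133MB/s shared-PCI figures are the same assumptions used earlier in the
thread):

    # Toy model: two independent I/O paths vs. one shared PCI bus.
    DRIVE_MBS = 80   # assumed sequential rate per disk
    PCI_MBS = 133    # shared 32-bit/33MHz PCI bus

    # Split: 3 disks on dedicated ICH10 ports, 3 sharing the PCI bus.
    split = 3 * DRIVE_MBS + min(3 * DRIVE_MBS, PCI_MBS)
    # All six disks behind the single PCI bus.
    single = min(6 * DRIVE_MBS, PCI_MBS)

    print(f"split paths: {split} MB/s aggregate")
    print(f"single path: {single} MB/s aggregate")

The absolute numbers are guesses, but the shape matches the observation:
collapsing everything onto the single shared path cuts aggregate
bandwidth by a large factor, consistent with an 8h scrub stretching to
15h.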

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org
