Re: [zfs-discuss] Has anyone switched from IR -> IT firmware on the fly? (existing zpool on LSI 9211-8i)

2012-07-18 Thread Cindy Swearingen

Here's a better link below.

I have seen enough bad things happen to pool devices when hardware is
changed or firmware is updated that I recommend exporting the pool
first, even for an HBA firmware update.

Either shutting down the system where the pool is hosted or exporting
the pool should do it.
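
Roughly, assuming a pool named "tank" (substitute your own pool name; the
flash step is whatever your HBA vendor's tool requires):

  zpool export tank     # quiesce and export before touching the HBA
  (shut down or reboot, flash the HBA firmware with the vendor tool)
  zpool import tank     # re-import once the firmware change is done
  zpool status tank     # confirm all devices show ONLINE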

Always have good backups.

Thanks,

Cindy

http://docs.oracle.com/cd/E23824_01/html/821-1448/gcfog.html#scrolltoc

Considerations for ZFS Storage Pools - see the last bullet

On 07/17/12 18:47, Damon Pollard wrote:

Correct.

The LSI 1068E has IR and IT firmwares, and I have gone from IR -> IT and
IT -> IR without hassle.

Damon Pollard


On Wed, Jul 18, 2012 at 8:13 AM, Jason Usher jushe...@yahoo.com wrote:


Ok, and your LSI 1068E also had alternate IR and IT firmwares, and
you went from IR -> IT ?

Is that correct ?

Thanks.


--- On Tue, 7/17/12, Damon Pollard damon.poll...@birchmangroup.com wrote:

From: Damon Pollard damon.poll...@birchmangroup.com
Subject: Re: [zfs-discuss] Has anyone switched from IR -> IT
firmware on the fly? (existing zpool on LSI 9211-8i)
To: Jason Usher jushe...@yahoo.com
Cc: zfs-discuss@opensolaris.org
Date: Tuesday, July 17, 2012, 5:05 PM

Hi Jason,
I have done this in the past (3x LSI 1068E-based IBM BR10i).
Your pool has no tie to the hardware used to host it (including
your HBA). You could change all your hardware and still import your
pool correctly.

If you really want to be on the safe side, you can export your pool
before the firmware change and then import it when
you're satisfied the firmware change is complete.
Export: http://docs.oracle.com/cd/E19082-01/817-2271/gazqr/index.html
Import: http://docs.oracle.com/cd/E19082-01/817-2271/gazuf/index.html
Damon Pollard


On Wed, Jul 18, 2012 at 6:14 AM, Jason Usher jushe...@yahoo.com wrote:

We have a running zpool with a 12 disk raidz3 vdev in it ... we gave
ZFS the full, raw disks ... all is well.



However, we built it on two LSI 9211-8i cards and we forgot to
change from IR firmware to IT firmware.



Is there any danger in shutting down the OS, flashing the cards to
IT firmware, and then booting back up ?



We did not create any raid configuration - as far as we know, the
LSI cards are just passing through the disks to ZFS ... but maybe not ?
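
One quick sanity check (just a sketch, using the pool name from the create
command below):

  zpool status MYPOOL   # vdev members should be the bare daN devices,
                        # not controller-created logical volumes or partitions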



I'd like to hear of someone else doing this successfully before we
try it ...





We created the zpool with raw disks:



zpool create -m /mount/point MYPOOL raidz3 da{0,1,2,3,4,5,6,7,8,9,10,11}



and diskinfo tells us that each disk is:



da1 512 3000592982016   5860533168



The physical label (the sticker) on the disk also says 5860533168
sectors ... so that seems to line up ...
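
(Quick arithmetic check: diskinfo's columns are sector size, media size in
bytes, and media size in sectors, and 5860533168 sectors x 512 bytes =
3,000,592,982,016 bytes, which matches the reported media size of roughly 3 TB.)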





Has anyone else out there made this change in-flight and can confirm?



Thanks.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Question on 4k sectors

2012-07-18 Thread Dave U. Random
Hi. Is the problem with ZFS supporting 4k sectors, or is the problem mixing
512-byte and 4k-sector disks in one pool, or something else? I have seen
a lot of discussion on the 4k issue but I haven't understood what the actual
problem ZFS has with 4k sectors is. It's getting harder and harder to find
large disks with 512-byte sectors, so what should we do? TIA...
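
(For context, the sector size a pool assumed at creation is recorded per vdev
as its ashift; a common way to inspect it is something like

  zdb -C yourpool | grep ashift   # ashift=9 -> 512-byte, ashift=12 -> 4K

where "yourpool" is a placeholder for the pool name.)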
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Very poor small-block random write performance

2012-07-18 Thread Michael Traffanstead
I have an 8-drive ZFS array (RAIDZ2, 1 spare) using 5900 rpm 2TB SATA drives
with an hpt27xx controller under FreeBSD 10 (but I've seen the same issue with
FreeBSD 9).

The system has 8 GB of RAM and I'm letting FreeBSD auto-size the ARC.
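
(On FreeBSD the current ARC size and its cap can be checked with, for example:

  sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max

and vfs.zfs.arc_max can be pinned in /boot/loader.conf if you ever want a
fixed cap.)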

Running iozone (from ports), everything is fine for file sizes up to 8GB, but 
when it runs with a 16GB file the random write performance plummets using 64K 
record sizes.

8G  - 64K  - 52 MB/s
8G  - 128K - 713 MB/s
8G  - 256K - 442 MB/s

16G - 64K  - 7 MB/s
16G - 128K - 380 MB/s
16G - 256K - 392 MB/s

Also, sequential small-block performance doesn't show such a dramatic slowdown:

16G - 64K - 108 MB/s (sequential)
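
For reference, an invocation along these lines exercises the same access
pattern (the exact flags used aren't given above, so this is only a sketch;
the test file path is a placeholder):

  iozone -i 0 -i 2 -s 16g -r 64k -f /pool/iozone.tmp
  # -i 0 = write/rewrite (creates the file), -i 2 = random read/write,
  # -s = file size, -r = record size, -f = test file on the zpool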

There's nothing else using the zpool at the moment; the system itself is on a
separate SSD.

I was expecting performance to drop off at 16GB because that's well above the
available ARC, but that dramatic a drop-off, followed by the sharp
improvement at 128K and 256K, is surprising.

Are there any configuration settings I should be looking at?

Mike 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss