[zfs-discuss] convert raidz from osx

2009-10-08 Thread dirk schelfhout
I am converting a 4-disk raidz from OS X to OpenSolaris, and I want to keep the
data intact.
I want ZFS to get access to the full disk instead of a slice, i.e. c8d0
instead of c8d0s1.
I wanted to do this one disk at a time and let it resilver.
What is the proper way to do this?
I tried (from memory): zpool replace -f rpool c8d1s1 c8d1
but it didn't let me do that.
Then I tried to take the disk offline first, but got the same result.

Thanks,

Dirk


Re: [zfs-discuss] strange pool disks usage pattern

2009-10-08 Thread Maurilio Longo
By the way,

there are more than fifty bugs logged against marvell88sx, many of them about
problems with DMA handling and/or driver behaviour under stress.

Could it be that I'm stumbling onto something along these lines?

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6826483

Maurilio.


Re: [zfs-discuss] convert raidz from osx

2009-10-08 Thread Cindy Swearingen

Dirk,

I'm not sure I'm following you exactly, but this is what I think you are
trying to do:

You have a RAIDZ pool that is built with slices and you are trying to
convert the slice configuration to whole disks. This isn't possible
because you are trying to replace the same disk. This is what happens:

# zpool create test raidz c0t4d0s0 c0t5d0s0 c0t6d0s0
# zpool replace test c0t6d0s0 c0t6d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c0t6d0s0 is part of active ZFS pool test. Please see zpool(1M).

You could replace the disk slice with a different disk like this:

# zpool replace test c0t6d0s0 c0t7d0

If you don't have any additional disks, then I think you will have to
back up the data and recreate the pool. Maybe someone else has a better
idea.
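
If you do go the backup-and-recreate route, snapshots plus zfs send/receive
are one way to carry the data over. This is only a rough sketch: the pool
name "tank" and the device names are placeholders, and you need somewhere
with enough space to hold the stream (a file on another pool, or another
host over ssh):

# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate > /backup/tank-migrate.zfs
# zpool destroy tank
# zpool create tank raidz c8d0 c8d1 c8d2 c8d3
# zfs receive -dF tank < /backup/tank-migrate.zfs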

Also, you refer to rpool, which is the default name of the ZFS root
pool in the OpenSolaris release. The root pool cannot be a RAIDZ pool, nor
can it contain whole disks; it must be created with disk slices.

Cindy




Re: [zfs-discuss] convert raidz from osx

2009-10-08 Thread Dirk Schelfhout

Yes, that was what I was doing.
I wanted to give the raidz whole disks because grub didn't want to
install.

(I forgot which command I used. bootadm?)
I have another slice free on a disk shared with OS X and Win7, but I am
having problems with grub.
I will try that again and document it so I can ask a proper question
about it.
The installer has problems, and I couldn't get the fdisk workaround
to work with the osol-1002-118-x86.iso.

Dirk




[zfs-discuss] zfs send/receive performance concern

2009-10-08 Thread Rani Raj
I am running zfs send/receive on a ~1.2 TB ZFS filesystem spread across 10 x 200 GB
LUNs. It has copied only ~650 GB in ~42 hours. The source and destination pools are
on the same storage subsystem. The last time I ran this, it took ~20 hours.

Something is terribly wrong here. What do I need to look at to figure out the
reason?

I ran zpool iostat and iostat on the pool for clues, but I'm still confused.
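
For reference, a few commands that are commonly used to narrow this kind of
thing down (replace "srcpool" with your actual pool names; this is just a
starting point, not a diagnosis):

# zpool status -v srcpool     (is a scrub or resilver running at the same time?)
# zpool iostat -v srcpool 5   (per-vdev bandwidth and IOPS while the send runs)
# iostat -xnz 5               (per-LUN service times; look for one slow LUN)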

RR.


Re: [zfs-discuss] scrubing/resilvering - controller problem

2009-10-08 Thread Roch Bourbonnais


You might try setting zfs_scrub_limit to 1 or 2, and attach a customer
service record to:

6494473 ZFS needs a way to slow down resilvering
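
In case it helps, this is roughly how such a tunable is usually set; treat it
as an unsupported knob, and the value below is just the one suggested above.
Persistently, add a line to /etc/system and reboot:

set zfs:zfs_scrub_limit = 2

or poke it on a live system with mdb (I am not sure whether a scrub that is
already in progress picks up the new value):

# echo zfs_scrub_limit/W0t2 | mdb -kw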

-r


On 7 Oct 2009, at 06:14, John wrote:


Hi,

We are running b118 with an LSI 3801 controller which is connected
to 44 drives (yes, it's a lot behind a single controller). We also
use a pair of SSDs connected to another controller for read cache.
Everything works fine and we achieve acceptable performance for our
needs.
However, during scrubbing or resilvering operations, it seems ZFS
generates so much traffic that it overwhelms the controller. The
controller then logs the following errors:


Oct  6 07:30:04 nas101 scsi: [ID 107833 kern.warning] WARNING: /p...@0,0/pci8086,6...@4/pci1000,3...@0/s...@16,0 (sd19):
Oct  6 07:30:04 nas101  incomplete read- retrying


Is there anything that can be done to slow down ZFS operations such
as resilvering/scrubbing? We tried tuning zfs:zfs_vdev_max_pending,
but it did not really help.
This is a bit frustrating because this configuration works well for
serving data. It's just too aggressive when the kernel accesses drives
for some operations.



Iostat looks like this:

    r/s    w/s     kr/s   kw/s wait actv wsvc_t asvc_t  %w   %b s/w h/w trn tot device
 9681.5   37.3 116403.3   35.1  0.0 30.2    0.0    3.1   0 1000   0  21   8  29 c9
  420.3    1.5   5058.2    1.4  0.0  1.3    0.0    3.1   0   44   0   0   0   0 c9t8d0




Any help would be appreciated.

thanks,

JJ


[zfs-discuss] Hot Spares spin down?

2009-10-08 Thread bjbm
Sorry if this is a noob question, but I can't seem to find this info anywhere.

Are hot spares generally spun down until they are needed?


[zfs-discuss] ZFS saved my data success story

2009-10-08 Thread Carson Gaspar
To recap for those who don't recall my plaintive cries for help, I lost a pool 
due to the following sequence of events:


- One drive in my raidz array becomes flaky, has frequent stuck I/Os due to 
drive error recovery, trashing performance

- I take flaky drive offline (zpool offline...)
- I bring down the server and swap out the disk
- When the server comes back up, the pool won't import, because a _different_ 
drive has decided to not spin up

- I swap the flaky disk back in
- Pool won't import because the flaky disk is several TxGs behind (and, if I 
boot into a boot CD, because it's offline)


Victor Latushkin from Sun's Moscow office once again comes to the rescue, 
rolling back the uberblocks to a (mostly) sane TxG ID, and providing new zfs 
bits with a read-only import option (to make sure I didn't trash things worse 
than they already were...)


I have finally completed recovery of my 3+ TB of data. I lost one email account, 
about 10 non-spam non-trash emails from my email account, and one ISO image. I'm 
fairly certain that rolling back another few TxGs would have gotten the lost 
email account directory back, but as the user had a complete offline copy on his 
desktop, it wasn't worth it. I suspect the emails that were not recoverable from 
my account were actually ones I had recently deleted / filed as spam, but I 
can't be certain (I know the file names, but 1013456. isn't very descriptive...)


If I had been using any other file system, I would have lost everything. I had 
partial backups of my most critical data, but the loss would have been extremely 
painful. Due to the design of ZFS and the help from Victor (hopefully soon to no 
longer be needed with delivery of the zpool rollback tool(s)), not only did I 
recover the vast majority of my data, I know what I didn't recover, and I know 
that what I did recover is not corrupt.


Thanks, ZFS! I am _so_ glad I became an early adopter many years ago when I 
built this home server.


For the record, my new, larger pool is raidz2, and I am plugging the holes in my 
offsite backup syncs to cover more of my data (not all of it, as 8+ TB of data 
is a lot... I will probably purchase some big, slow disks next year to allow me 
to keep a full offline backup locally).


--
Carson


Re: [zfs-discuss] Hot Spares spin down?

2009-10-08 Thread Henrik Johansson

Hi there,

On Oct 8, 2009, at 9:46 PM, bjbm wrote:

Sorry if this is a noob question, but I can't seem to find this info
anywhere.

Are hot spares generally spun down until they are needed?


No, but have a look at power.conf(4) and the device-thresholds keyword  
to spin down disks.
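
For example, an entry along these lines in /etc/power.conf spins a disk down
after the given idle time. The device path below is made up; substitute the
physical path of your disk (ls -l /dev/dsk/c1t2d0s0 shows it), then re-read
the file with pmconfig:

device-thresholds   /pci@0,0/pci1022,7458@2/pci11ab,11ab@1/disk@4,0   30m

# pmconfig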


There is also a BigAdmin article on this:
http://www.sun.com/bigadmin/features/articles/disk_power_saving.jsp

Regards

Henrik
http://sparcv9.blogspot.com


[zfs-discuss] MPT questions

2009-10-08 Thread Frank Middleton

In an attempt to recycle some old PATA disks, we bought some
really cheap PATA/SATA adapters, some of which actually work
to the point where it is possible to boot from a ZFS installation
(e.g., c1t2d0s0). Not all PATA disks work: just Seagates, it would
seem, but not Maxtors. I wonder why? probe-scsi-all sees
Seagate but not Maxtor disks plugged into the same adapter.

Such disks have proven invaluable as a substitute for rescue
CDs until such CDs become possible.

The odd thing is that when booting from another disk, ZFS can't see
the adapted disk even though it is bootable. Could the reason
be that there's no /dev/rdsk/c1t2d0, even though there are c1t0d0, etc.?
format sees the disk but zpool import doesn't (this is on SPARC
sun4u). This isn't at all important; I'm just curious as to why this
might be, and why zpool import can't see the disk at all but zpool
create can.
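
If the /dev links really are missing, it might be worth rebuilding them and
then pointing import at the device directory explicitly; just a guess, but
something like:

# devfsadm -Cv            (clean up stale links and rebuild /dev/dsk, /dev/rdsk)
# zpool import -d /dev/dsk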

Gotta say how happy we are with the MPT driver and the LSI
SAS controller: fast and reliable, petabytes of I/O and not a
single ZFS checksum error!

This has little to do with ZFS, but should it be possible to
see a PATA CD or DVD connected to an MPT (LSI) SAS controller
via one of these adapters? Thought I'd ask before forking out
for a SATA DVD drive; I just hate to put perfectly good drives
out for recycling. Maybe someone can recommend a writable
Blu-ray SAS drive that is known to work with the MPT driver
instead...

Thanks -- Frank



Re: [zfs-discuss] MPT questions

2009-10-08 Thread James C. McPherson

Frank Middleton wrote:

This has little to do with ZFS, but should it be possible to
see a PATA CD or DVD connected to an MPT (LSI) SAS controller
via one of these adapters? Thought I'd ask before forking out
for a SATA DVD drive; I just hate to put perfectly good drives
out for recycling.


It might work. It certainly wouldn't hurt to try.


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog