Re: [zfs-discuss] corrupt pool?

2010-07-27 Thread James
I have been working on the same problem now for almost 48 straight hours.  I 
have managed to recover some of my data using

zpool import -f pool 

The command never completes, but you can do a

zpool list
and
zpool status

and you will see the pool.

Then you do

zfs list

and the file systems should be present.

In my case I corrupted a data file system, but the user file systems were still 
somewhat intact.  So I did

zfs send pool/filesystem@snapshot | zfs receive newpool/filesystem_backup

This allowed me to recover some of my data, but by and large the data is gone.  I 
am going to be migrating to 2009.06.
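
For anyone in a similar spot, here is a rough sketch of the per-filesystem copy 
step (the dataset and pool names below are just placeholders, not the ones from 
my system):

# take a one-off snapshot of a surviving file system, then copy it to a healthy pool
zfs snapshot pool/users@rescue
zfs send pool/users@rescue | zfs receive newpool/users_rescue

# verify that the copy landed
zfs list -r newpool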

Here is the thread where I have been discussing this:

http://opensolaris.org/jive/thread.jspa?messageID=493644&tstart=0


[zfs-discuss] core dumps eating space in snapshots

2010-07-27 Thread devsk
I have many core files stuck in snapshots, eating up gigs of my disk space. Most 
of these snapshots belong to BEs which I don't really want to delete right now.

Is there a way to get rid of them? I know snapshots are RO but can I do some 
magic with clones and reclaim my space?


[zfs-discuss] original raidz code required

2010-07-27 Thread v
Hi all,
In Jeff's blog, http://blogs.sun.com/bonwick/entry/raid_z, it mentions that the 
original RAID-Z code was 599 lines. Where can I find it to study? The current 
code is a little big.

regards
Victor


Re: [zfs-discuss] original raidz code required

2010-07-27 Thread Darren J Moffat

On 27/07/2010 09:20, v wrote:

Hi all,
In Jeff's blog, http://blogs.sun.com/bonwick/entry/raid_z, it mentions that the 
original RAID-Z code was 599 lines. Where can I find it to study? The current 
code is a little big.


From the source code repository, use 'hg log' and 'hg cat' to find and 
show the version you want.
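
For example, assuming a local clone of the onnv-gate repository (the revision 
number is the one from the OpenGrok link below):

# show the change history of the raidz vdev code
hg log usr/src/uts/common/fs/zfs/vdev_raidz.c

# print the file as it was at revision 789 (the revision in the OpenGrok link below)
hg cat -r 789 usr/src/uts/common/fs/zfs/vdev_raidz.c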


Or you can use OpenGrok on src.opensolaris.org and look at the history 
there:


http://src.opensolaris.org/source/history/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_raidz.c

And, as promised, 599 lines of code (and that includes all the CDDL 
copyright headers etc.):


http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_raidz.c?r=789%3Ab348f31ed315

--
Darren J Moffat


Re: [zfs-discuss] Mirrored raidz

2010-07-27 Thread Dav Banks
The reason for wanting raidz was to have some redundancy in the backup without 
the big hit on space that duplicating the data would have.
The other issue is the switching process. I'm more likely to have screwups if, 
every week, I (or someone else when I'm out) have to break and reset 24 mirrors 
instead of just one.
I do need to look more at the copies property though. That could be useful in 
some other situations.


Re: [zfs-discuss] Mirrored raidz

2010-07-27 Thread Dav Banks
How's that working for you? It seems like it would be as straightforward as I was 
thinking - only actually possible.


Re: [zfs-discuss] Mirrored raidz

2010-07-27 Thread Dav Banks
Thanks Cindy - I've been looking for an admin guide!
I'll play with the split command - sounds interesting.


Re: [zfs-discuss] Mirrored raidz

2010-07-27 Thread Dav Banks
Yeah, that's starting to sound like a fairly simple but equally robust 
approach. That may be the final solution. Thanks!


Re: [zfs-discuss] Mirrored raidz

2010-07-27 Thread Dav Banks
True! I don't need the same level of redundancy on the backup as the primary.


Re: [zfs-discuss] core dumps eating space in snapshots

2010-07-27 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of devsk
 
 I have many core files stuck in snapshots eating up gigs of my disk
 space. Most of these are BE's which I don't really want to delete right
 now.

Ok, you don't want to delete them ...


 Is there a way to get rid of them? I know snapshots are RO but can I do
 some magic with clones and reclaim my space?

You don't want to delete them, but you don't want them to take up space
either?  Um ... Sorry, can't be done.  Move them to a different disk ...

Or clarify what it is that you want.

If you're saying you have core files in your present filesystem that you
don't want to delete ... And you also have core files in snapshots that you
*do* want to delete ...  As long as a file hasn't been changing, it's not
consuming space beyond what's in the current filesystem.  (Look at the sizes
in the output of 'zfs list' and you'll see that.)  If it has been changing
... the cores in the snapshot are in fact different from the cores in the
present filesystem ... then the only way to delete them is to destroy the
snapshots.

Or have I still misunderstood the question?
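
For example, something along these lines shows where the space is actually 
being charged (the dataset name is just a placeholder):

# list the filesystem and all of its snapshots, with space usage
zfs list -r -t all -o name,used,refer rpool/export/home

# show how much space is held by snapshots vs. the live dataset
zfs list -o name,used,usedbysnapshots,usedbydataset rpool/export/home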



Re: [zfs-discuss] Mirrored raidz

2010-07-27 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Dav Banks

This message:
 How's that working for you? Seems like it would be as straightforward
 as I was thinking - only possible.

And this message:
 Yeah, that's starting to sound like a fairly simple but equally robust
 solution. That may be the final solution. Thanks!

Didn't include any reference to what you were replying about.  So I don't
know which messages you were replying to when you sent those.

If you're using the jive forums, and you wish to carry on a dialogue with
people who are using email, it's recommended to copy & paste what you're
replying to into your reply, so the recipients know what you're replying
to.

I am guessing you're replying to people saying to use 'zfs send'.

So my answer is:  It works very well.

Another point in favor of zfs send instead of mirrors is that you can have
your backup media compressed while your main pool probably isn't.  And so
forth.

The opposite is also true.  If you have any special properties set on your
main pool, they won't automatically be set on your receiving pool.  So I
personally recommend saving the output of 'zpool get all' and 'zfs get all'
into a text file and storing it along with your backup media, so you have it
available if there is ever any confusion about it.
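
Something as simple as the following, run before each backup, does the job 
(the pool name and output path are just examples):

# capture pool- and dataset-level properties alongside the backup media
zpool get all tank > /backup/tank-properties.txt
zfs get -r all tank >> /backup/tank-properties.txt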



Re: [zfs-discuss] How does zil work

2010-07-27 Thread Roch

v writes:
  Hi,
  A basic question regarding how zil works:
  For asynchronous writes, will the ZIL be used?
  For synchronous writes, if the I/O is small, will the whole I/O be placed in
  the ZIL, or just a pointer saved into the ZIL? What about large I/Os?
  

Let me try.

ZIL : the code and data structures that track system calls into a zvol or zfs 
filesystem.
LOG : the stable storage log managed by the ZIL, keeping track of synchronous 
operations.
SLOG: a log device separate from the regular pool of disks; typically SSD or 
NVRAM based.

For asynchronous writes, the ZIL keeps track of those
operations but does not write stable LOG records unless an
fsync is issued. Of course, we recently added the zfs property
'sync'; if it is set to sync=always, then there are no more
asynchronous writes.

For synchronous writes, the ZIL keeps track of those
operations and generates a stable LOG record. There are two
options open to the ZIL here: either issue an I/O for a full
record and another I/O that points to it, or issue a single
I/O containing both ZIL metadata and file data. When issuing a
1-byte synchronous write, it's intuitively best to have a
single I/O with the 1 byte of new data (a partial zfs record)
and all the ZIL metadata to handle it. Later, during a pool
TXG, the whole record will be updated in the main disk
pool. For a large synchronous write, it's best to have the
modified whole records sent to the main disk pool and
have the ZIL record track only pointers to the modified
records. In between, you need to choose between the
two options. That decision depends on the write size, the
recordsize, the presence of log devices, the logbias
setting, the current load on a given filesystem, etc.

The goal here is both to handle the current operation as
fast as possible and to keep the SLOG device available for
fast handling of synchronous writes by other threads. So
it's a fairly complex set of requirements, but it seems to be
evolving in the right direction.
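
Both knobs mentioned above are ordinary dataset properties; for example (the 
dataset name here is just a placeholder, and 'sync' exists only in recent 
builds):

# treat every write as synchronous (a stable LOG record for each one)
zfs set sync=always tank/db

# bias large synchronous writes toward the main pool rather than the log device
zfs set logbias=throughput tank/db

# the default: favor low latency, using the separate log (SLOG) device if present
zfs set logbias=latency tank/db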

-r




  Regards
  Victor



Re: [zfs-discuss] core dumps eating space in snapshots

2010-07-27 Thread Michael Schuster

On 27.07.10 14:21, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of devsk

I have many core files stuck in snapshots eating up gigs of my disk
space. Most of these are BE's which I don't really want to delete right
now.


Ok, you don't want to delete them ...



Is there a way to get rid of them? I know snapshots are RO but can I do
some magic with clones and reclaim my space?


You don't want to delete them, but you don't want them to take up space
either?  Um ... Sorry, can't be done.  Move them to a different disk ...

Or clarify what it is that you want.

If you're saying you have core files in your present filesystem that you
don't want to delete ... And you also have core files in snapshots that you
*do* want to delete ...  As long as a file hasn't been changing, it's not
consuming space beyond what's in the current filesystem.  (Look at the sizes
in the output of 'zfs list' and you'll see that.)  If it has been changing
... the cores in the snapshot are in fact different from the cores in the
present filesystem ... then the only way to delete them is to destroy the
snapshots.

Or have I still misunderstood the question?


yes, I think so.

Here's how I read it: the snapshots contain lots more than the core files, 
and the OP wants to remove only the core files (I'm assuming they weren't 
discovered before the snapshots were taken) but retain the rest.


does that explain it better?

HTH
Michael
--
michael.schus...@oracle.com http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'


[zfs-discuss] ZFSroot LiveUpgrade

2010-07-27 Thread Ketan
I have 2 file systems on my primary disk: / and /zones. I want to convert it to a 
ZFS root with Live Upgrade, but when I live upgrade it creates the ZFS BE and, 
instead of creating a separate /zones dataset, it uses the same dataset from the 
primary BE (c3t1d0s3). Is there any way I can do it so that it uses a separate 
/zones dataset in zfsBE?

/dev/dsk/c3t1d0s0       15G   8.5G   6.1G    58%    /
/dev/dsk/c3t1d0s3       39G    16G    23G    43%    /zones

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10_u7                   yes      yes    yes       no     -
lustatus
 # lucreate -c Sol10_u7 -n zfsBE -p rpool


Re: [zfs-discuss] Mirrored raidz

2010-07-27 Thread Darren J Moffat

On 27/07/2010 13:28, Edward Ned Harvey wrote:

The opposite is also true.  If you have any special properties set on your
main pool, they won't automatically be set on your receiving pool.  So I
personally recommend saving zpool get all and zfs get all into a txt
file, and store it along with your backup media.  So you have it available,
if ever there were any confusion about it at all.


PSARC/2010/193 defines a solution to that problem without having 
to save away a copy of 'zfs get all'.


http://arc.opensolaris.org/caselog/PSARC/2010/193/mail

--
Darren J Moffat


Re: [zfs-discuss] core dumps eating space in snapshots

2010-07-27 Thread devsk
Thanks, Michael. That's exactly right.

I think my requirement is: writable snapshots.

And I was wondering if someone knowledgeable here could tell me whether I can do 
this magically by using clones, without creating a tangled mess of branches, 
because clones are, in a way, writable snapshots.
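
Something like the sketch below is what I had in mind (all names are made up, 
and I realize the space only comes back once the original snapshot is 
eventually destroyed):

# make a writable copy of the snapshot and mount it somewhere convenient
zfs clone rpool/ROOT/be-1@2010-07-01 rpool/ROOT/be-1-clean
zfs set mountpoint=/mnt/be-1-clean rpool/ROOT/be-1-clean

# delete the unwanted core files from the clone
rm /mnt/be-1-clean/var/cores/core.*

# optionally make the cleaned clone independent of its origin filesystem;
# the blocks are only freed once the original snapshot itself is destroyed
zfs promote rpool/ROOT/be-1-clean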


[zfs-discuss] zfs destroy - weird output ( cannot destroy '': dataset already exists )

2010-07-27 Thread Bruno Sousa
Hi all,

I'm running snv_134 and I'm testing the COMSTAR framework, and during
those tests I've created an iSCSI zvol and exported it to a server.
Now that the tests are done I have renamed the zvol, and so far so
good... Things get really weird (at least to me) when I try to destroy
this zvol.

r...@santest:~# zfs destroy vol0/ISCSI/2delete

cannot destroy 'vol0/ISCSI/2delete': dataset already exists

What does 'dataset already exists' mean? I've already destroyed the
iSCSI LU with stmfadm and I've offlined the iSCSI target, and
there are no snapshots of this zvol.

Thanks for your time,
Bruno


Here are the properties of this zvol.

r...@santest:~# zfs get all vol0/ISCSI/2delete

NAME                PROPERTY              VALUE                  SOURCE
vol0/ISCSI/2delete  type                  volume                 -
vol0/ISCSI/2delete  creation              Thu Jul 15 23:02 2010  -
vol0/ISCSI/2delete  used                  57.9G                  -
vol0/ISCSI/2delete  available             7.24T                  -
vol0/ISCSI/2delete  referenced            57.9G                  -
vol0/ISCSI/2delete  compressratio         1.00x                  -
vol0/ISCSI/2delete  reservation           none                   default
vol0/ISCSI/2delete  volsize               150G                   local
vol0/ISCSI/2delete  volblocksize          8K                     -
vol0/ISCSI/2delete  checksum              on                     default
vol0/ISCSI/2delete  compression           off                    default
vol0/ISCSI/2delete  readonly              off                    default
vol0/ISCSI/2delete  shareiscsi            off                    default
vol0/ISCSI/2delete  copies                1                      default
vol0/ISCSI/2delete  refreservation        none                   default
vol0/ISCSI/2delete  primarycache          all                    default
vol0/ISCSI/2delete  secondarycache        all                    default
vol0/ISCSI/2delete  usedbysnapshots       0                      -
vol0/ISCSI/2delete  usedbydataset         57.9G                  -
vol0/ISCSI/2delete  usedbychildren        1K                     -
vol0/ISCSI/2delete  usedbyrefreservation  0                      -
vol0/ISCSI/2delete  logbias               latency                default
vol0/ISCSI/2delete  dedup                 off                    default
vol0/ISCSI/2delete  mlslabel              none                   default




Re: [zfs-discuss] zfs destroy - weird output ( cannot destroy '': dataset already exists )

2010-07-27 Thread Bruno Sousa
Hi all,

It seems this issue has to do with CR 6860996
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6860996),
but the following tips from Cindy Swearingen did the trick:

"temporary clones are not automatically destroyed on error"

A temporary clone is created for an incremental receive and
in some cases, is not removed automatically.

Victor might be able to describe this better, but consider
the following steps as further diagnosis or a workaround:

1. Determine clone names:

# zdb -d poolname | grep %

2. Destroy identified clones:
# zfs destroy clone-with-%-in-the-name

It will complain that 'dataset does not exist', but you can check
again (see step 1).

3. Destroy snapshot(s) that could not be destroyed previously
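
As a concrete sketch against my pool (the %-clone name below is just a 
placeholder; substitute whatever zdb actually reports):

# list any leftover %-named temporary clones on the pool
zdb -d vol0 | grep %

# destroy the temporary clone reported above, then the zvol itself
zfs destroy vol0/ISCSI/<temporary-clone-name>
zfs destroy vol0/ISCSI/2delete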


So my thanks go to Cindy Swearingen, but I wonder... wasn't this bug
fixed in build 122, as shown in the OpenSolaris bug database?

Bruno


On 27-7-2010 19:36, Bruno Sousa wrote:
 Hi all,

 I'm running snv_134 and i'm testing the COMSTAR framework and during
 those tests i've created an ISCSI zvol and exported to a server.
 Now that the tests are done i have renamed the zvol and so far so
 good..things get really weird (at least to me) when i try to destroy
 this zvol.

 r...@santest:~# zfs destroy vol0/ISCSI/2delete

 cannot destroy 'vol0/ISCSI/2delete': dataset already exists

 What does it means dataset already exists ? I've already destroyed the
 iscsi-lu within the stmfadm and i've offlined the iscsi target, and
 there's no snapshots of this zvol.

 Thanks for your time,
 Bruno


 Here are the properties of this zvol.

 r...@santest:~# zfs get all vol0/ISCSI/2delete

 NAME                PROPERTY              VALUE                  SOURCE
 vol0/ISCSI/2delete  type                  volume                 -
 vol0/ISCSI/2delete  creation              Thu Jul 15 23:02 2010  -
 vol0/ISCSI/2delete  used                  57.9G                  -
 vol0/ISCSI/2delete  available             7.24T                  -
 vol0/ISCSI/2delete  referenced            57.9G                  -
 vol0/ISCSI/2delete  compressratio         1.00x                  -
 vol0/ISCSI/2delete  reservation           none                   default
 vol0/ISCSI/2delete  volsize               150G                   local
 vol0/ISCSI/2delete  volblocksize          8K                     -
 vol0/ISCSI/2delete  checksum              on                     default
 vol0/ISCSI/2delete  compression           off                    default
 vol0/ISCSI/2delete  readonly              off                    default
 vol0/ISCSI/2delete  shareiscsi            off                    default
 vol0/ISCSI/2delete  copies                1                      default
 vol0/ISCSI/2delete  refreservation        none                   default
 vol0/ISCSI/2delete  primarycache          all                    default
 vol0/ISCSI/2delete  secondarycache        all                    default
 vol0/ISCSI/2delete  usedbysnapshots       0                      -
 vol0/ISCSI/2delete  usedbydataset         57.9G                  -
 vol0/ISCSI/2delete  usedbychildren        1K                     -
 vol0/ISCSI/2delete  usedbyrefreservation  0                      -
 vol0/ISCSI/2delete  logbias               latency                default
 vol0/ISCSI/2delete  dedup                 off                    default
 vol0/ISCSI/2delete  mlslabel              none                   default


   




Re: [zfs-discuss] ZFSroot LiveUpgrade

2010-07-27 Thread Cindy Swearingen

Hi Ketan,

The supported LU + zone configuration migration scenarios
are described here:

http://docs.sun.com/app/docs/doc/819-5461/gihfj?l=ena=view

I think the problem is that /zones is a mountpoint.

You might have better results if /zones were just a directory.

See the examples in this section as well:

http://docs.sun.com/app/docs/doc/819-5461/gihit?l=ena=view

Thanks,

Cindy
On 07/27/10 06:55, Ketan wrote:
i have 2 file systems on my primary disk /  /zones . i want to convert it to zfs root with live upgrade but when i live upgrade it creates the ZFS BE but instead of creating a separate /zones dataset it uses the same dataset from the primary BE (c3t1d0s3 ) ... is there any way i can do it so that i uses /zones in zfsBE ?  


/dev/dsk/c3t1d0s0       15G   8.5G   6.1G    58%    /
/dev/dsk/c3t1d0s3       39G    16G    23G    43%    /zones

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10_u7                   yes      yes    yes       no     -
lustatus
 # lucreate -c Sol10_u7 -n zfsBE -p rpool



Re: [zfs-discuss] FreeBSD 8.1 out, has zfs version 14 and can boot from zfs

2010-07-27 Thread Andrey V. Elsukov
On 27.07.2010 1:57, Peter Jeremy wrote:
 Note that ZFS v15 has been integrated into the development branches
 (-current and 8-stable) and will be in FreeBSD 8.2 (or you can run it

ZFS v15 is not yet in 8-stable. Only in HEAD. Perhaps it will be merged
into stable after 2 months.

-- 
WBR, Andrey V. Elsukov





Re: [zfs-discuss] FreeBSD 8.1 out, has zfs version 14 and can boot from zfs

2010-07-27 Thread Peter Jeremy
On 2010-Jul-27 19:43:50 +0800, Andrey V. Elsukov bu7c...@yandex.ru wrote:
On 27.07.2010 1:57, Peter Jeremy wrote:
 Note that ZFS v15 has been integrated into the development branches
 (-current and 8-stable) and will be in FreeBSD 8.2 (or you can run it

ZFS v15 is not yet in 8-stable. Only in HEAD. Perhaps it will be merged
into stable after 2 months.

Oops, sorry.  There are patches available for 8-stable (which I'm running).
I misremembered the commit message.

-- 
Peter Jeremy




Re: [zfs-discuss] Mirrored raidz

2010-07-27 Thread Richard Elling
On Jul 27, 2010, at 7:13 AM, Darren J Moffat wrote:

 On 27/07/2010 13:28, Edward Ned Harvey wrote:
 The opposite is also true.  If you have any special properties set on your
 main pool, they won't automatically be set on your receiving pool.  So I
 personally recommend saving zpool get all and zfs get all into a txt
 file, and store it along with your backup media.  So you have it available,
 if ever there were any confusion about it at all.
 
 PSARC/2010/193 defines a solution to solve that problem without having to 
 save away a copy of 'zfs get all'.
 
 http://arc.opensolaris.org/caselog/PSARC/2010/193/mail

Agreed.  This is a better solution because some configurable parameters
are hidden from 'zfs get all'.
 -- richard

-- 
ZFS and performance consulting
http://www.RichardElling.com











Re: [zfs-discuss] original raidz code required

2010-07-27 Thread v
Thanks Darren.


Re: [zfs-discuss] How does zil work

2010-07-27 Thread v
Thanks for your replies.

Regards
Victor


Re: [zfs-discuss] Lost zpool after reboot

2010-07-27 Thread Amit Kulkarni
I think the Device Manager in Windows 7 doesn't do any harm. Instead, I used 
this utility to try to format an external USB hard drive:

http://www.ridgecrop.demon.co.uk/fat32format.htm

I used the GUI version:
http://www.ridgecrop.demon.co.uk/guiformat.htm

I clicked and started this GUI format without the USB hard drive inserted, and 
it messed up the ZFS mirror. Entirely my fault, but I'm just posting it here so 
there is some record of it. I don't know what the utility may have modified.

--- On Sat, 7/17/10, Giovanni Tirloni gtirl...@sysdroid.com wrote:

 From: Giovanni Tirloni gtirl...@sysdroid.com
 Subject: Re: [zfs-discuss] Lost zpool after reboot
 To: Amit Kulkarni amitk...@yahoo.com
 Cc: zfs-discuss@opensolaris.org
 Date: Saturday, July 17, 2010, 6:23 PM
 On Sat, Jul 17, 2010 at 3:07 PM, Amit
 Kulkarni amitk...@yahoo.com
 wrote:
  I don't know if the devices are renumbered. How do you
 know if the devices are changed?
 
  Here is output of format, the middle one is the boot
 drive and selections 0 & 2 are the ZFS mirrors
 
  AVAILABLE DISK SELECTIONS:
        0. c8t0d0 ATA-HITACHIHDS7225S-A94A cyl
 30398 alt 2 hd 255 sec 63
           /p...@0,0/pci108e,5...@7/d...@0,0
        1. c8t1d0 DEFAULT cyl 15010 alt 2 hd 255
 sec 63
           /p...@0,0/pci108e,5...@7/d...@1,0
        2. c9t0d0 ATA-HITACHIHDS7225S-A7BA cyl
 30398 alt 2 hd 255 sec 63
           /p...@0,0/pci108e,5...@8/d...@0,0
 
 It seems that the devices that ZFS is trying to open exist.
 I wonder
 why it's failing.
 
 Please send the output of:
 
 zpool status
 zpool import
 zdb -C (dump config)
 zdb -l /dev/dsk/c8t0d0s0 (dump label contents)
 zdb -l /dev/dsk/c9t0d0s0 (dump label contents)
 check /var/adm/messages
 
 Perhaps with the additional information someone here can
 help you
 better. I don't have any experience with Windows 7 to
 guarantee that
  it hasn't messed with the disk contents.
 
 -- 
 Giovanni Tirloni
 gtirl...@sysdroid.com
 


  


[zfs-discuss] raidz2 + spare or raidz3 and no spare for nine 1.5 TB SATA disks?

2010-07-27 Thread Jack Kielsmeier
The only other zfs pool in my system is a mirrored rpool (two 500 GB disks). This 
is for my own personal use, so it's not like the data is mission critical in 
some sort of production environment.

The advantage I can see in going with raidz2 + spare over raidz3 with no spare 
is that I would spend much less time running in a degraded state when a drive 
fails (I'd have to RMA the drive and most likely wait a week or more for a 
replacement).

The disadvantage of raidz2 + spare is the possibility of a triple disk failure. 
This is most likely not going to occur with 9 disks, but it certainly is 
possible. If 3 disks fail before one can be resilvered onto the spare, the data 
will be lost.
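
For reference, the two layouts I'm weighing would be created roughly like this 
(the device names are just placeholders):

# nine disks as raidz2 (double parity) plus a hot spare
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 spare c0t8d0

# the same nine disks as raidz3 (triple parity) with no spare
zpool create tank raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0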

So, I guess the main question I have is: how much of a performance hit is 
noticed when a raidz3 array is running in a degraded state?

Thanks

- Jack