[zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Uwe Dippel
This might sound sooo simple, but it isn't. I read the ZFS Administration Guide 
and it did not give an answer; at least no simple answer, simple enough for me 
to understand.
The intention is to follow the thread "Easiest way to replace a boot disk with 
a larger one".
The command given would be 
zpool attach rpool /dev/dsk/c1d0s0 /dev/dsk/c2d0s0
as far as I understand in my case. What it says is "cannot open 
'/dev/dsk/c2d0s0': No such device or address". format shows that the partition 
exists:
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
   0. c1d0 <DEFAULT cyl 17020 alt 2 hd 255 sec 63>
  /p...@0,0/pci-...@9/i...@0/c...@0,0
   1. c2d0 <DEFAULT cyl 10442 alt 2 hd 255 sec 126>
  /p...@0,0/pci-...@9/i...@1/c...@0,0
Specify disk (enter its number): 1
selecting c2d0
Controller working list found
[disk formatted, defect list found]
FORMAT MENU:
[...]
 Total disk size is 38912 cylinders
 Cylinder size is 32130 (512 byte) blocks

                                Cylinders
  Partition   Status   Type           Start     End    Length    %
  =========   ======   ============   =====   =====    ======   ===
      1                Linux native       0      19        20     0
      2                Solaris2          19   10462     10444    27
      3                Other OS       10463   13074      2612     7
      4                EXT-DOS        13075   38912     25838    66

[...]

To my understanding, there is no need to format before using a file system in 
ZFS.
The "Creating a Basic ZFS File System" section is not clear to me. The first (and only) 
command it offers creates a mirrored storage pool out of whole disks; neither of which I 
intend to do. (I suggested before to offer a guide containing all the *basic* 
commands as well.) I wonder if I really need to use format -> partition first to 
create slice s0 in that second (DOS) partition of c2d0 before ZFS can use it?

Thanks,

Uwe
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Peter Tribble
On Sat, Dec 20, 2008 at 11:52 AM, Uwe Dippel udip...@gmail.com wrote:
 This might sound sooo simple, but it isn't. I read the ZFS Administration 
 Guide and it did not give an answer; at least no simple answer, simple enough 
 for me to understand.
 The intention is to follow the thread "Easiest way to replace a boot disk 
 with a larger one".
 The command given would be
 zpool attach rpool /dev/dsk/c1d0s0 /dev/dsk/c2d0s0
 as far as I understand in my case. What it says is "cannot open 
 '/dev/dsk/c2d0s0': No such device or address". format shows that the 
 partition exists:

The output you gave shows that there is an fdisk partition.

If you're going to use it then you'll need to at the very least put a
label on it.

format -> partition should offer to label it.

You can then set the size of s0 (to be the same as s2, if you want to use the
full disk), and write the label again.
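
Roughly like this (only a sketch; the exact prompts, and your slice sizes,
will differ):

  # format
  Specify disk (enter its number): 1      (select c2d0)
  format> partition
  partition> 0                            (define slice 0: tag, flag, start, size)
  partition> label                        (write the Solaris label)
  partition> quit
  format> quit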

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] General question about ZFS and RAIDZ

2008-12-20 Thread Juergen Dankoweit
Hello to the forum,

With my general question about ZFS and RAIDZ, I want to know the following:
must all hard disks in the storage pool have the same capacity, or is it 
possible to use hard disks with different capacities?

Many thanks for the answers.

Best regards

JueDan
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] General question about ZFS and RAIDZ

2008-12-20 Thread Mario Goebbels
 With my general question about ZFS and RAIDZ, I want to know the following:
 must all hard disks in the storage pool have the same capacity, or is it 
 possible to use hard disks with different capacities?

Lowest common denominator applies here. Creating a RAIDZ from a 100GB,
200GB and 300GB disk will only use 100GB from each disk.

That holds until you replace the 100GB disk with, e.g., a 200GB one; at that
point the array grows automatically.
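
For example (disk names are placeholders, and this is only a sketch):

  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0   (100GB + 200GB + 300GB disks)
  # zpool list tank                                (raw size comes out at roughly 3 x 100GB)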

-mg



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Uwe Dippel
Thanks, Peter!

(And I really wish the Admin Guide were more practical.) So we still need 
the somewhat arcane format -> partition tool! I guess the step that ZFS saves 
is newfs, then?!

Uwe
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Peter Tribble
On Sat, Dec 20, 2008 at 1:11 PM, Uwe Dippel udip...@gmail.com wrote:
 Thanks, Peter!

 (And I really wish the Admin Guide were more practical.) So we still need 
 the somewhat arcane format -> partition tool! I guess the step that ZFS 
 saves is newfs, then?!

If you want to use the whole disk then zfs will do it all for you; you
only need to define partitions/slices if you're going to use slices.
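
For instance (pool and disk names are only placeholders; giving zpool the
whole disk claims the entire disk for the pool):

  # zpool create tank c2d0      (whole disk: ZFS writes its own label, no format/newfs step)
  # zpool create tank c2d0s0    (a slice: s0 must first be defined and labelled in format)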

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] General question about ZFS and RAIDZ

2008-12-20 Thread Mario Goebbels
 With my general question about ZFS and RAIDZ, I want to know the following:
 must all hard disks in the storage pool have the same capacity, or is it
 possible to use hard disks with different capacities?

 Lowest common denominator applies here. Creating a RAIDZ from a 100GB,
 200GB and 300GB disk will only use 100GB from each disk.

 That holds until you replace the 100GB disk with, e.g., a 200GB one; at
 that point the array grows automatically.
 
 Ah, ok. Thanks for your answer.

Depending on how many disks you plan to use, you might want to group
them into two or more RAIDZ vdevs according to disk size, to minimize the
loss of disk space, as long as the gain outweighs the space lost to the
additional parity disk(s).
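
For example, with three 100GB and three 300GB disks (device names are
placeholders), something like

  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 \
                      raidz c2t0d0 c2t1d0 c2t2d0

gives one pool with two RAIDZ vdevs, so the larger disks are not cut down
to 100GB, at the cost of a second parity disk.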

-mg



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Gary Mills
On Sat, Dec 20, 2008 at 03:52:46AM -0800, Uwe Dippel wrote:
 This might sound sooo simple, but it isn't. I read the ZFS Administration 
 Guide and it did not give an answer; at least no simple answer, simple enough 
 for me to understand.
 The intention is to follow the thread "Easiest way to replace a boot disk 
 with a larger one".
 The command given would be 
 zpool attach rpool /dev/dsk/c1d0s0 /dev/dsk/c2d0s0
 as far as I understand in my case. What it says is "cannot open 
 '/dev/dsk/c2d0s0': No such device or address". format shows that the 
 partition exists:

The problem is that fdisk partitions are not the same as Solaris
partitions.  The admin guide refers to a Solaris partition.  For
Solaris 10 x86, this has to be created inside an fdisk partition.

 # format
 Searching for disks...done
 AVAILABLE DISK SELECTIONS:
    0. c1d0 <DEFAULT cyl 17020 alt 2 hd 255 sec 63>
   /p...@0,0/pci-...@9/i...@0/c...@0,0
    1. c2d0 <DEFAULT cyl 10442 alt 2 hd 255 sec 126>
   /p...@0,0/pci-...@9/i...@1/c...@0,0
 Specify disk (enter its number): 1
 selecting c2d0
 Controller working list found
 [disk formatted, defect list found]
 FORMAT MENU:
 [...]
  Total disk size is 38912 cylinders
  Cylinder size is 32130 (512 byte) blocks
 
                                 Cylinders
   Partition   Status   Type           Start     End    Length    %
   =========   ======   ============   =====   =====    ======   ===
       1                Linux native       0      19        20     0
       2                Solaris2          19   10462     10444    27
       3                Other OS       10463   13074      2612     7
       4                EXT-DOS        13075   38912     25838    66

These are fdisk partitions.

 To my understanding, there is no need to format before using a file system in 
 ZFS.
 The "Creating a Basic ZFS File System" section is not clear to me. The first (and 
 only) command it offers creates a mirrored storage pool out of whole disks; neither 
 of which I intend to do. (I suggested before to offer a guide containing all the 
 *basic* commands as well.) I wonder if I really need to use format -> partition 
 first to create slice s0 in that second (DOS) partition of c2d0 before ZFS can 
 use it?

The Solaris `format' command is used to create Solaris partitions, and
the label that describes them.  For a ZFS root pool, you have to use a
Solaris label and a partition (slice).  This was slice 0 in your
example.
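
In outline (only a sketch; in your case the Solaris2 fdisk partition already
exists, so the fdisk step can be skipped):

  # fdisk /dev/rdsk/c2d0p0     (create a Solaris2 fdisk partition if none exists)
  # format                     (select c2d0, then partition -> define s0 -> label)
  # zpool attach rpool /dev/dsk/c1d0s0 /dev/dsk/c2d0s0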

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Uwe Dippel
This is what I did:

partition print
Current partition table (original):
Total disk cylinders available: 10442 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       3 - 10441       159.93GB    (10439/0/0) 335405070
  1 unassigned    wm       0                 0         (0/0/0)             0
  2     backup    wu       0 - 10441       159.98GB    (10442/0/0) 335501460
  3 unassigned    wm       0                 0         (0/0/0)             0
  4 unassigned    wm       0                 0         (0/0/0)             0
  5 unassigned    wm       0                 0         (0/0/0)             0
  6 unassigned    wm       0                 0         (0/0/0)             0
  7 unassigned    wm       0                 0         (0/0/0)             0
  8       boot    wu       0 - 0            15.69MB    (1/0/0)         32130
  9 alternates    wm       1 - 2            31.38MB    (2/0/0)         64260
partition quit

But it won't work. What is going wrong now?

# zpool attach rpool /dev/dsk/c1d0s0 /dev/dsk/c2d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c2d0s0 overlaps with /dev/dsk/c2d0s2

Do I really need to '-f' this?

Uwe
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Uwe Dippel
Gary,

thanks. All my servers run OpenBSD, so I know the difference between a 
DOS-partition and a slice. :)
My confusion is about the labels. I could not give it the label I wanted, like 
zfsed or pool; it had to be root. And since we can have only a single 
bf-partition per drive (dsk), I was thinking ZFS would take the (existing but 
unlabeled) s0 to attach to. This does not seem to be the case. 
Out of curiosity: how does it matter (to ZFS) whether /dev/dsk/c3t1d0s0 is a complete 
drive or lives inside a bf-partition?

One way or another, /dev/dsk/c2d0s0 seems to be over-defined now.

By the way, sorry to take up your time. If what I want to do is described 
somewhere in a recipe-like manner, I will gladly receive and study that link. 

Thanks again,

Uwe
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Gary Mills
On Sat, Dec 20, 2008 at 06:10:10AM -0800, Uwe Dippel wrote:
 
 thanks. All my servers run OpenBSD, so I know the difference between
 a DOS-partition and a slice. :)

My background is Solaris SPARC, where things are simpler.  Solaris
writes a label to a physical disk to define slices (Solaris
partitions) on the disk.  The `format' command sees the physical disk.
In the case of Solaris x86, this command sees one fdisk partition,
which it treats as a disk.  I generally create a single fdisk
partition that occupies the entire disk, to return to simplicity.
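
Something along these lines (a sketch only; note that it rewrites the whole
fdisk table, so it is not for a disk that also carries other operating
systems; check fdisk(1M) on your release):

  # fdisk -B /dev/rdsk/c2d0p0    (one Solaris fdisk partition covering the entire disk)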

 My confusion is about the labels. I could not give it the label I
 wanted, like zfsed or pool; it had to be root. And since we can have
 only a single bf-partition per drive (dsk), I was thinking ZFS would
 take the (existing but unlabeled) s0 to attach to. This does not
 seem to be the case.

The tag that appears on the partition menu isn't used in normal
operation of the system.  There are only a few valid choices, but
`root' is fine.

 Out of curiosity: how does it matter (to ZFS) whether /dev/dsk/c3t1d0s0 is a
 complete drive or lives inside a bf-partition?
 
 One way or another, /dev/dsk/c2d0s0 seems to be over-defined now.

If you give `zpool' a complete disk, by omitting the slice part, it
will write its own label to the drive.  If you specify it with a
slice, it expects that you have already defined that slice.  For a
root pool, it has to be a slice.
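
For illustration (pool and disk names are placeholders):

  # zpool create tank c3t1d0      (whole disk: zpool writes its own EFI label)
  # zpool create tank c3t1d0s0    (slice: s0 must already be defined in a Solaris label)

For a root pool, only the second, slice-based form will do.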

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Question on RaidZ expansion

2008-12-20 Thread David Markey
Hello,


I'm curious about the status of zfs raidz expansion (i.e. adding a disk to a
3-disk raidz). I know there was some work done on this feature, and I
know there is some demand for it in the home server market.




Thanks.

David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-12-20 Thread Dmitry Razguliaev
Hi, I faced a similar problem to Ross's, but still have not found a 
solution. I have a raidz of 9 SATA disks connected to the internal and 2 external 
SATA controllers. Bonnie++ gives me the following results: 
nexenta,8G,104393,43,159637,30,57855,13,77677,38,56296,7,281.8,1,16,26450,99,+,+++,29909,93,24232,99,+,+++,13912,99
while running on a single disk it gives me the following:
nexenta,8G,54382,23,49141,8,25955,5,58696,27,60815,5,270.8,1,16,19793,76,+,+++,32637,99,22958,99,+,+++,10490,99
The performance difference between those two seems to be too small. I 
checked zpool iostat -v during bonnie++'s intelligent-writing phase, and it looks, 
every time, more or less like this:

   capacity operationsbandwidth
pool used  avail   read  write   read  write
--  -  -  -  -  -  -
iTank         7.20G  2.60T     12     13  1.52M  1.58M
  raidz1      7.20G  2.60T     12     13  1.52M  1.58M
    c8d0          -      -      1      1   172K   203K
    c7d1          -      -      1      1   170K   203K
    c6t0d0        -      -      1      1   172K   203K
    c8d1          -      -      1      1   173K   203K
    c9d0          -      -      1      1   174K   203K
    c10d0         -      -      1      1   174K   203K
    c6t1d0        -      -      1      1   175K   203K
    c5t0d0s0      -      -      1      1   176K   203K
    c5t1d0s0      -      -      1      1   176K   203K

As far as I understand it, each vdev executes only 1 I/O at a time. 
However, on a single device zpool iostat -v gives me the following:


   capacity operationsbandwidth
pool used  avail   read  write   read  write
--  -  -  -  -  -  -
rpool        5.47G   181G      3      3   441K   434K
  c7d0s0     5.47G   181G      3      3   441K   434K
--  -  -  -  -  -  -

In this case the device performs 3 I/Os at a time, which gives it a much higher 
bandwidth per unit.

Is there any way to increase the I/O count for my iTank zpool?
I'm running OS-11.2008 on an MSI P45 Diamond with 4GB of memory.

Best Regards, Dmitry
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] replicating a set of zfs snapshots

2008-12-20 Thread Elaine Ashton
I currently have a host that serves as a nominal backup host by receiving 
nightly differential snapshots of datasets/filesystems from a fileserver.

Say that I want to move those snapshots to another system /as they are/ and 
continue to to do nightly snapshots from the fileserver only to a new host as 
if nothing has changed...How do I approach this as 'zfs send/recv' from the 
manual doesn't quite work the way I expect, e.g.

zfs send tank/snaps...@today | ssh newbox zfs recv tank/snaps...@today

barfs, complaining that the fs already exists on the receiving host. I just want 
to copy all the snapshots from one host to another without altering them at all.

I'm certain this must be do-able and that I'm doing it wrong somehow.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Uwe Dippel
Now I modified the slice s0 so that it doesn't overlap with s2 (the whole 
disk) any longer:

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       3 - 10432       159.80GB    (10430/0/0) 335115900
  1 unassigned    wm       0                 0         (0/0/0)             0
  2     backup    wu       0 - 10441       159.98GB    (10442/0/0) 335501460

but it still won't do; at least not without '-f'. I fear that something is 
wrong in my approach, since if I don't create a root s0, it won't work, and if I 
do create a root s0, it isn't happy either and still says "/dev/dsk/c2d0s0 
overlaps with /dev/dsk/c2d0s2".
Actually, how could s0, or sN, not (partially) overlap with the whole disk? 
What is it that I don't understand here?

Uwe
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replicating a set of zfs snapshots

2008-12-20 Thread Ian Collins
Elaine Ashton wrote:
 I currently have a host that serves as a nominal backup host by receiving 
 nightly differential snapshots of datasets/filesystems from a fileserver.

 Say that I want to move those snapshots to another system /as they are/ and 
 continue to do nightly snapshots from the fileserver, only to the new host, as 
 if nothing has changed. How do I approach this? 'zfs send/recv' from the 
 manual doesn't quite work the way I expect, e.g.

 zfs send tank/snaps...@today | ssh newbox zfs recv tank/snaps...@today

 barfs complaining that the fs already exists on the receiving host. 
You are sending a full copy of tank/snapshot.

 I just want to copy all the snapshots on one host to another without altering 
 them at all.

   
If you just want the snapshots, send them to files.
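
For example (the file name is only an illustration):

  # zfs send tank/snapshot@today > /backup/tank_snapshot_today.zfs

and later, to restore into a dataset that does not yet exist:

  # zfs recv tank/snapshot < /backup/tank_snapshot_today.zfs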

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SMART data

2008-12-20 Thread Mam Ruoc
 Carsten wrote:
 I will ask my boss about this (since he is the one
 mentioned in the
 copyright line of smartctl ;)), please stay tuned.

How is this going? I'm very interested too... 

Mam
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replicating a set of zfs snapshots

2008-12-20 Thread Elaine Ashton
 You are sending a full copy of tank/snapshot.

Well, yes, but it wasn't the behaviour I was quite expecting. 

I suspect that a 'zfs copy' or somesuch would be a nice utility when wanting to 
shove a parent and all of its snapshots to another system.

 If you just want the snapshots, send them to files.

So, I could just use rsync and it would do the right thing?

I'm rather new to zfs, so I'm sorry if this seems like a dumb idea or question. 
The documentation circled around this idea but never really addressed the problem 
of someone wanting to merely move snapshots from one system to another, and I 
wasn't sure whether rsync would do the right thing or not.
 
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] replicating a set of zfs snapshots

2008-12-20 Thread Ian Collins
Elaine Ashton wrote:
 You are sending a full copy of tank/snapshot.
 

 Well, yes, but it wasn't the behaviour I was quite expecting. 

 I suspect that a 'zfs copy' or somesuch would be a nice utility when wanting 
 to shove a parent and all of its snapshots to another system.

   
If that's what you want, do an incremental send (-I).
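
For example (snapshot names are placeholders; the receiving side needs the
first snapshot already, e.g. from a one-time full send):

  # zfs send tank/snapshot@first | ssh newbox zfs recv tank/snapshot
  # zfs send -I @first tank/snapshot@today | ssh newbox zfs recv tank/snapshot

The -I form sends all the intermediate snapshots between @first and @today
in one stream.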

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Tim
On Sat, Dec 20, 2008 at 7:02 PM, Uwe Dippel udip...@gmail.com wrote:

 Now I modified the slice s0 so that it doesn't overlap with s2 (the whole
 disk) any longer:

 Part      Tag    Flag     Cylinders         Size            Blocks
   0       root    wm       3 - 10432       159.80GB    (10430/0/0) 335115900
   1 unassigned    wm       0                 0         (0/0/0)             0
   2     backup    wu       0 - 10441       159.98GB    (10442/0/0) 335501460

 but it still won't do; at least not without '-f'. I fear that something is
 wrong in my approach, since if I don't create a root s0, it won't work, and if
 I do create a root s0, it isn't happy either and still says "/dev/dsk/c2d0s0
 overlaps with /dev/dsk/c2d0s2".
 Actually, how could s0, or sN, not (partially) overlap with the whole disk?
 What is it that I don't understand here?

 Uwe



You're making this far more difficult than it needs to be.  Assuming you've
already installed on the first disk, just do the following:
prtvtoc /dev/dsk/c1t2d0s2 | fmthard -s - /dev/rdsk/c1t3d0s2

Where the cXtXdXs2 relate to your disk IDs.  You only do it for s2.  After
that you should have no issues.  In your case I believe it would be:
prtvtoc /dev/dsk/c1d0s2 | fmthard -s - /dev/rdsk/c2d0s2

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Uwe Dippel
prtvtoc /dev/dsk/c1d0s2 | fmthard -s - /dev/rdsk/c2d0s2

Tim,

I understand what you are trying to do here, and had thought of something similar 
myself. But - please see my first post - it is not just a mirror that I want; 
the disk is of a different size, and so is the bf-partition. If I simply take 
the table and force it onto the new drive, I am afraid it might damage the 
other (DOS) partitions; and even if not, it will at most create a slice of 
the same size, which is not what I want. I need the whole, new, larger 
bf-partition.

Please, correct me if I'm wrong!

Uwe
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Richard Elling
Uwe Dippel wrote:
 Now I modified the slice s0 so that it doesn't overlap with s2 (the whole 
 disk) any longer:

 Part      Tag    Flag     Cylinders         Size            Blocks
   0       root    wm       3 - 10432       159.80GB    (10430/0/0) 335115900
   1 unassigned    wm       0                 0         (0/0/0)             0
   2     backup    wu       0 - 10441       159.98GB    (10442/0/0) 335501460

 but it still won't do; at least not without '-f'. I fear that something is 
 wrong in my approach, since if I don't create a root s0, it won't work, and if 
 I do create a root s0, it isn't happy either and still says "/dev/dsk/c2d0s0 
 overlaps with /dev/dsk/c2d0s2".
 Actually, how could s0, or sN, not (partially) overlap with the whole disk? 
 What is it that I don't understand here?
   

This is bug 6397079 which was closed as a dup of 6419310
http://bugs.opensolaris.org/view_bug.do?bug_id=6387079
http://bugs.opensolaris.org/view_bug.do?bug_id=6419310

workaround: after you have verified this is what you want to
do, use -f
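
That is, once the slice layout is what you intend, something along the lines of

  # zpool attach -f rpool /dev/dsk/c1d0s0 /dev/dsk/c2d0s0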
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] damaged dataset + zdb coredumps

2008-12-20 Thread Emmanuel
I posted the article below in October and I have been waiting for 2008.11, 
hoping that the update would magically sort out my problem (basically, after a 
power cut, my pool imports but one of the datasets doesn't; the other datasets, 
as well as their contents, are visible and seem fully functional).

I went through a few commands (output attached) showing the import and zdb -d 
output at increasing levels of verbosity. At -d, zdb core dumps. zdb 
follows 7 levels of indirection and breaks at L2. A zdb -R on that block 
segfaults zdb.

Any advice, or anything you guys see worth trying, is welcome.

The system is a virtualbox 2.0.6 guest (2GB allocated, 4 physical drives 
passed-through) on an up-to-date Ubuntu Hardy.
-- 
This message posted from opensolaris.org

~$ pfexec zpool import

  pool: tank
id: 10939520087096106673
 state: ONLINE
status: The pool is formatted using an older on-disk version.
action: The pool can be imported using its name or numeric identifier, though
some features will not be available without an explicit 'zpool upgrade'.
config:

tankONLINE
  raidz1ONLINE
c5t0d0  ONLINE
c5t1d0  ONLINE
  raidz1ONLINE
c5t2d0  ONLINE
c5t3d0  ONLINE

~$ pfexec zpool import tank
cannot mount 'tank/mail': I/O error

~$ pfexec zdb tank
version=10
name='tank'
state=0
txg=7698185
pool_guid=10939520087096106673
hostid=724374
hostname='moscow'
vdev_tree
type='root'
id=0
guid=10939520087096106673
children[0]
type='raidz'
id=0
guid=17648667281479346738
nparity=1
metaslab_array=13
metaslab_shift=32
ashift=9
asize=640114229248
is_log=0
children[0]
type='disk'
id=0
guid=5902022595400705343
path='/dev/dsk/c5t0d0s0'

devid='id1,s...@sata_vbox_harddiskvbd53bb1af-9f7400db/a'
phys_path='/p...@0,0/pci8086,2...@d/d...@0,0:a'
whole_disk=1
DTL=84
children[1]
type='disk'
id=1
guid=8827036041867308956
path='/dev/dsk/c5t1d0s0'

devid='id1,s...@sata_vbox_harddiskvba5ee1c45-b2bbcaa3/a'
phys_path='/p...@0,0/pci8086,2...@d/d...@1,0:a'
whole_disk=1
DTL=83
children[1]
type='raidz'
id=1
guid=1724435683388879308
nparity=1
metaslab_array=218
metaslab_shift=33
ashift=9
asize=1500286287872
is_log=0
children[0]
type='disk'
id=0
guid=15007089885865328028
path='/dev/dsk/c5t2d0s0'

devid='id1,s...@sata_vbox_harddiskvbcea797d3-e6ef5750/a'
phys_path='/p...@0,0/pci8086,2...@d/d...@2,0:a'
whole_disk=1
DTL=221
children[1]
type='disk'
id=1
guid=9332007382569190498
path='/dev/dsk/c5t3d0s0'

devid='id1,s...@sata_vbox_harddiskvb7b6c68bc-7658138b/a'
phys_path='/p...@0,0/pci8086,2...@d/d...@3,0:a'
whole_disk=1
DTL=220
Uberblock

magic = 00bab10c
version = 10
txg = 7698185
guid_sum = 14040546736538210696
timestamp = 1229725106 UTC = Sat Dec 20 09:18:26 2008

Dataset mos [META], ID 0, cr_txg 4, 21.5M, 228 objects
Dataset tank/mail [ZPL], ID 38, cr_txg 35, 4.05G, 60849 objects
Dataset tank/media [ZPL], ID 26, cr_txg 31, 164G, 21230 objects
^C

:~$ pfexec zdb -ddd tank/mail tank
Dataset tank/mail [ZPL], ID 38, cr_txg 35, 4.05G, 60849 objects

ZIL header: claim_txg 7669623, seq 0


Object  lvl   iblk   dblk  lsize  asize  type
 0716K16K  30.7M  17.5M  DMU dnode


~$ pfexec zdb -d tank/mail
Dataset tank/mail [ZPL], ID 38, cr_txg 35, 4.05G, 60849 objects, rootbp [L0 DMU 
objset] 400L/200P DVA[0]=1:400227c00:400 DVA[1]=0:607400:400 fletcher4 
lzjb LE contiguous birth=7669623 fill=60849 
cksum=ec63b7b86:5ce7635a8d1:12b737a1b1974:293277eb03ab04

ZIL header: claim_txg 7669623, seq 0

first block: [L0 ZIL intent log] 1000L/1000P DVA[0]=1:40732e000:2000 
zilog uncompressed LE contiguous birth=7669622 fill=0 
cksum=5b576a84665b3619:406081a28d9ebd5c:26:ca

Block seqno