Re: [zfs-discuss] zfs list improvements?

2009-01-10 Thread Ross Smith
Hmm... that's a tough one. To me, it's a trade-off either way: using a -r parameter to specify the depth for zfs list feels more intuitive than adding extra commands to modify the -r behaviour, but I can see your point. But then, using -c or -d means there's an optional parameter for zfs list tha

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2009-01-10 Thread Dmitry Razguliaev
At the time of writing that post, no, I didn't run zpool iostat -v 1. However, I ran it after that. The operations numbers reported by iostat changed from 1 for every device in the raidz to somewhere between 20 and 400 for the raidz volume, and from 3 to somewhere between 200 and 450 for a singl
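For reference, the per-device numbers discussed above come from a command along these lines (a sketch; the pool name "tank" is illustrative):

```shell
# Print I/O statistics every second; -v breaks the numbers down
# per vdev and per individual device rather than whole-pool totals.
zpool iostat -v tank 1
```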

Re: [zfs-discuss] zfs list improvements?

2009-01-10 Thread Chris Gerhard
My current solution is a -d option that takes a colon-separated pair of arguments, min:max, giving the minimum and maximum depth, so zfs list -d 1:1 tank behaves as zfs list -c is described and only lists the direct children of tank. zfs list -d 1: tank will list all the descendants of tank. zfs list -
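The proposed min:max semantics can be sketched as a filter over dataset names (hypothetical: no shipped zfs list takes a colon-separated -d argument, this just illustrates the depth arithmetic):

```python
def filter_depth(datasets, root, dmin, dmax=None):
    """Keep datasets whose depth below `root` lies in [dmin, dmax].

    Depth 0 is `root` itself, depth 1 its direct children, and so on.
    dmax=None means unbounded, mirroring the proposed 'zfs list -d 1:' form.
    """
    prefix = root + "/"
    out = []
    for name in datasets:
        if name == root:
            depth = 0
        elif name.startswith(prefix):
            # Depth = number of path components below the root.
            depth = name[len(prefix):].count("/") + 1
        else:
            continue  # not under root at all
        if depth >= dmin and (dmax is None or depth <= dmax):
            out.append(name)
    return out

names = ["tank", "tank/home", "tank/home/alice", "tank/home/alice/docs"]
print(filter_depth(names, "tank", 1, 1))  # direct children only
print(filter_depth(names, "tank", 1))     # all descendants
```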

Re: [zfs-discuss] Intel SS4200-E?

2009-01-10 Thread Guido Glaus
Tried with my own 8GB DOM first (with different OSes) but could only get some Linux working (boot driver), so I went for a USB memory stick based solution. I just added a serial console cable with a DB9 connector on one side, which fits nicely in the back of the box, and set up the BIOS to do console

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2009-01-10 Thread Bob Friesenhahn
On Sat, 10 Jan 2009, Dmitry Razguliaev wrote: > At the time of writing that post, no, I didn't run zpool iostat -v > 1. However, I run it after that. Results for operations of iostat > command has changed from 1 for every device in raidz to something in > between 20 and 400 for raidz volume and

Re: [zfs-discuss] zpool add dumping core

2009-01-10 Thread Brad Plecs
Problem solved... after the resilvers completed, the status reported that the filesystem needed an upgrade. I did a zpool upgrade -a, and after that completed and there was no resilvering going on, the zpool add ran successfully. I would like to suggest, however, that the behavior be fixed --

Re: [zfs-discuss] zpool add dumping core

2009-01-10 Thread Richard Elling
Brad Plecs wrote: > Problem solved... after the resilvers completed, the status reported that the > filesystem needed an upgrade. > > I did a zpool upgrade -a, and after that completed and there was no > resilvering going on, the zpool add ran successfully. > > I would like to suggest, however,

Re: [zfs-discuss] zpool add dumping core

2009-01-10 Thread Brad Plecs
> Are you sure this isn't a case of CR 6433264 which > was fixed > long ago, but arrived in patch 118833-36 to Solaris > 10? It certainly looks similar, but this system already had 118833-36 when the error occurred, so if this bug is truly fixed, it must be something else. Then again, I wasn't

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2009-01-10 Thread JZ
Thank you very much for your time, for you made me stronger [if misleading, please excuse my French...] ;-) z - Original Message - From: "Bob Friesenhahn" To: "Dmitry Razguliaev" Cc: Sent: Saturday, January 10, 2009 10:28 AM Subject: Re: [zfs-discuss] ZFS poor performance on Areca

Re: [zfs-discuss] can't import zpool after upgrade to solaris 10u6

2009-01-10 Thread Steve Goldthorpe
There's definitely something strange going on, as these are the only uberblocks I can find by scanning /dev/dsk/c0t0d0s7 - nothing to conflict with my theory so far:
TXG: 106052  TIME: 2009-01-04:11:06:12  BLK: 0e29000 (14848000)  VER: 10  GUID_SUM: 9f8d9ef301489223 (11497020190282519075)
TXG: 10605
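Scanning a raw slice for uberblocks, as above, amounts to searching for the uberblock magic number (0x00bab10c). A minimal, hypothetical sketch over an in-memory image (real labels sit at fixed offsets on disk; this just pattern-matches at 1 KiB alignment, the spacing of entries in a label's uberblock array):

```python
import struct

UB_MAGIC = 0x00bab10c  # ZFS uberblock magic number

def find_uberblocks(image: bytes):
    """Return byte offsets where an uberblock magic appears.

    Checks both little- and big-endian encodings, since the on-disk
    byte order depends on the host that wrote the label.
    """
    le = struct.pack("<Q", UB_MAGIC)
    be = struct.pack(">Q", UB_MAGIC)
    hits = []
    for off in range(0, len(image) - 8 + 1, 1024):
        if image[off:off + 8] in (le, be):
            hits.append(off)
    return hits

# Synthetic image with one magic planted at offset 2048:
img = bytearray(8192)
img[2048:2056] = struct.pack("<Q", UB_MAGIC)
print(find_uberblocks(bytes(img)))  # [2048]
```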

Re: [zfs-discuss] can't import zpool after upgrade to solaris 10u6

2009-01-10 Thread JZ
Hi Gold, 9987988 sounds factual to me... IMHO, z - Original Message - From: "Steve Goldthorpe" To: Sent: Saturday, January 10, 2009 6:59 PM Subject: Re: [zfs-discuss] can't import zpool after upgrade to solaris 10u6 > There's definately something strange going on as these are the onl

Re: [zfs-discuss] I/O error when import

2009-01-10 Thread Matthew Zhang
Here are the zdb -l results:
bash-3.2# zdb -l /dev/dsk/c1d1
LABEL 0
failed to unpack label 0
LABEL 1
failed to unpack label

[zfs-discuss] how can I get

2009-01-10 Thread Marvin Wang, Min
I need to identify where the data block of a file is located in ZFS. How can I get that info for a file or directory (not a file system dataset) located in a ZFS file system? PS: is zdb able to show that stuff? thanks, -- This message posted from opensolaris.org

Re: [zfs-discuss] how can I get

2009-01-10 Thread Marvin Wang, Min
Sorry, my question might be misleading; I mean the starting data block of a file.

Re: [zfs-discuss] how can I get

2009-01-10 Thread Fajar A. Nugraha
On Sun, Jan 11, 2009 at 11:40 AM, Marvin Wang, Min wrote: > sorry, my question might be misleading, I mean the starting datablock of a > file "Starting" is somewhat misleading, since ZFS will allocate a new block whenever a block is updated, so the physical blocks for a file are not necessarily a
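To actually see where a file's blocks currently live, zdb can dump the object's block pointers (a sketch; the pool/dataset name and file path are illustrative):

```shell
# Find the file's object number (its inode number works for this):
ls -i /tank/home/file.txt

# Dump that object's block pointers at full verbosity; the DVAs printed
# (vdev:offset:size triples) are the current on-disk locations of the
# file's data blocks.
zdb -ddddd tank/home <object-number>
```

Note that, per the point above about copy-on-write, these locations are only a snapshot: any later write to the file relocates the affected blocks.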

Re: [zfs-discuss] how can I get

2009-01-10 Thread Marvin Wang, Min
Thanks, yes, that is what I want to see.