[zfs-discuss] Announcing The First Ever ZFS Conference

2012-09-24 Thread Deirdre Straughan
http://zfsday.com/

Brought to you by the same people who organized dtrace.conf in April 2012,
this two-day event will be your opportunity to catch up on the illumos
family (including, of course, SmartOS) and ZFS, and to hear what’s new and
what’s coming in this exciting and powerful family of technologies. You'll
see many familiar names on the speaker list, as well as some new ones:
http://zfsday.com/speaker-list/
Dates

   - *illumos Day*: Monday, October 1st, 2012 - agenda and reg:
     http://zfsday.com/about-illumos-day/
   - *2nd Annual Solaris Family Reunion*: Monday, Oct 1st, evening – sign
     up here: http://2ndsolaris.eventbrite.com/
   - *ZFS Day*: Tuesday, October 2nd, 2012 - agenda and reg:
     http://zfsday.com/zfsday/
   - *illumos / ZFS Hackathon*: This was a big success at last year’s Open
     Storage Summit, and this year will be held Wednesday, Oct 3rd, at
     Joyent’s offices. http://zfsday.com/hackathon/


Cost

Your kind attention. (In other words: FREE!)

Venue

Children’s Creativity Museum (Theater)

221 Fourth St., corner of Howard (behind the carousel)
San Francisco, CA 94103
(415) 820-3320

If you can’t be in San Francisco in person, both days will be *live video
streamed*; sign up to be informed of the stream details.
-- 


best regards,
Deirdré Straughan
Community Architect, SmartOS
illumos Community Manager


cell 720 371 4107
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Selective zfs list

2012-09-24 Thread Matthew Ahrens
On Fri, Sep 21, 2012 at 4:00 AM, Bogdan Ćulibrk  wrote:

> Greetings,
>
> I'm trying to achieve selective output of "zfs list" command for specific
> user to show only delegated sets. Anyone knows how to achieve this?
> I've checked "zfs allow" already but it only helps in restricting the user
> to create, destroy, etc something. There is no permission subcommand for
> listing or displaying sets.
>
> I'm on oi_151a3 bits.
>
>
You may be able to use zones for this use case.  Each zone only sees the
filesystems that are delegated to it (and their ancestors).
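
As a rough sketch (the zone and dataset names here are made up, not from
your setup), the delegation is just a "dataset" resource in the zone
configuration:

# zonecfg -z tenant1
zonecfg:tenant1> add dataset
zonecfg:tenant1:dataset> set name=tank/delegated/tenant1
zonecfg:tenant1:dataset> end
zonecfg:tenant1> commit
zonecfg:tenant1> exit

Running "zfs list" inside that zone should then show only
tank/delegated/tenant1 (plus its ancestors and descendants), not the rest
of the pool.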

--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS stats output - used, compressed, deduped, etc.

2012-09-24 Thread Richard Elling
On Sep 24, 2012, at 10:08 AM, Jason Usher  wrote:

> Oh, and one other thing ...
> 
> 
> --- On Fri, 9/21/12, Jason Usher  wrote:
> 
>>> It shows the allocated number of bytes used by the
>>> filesystem, i.e.
>>> after compression. To get the uncompressed size,
>> multiply
>>> "used" by
>>> "compressratio" (so for example if used=65G and
>>> compressratio=2.00x,
>>> then your decompressed size is 2.00 x 65G = 130G).
>> 
>> 
>> Ok, thank you.  The problem with this is, the
>> compressratio only goes to two significant digits, which
>> means if I do the math, I'm only getting an
>> approximation.  Since we may use these numbers to
>> compute billing, it is important to get it right.
>> 
>> Is there any way at all to get the real *exact* number ?
> 
> 
> I'm hoping the answer is yes - I've been looking but do not see it ...

none can hide from dtrace!
# dtrace -qn 'dsl_dataset_stats:entry {
    this->ds = (dsl_dataset_t *)arg0;
    printf("%s\tcompressed size = %d\tuncompressed size=%d\n",
        this->ds->ds_dir->dd_myname,
        this->ds->ds_phys->ds_compressed_bytes,
        this->ds->ds_phys->ds_uncompressed_bytes)}'
openindiana-1   compressed size = 3667988992    uncompressed size=3759321088

[zfs get all rpool/openindiana-1 in another shell]

For reporting, the number is rounded to 2 decimal places.
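
To recover the exact ratio from those raw counts rather than the rounded
property, just divide uncompressed bytes by compressed bytes -- for
example, with the numbers from the sample output above:

# echo 'scale=6; 3759321088 / 3667988992' | bc
1.024899

which "zfs get compressratio" would report as 1.02x.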

>> Ok.  So the dedupratio I see for the entire pool is
>> "dedupe ratio for filesystems in this pool that have dedupe
>> enabled" ... yes ?
>> 
>> 
 Also, why do I not see any dedupe stats for the
>>> individual filesystem ?  I see compressratio, and I
>> see
>>> dedup=on, but I don't see any dedupratio for the
>> filesystem
>>> itself...
>> 
>> 
>> Ok, getting back to precise accounting ... if I turn on
>> dedupe for a particular filesystem, and then I multiply the
>> "used" property by the compressratio property, and calculate
>> the real usage, do I need to do another calculation to
>> account for the deduplication ?  Or does the "used"
>> property not take into account deduping ?
> 
> 
> So if the answer to this is "yes, the used property is not only a compressed 
> figure, but a deduped figure" then I think we have a bigger problem ...
> 
> You described dedupe as operating not only within the filesystem with 
> dedup=on, but between all filesystems with dedupe enabled.
> 
> Doesn't that mean that if I enabled dedupe on more than one filesystem, I can 
> never know how much total, raw space each of those is using ?  Because if the 
> dedupe ratio is calculated across all of them, it's not the actual ratio for 
> any one of them ... so even if I do the math, I can't decide what the total 
> raw usage for one of them is ... right ?

Correct. This is by design so that blocks shared amongst different datasets can
be deduped -- the common case for things like virtual machine images.
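
You can see that asymmetry directly in the properties -- dedupratio only
exists at the pool level, while compressratio is per dataset (the pool and
dataset names below are just placeholders):

# zpool get dedupratio tank
# zfs get compressratio,used tank/vm-images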

> 
> Again, if "used" does not reflect dedupe, and I don't need to do any math to 
> get the "raw" storage figure, then it doesn't matter...
> 
> 
> 
 Did turning on dedupe for a single filesystem turn
>> it
>>> on for the entire pool ?
>>> 
>>> In a sense, yes. The dedup machinery is pool-wide, but
>> only
>>> writes from
>>> filesystems which have dedup enabled enter it. The
>> rest
>>> simply pass it
>>> by and work as usual.
>> 
>> 
>> Ok - but from a performance point of view, I am only using
>> ram/cpu resources for the deduping of just the individual
>> filesystems I enabled dedupe on, right ?  I hope that
>> turning on dedupe for just one filesystem did not incur
>> ram/cpu costs across the entire pool...
> 
> 
> I also wonder about this performance question...

It depends.
 -- richard

--
illumos Day & ZFS Day, Oct 1-2, 2012, San Francisco
www.zfsday.com
richard.ell...@richardelling.com
+1-760-896-4422
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace X with Y: devices have different sector alignment

2012-09-24 Thread Timothy Coalson
I'm not sure how to definitively check the physical sector size on
solaris/illumos, but on linux, hdparm -I (capital i) or smartctl -i will do
it.  OpenIndiana's smartctl doesn't output this information yet (and it
doesn't work on SATA disks unless they are attached via a SAS chip).  The
issue is complicated by there being both a logical and a physical sector
size, and as far as I am aware the logical size on current disks is always
512, which may be what was reported by the commands you ran.  Some quick
googling suggests that there was previously no existing utility on solaris
that reported the physical sector size, so someone wrote their own:

http://solaris.kuehnke.de/archives/18-Checking-physical-sector-size-of-disks-on-Solaris.html

So, if you want to make sure of the physical sector size, you could give
that program a whirl (it compiled fine for me on oi_151a6, and runs, but it
is not easy for me to attach a 4k sector disk to one of my OI machines, so
I haven't confirmed its correctness), or temporarily transplant the spare
in question to a linux machine (or live system) and use hdparm -I.
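
On the linux side the check would look something like this (the device
name is only a placeholder -- substitute whatever the transplanted spare
shows up as):

# hdparm -I /dev/sdX | grep -i 'sector size'
# smartctl -i /dev/sdX | grep -i sector

hdparm prints the logical and physical sector sizes on separate lines, and
reasonably recent smartctl versions print a "Sector Sizes:" line with both.
On the illumos side you can at least confirm what ashift the existing vdev
was created with, which is what the replace is effectively comparing
against:

# zdb -C rspool | grep ashift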

Tim

On Mon, Sep 24, 2012 at 2:37 PM, LIC mesh  wrote:

> Any ideas?
>
>
> On Mon, Sep 24, 2012 at 10:46 AM, LIC mesh  wrote:
>
>> That's what I thought also, but since both prtvtoc and fdisk -G see the
>> two disks as the same (and I have not overridden sector size), I am
>> confused.
>> *
>> *
>> *iostat -xnE:*
>> c16t5000C5002AA08E4Dd0 Soft Errors: 0 Hard Errors: 323 Transport Errors:
>> 489
>> Vendor: ATA  Product: ST32000542AS Revision: CC34 Serial No:
>> %FAKESERIAL%
>> Size: 2000.40GB <2000398934016 bytes>
>> Media Error: 207 Device Not Ready: 0 No Device: 116 Recoverable: 0
>> Illegal Request: 0 Predictive Failure Analysis: 0
>> c16t5000C5005295F727d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
>> Vendor: ATA  Product: ST2000VX000-9YW1 Revision: CV13 Serial No:
>> %FAKESERIAL%
>> Size: 2000.40GB <2000398934016 bytes>
>> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
>> Illegal Request: 0 Predictive Failure Analysis: 0
>>
>> *zpool status:*
>>   pool: rspool
>>  state: ONLINE
>>   scan: resilvered 719G in 65h28m with 0 errors on Fri Aug 24 04:21:44
>> 2012
>> config:
>>
>> NAME                        STATE     READ WRITE CKSUM
>> rspool                      ONLINE       0     0     0
>>   raidz1-0                  ONLINE       0     0     0
>>     c16t5000C5002AA08E4Dd0  ONLINE       0     0     0
>>     c16t5000C5002ABE78F5d0  ONLINE       0     0     0
>>     c16t5000C5002AC49840d0  ONLINE       0     0     0
>>     c16t50014EE057B72DD3d0  ONLINE       0     0     0
>>     c16t50014EE057B69208d0  ONLINE       0     0     0
>> cache
>>   c4t2d0                    ONLINE       0     0     0
>> spares
>>   c16t5000C5005295F727d0    AVAIL
>>
>> errors: No known data errors
>>
>> *root@nas:~# zpool replace rspool c16t5000C5002AA08E4Dd0
>> c16t5000C5005295F727d0*
>> cannot replace c16t5000C5002AA08E4Dd0 with c16t5000C5005295F727d0:
>> devices have different sector alignment
>>
>>
>>
>> On Mon, Sep 24, 2012 at 9:23 AM, Gregg Wonderly wrote:
>>
>>> What is the error message you are seeing on the "replace"?  This sounds
>>> like a slice size/placement problem, but clearly, prtvtoc seems to think
>>> that everything is the same.  Are you certain that you did prtvtoc on the
>>> correct drive, and not one of the active disks by mistake?
>>>
>>> Gregg Wonderly
>>>
>>> As does fdisk -G:
>>> root@nas:~# fdisk -G /dev/rdsk/c16t5000C5002AA08E4Dd0
>>> * Physical geometry for device /dev/rdsk/c16t5000C5002AA08E4Dd0
>>> * PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
>>>   60800 60800     0     0   255   252   512
>>> You have new mail in /var/mail/root
>>> root@nas:~# fdisk -G /dev/rdsk/c16t5000C5005295F727d0
>>> * Physical geometry for device /dev/rdsk/c16t5000C5005295F727d0
>>> * PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
>>>   60800 60800     0     0   255   252   512
>>>
>>>
>>>
>>> On Mon, Sep 24, 2012 at 9:01 AM, LIC mesh  wrote:
>>>
 Yet another weird thing - prtvtoc shows both drives as having the same
 sector size,  etc:
 root@nas:~# prtvtoc /dev/rdsk/c16t5000C5002AA08E4Dd0
 * /dev/rdsk/c16t5000C5002AA08E4Dd0 partition map
 *
 * Dimensions:
 * 512 bytes/sector
 * 3907029168 sectors
 * 3907029101 accessible sectors
 *
 * Flags:
 *   1: unmountable
 *  10: read-only
 *
 * Unallocated space:
 *       First     Sector    Last
 *       Sector     Count    Sector
 *           34       222       255
 *
 *                          First     Sector    Last
 * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
        0      4    00        256 3907012495 3907012750
        8     11    00 3907012751      16384 3907029134
 root@nas:~# prtvtoc 

Re: [zfs-discuss] cannot replace X with Y: devices have different sector alignment

2012-09-24 Thread LIC mesh
Any ideas?

On Mon, Sep 24, 2012 at 10:46 AM, LIC mesh  wrote:

> That's what I thought also, but since both prtvtoc and fdisk -G see the
> two disks as the same (and I have not overridden sector size), I am
> confused.
> *
> *
> *iostat -xnE:*
> c16t5000C5002AA08E4Dd0 Soft Errors: 0 Hard Errors: 323 Transport Errors:
> 489
> Vendor: ATA  Product: ST32000542AS Revision: CC34 Serial No:
> %FAKESERIAL%
> Size: 2000.40GB <2000398934016 bytes>
> Media Error: 207 Device Not Ready: 0 No Device: 116 Recoverable: 0
> Illegal Request: 0 Predictive Failure Analysis: 0
> c16t5000C5005295F727d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
> Vendor: ATA  Product: ST2000VX000-9YW1 Revision: CV13 Serial No:
> %FAKESERIAL%
> Size: 2000.40GB <2000398934016 bytes>
> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
> Illegal Request: 0 Predictive Failure Analysis: 0
>
> *zpool status:*
>   pool: rspool
>  state: ONLINE
>   scan: resilvered 719G in 65h28m with 0 errors on Fri Aug 24 04:21:44 2012
> config:
>
> NAME                        STATE     READ WRITE CKSUM
> rspool                      ONLINE       0     0     0
>   raidz1-0                  ONLINE       0     0     0
>     c16t5000C5002AA08E4Dd0  ONLINE       0     0     0
>     c16t5000C5002ABE78F5d0  ONLINE       0     0     0
>     c16t5000C5002AC49840d0  ONLINE       0     0     0
>     c16t50014EE057B72DD3d0  ONLINE       0     0     0
>     c16t50014EE057B69208d0  ONLINE       0     0     0
> cache
>   c4t2d0                    ONLINE       0     0     0
> spares
>   c16t5000C5005295F727d0    AVAIL
>
> errors: No known data errors
>
> *root@nas:~# zpool replace rspool c16t5000C5002AA08E4Dd0
> c16t5000C5005295F727d0*
> cannot replace c16t5000C5002AA08E4Dd0 with c16t5000C5005295F727d0: devices
> have different sector alignment
>
>
>
> On Mon, Sep 24, 2012 at 9:23 AM, Gregg Wonderly wrote:
>
>> What is the error message you are seeing on the "replace"?  This sounds
>> like a slice size/placement problem, but clearly, prtvtoc seems to think
>> that everything is the same.  Are you certain that you did prtvtoc on the
>> correct drive, and not one of the active disks by mistake?
>>
>> Gregg Wonderly
>>
>> As does fdisk -G:
>> root@nas:~# fdisk -G /dev/rdsk/c16t5000C5002AA08E4Dd0
>> * Physical geometry for device /dev/rdsk/c16t5000C5002AA08E4Dd0
>> * PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
>>   60800 60800     0     0   255   252   512
>> You have new mail in /var/mail/root
>> root@nas:~# fdisk -G /dev/rdsk/c16t5000C5005295F727d0
>> * Physical geometry for device /dev/rdsk/c16t5000C5005295F727d0
>> * PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
>>   60800 60800     0     0   255   252   512
>>
>>
>>
>> On Mon, Sep 24, 2012 at 9:01 AM, LIC mesh  wrote:
>>
>>> Yet another weird thing - prtvtoc shows both drives as having the same
>>> sector size,  etc:
>>> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5002AA08E4Dd0
>>> * /dev/rdsk/c16t5000C5002AA08E4Dd0 partition map
>>> *
>>> * Dimensions:
>>> * 512 bytes/sector
>>> * 3907029168 sectors
>>> * 3907029101 accessible sectors
>>> *
>>> * Flags:
>>> *   1: unmountable
>>> *  10: read-only
>>> *
>>> * Unallocated space:
>>> *       First     Sector    Last
>>> *       Sector     Count    Sector
>>> *           34       222       255
>>> *
>>> *                          First     Sector    Last
>>> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>>>        0      4    00        256 3907012495 3907012750
>>>        8     11    00 3907012751      16384 3907029134
>>> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5005295F727d0
>>> * /dev/rdsk/c16t5000C5005295F727d0 partition map
>>> *
>>> * Dimensions:
>>> * 512 bytes/sector
>>> * 3907029168 sectors
>>> * 3907029101 accessible sectors
>>> *
>>> * Flags:
>>> *   1: unmountable
>>> *  10: read-only
>>> *
>>> * Unallocated space:
>>> *       First     Sector    Last
>>> *       Sector     Count    Sector
>>> *           34       222       255
>>> *
>>> *                          First     Sector    Last
>>> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>>>        0      4    00        256 3907012495 3907012750
>>>        8     11    00 3907012751      16384 3907029134
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Sep 24, 2012 at 12:20 AM, Timothy Coalson wrote:
>>>
 I think you can fool a recent Illumos kernel into thinking a 4k disk is
 512 (incurring a performance hit for that disk, and therefore the vdev and
 pool, but to save a raidz1, it might be worth it):

 http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ,
 see "Overriding the Physical Sector Size"

 I don't know what you might have to do to coax it to do the replace
 with a hot spare (zpool replace? export/import?).  Perhaps there should be
 a feature in ZFS that noti

Re: [zfs-discuss] ZFS stats output - used, compressed, deduped, etc.

2012-09-24 Thread Jason Usher

Oh, and one other thing ...


--- On Fri, 9/21/12, Jason Usher  wrote:

> > It shows the allocated number of bytes used by the
> > filesystem, i.e.
> > after compression. To get the uncompressed size,
> multiply
> > "used" by
> > "compressratio" (so for example if used=65G and
> > compressratio=2.00x,
> > then your decompressed size is 2.00 x 65G = 130G).
> 
> 
> Ok, thank you.  The problem with this is, the
> compressratio only goes to two significant digits, which
> means if I do the math, I'm only getting an
> approximation.  Since we may use these numbers to
> compute billing, it is important to get it right.
> 
> Is there any way at all to get the real *exact* number ?


I'm hoping the answer is yes - I've been looking but do not see it ...



> Ok.  So the dedupratio I see for the entire pool is
> "dedupe ratio for filesystems in this pool that have dedupe
> enabled" ... yes ?
> 
> 
> > > Also, why do I not see any dedupe stats for the
> > individual filesystem ?  I see compressratio, and I
> see
> > dedup=on, but I don't see any dedupratio for the
> filesystem
> > itself...
> 
> 
> Ok, getting back to precise accounting ... if I turn on
> dedupe for a particular filesystem, and then I multiply the
> "used" property by the compressratio property, and calculate
> the real usage, do I need to do another calculation to
> account for the deduplication ?  Or does the "used"
> property not take into account deduping ?


So if the answer to this is "yes, the used property is not only a compressed 
figure, but a deduped figure" then I think we have a bigger problem ...

You described dedupe as operating not only within the filesystem with dedup=on, 
but between all filesystems with dedupe enabled.

Doesn't that mean that if I enabled dedupe on more than one filesystem, I can 
never know how much total, raw space each of those is using ?  Because if the 
dedupe ratio is calculated across all of them, it's not the actual ratio for 
any one of them ... so even if I do the math, I can't decide what the total raw 
usage for one of them is ... right ?

Again, if "used" does not reflect dedupe, and I don't need to do any math to 
get the "raw" storage figure, then it doesn't matter...



> > > Did turning on dedupe for a single filesystem turn
> it
> > on for the entire pool ?
> > 
> > In a sense, yes. The dedup machinery is pool-wide, but
> only
> > writes from
> > filesystems which have dedup enabled enter it. The
> rest
> > simply pass it
> > by and work as usual.
> 
> 
> Ok - but from a performance point of view, I am only using
> ram/cpu resources for the deduping of just the individual
> filesystems I enabled dedupe on, right ?  I hope that
> turning on dedupe for just one filesystem did not incur
> ram/cpu costs across the entire pool...


I also wonder about this performance question...


Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace X with Y: devices have different sector alignment

2012-09-24 Thread LIC mesh
That's what I thought also, but since both prtvtoc and fdisk -G see the two
disks as the same (and I have not overridden sector size), I am confused.
*
*
*iostat -xnE:*
c16t5000C5002AA08E4Dd0 Soft Errors: 0 Hard Errors: 323 Transport Errors:
489
Vendor: ATA  Product: ST32000542AS Revision: CC34 Serial No:
%FAKESERIAL%
Size: 2000.40GB <2000398934016 bytes>
Media Error: 207 Device Not Ready: 0 No Device: 116 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c16t5000C5005295F727d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST2000VX000-9YW1 Revision: CV13 Serial No:
%FAKESERIAL%
Size: 2000.40GB <2000398934016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

*zpool status:*
  pool: rspool
 state: ONLINE
  scan: resilvered 719G in 65h28m with 0 errors on Fri Aug 24 04:21:44 2012
config:

NAME                        STATE     READ WRITE CKSUM
rspool                      ONLINE       0     0     0
  raidz1-0                  ONLINE       0     0     0
    c16t5000C5002AA08E4Dd0  ONLINE       0     0     0
    c16t5000C5002ABE78F5d0  ONLINE       0     0     0
    c16t5000C5002AC49840d0  ONLINE       0     0     0
    c16t50014EE057B72DD3d0  ONLINE       0     0     0
    c16t50014EE057B69208d0  ONLINE       0     0     0
cache
  c4t2d0                    ONLINE       0     0     0
spares
  c16t5000C5005295F727d0    AVAIL

errors: No known data errors

*root@nas:~# zpool replace rspool c16t5000C5002AA08E4Dd0
c16t5000C5005295F727d0*
cannot replace c16t5000C5002AA08E4Dd0 with c16t5000C5005295F727d0: devices
have different sector alignment



On Mon, Sep 24, 2012 at 9:23 AM, Gregg Wonderly  wrote:

> What is the error message you are seeing on the "replace"?  This sounds
> like a slice size/placement problem, but clearly, prtvtoc seems to think
> that everything is the same.  Are you certain that you did prtvtoc on the
> correct drive, and not one of the active disks by mistake?
>
> Gregg Wonderly
>
> As does fdisk -G:
> root@nas:~# fdisk -G /dev/rdsk/c16t5000C5002AA08E4Dd0
> * Physical geometry for device /dev/rdsk/c16t5000C5002AA08E4Dd0
> * PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
>   60800 60800     0     0   255   252   512
> You have new mail in /var/mail/root
> root@nas:~# fdisk -G /dev/rdsk/c16t5000C5005295F727d0
> * Physical geometry for device /dev/rdsk/c16t5000C5005295F727d0
> * PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
>   60800 60800     0     0   255   252   512
>
>
>
> On Mon, Sep 24, 2012 at 9:01 AM, LIC mesh  wrote:
>
>> Yet another weird thing - prtvtoc shows both drives as having the same
>> sector size,  etc:
>> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5002AA08E4Dd0
>> * /dev/rdsk/c16t5000C5002AA08E4Dd0 partition map
>> *
>> * Dimensions:
>> * 512 bytes/sector
>> * 3907029168 sectors
>> * 3907029101 accessible sectors
>> *
>> * Flags:
>> *   1: unmountable
>> *  10: read-only
>> *
>> * Unallocated space:
>> *       First     Sector    Last
>> *       Sector     Count    Sector
>> *           34       222       255
>> *
>> *                          First     Sector    Last
>> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>>        0      4    00        256 3907012495 3907012750
>>        8     11    00 3907012751      16384 3907029134
>> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5005295F727d0
>> * /dev/rdsk/c16t5000C5005295F727d0 partition map
>> *
>> * Dimensions:
>> * 512 bytes/sector
>> * 3907029168 sectors
>> * 3907029101 accessible sectors
>> *
>> * Flags:
>> *   1: unmountable
>> *  10: read-only
>> *
>> * Unallocated space:
>> *       First     Sector    Last
>> *       Sector     Count    Sector
>> *           34       222       255
>> *
>> *                          First     Sector    Last
>> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>>        0      4    00        256 3907012495 3907012750
>>        8     11    00 3907012751      16384 3907029134
>>
>>
>>
>>
>>
>> On Mon, Sep 24, 2012 at 12:20 AM, Timothy Coalson  wrote:
>>
>>> I think you can fool a recent Illumos kernel into thinking a 4k disk is
>>> 512 (incurring a performance hit for that disk, and therefore the vdev and
>>> pool, but to save a raidz1, it might be worth it):
>>>
>>> http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ,
>>> see "Overriding the Physical Sector Size"
>>>
>>> I don't know what you might have to do to coax it to do the replace with
>>> a hot spare (zpool replace? export/import?).  Perhaps there should be a
>>> feature in ZFS that notifies when a pool is created or imported with a hot
>>> spare that can't be automatically used in one or more vdevs?  The whole
>>> point of hot spares is to have them automatically swap in when you aren't
>>> there to fiddle with things,

Re: [zfs-discuss] cannot replace X with Y: devices have different sector alignment

2012-09-24 Thread Gregg Wonderly
What is the error message you are seeing on the "replace"?  This sounds like a 
slice size/placement problem, but clearly, prtvtoc seems to think that 
everything is the same.  Are you certain that you did prtvtoc on the correct 
drive, and not one of the active disks by mistake?

Gregg Wonderly

> As does fdisk -G:
> root@nas:~# fdisk -G /dev/rdsk/c16t5000C5002AA08E4Dd0
> * Physical geometry for device /dev/rdsk/c16t5000C5002AA08E4Dd0
> * PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
>   60800 60800     0     0   255   252   512
> You have new mail in /var/mail/root
> root@nas:~# fdisk -G /dev/rdsk/c16t5000C5005295F727d0
> * Physical geometry for device /dev/rdsk/c16t5000C5005295F727d0
> * PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
>   60800 60800     0     0   255   252   512
> 
> 
> 
> On Mon, Sep 24, 2012 at 9:01 AM, LIC mesh  wrote:
> Yet another weird thing - prtvtoc shows both drives as having the same sector 
> size,  etc:
> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5002AA08E4Dd0
> * /dev/rdsk/c16t5000C5002AA08E4Dd0 partition map
> *
> * Dimensions:
> * 512 bytes/sector
> * 3907029168 sectors
> * 3907029101 accessible sectors
> *
> * Flags:
> *   1: unmountable
> *  10: read-only
> *
> * Unallocated space:
> *       First     Sector    Last
> *       Sector     Count    Sector
> *           34       222       255
> *
> *                          First     Sector    Last
> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>        0      4    00        256 3907012495 3907012750
>        8     11    00 3907012751      16384 3907029134
> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5005295F727d0
> * /dev/rdsk/c16t5000C5005295F727d0 partition map
> *
> * Dimensions:
> * 512 bytes/sector
> * 3907029168 sectors
> * 3907029101 accessible sectors
> *
> * Flags:
> *   1: unmountable
> *  10: read-only
> *
> * Unallocated space:
> *       First     Sector    Last
> *       Sector     Count    Sector
> *           34       222       255
> *
> *                          First     Sector    Last
> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>        0      4    00        256 3907012495 3907012750
>        8     11    00 3907012751      16384 3907029134
> 
> 
> 
> 
> 
> On Mon, Sep 24, 2012 at 12:20 AM, Timothy Coalson  wrote:
> I think you can fool a recent Illumos kernel into thinking a 4k disk is 512 
> (incurring a performance hit for that disk, and therefore the vdev and pool, 
> but to save a raidz1, it might be worth it):
> 
> http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks , see 
> "Overriding the Physical Sector Size"
> 
> I don't know what you might have to do to coax it to do the replace with a 
> hot spare (zpool replace? export/import?).  Perhaps there should be a feature 
> in ZFS that notifies when a pool is created or imported with a hot spare that 
> can't be automatically used in one or more vdevs?  The whole point of hot 
> spares is to have them automatically swap in when you aren't there to fiddle 
> with things, which is a bad time to find out it won't work.
> 
> Tim
> 
> On Sun, Sep 23, 2012 at 10:52 PM, LIC mesh  wrote:
> Well this is a new one
> 
> Illumos/Openindiana let me add a device as a hot spare that evidently has a 
> different sector alignment than all of the other drives in the array.
> 
> So now I'm at the point that I /need/ a hot spare and it doesn't look like I 
> have it.
> 
> And, worse, the other spares I have are all the same model as said hot spare.
> 
> Is there anything I can do with this or am I just going to be up the creek 
> when any one of the other drives in the raidz1 fails?
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace X with Y: devices have different sector alignment

2012-09-24 Thread LIC mesh
As does fdisk -G:
root@nas:~# fdisk -G /dev/rdsk/c16t5000C5002AA08E4Dd0
* Physical geometry for device /dev/rdsk/c16t5000C5002AA08E4Dd0
* PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
  60800 60800     0     0   255   252   512
You have new mail in /var/mail/root
root@nas:~# fdisk -G /dev/rdsk/c16t5000C5005295F727d0
* Physical geometry for device /dev/rdsk/c16t5000C5005295F727d0
* PCYL NCYL ACYL BCYL NHEAD NSECT SECSIZ
  60800 60800     0     0   255   252   512



On Mon, Sep 24, 2012 at 9:01 AM, LIC mesh  wrote:

> Yet another weird thing - prtvtoc shows both drives as having the same
> sector size,  etc:
> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5002AA08E4Dd0
> * /dev/rdsk/c16t5000C5002AA08E4Dd0 partition map
> *
> * Dimensions:
> * 512 bytes/sector
> * 3907029168 sectors
> * 3907029101 accessible sectors
> *
> * Flags:
> *   1: unmountable
> *  10: read-only
> *
> * Unallocated space:
> *       First     Sector    Last
> *       Sector     Count    Sector
> *           34       222       255
> *
> *                          First     Sector    Last
> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>        0      4    00        256 3907012495 3907012750
>        8     11    00 3907012751      16384 3907029134
> root@nas:~# prtvtoc /dev/rdsk/c16t5000C5005295F727d0
> * /dev/rdsk/c16t5000C5005295F727d0 partition map
> *
> * Dimensions:
> * 512 bytes/sector
> * 3907029168 sectors
> * 3907029101 accessible sectors
> *
> * Flags:
> *   1: unmountable
> *  10: read-only
> *
> * Unallocated space:
> *       First     Sector    Last
> *       Sector     Count    Sector
> *           34       222       255
> *
> *                          First     Sector    Last
> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>        0      4    00        256 3907012495 3907012750
>        8     11    00 3907012751      16384 3907029134
>
>
>
>
>
> On Mon, Sep 24, 2012 at 12:20 AM, Timothy Coalson  wrote:
>
>> I think you can fool a recent Illumos kernel into thinking a 4k disk is
>> 512 (incurring a performance hit for that disk, and therefore the vdev and
>> pool, but to save a raidz1, it might be worth it):
>>
>> http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ,
>> see "Overriding the Physical Sector Size"
>>
>> I don't know what you might have to do to coax it to do the replace with
>> a hot spare (zpool replace? export/import?).  Perhaps there should be a
>> feature in ZFS that notifies when a pool is created or imported with a hot
>> spare that can't be automatically used in one or more vdevs?  The whole
>> point of hot spares is to have them automatically swap in when you aren't
>> there to fiddle with things, which is a bad time to find out it won't work.
>>
>> Tim
>>
>> On Sun, Sep 23, 2012 at 10:52 PM, LIC mesh  wrote:
>>
>>> Well this is a new one
>>>
>>> Illumos/Openindiana let me add a device as a hot spare that evidently
>>> has a different sector alignment than all of the other drives in the array.
>>>
>>> So now I'm at the point that I /need/ a hot spare and it doesn't look
>>> like I have it.
>>>
>>> And, worse, the other spares I have are all the same model as said hot
>>> spare.
>>>
>>> Is there anything I can do with this or am I just going to be up the
>>> creek when any one of the other drives in the raidz1 fails?
>>>
>>>
>>> ___
>>> zfs-discuss mailing list
>>> zfs-discuss@opensolaris.org
>>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>>
>>>
>>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace X with Y: devices have different sector alignment

2012-09-24 Thread LIC mesh
Yet another weird thing - prtvtoc shows both drives as having the same
sector size,  etc:
root@nas:~# prtvtoc /dev/rdsk/c16t5000C5002AA08E4Dd0
* /dev/rdsk/c16t5000C5002AA08E4Dd0 partition map
*
* Dimensions:
* 512 bytes/sector
* 3907029168 sectors
* 3907029101 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*           34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        256 3907012495 3907012750
       8     11    00 3907012751      16384 3907029134
root@nas:~# prtvtoc /dev/rdsk/c16t5000C5005295F727d0
* /dev/rdsk/c16t5000C5005295F727d0 partition map
*
* Dimensions:
* 512 bytes/sector
* 3907029168 sectors
* 3907029101 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*           34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        256 3907012495 3907012750
       8     11    00 3907012751      16384 3907029134





On Mon, Sep 24, 2012 at 12:20 AM, Timothy Coalson  wrote:

> I think you can fool a recent Illumos kernel into thinking a 4k disk is
> 512 (incurring a performance hit for that disk, and therefore the vdev and
> pool, but to save a raidz1, it might be worth it):
>
> http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks ,
> see "Overriding the Physical Sector Size"
>
> I don't know what you might have to do to coax it to do the replace with a
> hot spare (zpool replace? export/import?).  Perhaps there should be a
> feature in ZFS that notifies when a pool is created or imported with a hot
> spare that can't be automatically used in one or more vdevs?  The whole
> point of hot spares is to have them automatically swap in when you aren't
> there to fiddle with things, which is a bad time to find out it won't work.
>
> Tim
>
> On Sun, Sep 23, 2012 at 10:52 PM, LIC mesh  wrote:
>
>> Well this is a new one
>>
>> Illumos/Openindiana let me add a device as a hot spare that evidently has
>> a different sector alignment than all of the other drives in the array.
>>
>> So now I'm at the point that I /need/ a hot spare and it doesn't look
>> like I have it.
>>
>> And, worse, the other spares I have are all the same model as said hot
>> spare.
>>
>> Is there anything I can do with this or am I just going to be up the
>> creek when any one of the other drives in the raidz1 fails?
>>
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Selective zfs list

2012-09-24 Thread Jim Klimov

2012-09-24 14:38, Bogdan Ćulibrk wrote:

Regarding the RFE, I would do that gladly, but quite frankly, since the
shutdown of opensolaris.org I'm a little bit lost as to where to submit it.


Much of the open-sourced ZFS development happens under the illumos
project umbrella (illumos is the kernel used by the OI release you said
you're working with), so you can post an RFE at their bugtracker:
  https://www.illumos.org/issues

This would probably go into the illumos-gate sub-project
as a "Feature".

HTH,
//Jim Klimov

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Selective zfs list

2012-09-24 Thread Bogdan Ćulibrk

Hello Richard, thanks for the reply.


On 9/21/12 8:09 PM, Richard Elling wrote:


There are several ways, but no builtin way, today. Can you provide a use
case for how you want this to work? We might want to create an RFE here :-)
 -- richard


Could you give some pointers on how to do it? If it's not built-in that 
works for me at the moment.
Regarding the RFE, I would do that gladly, but quite frankly, since the
shutdown of opensolaris.org I'm a little bit lost as to where to submit it.




=bc


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss