Re: [zfs-discuss] (no subject)

2009-01-16 Thread JZ
BTW,


THANK YOU
LOVE



:-)

Best,
z



- Original Message - 
From: "JZ" 
To: "ZFS Discussions" 
Sent: Friday, January 16, 2009 10:59 PM
Subject: Re: [zfs-discuss] (no subject)


Thank you very much!  For love.
Respectfully


Best,
z

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (no subject)

2009-01-16 Thread JZ
Thank you very much!  For love.
Respectfully


Best,
z 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Comparison between the S-TEC Zeus and the Intel X25-E ??

2009-01-16 Thread Adam Leventhal
The Intel part does about a fourth as many synchronous write IOPS at  
best.

Adam

On Jan 16, 2009, at 5:34 PM, Erik Trimble wrote:

> I'm looking at the newly-orderable (via Sun) STEC Zeus SSDs, and  
> they're
> outrageously priced.
>
> http://www.stec-inc.com/product/zeusssd.php
>
> I just looked at the Intel X25-E series, and they look comparable in
> performance.  At about 20% of the cost.
>
> http://www.intel.com/design/flash/nand/extreme/index.htm
>
>
> Can anyone enlighten me as to any possible difference between an STEC
> Zeus and an Intel X25-E ?  I mean, other than those associated with  
> the
> fact that you can't get the Intel one orderable through Sun right now.
>
> -- 
> Erik Trimble
> Java System Support
> Mailstop:  usca22-123
> Phone:  x17195
> Santa Clara, CA
> Timezone: US/Pacific (GMT-0800)
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Adam Leventhal, Fishworks            http://blogs.sun.com/ahl

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Comparison between the S-TEC Zeus and the Intel X25-E ??

2009-01-16 Thread Erik Trimble
I'm looking at the newly-orderable (via Sun) STEC Zeus SSDs, and they're 
outrageously priced.

http://www.stec-inc.com/product/zeusssd.php

I just looked at the Intel X25-E series, and they look comparable in 
performance.  At about 20% of the cost.

http://www.intel.com/design/flash/nand/extreme/index.htm


Can anyone enlighten me as to any possible difference between an STEC 
Zeus and an Intel X25-E ?  I mean, other than those associated with the 
fact that you can't get the Intel one orderable through Sun right now.

-- 
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem setting quotas on a zfs pool

2009-01-16 Thread Richard Elling
Carson Gaspar wrote:
> Gregory Edwards - Software Support wrote:
> 
>> [r...@osprey /] # zfs set quota=800g target/u05
>>
>> cannot set property for 'target/u05': size is less than current used or 
>> reserved space
> ...
>> target/u05  1.06T   206G
>>
>> target/u...@1 671G  -
> ...
>> He was able to set quotas on all pools except target/u05. He is also 
>> asking if it is safe for him to delete the
>> snapshot (target/u...@1, 671G) without causing any problems?
> 
> Well, he'll lose the snapshot, but otherwise it should be fine. Unless 
> there's data he needs to recover from the snap, I don't see any issues.

They may want to use "refquota" instead of "quota".
However, I do not believe refquota is available in
Solaris 10 5/08, so they would need to upgrade.

Description of refquota from zfs(1m):
  refquota=size | none

  Limits the amount of space a dataset can  consume.  This
  property  enforces  a  hard limit on the amount of space
  used. This hard limit does not  include  space  used  by
  descendents, including file systems and snapshots.
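
For example, once on a release that supports it, something along these
lines should work (a sketch only, reusing the dataset name from this thread):

  # zfs set refquota=800g target/u05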

  -- richard-who-has-been-RTFMing-all-day
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread JZ
Hi Rich,

This is the best summary I have seen.  [as folks in China say, the older the 
ginger, the spicier; true]



Just one thing I would like to add -

It also depends on the encryption technique and algorithm.  Today we are 
doing private-key encryption, where without the key you cannot read the data. 
Some used a public "public key" approach, where you can read the data without 
the key, but it is just misleading.  The private-key approach saves a lot of 
blocks in data writing, but carries the risk of not being able to duplicate 
the key, or of duplicating too many copies of it.  The public "public key" 
approach takes much, much more storage space to store the real data, but is 
less risky, in some views.



Again, how to do data storage is an art.

Sun folks can guide with a good taste, but they are not limiting anyone's 
free will to do IT.



Best,

z, going chinatown for dinner soon





- Original Message - 
From: "Richard Elling" 
To: "Tim" 
Cc: 
Sent: Friday, January 16, 2009 5:28 PM
Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...


> Tim wrote:
>> On Thu, Jan 15, 2009 at 10:20 PM, Jonny Gerold wrote:
>>
>> Meh this is retarded. It looks like zpool list shows an incorrect
>> calculation? Can anyone agree that this looks like a bug?
>>
>> r...@fsk-backup:~# df -h | grep ambry
>> ambry 2.7T   27K  2.7T   1% /ambry
>>
>> r...@fsk-backup:~# zpool list
>> NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
>> ambry  3.62T   132K  3.62T 0%  ONLINE  -
>>
>> r...@fsk-backup:~# zfs list
>> NAME   USED  AVAIL  REFER  MOUNTPOINT
>> ambry 92.0K  2.67T  26.9K  /ambry
>>
>>
>>  From what I understand:
>>
>> zpool list shows total capacity of all the drives in the pool.  df shows
>> usable capacity after parity.
>
> More specifically, from zpool(1m)
>
>  These space usage properties report  actual  physical  space
>  available  to  the  storage  pool. The physical space can be
>  different from the total amount of space that any  contained
>  datasets  can  actually  use.  The amount of space used in a
>  raidz configuration depends on the  characteristics  of  the
>  data being written. In addition, ZFS reserves some space for
>  internal accounting that  the  zfs(1M)  command  takes  into
>  account,  but the zpool command does not. For non-full pools
>  of a reasonable size, these effects should be invisible. For
>  small  pools,  or  pools  that are close to being completely
>  full, these discrepancies may become more noticeable.
>
> Similarly, from zfs(1m)
>  The amount of space available to the dataset and all its
>  children,  assuming  that  there is no other activity in
>  the pool. Because space is shared within a pool, availa-
>  bility  can be limited by any number of factors, includ-
>  ing physical pool size, quotas, reservations,  or  other
>  datasets within the pool.
>
> IMHO, this is a little bit wordy, in an already long man page.
> If you come up with a better way to say the same thing in
> fewer words, then please file a bug against the man page.
>  -- richard
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem setting quotas on a zfs pool

2009-01-16 Thread Carson Gaspar
Gregory Edwards - Software Support wrote:

> [r...@osprey /] # zfs set quota=800g target/u05
> 
> cannot set property for 'target/u05': size is less than current used or 
> reserved space
...
> target/u05  1.06T   206G
> 
> target/u...@1 671G  -
...
> He was able to set quotas on all pools except target/u05. He is also 
> asking if it is safe for him to delete the
> snapshot (target/u...@1, 671G) without causing any problems?

Well, he'll lose the snapshot, but otherwise it should be fine. Unless 
there's data he needs to recover from the snap, I don't see any issues.

-- 
Carson

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Problem setting quotas on a zfs pool

2009-01-16 Thread Gregory Edwards - Software Support

Solaris 10 5/08

Customer migrated to a new EMC array with a snapshot and did a send and 
receive.


He is now trying to set quotas on the zfs file system and getting the 
following error.




[r...@osprey /] # zfs set quota=800g target/u05

cannot set property for 'target/u05': size is less than current used or 
reserved space




[r...@osprey /] # zfs list -o name,used,available

NAME                 USED  AVAIL
target              1.32T   206G
target/u02          72.2G   148G
target/u...@1       12.0G      -
target/u03          61.1G   159G
target/u...@1       12.1G      -
target/u04           126G  93.6G
target/u...@1       14.5G      -
target/u05          1.06T   206G
target/u...@1        671G      -
target/zoneroot     3.70G  4.30G
target/zoner...@1   12.9M      -
zfspool              553G  1018G
zfspool/u02         60.0G   160G
zfspool/u...@1          0      -
zfspool/u03         48.8G   171G
zfspool/u...@1          0      -
zfspool/u04          112G   108G
zfspool/u...@1          0      -
zfspool/u05          328G   472G
zfspool/u...@1          0      -
zfspool/zoneroot    3.69G  4.31G
zfspool/zoner...@1    55K      -

He was able to set quotas on all pools except target/u05. He is also 
asking if it is safe for him to delete the snapshot (target/u...@1, 671G)
without causing any problems?


--
  Gregory D Edwards
Technical Support Engineer - Operating Systems
Sun Microsystems
Phone: 1 800 872 4786 ref case #
Email: greg.edwa...@sun.com
Working Hours: 0900-1800 Mon-Fri MST


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread JZ
Hi Wes,
I now have a real question.

How do you define "silly", and "artificial intelligence", and "script"?

And "halfway inclined to believe" to me means 25%.
(believe is 100%, inclined is 50%, and halfway is 25% in crystal math, and 
maybe even less in storage math, including the RAID and HA and DR and BC...)
Is my understanding correct?

But my confusion is only toward the Wes statement, all other posts by Sun 
folks made clear sense to me.
So, I am going out for dinner, hope dear Wes can help me out here.


Ciao,
z


- Original Message - 
From: "Wes Morgan" 
To: "Bob Friesenhahn" 
Cc: 
Sent: Friday, January 16, 2009 5:48 PM
Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...


> On Fri, 16 Jan 2009, Bob Friesenhahn wrote:
>
>> On Fri, 16 Jan 2009, Matt Harrison wrote:
>>>
>>> Is this guy seriously for real? It's getting hard to stay on the list
>>> with all this going on. No list etiquette, complete irrelevant
>>> ramblings, need I go on?
>>
>> The ZFS discussion list has produced its first candidate for the
>> rubber room that I mentioned here previously.  A reduction in crystal
>> meth intake could have a profound effect though.
>
> I'm halfway inclined to believe he/it is a silly "artificial intelligence"
> script.
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Wes Morgan
On Fri, 16 Jan 2009, Bob Friesenhahn wrote:

> On Fri, 16 Jan 2009, Matt Harrison wrote:
>>
>> Is this guy seriously for real? It's getting hard to stay on the list
>> with all this going on. No list etiquette, complete irrelevant
>> ramblings, need I go on?
>
> The ZFS discussion list has produced its first candidate for the
> rubber room that I mentioned here previously.  A reduction in crystal
> meth intake could have a profound effect though.

I'm halfway inclined to believe he/it is a silly "artificial intelligence" 
script.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Richard Elling
Tim wrote:
> On Thu, Jan 15, 2009 at 10:20 PM, Jonny Gerold wrote:
> 
> Meh this is retarded. It looks like zpool list shows an incorrect
> calculation? Can anyone agree that this looks like a bug?
> 
> r...@fsk-backup:~# df -h | grep ambry
> ambry 2.7T   27K  2.7T   1% /ambry
> 
> r...@fsk-backup:~# zpool list
> NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
> ambry  3.62T   132K  3.62T 0%  ONLINE  -
> 
> r...@fsk-backup:~# zfs list
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> ambry 92.0K  2.67T  26.9K  /ambry
> 
> 
>  From what I understand:
> 
> zpool list shows total capacity of all the drives in the pool.  df shows 
> usable capacity after parity.

More specifically, from zpool(1m)

  These space usage properties report  actual  physical  space
  available  to  the  storage  pool. The physical space can be
  different from the total amount of space that any  contained
  datasets  can  actually  use.  The amount of space used in a
  raidz configuration depends on the  characteristics  of  the
  data being written. In addition, ZFS reserves some space for
  internal accounting that  the  zfs(1M)  command  takes  into
  account,  but the zpool command does not. For non-full pools
  of a reasonable size, these effects should be invisible. For
  small  pools,  or  pools  that are close to being completely
  full, these discrepancies may become more noticeable.

Similarly, from zfs(1m)
  The amount of space available to the dataset and all its
  children,  assuming  that  there is no other activity in
  the pool. Because space is shared within a pool, availa-
  bility  can be limited by any number of factors, includ-
  ing physical pool size, quotas, reservations,  or  other
  datasets within the pool.

IMHO, this is a little bit wordy, in an already long man page.
If you come up with a better way to say the same thing in
fewer words, then please file a bug against the man page.
  -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] s10u6 ludelete issues with zones on zfs root

2009-01-16 Thread Peter Pickford
This is what I discovered:

You can't have subdirectories of the zone root file system that are part of
the BE filesystem tree with ZFS and LU (no separate /var, etc.).
Zone roots must be on the root pool for LU to work.
Extra file systems must come from a non-BE ZFS file system tree (I use
datasets).

[r...@buildsun4u ~]# zfs list -r rpool/zones
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
rpool/zones                                     162M  14.8G    21K  /zones
rpool/zones/zone1-restore_080915               73.2M  14.8G  73.2M  /zones/zone1
rpool/zones/zone1-restore_080...@patch_090115      0      -  73.2M  -
rpool/zones/zone1-restore_080915-patch_090115  7.76M  14.8G  76.1M  /.alt.patch_090115/zones/zone1
rpool/zones/zone2-restore_080915               73.4M  14.8G  73.4M  /zones/zone2
rpool/zones/zone2-restore_080...@patch_090115      0      -  73.4M  -
rpool/zones/zone2-restore_080915-patch_090115  7.75M  14.8G  76.3M  /.alt.patch_090115/zones/zone2

You can have datasets (and probably mounts) that are not part of the BE:

[r...@buildsun4u ~]# zfs list -r rpool/zonesextra
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
rpool/zonesextra                     284K  14.8G    18K  legacy
rpool/zonesextra/zone1               132K  14.8G    18K  legacy
rpool/zonesextra/zone1/app            18K  8.00G    18K  /opt/app
rpool/zonesextra/zone1/core           18K  8.00G    18K  /var/core
rpool/zonesextra/zone1/export       78.5K  14.8G    20K  /export
rpool/zonesextra/zone1/export/home  58.5K  8.00G  58.5K  /export/home
rpool/zonesextra/zone2               133K  14.8G    18K  legacy
rpool/zonesextra/zone2/app            18K  8.00G    18K  /opt/app
rpool/zonesextra/zone2/core           18K  8.00G    18K  /var/core
rpool/zonesextra/zone2/export         79K  14.8G    20K  /export
rpool/zonesextra/zone2/export/home    59K  8.00G    59K  /export/home
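
A minimal sketch of how such a non-BE dataset tree can be created (the
dataset names and mountpoints below are only illustrative):

  # zfs create -o mountpoint=legacy rpool/zonesextra
  # zfs create -o mountpoint=legacy rpool/zonesextra/zone1
  # zfs create -o mountpoint=/opt/app rpool/zonesextra/zone1/app
  # zfs create -o mountpoint=/export rpool/zonesextra/zone1/export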

2009/1/16 

> cindy.swearingen>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_with_Zones
>
> Thanks, Cindy, that was in fact the page I had been originally referencing
> when I set up my datasets, and it was very helpful.  I found it by reading
> a
> comp.unix.solaris post in which someone else was talking about not being
> able
> to ludelete an old BE.  Unfortunately, it wasn't quite the same issue as
> you
> cover in "Recover from BE Removal Failure (ludelete)," and that fix had
> already been applied to my system.
>
> cindy.swearingen> The entire Solaris 10 10/08 UFS to ZFS with zones
> migration
> cindy.swearingen> is described here:
> cindy.swearingen>
> http://docs.sun.com/app/docs/doc/819-5461/zfsboot-1?a=view
>
> Thanks, I find most of the ZFS stuff to be fairly straightforward.  And I'm
> never doing any migration from UFS (which is what many of the zones and zfs
> docs seem to be aimed at).  It's mixing ZFS, Zones, and liveupgrade that's
> been... challenging.  :}
>
> But now I know that there's definitely a bug involved, and I'll wait for
> the
> patch.  Thanks to you and Mark for your help.
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Verbose Information from "zfs send -v "

2009-01-16 Thread Brandon High
On Fri, Jan 16, 2009 at 2:47 AM, Nick Smith  wrote:
>> meh
>>
> meh

You should ignore "JZ", he seems to just be trolling the list.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] s10u6 ludelete issues with zones on zfs root

2009-01-16 Thread amy.rich
cindy.swearingen> 
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_with_Zones

Thanks, Cindy, that was in fact the page I had been originally referencing
when I set up my datasets, and it was very helpful.  I found it by reading a
comp.unix.solaris post in which someone else was talking about not being able
to ludelete an old BE.  Unfortunately, it wasn't quite the same issue as you
cover in "Recover from BE Removal Failure (ludelete)," and that fix had
already been applied to my system.

cindy.swearingen> The entire Solaris 10 10/08 UFS to ZFS with zones migration
cindy.swearingen> is described here:
cindy.swearingen> http://docs.sun.com/app/docs/doc/819-5461/zfsboot-1?a=view

Thanks, I find most of the ZFS stuff to be fairly straightforward.  And I'm
never doing any migration from UFS (which is what many of the zones and zfs
docs seem to be aimed at).  It's mixing ZFS, Zones, and liveupgrade that's
been... challenging.  :}

But now I know that there's definitely a bug involved, and I'll wait for the
patch.  Thanks to you and Mark for your help.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] s10u6 ludelete issues with zones on zfs root

2009-01-16 Thread Cindy . Swearingen

Hi Amy,

You can review the ZFS/LU/zones issues here:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_with_Zones

The entire Solaris 10 10/08 UFS to ZFS with zones migration is described
here:

http://docs.sun.com/app/docs/doc/819-5461/zfsboot-1?a=view

Let us know if you can't find something...

Cindy

amy.r...@tufts.edu wrote:
> mmusante> This is a known problem with ZFS and live upgrade.  I believe the
> mmusante> docs for s10u6 discourage the config you show here.  A patch should
> mmusante> be ready some time next month with a fix for this.
> 
> Do you happen to have a bugid handy?
> 
> I had done various searches to try and determine what the best way to set up
> zones without any UFS was, and the closest I came was to setting them up under
> the pool/ROOT/ area.  I tried giving them their own dataset in pool (not
> under ROOT/), but that was also unsuccessful.
> 
> I guess let me back up and ask... If one only has one mirrored disk set and
> one pool (the root pool), what's the recommended way to put zones on a zfs
> root?  Most of the documentation I've seen is specific to putting the zone
> roots on UFS and giving them access to ZFS datasets.
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] s10u6 ludelete issues with zones on zfs root

2009-01-16 Thread Mark J Musante
On Fri, 16 Jan 2009, amy.r...@tufts.edu wrote:

> mmusante> This is a known problem with ZFS and live upgrade.  I believe the
> mmusante> docs for s10u6 discourage the config you show here.  A patch should
> mmusante> be ready some time next month with a fix for this.
>
> Do you happen to have a bugid handy?

The closest one would be 6742586.  The description of the bug doesn't 
exactly match what you saw, but the cause was the same: the zone mounting 
code in LU was broken and had to be rewritten.

> I had done various searches to try and determine what the best way to 
> set up zones without any UFS was, and the closest I came was to setting 
> them up under the pool/ROOT/ area.  I tried giving them their own 
> dataset in pool (not under ROOT/), but that was also unsuccessful.

There are some docs on s10u6's restrictions of where zones can be put if 
you are using live upgrade.  I'll have to see if I can dig them up.

> I guess let me back up and ask... If one only has one mirrored disk set 
> and one pool (the root pool), what's the recommended way to put zones on 
> a zfs root?  Most of the documentation I've seen is specific to putting 
> the zone roots on UFS and giving them access to ZFS datasets.

If you put your zones in a subdirectory of a dataset, they should work. 
But I'd recommend waiting for the patch to be released.
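
For illustration only (the zone and dataset names here are hypothetical, not
from this thread), that layout might look something like:

  # zfs create -o mountpoint=/zones rpool/zones
  # zonecfg -z myzone
  zonecfg:myzone> create
  zonecfg:myzone> set zonepath=/zones/myzone
  zonecfg:myzone> commit
  zonecfg:myzone> exit
  # zoneadm -z myzone install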


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] s10u6 ludelete issues with zones on zfs root

2009-01-16 Thread amy.rich
mmusante> This is a known problem with ZFS and live upgrade.  I believe the
mmusante> docs for s10u6 discourage the config you show here.  A patch should
mmusante> be ready some time next month with a fix for this.

Do you happen to have a bugid handy?

I had done various searches to try and determine what the best way to set up
zones without any UFS was, and the closest I came was to setting them up under
the pool/ROOT/ area.  I tried giving them their own dataset in pool (not
under ROOT/), but that was also unsuccessful.

I guess let me back up and ask... If one only has one mirrored disk set and
one pool (the root pool), what's the recommended way to put zones on a zfs
root?  Most of the documentation I've seen is specific to putting the zone
roots on UFS and giving them access to ZFS datasets.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] s10u6 ludelete issues with zones on zfs root

2009-01-16 Thread Mark J Musante

Hi Amy,

This is a known problem with ZFS and live upgrade.  I believe the docs for 
s10u6 discourage the config you show here.  A patch should be ready some 
time next month with a fix for this.


On Fri, 16 Jan 2009, amy.r...@tufts.edu wrote:

> I've installed an s10u6 machine with no UFS partitions at all.  I've created a
> dataset for zones and one for a zone named "default."  I then do an lucreate
> and luactivate and a subsequent boot off the new BE.  All of that appears to
> go just fine (though I've found that I MUST call the zone dataset zoneds for
> some reason, or it will rename it to that for me).  When I try to delete the
> old BE, it fails with the following message:
>
> # ludelete s10-RC
> ERROR: cannot mount '/zoneds': directory is not empty
> ERROR: cannot mount mount point  device 
> 
> ERROR: failed to mount file system  on 
> 
> ERROR: unmounting partially mounted boot environment file systems
> ERROR: cannot mount boot environment by icf file 
> ERROR: Cannot mount BE .
> Unable to delete boot environment.
>
> It's obvious that luactivate is not correctly resetting the mount point of
> /zoneds and /zoneds/default (the zone named default) in the old BE so that
> it's under /.alt like the rest of the ROOT dataset:
>
> # zfs list |grep s10-RC
> rpool/ROOT/s10-RC  14.6M  57.3G  1.29G  /.alt.tmp.b-VK.mnt/
> rpool/ROOT/s10-RC/var  2.69M  57.3G  21.1M  
> /.alt.tmp.b-VK.mnt//var
> rpool/ROOT/s10-RC/zoneds   5.56M  57.3G19K  /zoneds
> rpool/ROOT/s10-RC/zoneds/default   5.55M  57.3G  29.9M  /zoneds/default
>
> Obviously I can reset the mount points by hand with "zfs set mountpoint," but
> this seems like something that luactivate and the subsequent boot should
> handle.  Is this a bug, or am I missing a step/have something misconfigured?
>
> Also, once I run ludelete on a BE, it seems like it should also clean up the
> old ZFS filesystems for the BE s10-RC (the old BE) instead of me having to do
> an explicit zfs destroy.
>
> The very weird thing is that, if I run lucreate again (new BE is named bar)
> and boot off of the new BE, it does the right thing with the old BE (foo):
>
> rpool/ROOT/bar 1.52G  57.2G  1.29G  /
> rpool/ROOT/b...@foo 89.1M  -  1.29G  -
> rpool/ROOT/b...@bar 84.1M  -  1.29G  -
> rpool/ROOT/bar/var 24.7M  57.2G  21.2M  /var
> rpool/ROOT/bar/v...@foo 2.64M  -  21.0M  -
> rpool/ROOT/bar/v...@bar  923K  -  21.2M  -
> rpool/ROOT/bar/zoneds  32.7M  57.2G20K  /zoneds
> rpool/ROOT/bar/zon...@foo18K  -19K  -
> rpool/ROOT/bar/zon...@bar19K  -20K  -
> rpool/ROOT/bar/zoneds/default  32.6M  57.2G  29.9M  /zoneds/default
> rpool/ROOT/bar/zoneds/defa...@foo  2.61M  -  27.0M  -
> rpool/ROOT/bar/zoneds/defa...@bar   162K  -  29.9M  -
> rpool/ROOT/foo 2.93M  57.2G  1.29G  /.alt.foo
> rpool/ROOT/foo/var  818K  57.2G  21.2M  /.alt.foo/var
> rpool/ROOT/foo/zoneds   270K  57.2G20K  /.alt.foo/zoneds
> rpool/ROOT/foo/zoneds/default   253K  57.2G  29.9M  
> /.alt.foo/zoneds/default
>
> And then DOES clean up the zfs filesystem when I run ludelete.  Does anyone
> know where there's a discrepancy?  The same lucreate command (-n  -p
> rpool) command was used both times.
>
>
>
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] s10u6 ludelete issues with zones on zfs root

2009-01-16 Thread amy.rich
I've installed an s10u6 machine with no UFS partitions at all.  I've created a
dataset for zones and one for a zone named "default."  I then do an lucreate
and luactivate and a subsequent boot off the new BE.  All of that appears to
go just fine (though I've found that I MUST call the zone dataset zoneds for
some reason, or it will rename it to that for me).  When I try to delete the
old BE, it fails with the following message:

# ludelete s10-RC
ERROR: cannot mount '/zoneds': directory is not empty
ERROR: cannot mount mount point  device 

ERROR: failed to mount file system  on 

ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file 
ERROR: Cannot mount BE .
Unable to delete boot environment.

It's obvious that luactivate is not correctly resetting the mount point of
/zoneds and /zoneds/default (the zone named default) in the old BE so that
it's under /.alt like the rest of the ROOT dataset:

# zfs list |grep s10-RC
rpool/ROOT/s10-RC                 14.6M  57.3G  1.29G  /.alt.tmp.b-VK.mnt/
rpool/ROOT/s10-RC/var             2.69M  57.3G  21.1M  /.alt.tmp.b-VK.mnt//var
rpool/ROOT/s10-RC/zoneds          5.56M  57.3G    19K  /zoneds
rpool/ROOT/s10-RC/zoneds/default  5.55M  57.3G  29.9M  /zoneds/default

Obviously I can reset the mount points by hand with "zfs set mountpoint," but
this seems like something that luactivate and the subsequent boot should
handle.  Is this a bug, or am I missing a step/have something misconfigured?
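
(For example, a sketch of that hand workaround, using the dataset names and
the alternate-root path from the listing above; the exact path will vary:)

  # zfs set mountpoint=/.alt.tmp.b-VK.mnt/zoneds rpool/ROOT/s10-RC/zoneds
  # zfs set mountpoint=/.alt.tmp.b-VK.mnt/zoneds/default rpool/ROOT/s10-RC/zoneds/default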

Also, once I run ludelete on a BE, it seems like it should also clean up the
old ZFS filesystems for the BE s10-RC (the old BE) instead of me having to do
an explicit zfs destroy.

The very weird thing is that, if I run lucreate again (new BE is named bar)
and boot off of the new BE, it does the right thing with the old BE (foo):

rpool/ROOT/bar                     1.52G  57.2G  1.29G  /
rpool/ROOT/b...@foo                89.1M      -  1.29G  -
rpool/ROOT/b...@bar                84.1M      -  1.29G  -
rpool/ROOT/bar/var                 24.7M  57.2G  21.2M  /var
rpool/ROOT/bar/v...@foo            2.64M      -  21.0M  -
rpool/ROOT/bar/v...@bar             923K      -  21.2M  -
rpool/ROOT/bar/zoneds              32.7M  57.2G    20K  /zoneds
rpool/ROOT/bar/zon...@foo            18K      -    19K  -
rpool/ROOT/bar/zon...@bar            19K      -    20K  -
rpool/ROOT/bar/zoneds/default      32.6M  57.2G  29.9M  /zoneds/default
rpool/ROOT/bar/zoneds/defa...@foo  2.61M      -  27.0M  -
rpool/ROOT/bar/zoneds/defa...@bar   162K      -  29.9M  -
rpool/ROOT/foo                     2.93M  57.2G  1.29G  /.alt.foo
rpool/ROOT/foo/var                  818K  57.2G  21.2M  /.alt.foo/var
rpool/ROOT/foo/zoneds               270K  57.2G    20K  /.alt.foo/zoneds
rpool/ROOT/foo/zoneds/default       253K  57.2G  29.9M  /.alt.foo/zoneds/default

And then DOES clean up the zfs filesystem when I run ludelete.  Does anyone
know where there's a discrepancy?  The same lucreate command (-n  -p
rpool) command was used both times.





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lackluster ZFS performance trials using various ZIL and L2ARC configurations...

2009-01-16 Thread Neil Perrin
I don't believe that iozone does any synchronous calls (fsync/O_DSYNC/O_SYNC),
so the ZIL and separate logs (slogs) would be unused.

I'd recommend performance testing by configuring filebench to
do synchronous writes:

http://opensolaris.org/os/community/performance/filebench/
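
As a rough sketch (the workload and target directory are just examples, and
exact syntax may differ between filebench versions), an fsync-heavy run could
look something like:

  # filebench
  filebench> load varmail
  filebench> set $dir=/tank/testfs
  filebench> run 60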

Neil.

On 01/15/09 00:36, Gray Carper wrote:
> Hey, all!
> 
> Using iozone (with the sequential read, sequential write, random read, 
> and random write categories), on a Sun X4240 system running OpenSolaris 
> b104 (NexentaStor 1.1.2, actually), we recently ran a number of relative 
> performance tests using a few ZIL and L2ARC configurations (meant to try 
> and uncover which configuration would be the best choice). I'd like to 
> share the highlights with you all (without bogging you down with raw 
> data) to see if anything strikes you.
> 
> Our first (baseline) test used a ZFS pool which had a self-contained ZIL 
> and L2ARC (i.e. not moved to other devices, the default configuration). 
> Note that this system had both SSDs and SAS drive attached to the 
> controller, but only the SAS drives were in use.
> 
> In the second test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD 
> and the L2ARC on four 146GB SAS drives. Random reads were significantly 
> worse than the baseline, but all other categories were slightly better.
> 
> In the third test, we rebuilt the ZFS pool with the ZIL on a 32GB SSD 
> and the L2ARC on four 80GB SSDs. Sequential reads were better than the 
> baseline, but all other categories were worse.
> 
> In the fourth test, we rebuilt the ZFS pool with no separate ZIL, but 
> with the L2ARC on four 146GB SAS drives. Random reads were significantly 
> worse than the baseline and all other categories were about the same as 
> the baseline.
> 
> As you can imagine, we were disappointed. None of those configurations 
> resulted in any significant improvements, and all of the configurations 
> resulted in at least one category being worse. This was very much not 
> what we expected.
> 
> For the sake of sanity checking, we decided to run the baseline case 
> again (ZFS pool which had a self-contained ZIL and L2ARC), but this time 
> remove the SSDs completely from the box. Amazingly, the simple presence 
> of the SSDs seemed to be a negative influence - the new SSD-free test 
> showed improvement in every single category when compared to the 
> original baseline test.
> 
> So, this has lead us to the conclusion that we shouldn't be mixing SSDs 
> with SAS drives on the same controller (at least, not the controller we 
> have in this box). Has anyone else seen problems like this before that 
> might validate that conclusion? If so, we think we should probably build 
> an SSD JBOD, hook it up to the box, and re-run the tests. This leads us 
> to another question: Does anyone have any recommendations for 
> SSD-performant controllers that have great OpenSolaris driver support?
> 
> Thanks!
> -Gray
> -- 
> Gray Carper
> MSIS Technical Services
> University of Michigan Medical School
> gcar...@umich.edu   |  skype:  graycarper  | 
>  734.418.8506
> http://www.umms.med.umich.edu/msis/
> 
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Jonny Gerold
That's what I figured: raidz & raidz1 are the same thing. The 1 is just put 
there to create confusion ;)

Thanks, Jonny

Tim wrote:
>
>
> On Thu, Jan 15, 2009 at 10:36 PM, Jonny Gerold wrote:
>
> BTW, is there any difference between raidz & raidz1 (is the one
> for one
> disk parity) or does raidz have a parity disk too?
>
> Thanks, Jonny
>
>
> It depends on who you're talking to I suppose.
>
> I would expect generally "raidz" is describing that you're using Sun's 
> raid algorithm which can be either "raidz1" (one parity drive) or 
> "raidz2" (two parity drives).
>
> It may also be that people are just interchanging the term "raidz" and 
> "raidz1" as well.  I guess most documentation I've seen officially 
> address them as "raidz" or "raidz2", there is no "raidz1".
>
> --Tim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Tim
On Thu, Jan 15, 2009 at 10:36 PM, Jonny Gerold  wrote:

> BTW, is there any difference between raidz & raidz1 (is the one for one
> disk parity) or does raidz have a parity disk too?
>
> Thanks, Jonny
>

It depends on who you're talking to I suppose.

I would expect generally "raidz" is describing that you're using Sun's raid
algorithm which can be either "raidz1" (one parity drive) or "raidz2" (two
parity drives).

It may also be that people are just interchanging the terms "raidz" and
"raidz1" as well.  I guess most documentation I've seen officially addresses
them as "raidz" or "raidz2"; there is no "raidz1".

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Jonny Gerold
BTW, is there any difference between raidz & raidz1 (is the one for one 
disk parity) or does raidz have a parity disk too?

Thanks, Jonny

Tim wrote:
>
>
> On Thu, Jan 15, 2009 at 10:20 PM, Jonny Gerold wrote:
>
> Meh this is retarded. It looks like zpool list shows an incorrect
> calculation? Can anyone agree that this looks like a bug?
>
> r...@fsk-backup:~# df -h | grep ambry
> ambry 2.7T   27K  2.7T   1% /ambry
>
> r...@fsk-backup:~# zpool list
> NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
> ambry  3.62T   132K  3.62T 0%  ONLINE  -
>
> r...@fsk-backup:~# zfs list
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> ambry 92.0K  2.67T  26.9K  /ambry
>
>
> From what I understand:
>
> zpool list shows total capacity of all the drives in the pool.  df 
> shows usable capacity after parity.
>
> I wouldn't really call that retarded, it allows you to see what kind 
> of space you're chewing up with parity fairly easily.
>
> --Tim 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Tim
On Thu, Jan 15, 2009 at 10:20 PM, Jonny Gerold  wrote:

> Meh this is retarded. It looks like zpool list shows an incorrect
> calculation? Can anyone agree that this looks like a bug?
>
> r...@fsk-backup:~# df -h | grep ambry
> ambry 2.7T   27K  2.7T   1% /ambry
>
> r...@fsk-backup:~# zpool list
> NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
> ambry  3.62T   132K  3.62T 0%  ONLINE  -
>
> r...@fsk-backup:~# zfs list
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> ambry 92.0K  2.67T  26.9K  /ambry
>
>
From what I understand:

zpool list shows total capacity of all the drives in the pool.  df shows
usable capacity after parity.

I wouldn't really call that retarded, it allows you to see what kind of
space you're chewing up with parity fairly easily.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Matt Harrison
Jonny Gerold wrote:
> Meh this is retarded. It looks like zpool list shows an incorrect 
> calculation? Can anyone agree that this looks like a bug?
> 
> r...@fsk-backup:~# df -h | grep ambry
> ambry 2.7T   27K  2.7T   1% /ambry
> 
> r...@fsk-backup:~# zpool list
> NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
> ambry  3.62T   132K  3.62T 0%  ONLINE  -
> 
> r...@fsk-backup:~# zfs list
> NAME   USED  AVAIL  REFER  MOUNTPOINT
> ambry 92.0K  2.67T  26.9K  /ambry

Bug or not, I am not the person to say, but it's done that ever since 
I've used ZFS. zpool list shows the total space regardless of 
redundancy, whereas zfs list shows the actual available space. It was 
confusing at first but now I just ignore it.

Matt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Jonny Gerold
Meh this is retarded. It looks like zpool list shows an incorrect 
calculation? Can anyone agree that this looks like a bug?

r...@fsk-backup:~# df -h | grep ambry
ambry 2.7T   27K  2.7T   1% /ambry

r...@fsk-backup:~# zpool list
NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
ambry  3.62T   132K  3.62T 0%  ONLINE  -

r...@fsk-backup:~# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
ambry 92.0K  2.67T  26.9K  /ambry


Thanks, Jonny

Bob Friesenhahn wrote:
> On Fri, 16 Jan 2009, Matt Harrison wrote:
>   
>> Is this guy seriously for real? It's getting hard to stay on the list
>> with all this going on. No list etiquette, complete irrelevant
>> ramblings, need I go on?
>> 
>
> The ZFS discussion list has produced its first candidate for the 
> rubber room that I mentioned here previously.  A reduction in crystal 
> meth intake could have a profound effect though.
>
> Bob
> ==
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Tim
On Fri, Jan 16, 2009 at 9:59 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Fri, 16 Jan 2009, Matt Harrison wrote:
> >
> > Is this guy seriously for real? It's getting hard to stay on the list
> > with all this going on. No list etiquette, complete irrelevant
> > ramblings, need I go on?
>
> The ZFS discussion list has produced its first candidate for the
> rubber room that I mentioned here previously.  A reduction in crystal
> meth intake could have a profound effect though.
>
> Bob



Just the product of English as a second language + intentional trolling.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Volker A. Brandt
> JZ wrote:
[...]

> Is this guy seriously for real? It's getting hard to stay on the list
> with all this going on. No list etiquette, complete irrelevant
> ramblings, need I go on?

He probably has nothing better to do.  Just ignore him; that's what
they dislike most.  He will go away eventually.  Just put him in
your killfile.

Don't feed the trolls.


Regards -- Volker
-- 

Volker A. Brandt  Consulting and Support for Sun Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim Email: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513  Shoe size: 45
Managing directors: Rainer J. H. Brandt and Volker A. Brandt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Bob Friesenhahn
On Fri, 16 Jan 2009, Matt Harrison wrote:
>
> Is this guy seriously for real? It's getting hard to stay on the list
> with all this going on. No list etiquette, complete irrelevant
> ramblings, need I go on?

The ZFS discussion list has produced its first candidate for the 
rubber room that I mentioned here previously.  A reduction in crystal 
meth intake could have a profound effect though.

Bob
==
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenSolaris better Than Solaris10u6 with requards to ARECA Raid Card

2009-01-16 Thread Charles Wright
I tested with zfs_vdev_max_pending=8.
I hoped this would make the error messages
   arcmsr0: too many outstanding commands (257 > 256)
go away, but it did not.

With zfs_vdev_max_pending=8, I would have thought only 128 commands total
could be outstanding (16 drives * 8 = 128).

However, I haven't been able to corrupt ZFS with it set to 8 (yet...).
So it seems to have helped.
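
(For anyone wanting to reproduce this: one common way to set the tunable is
via /etc/system and a reboot; I'm assuming that mechanism here rather than a
live mdb change.)

  * in /etc/system:
  set zfs:zfs_vdev_max_pending = 8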

I took a log of iostat -x 1 while I was doing a lot of I/O and posted it here:
http://wrights.webhop.org/areca/solaris-info/zfs_vdev_max_pending-tests/8/iostat-8.txt

You can see the number of errors and other info here
http://wrights.webhop.org/areca/solaris-info/zfs_vdev_max_pending-tests/8/

Information about my system can also be found here
http://wrights.webhop.org/areca/

Thanks for the suggestion.   I'm working with James and Erich and hopefully 
they will find something in the driver code.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Jonny Gerold
This seems to have worked, but it is showing an abnormal amount of space.

r...@fsk-backup:~# zpool list
NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
ambry  3.62T   132K  3.62T 0%  ONLINE  -

r...@fsk-backup:~# df -h | grep ambry
ambry 2.7T   27K  2.7T   1% /ambry

This happened the last time I created a raidz1... Meh, before I 
continue, is this incredibly abnormal? Or is there something that I'm 
missing and this is normal procedure?

Thanks, Jonny

Wes Morgan wrote:
> On Thu, 15 Jan 2009, Jonny Gerold wrote:
>
>> Hello,
>> I was hoping that this would work:
>> http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror
>>
>> I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
>> cant delete/backup somewhere else)
>>
>>> r...@fsk-backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0
>>> /dev/lofi/1
>>> r...@fsk-backup:~# zpool list
>>> NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
>>> ambry   592G   132K   592G 0%  ONLINE  -
>> I get this (592GB???) I bring the virtual device offline, and it becomes
>> degraded, yet I wont be able to copy my data over. I was wondering if
>> anyone else had a solution.
>>
>> Thanks, Jonny
>>
>> P.S. Please let me know if you need any extra information.
>
> Are you certain that you created the sparse file as the correct size? 
> If I had to guess, it is only in the range of about 150gb. The 
> smallest device size will limit the total size of your array. Try 
> using this for your sparse file and recreating the raidz:
>
> dd if=/dev/zero of=fakedisk bs=1k seek=976762584 count=0
> lofiadm -a fakedisk
>

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-16 Thread Johan Hartzenberg
On Thu, Jan 15, 2009 at 5:18 PM, Jim Klimov  wrote:

> Usecase scenario:
>
> I have a single server (or home workstation) with 4 HDD bays, sold with 2
> drives.
> Initially the system was set up with a ZFS mirror for data slices. Now we
> got 2
> more drives and want to replace the mirror with a larger RAIDZ2 set (say I
> don't
> want a RAID10 which is trivial to make).
>
> Technically I think that it should be possible to force creation of a
> degraded
> raidz2 array with two actual drives and two missing drives. Then I'd copy
> data
> from the old mirror pool to the new degraded raidz2 pool (zfs send | zfs
> recv),
> destroy the mirror pool and attach its two drives to "repair" the raidz2
> pool.
>
>
1. Buy, borrow or steal two External USB disk enclosures (if you don't have
two).
2. Install two new disks internally, and connect the other two via the USB
external enclosures.
3. Set up the zpool
4. Copy the data over.
5. Export both pool.
6. Shut Down
7. Remove the two old disks
8. Move the two disks from the External USB enclosures into the system
9. Start back up, and ...
10. Import the new pool.
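
The ZFS side of those steps, as a rough sketch (pool, snapshot, and device
names are invented for illustration):

  # zpool create newpool raidz2 c1t0d0 c1t1d0 c2t0d0 c2t1d0
  # zfs snapshot -r oldpool@migrate
  # zfs send -R oldpool@migrate | zfs recv -d -F newpool
  # zpool export oldpool
  # zpool export newpool
  (shut down, move the disks into the internal bays, boot)
  # zpool import newpool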



-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Verbose Information from "zfs send -v "

2009-01-16 Thread Chris Kirby
On Jan 16, 2009, at 4:47 AM, Nick Smith wrote:
>
> When I use the command 'zfs send -v snapshot-name' I expect to see  
> as the manpage states, some "verbose information" printed to stderr  
> (probably) but I don't see anything on either Solaris 10u6 or  
> OpenSolaris 2008.11. Am I doing something wrong here?
>
> Also what should be the contents of this "verbose information" anyway?

Nick,

Specifying -v to zfs send doesn't result in much extra  
information, and only in
certain cases.  For example, if you do an incremental send, you'll get  
this piece
of extra output:

# zfs send -v -I t...@snap1 t...@snap2 >/tmp/x
sending from @snap1 to t...@snap2
#

zfs recv -v is a bit more chatty, but again, only in certain cases.

Output from zfs send -v goes to stderr; output from zfs recv -v goes  
to stdout.

-Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 4 disk raidz1 with 3 disks...

2009-01-16 Thread Matt Harrison
JZ wrote:
> Beloved Jonny,
> 
> I am just like you.
> 
> 
> There was a day, I was hungry, and went for a job interview for sysadmin.
> They asked me - what is a "protocol"?
> I could not give a definition, and they said, no, not qualified.
> 
> But they did not ask me about CICS and mainframe. Too bad.
> 
> 
> 
> baby, even there is a day you can break daddy's pride, you won't want to, I 
> am sure.   ;-)
> 
> [if you want a solution, ask Orvar, I would guess he thinks on his own now, 
> not baby no more, teen now...]
> 
> best,
> z
> 
> - Original Message - 
> From: "Jonny Gerold" 
> To: "JZ" 
> Sent: Thursday, January 15, 2009 10:19 PM
> Subject: Re: [zfs-discuss] 4 disk raidz1 with 3 disks...
> 
> 
> Sorry that I broke your pride (all knowing) bubble by "challenging" you.
> But you're just as stupid as I am since you did not give me a "solution."
> Find a solution, and I will rock with your Zhou style, otherwise you're
> just like me :) I am in the U.S. Great weather...
> 
> Thanks, Jonny
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 

Is this guy seriously for real? It's getting hard to stay on the list 
with all this going on. No list etiquette, complete irrelevant 
ramblings, need I go on?

~Matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Verbose Information from "zfs send -v "

2009-01-16 Thread Nick Smith
> meh
> 
meh

Anyway I assume that my original post was unclear, so I'll try and correct that.

When I use the command 'zfs send -v snapshot-name' I expect to see, as the 
manpage states, some "verbose information" printed to stderr (probably), but I 
don't see anything on either Solaris 10u6 or OpenSolaris 2008.11. Am I doing 
something wrong here?

Also what should be the contents of this "verbose information" anyway? 

Regards,

Nick
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Verbose Information from "zfs send -v "

2009-01-16 Thread JZ
meh





- Original Message - 
From: "Nick Smith" 
To: 
Sent: Friday, January 16, 2009 4:28 AM
Subject: [zfs-discuss] Verbose Information from "zfs send -v "


> What 'verbose information' does the "zfs send -v " command report?
>
> Also on Solaris 10u6 I don't get any output at all - is this a bug?
>
> Regards,
>
> Nick
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can I create ZPOOL with missing disks?

2009-01-16 Thread Daniel Rock
Jim Klimov schrieb:
> Is it possible to create a (degraded) zpool with placeholders specified 
> instead
> of actual disks (parity or mirrors)? This is possible in linux mdadm 
> ("missing" 
> keyword), so I kinda hoped this can be done in Solaris, but didn't manage to.

Create sparse files with the size of the disks (mkfile -n ...).

Create a zpool with the free disks and the sparse files (zpool create -f 
...). Then immediately put the sparse files offline (zpool offline ...). 
Copy the files to the new zpool, destroy the old one and replace the 
sparse files with the now freed up disks (zpool replace ...).

Remember: during data migration you are running without safety belts. 
If a disk fails during migration you will lose data.
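
A rough command-level sketch of that sequence (the size and device names are
purely illustrative):

  # mkfile -n 931g /var/tmp/fake1 /var/tmp/fake2
  # zpool create -f newpool raidz2 c2t0d0 c2t1d0 /var/tmp/fake1 /var/tmp/fake2
  # zpool offline newpool /var/tmp/fake1
  # zpool offline newpool /var/tmp/fake2
  (copy the data over, e.g. zfs send | zfs recv, then destroy the old pool)
  # zpool replace newpool /var/tmp/fake1 c3t0d0
  # zpool replace newpool /var/tmp/fake2 c3t1d0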



Daniel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Verbose Information from "zfs send -v "

2009-01-16 Thread Nick Smith
What 'verbose information' does the "zfs send -v " command report?

Also on Solaris 10u6 I don't get any output at all - is this a bug?

Regards,

Nick
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (no subject)

2009-01-16 Thread JZ
[no one speaking?   want my spam?]



Ok, folks, I don't know how open this is now. I turned on some public music; 
the songs are very healthy today.

I guess whatever I say now would be misleading, so, here is a joke.

Zhou will always end something on a happy note -





I told some bullshit, not lie, just bullshit, to the list.

One of my best friends only does Remy+++.  I do many others.  He told me to 
release him from the list, but ok, I will just hunt him down for drinking 
then.

Between Remy and Hennessy, I say Hennessy, the taste, and the logo. But, 
please, if you don't know for sure how to use that logo, don't try at home.





Anyway, once upon a time, there was a worldwide competition for hard liquor; 
at last, only XO and 2GT from Beijing remained as final contenders.



All the folks were in over their heads and could not drink any more. They got 
two little mice for the experiment.



The first one drank a glass of XO and passed out right there. Folks were 
saying, wow, XO, good shit.



The second one took a sip of 2GT and dashed outside. Folks were saying, 
ahh, 2GT tastes too bad, disappointing result.











All of a sudden, the second one came back, holding a brick, saying: you, 
where is the cat? Let me at him!





True story, my dearest aunt told me that, cannot be fake.

Peace!

Best,

z



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss