Re: [zfs-discuss] one more time: pool size changes

2010-06-02 Thread Juergen Nickelsen
Richard Elling  writes:

>> And some time before I had suggested ZFS to a buddy of mine for his new
>> home storage server, but he turned it down since there is no
>> expansion available for a pool.
>
> Heck, let him buy a NetApp :-)

Definitely a possibility, given the availability and pricing of
oldish NetApp hardware on eBay. Although for home use, it is easier
to put together something adequately power-saving and silent with
OpenSolaris and PC hardware than with NetApp gear.

-- 
I wasn't so desperate yet that I actually looked into documentation.
 -- Juergen Nickelsen
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] one more time: pool size changes

2010-06-02 Thread Brandon High
On Wed, Jun 2, 2010 at 3:54 PM, Roman Naumenko  wrote:
> And some time before I had suggested ZFS to a buddy of mine for his new home 
> storage server, but he turned it down since there is no expansion available 
> for a pool.

There's no expansion for aggregates in OnTap, either. You can add more
disks (as a raid-dp or mirror set) to an existing aggr, but you can
also add more vdevs (as raidz or mirrors) to a zpool, too.

> And he really wants to be able to add a drive or two to an existing pool. 
> Yes, there are ways to expand storage to some extent without rebuilding it, 
> like replacing disks with larger ones. Not enough for a typical home user, I 
> would say.

You can do this with 'zpool add'.
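
For example, a minimal sketch (the pool and device names here are hypothetical)
of growing an existing pool by adding another mirrored vdev:

  # zpool status tank                    # check the current vdev layout
  # zpool add tank mirror c0t4d0 c0t5d0  # add a new mirror vdev
  # zpool list tank                      # the extra capacity is available immediately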

> Nevertheless, NetApp appears to have such a feature, as I learned from my 
> co-worker. It works with some restrictions (you have to zero disks before 
> adding, and rebalance the aggregate afterwards, still without perfect 
> distribution) - but OnTap is able to do aggregate expansion nevertheless.

Yeah, you can add to an aggr, but you can't add to a raid-dp set. It's
the same as ZFS.

ZFS doesn't require that you zero disks, and there is no rebalancing.
However, as more data is written to the pool, it will become more
balanced over time.

> So, my question is: what prevents introducing the same for ZFS at 
> present? Is this because of the design of ZFS, or is there simply no 
> demand for it in the community?
>
> My understanding is that at the present time there are no plans to introduce it.

Rebalancing depends on bp_rewrite, which is still vaporware. There has
been discussion of it for a while, but no implementation that I know
of.

Once the feature is added, it will be possible to add or remove
devices from a zpool or vdev, something that OnTap can't do.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] one more time: pool size changes

2010-06-02 Thread Richard Elling
On Jun 2, 2010, at 3:54 PM, Roman Naumenko wrote:
> Recently I talked to a co-worker who manages NetApp storage. We discussed 
> size changes for pools in ZFS and aggregates in NetApp.
> 
> And some time before I had suggested ZFS to a buddy of mine for his new home 
> storage server, but he turned it down since there is no expansion available 
> for a pool. 

Heck, let him buy a NetApp :-)

> And he really wants to be able to add a drive or two to an existing pool. 
> Yes, there are ways to expand storage to some extent without rebuilding it, 
> like replacing disks with larger ones. Not enough for a typical home user, I 
> would say. 

Why not? I do this quite often. Growing is easy, shrinking is more challenging.
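
As a rough sketch (pool and device names are hypothetical), growing a mirror by
swapping in larger drives looks like this; the extra space shows up once every
disk in the vdev has been replaced:

  # zpool replace tank c0t0d0 c0t4d0    # swap the first small disk for a larger one
  # zpool status tank                   # wait for the resilver to complete
  # zpool replace tank c0t1d0 c0t5d0    # then replace the second disk
  # zpool export tank ; zpool import tank   # older releases; newer ones can use autoexpand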

> And this might be important for corporate use too. Frankly speaking, I doubt 
> there are many administrators who use it in a DC environment. 
> 
> Nevertheless, NetApp appears to have such a feature, as I learned from my 
> co-worker. It works with some restrictions (you have to zero disks before 
> adding, and rebalance the aggregate afterwards, still without perfect 
> distribution) - but OnTap is able to do aggregate expansion nevertheless. 
> 
> So, my question is: what prevents introducing the same for ZFS at 
> present? Is this because of the design of ZFS, or is there simply no 
> demand for it in the community?

It's been there since 2005: the 'zpool add' subcommand.
 -- richard

> 
> My understanding is that at the present time there are no plans to introduce it.
> 
> --Regards,
> Roman Naumenko
> ro...@naumenko.com

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] one more time: pool size changes

2010-06-02 Thread Richard Elling
On Jun 2, 2010, at 4:08 PM, Freddie Cash wrote:

> On Wed, Jun 2, 2010 at 3:54 PM, Roman Naumenko  wrote:
> Recently I talked to a co-worker who manages NetApp storage. We discussed 
> size changes for pools in ZFS and aggregates in NetApp.
> 
> And some time before I had suggested ZFS to a buddy of mine for his new home 
> storage server, but he turned it down since there is no expansion available 
> for a pool.
> 
> There are two ways to increase the storage space available to a ZFS pool:
>   1.  add more vdevs to the pool
>   2.  replace each drive in a vdev with a larger drive

  3.  grow a LUN and export/import (old releases) or toggle autoexpand=on
      (later releases)
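
A minimal sketch of the autoexpand route, with a hypothetical pool and device,
after the underlying LUN has been grown:

  # zpool set autoexpand=on tank
  # zpool online -e tank c0t0d0     # expand the device in place
  # zpool list tank                 # the new capacity should now be visible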

 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] one more time: pool size changes

2010-06-02 Thread Erik Trimble

Roman Naumenko wrote:

Recently I talked to a co-worker who manages NetApp storage. We discussed size 
changes for pools in ZFS and aggregates in NetApp.

And some time before I had suggested ZFS to a buddy of mine for his new home storage server, but he turned it down since there is no expansion available for a pool. 

And he really wants to be able to add a drive or two to an existing pool. Yes, there are ways to expand storage to some extent without rebuilding it, like replacing disks with larger ones. Not enough for a typical home user, I would say. 

And this might be important for corporate use too. Frankly speaking, I doubt there are many administrators who use it in a DC environment. 

Nevertheless, NetApp appears to have such a feature, as I learned from my co-worker. It works with some restrictions (you have to zero disks before adding, and rebalance the aggregate afterwards, still without perfect distribution) - but OnTap is able to do aggregate expansion nevertheless. 


So, my question is: what prevents introducing the same for ZFS at present? 
Is this because of the design of ZFS, or is there simply no demand for it 
in the community?

My understanding is that at the present time there are no plans to introduce it.

--Regards,
Roman Naumenko
ro...@naumenko.com
  


Expanding a RAIDZ (which, really, is the only thing ZFS can't do right 
now, w.r.t. adding disks) requires the Block Pointer (BP) Rewrite 
functionality before it can get implemented.


We've been promised BP rewrite for awhile, but I have no visibility as 
to where development on it is in the schedule.


Fortunately, several other things also depend on BP rewrite (e.g.  
shrinking a pool (removing vdevs), efficient defragmentation/compaction, 
etc.).


So, while resizing a raidZ device isn't really high on the list of 
things to do, the fundamental building block which would allow for it to 
occur is very much important for Oracle. And, once BP rewrite is 
available, I suspect that there might be a raidZ resize contribution 
from one of the non-Oracle folks.  Or maybe even someone like me (who's 
not a ZFS developer inside Oracle, but I play one on TV...).



Dev guys - where are we on BP rewrite?


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] one more time: pool size changes

2010-06-02 Thread Freddie Cash
On Wed, Jun 2, 2010 at 3:54 PM, Roman Naumenko  wrote:

> Recently I talked to a co-worker who manages NetApp storage. We discussed
> size changes for pools in ZFS and aggregates in NetApp.
>
> And some time before I had suggested ZFS to a buddy of mine for his new home
> storage server, but he turned it down since there is no expansion available
> for a pool.
>

There are two ways to increase the storage space available to a ZFS pool:
  1.  add more vdevs to the pool
  2.  replace each drive in a vdev with a larger drive

The first option "expands the width" of the pool, adds redundancy to the
pool, and (should) increase the performance of the pool.  This is very
simple to do, but requires having the drive bays and/or drive connectors
available.  (In fact, any time you add a vdev to a pool, including when you
first create it, you go through this process.)

The second option "increases the total storage" of the pool, without
changing any of the redundancy of the pool.  Performance may or may not
increase.  Once all the drives in a vdev are replaced, the storage space
becomes available to the pool (depending on the ZFS version, you may need to
export/import the pool for the space to become available).

We've used both of the above quite successfully, both at home and at work.

Not sure what your buddy was talking about.  :)

-- 
Freddie Cash
fjwc...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] one more time: pool size changes

2010-06-02 Thread Frank Cusack

On 6/2/10 3:54 PM -0700 Roman Naumenko wrote:

And some time before I had suggested ZFS to a buddy of mine for his new home
storage server, but he turned it down since there is no expansion
available for a pool.


That's incorrect.  zfs pools can be expanded at any time.  AFAIK zfs has
always had this capability.


Nevertheless, NetApp appears to have such a feature, as I learned from my
co-worker. It works with some restrictions (you have to zero disks before
adding, and rebalance the aggregate afterwards, still without perfect
distribution) - but OnTap is able to do aggregate expansion
nevertheless.


I wasn't aware that Netapp could rebalance.  Is that a true Netapp
feature, or is it a matter of copying the data "manually"?  zfs doesn't
have a cleaner process that rebalances, so for zfs you would have to
copy the data to rebalance the pool.  I certainly wouldn't make my
Netapp/zfs decision based on that (alone).
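
If you did want to rebalance manually after adding a vdev, a rough sketch
(dataset names are hypothetical) is to copy the data within the pool and swap
the datasets over, since newly written blocks get spread across all vdevs:

  # zfs snapshot tank/data@move
  # zfs send tank/data@move | zfs recv tank/data.new
  # zfs destroy -r tank/data
  # zfs rename tank/data.new tank/data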

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] one more time: pool size changes

2010-06-02 Thread Roman Naumenko
Recently I talked to a co-worker who manages NetApp storage. We discussed size 
changes for pools in ZFS and aggregates in NetApp.

And some time before I had suggested ZFS to a buddy of mine for his new home 
storage server, but he turned it down since there is no expansion available for 
a pool. 

And he really wants to be able to add a drive or two to an existing pool. 
Yes, there are ways to expand storage to some extent without rebuilding it, 
like replacing disks with larger ones. Not enough for a typical home user, I 
would say. 

And this might be important for corporate use too. Frankly speaking, I doubt 
there are many administrators who use it in a DC environment. 

Nevertheless, NetApp appears to have such a feature, as I learned from my 
co-worker. It works with some restrictions (you have to zero disks before 
adding, and rebalance the aggregate afterwards, still without perfect 
distribution) - but OnTap is able to do aggregate expansion nevertheless. 

So, my question is: what prevents introducing the same for ZFS at present? 
Is this because of the design of ZFS, or is there simply no demand for it 
in the community?

My understanding is that at the present time there are no plans to introduce it.

--Regards,
Roman Naumenko
ro...@naumenko.com
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating to ZFS

2010-06-02 Thread Ross Walker

On Jun 2, 2010, at 12:03 PM, zfsnoob4  wrote:


Wow thank you very much for the clear instructions.

And Yes, I have another 120GB drive for the OS, separate from A, B  
and C. I will repartition the drive and install Solaris. Then maybe  
at some point I'll delete the entire drive and just install a single  
OS.



I have a question about step 6, "Step 6: create a "dummy" drive as a  
sparse file: mkfile -n 1500G /foo"


I understand that I need to create a dummy drive and then immediately  
remove it to run the raidz in degraded mode. But by creating the  
file with mkfile, will it allocate the 1.5TB right away on the OS  
drive? I was wondering because my OS drive is only 120GB, so won't  
it have a problem with creating a 1.5TB sparse file?


There is one potential pitfall in this method, if your Windows mirror  
is using dynamic disks, you can't access a dynamic disk with the NTFS  
driver under Solaris.


To get around this create a basic NTFS partition on the new third  
drive, copy the data to that drive and blow away the dynamic mirror.  
Then build the degraded raidz pool out of the two original mirror  
disks and copy the data back off the new third disk on to the raidz,  
then wipe the disk labels off that third drive and resilver the raidz.
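
If you do go that route, the Solaris half of the sequence might look roughly
like this (the device names and the dummy-file size are hypothetical, and the
NTFS mount is read-only):

  # mkfile -n 1600G /dummy
  # zpool create tank raidz c0t0d0 c0t1d0 /dummy   # the two original mirror disks
  # zpool offline tank /dummy                      # run the raidz degraded
  # mount -F ntfs /dev/dsk/c0t2d0p0 /mnt           # the third drive, basic NTFS
  # cd /mnt; cp -rp . /tank
  # zpool replace -f tank /dummy c0t2d0            # pull the third drive in and resilver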


A safer approach is to get a 2TB eSATA drive (a mirrored device to be  
extra safe) and copy the data there, then build a complete raidz and  
copy the data off the eSATA device to the raidz.


The risk and time it takes to copy data onto a degraded raidz isn't  
worth it. The write throughput on a degraded raidz will be horrible,  
and there is the time it takes to copy the data over plus the time spent  
in the red zone while the raidz resilvers with no backup available...  
There is a high potential for tears here.


Get an external disk for your own sanity.

-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 10U8, Sun Cluster, and SSD issues.

2010-06-02 Thread Steve D. Jost
Andreas,
We actually are not using one and hadn't thought about that at all.  Do 
you have a recommendation on a particular model?  I see some that do SAS->SATA 
and some that are just A/A SATA switches; is one better than the other?  We 
were looking at one based on a newer LSI part number that does 6Gb SAS, but we 
can't seem to find a source for the gear.  Any ideas?  Thanks!

Steve Jost
 
> The Intel SSD is not a dual ported SAS device. This device must be supported
> by the SAS expander in your external chassis.
> Did you use an AAMUX transposer card for the SATA device between the
> connector of the chassis and the SATA drive?
> 
> Andreas
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating to ZFS

2010-06-02 Thread Erik Trimble

On 6/2/2010 9:03 AM, zfsnoob4 wrote:

Wow thank you very much for the clear instructions.

And Yes, I have another 120GB drive for the OS, separate from A, B and C. I 
will repartition the drive and install Solaris. Then maybe at some point I'll 
delete the entire drive and just install a single OS.


I have a question about step 6, "Step 6: create a "dummy" drive as a sparse file: 
mkfile -n 1500G /foo"

I understand that I need to create a dummy drive and then immediately remove it 
to run the raidz in degraded mode. But by creating the file with mkfile, will 
it allocate the 1.5TB right away on the OS drive? I was wondering because my OS 
drive is only 120GB, so won't it have a problem with creating a 1.5TB sparse 
file?


Thanks
   


No.  The '-n' option tells Solaris that this is a "sparse" file - that 
is, it only takes up space as you use it.  It reports its size to 'ls' 
and similar utilities as the size you created it with, but the actual 
on-disk allocation is only what has actually been written to the 
file. In this case, you're not going to be writing any data to it (well, 
just a trivial ZFS header), so you're all set.


I just tested it again to make sure - it works just fine creating a file 
size N on a zfs filesystem size M, where N >> M.
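
A quick way to see that for yourself (the path is arbitrary): compare the
apparent size with the blocks actually allocated:

  # mkfile -n 1500G /var/tmp/foo
  # ls -l /var/tmp/foo      # apparent size: ~1.5T
  # du -k /var/tmp/foo      # actual allocation: essentially zero
  # rm /var/tmp/foo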



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating to ZFS

2010-06-02 Thread zfsnoob4
Wow thank you very much for the clear instructions.

And Yes, I have another 120GB drive for the OS, separate from A, B and C. I 
will repartition the drive and install Solaris. Then maybe at some point I'll 
delete the entire drive and just install a single OS.


I have a question about step 6, "Step 6: create a "dummy" drive as a sparse 
file: mkfile -n 1500G /foo"

I understand that I need to create a dummy drive and then immediately remove it 
to run the raidz in degraded mode. But by creating the file with mkfile, will 
it allocate the 1.5TB right away on the OS drive? I was wondering because my OS 
drive is only 120GB, so won't it have a problem with creating a 1.5TB sparse 
file?


Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot destroy ... dataset already exists

2010-06-02 Thread Cindy Swearingen

Hi Ned,

If you do incremental receives, this might be CR 6860996:

%temporary clones are not automatically destroyed on error

A temporary clone is created for an incremental receive and
in some cases, is not removed automatically.

Victor might be able to describe this better, but consider
the following steps as further diagnosis or a workaround:

1. Determine clone names:

# zdb -d <pool> | grep %

2. Destroy identified clones:

# zfs destroy <clone-name>

It will complain that 'dataset does not exist', but you can check
again (see step 1).

3. Destroy snapshot(s) that could not be destroyed previously
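
As a concrete (hypothetical) illustration of those steps against a pool named
storagepool, where zdb reports a leftover %recv clone:

# zdb -d storagepool | grep %
# zfs destroy storagepool/nas-lyricpool/%recv
# zfs destroy storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30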

Thanks,

Cindy

On 06/02/10 08:42, Edward Ned Harvey wrote:

This is the problem:

[r...@nasbackup backup-scripts]# zfs destroy 
storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30


cannot destroy 
'storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30': dataset 
already exists


 

This is apparently a common problem.  It's happened to me twice already, 
and the third time now.  Each time it happens, it's on the "backup" 
server, so fortunately, I have total freedom to do whatever I want, 
including destroy the pool.


 

The previous two times, I googled around, basically only found "destroy 
the pool" as a solution, and I destroyed the pool.


 

This time, I would like to dedicate a little bit of time and resource to 
finding the cause of the problem, so hopefully this can be fixed for 
future users, including myself.  This time I also found "apply updates 
and repeat your attempt to destroy the snapshot"  ...  So I applied 
updates, and repeated.  But no improvement.  The OS was sol 10u6, but 
now it’s fully updated.  Problem persists.


 


I’ve also tried exporting and importing the pool.

 

Somebody on the Internet suspected the problem is somehow aftermath of 
killing a "zfs send" or receive.  This is distinctly possible, as I’m 
sure that’s happened on my systems.  But there is currently no send or 
receive being killed ... Any such occurrence is long since past, and 
even beyond reboots and such.


 

I do not use clones.  There are no clones of this snapshot anywhere, and 
there never have been.


 

I do have other snapshots, which were incrementally received based on 
this one.  But that shouldn't matter, right?


 

I have not yet called support, although we do have a support contract. 

 


Any suggestions?

 


FYI:

 


[r...@nasbackup backup-scripts]# zfs list

NAME   USED  
AVAIL  REFER  MOUNTPOINT


rpool 
19.3G   126G34K  /rpool


rpool/ROOT
16.3G   126G21K  legacy


rpool/ROOT/nasbackup_slash
16.3G   126G  16.3G  /


rpool/dump1.00G  
 126G  1.00G  -


rpool/swap
2.00G   127G  1.08G  -


storagepool   1.28T  
4.06T  34.4K  /storage


storagepool/nas-lyricpool 1.27T  
4.06T  1.13T  /storage/nas-lyricpool


storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30   
94.1G  -  1.07T  -


storagepool/nas-lyricp...@daily-2010-06-01-00-00-00   
0  -  1.13T  -


storagepool/nas-rpool-ROOT-nas_slash  8.65G  
4.06T  8.65G  /storage/nas-rpool-ROOT-nas_slash


storagepool/nas-rpool-root-nas_sl...@daily-2010-06-01-00-00-00
0  -  8.65G  -


zfs-external1 
1.13T   670G24K  /zfs-external1


zfs-external1/nas-lyricpool   
1.12T   670G  1.12T  /zfs-external1/nas-lyricpool


zfs-external1/nas-lyricp...@daily-2010-06-01-00-00-00 
0  -  1.12T  -


zfs-external1/nas-rpool-ROOT-nas_slash
8.60G   670G  8.60G  /zfs-external1/nas-rpool-ROOT-nas_slash


zfs-external1/nas-rpool-root-nas_sl...@daily-2010-06-01-00-00-00  
0  -  8.60G  -


 


And

 


[r...@nasbackup ~]# zfs get origin

NAME   
   PROPERTY  VALUE   SOURCE


rpool 
origin-   -


rpool/ROOT
origin-   -


rpool/ROOT/nasbackup_slash
origin-   -


rpool/dump
origin-   -


rpool/swap
origin-   -


storagepool   
origin-   -


storagepool/nas-lyricpool 
origin-   -


storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30   
origin-   -


storagepool/nas

Re: [zfs-discuss] cannot destroy ... dataset already exists

2010-06-02 Thread sensille
Is the pool mounted? I ran into this problem frequently, until I set mountpoint
to legacy. It may be that I had to destroy the filesystem afterwards, but since
I stopped mounting the backup target everything runs smoothly. Nevertheless I
agree it would be nice to find the root cause for this.
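
For reference, a sketch of that workaround on a hypothetical backup dataset:

  # zfs set mountpoint=legacy storagepool/nas-lyricpool
  (the target filesystem is then no longer mounted automatically, so the
   incremental receives never land on a mounted dataset)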

--
Arne

Edward Ned Harvey wrote:
> This is the problem:
> 
> [r...@nasbackup backup-scripts]# zfs destroy
> storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30
> 
> cannot destroy
> 'storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30': dataset
> already exists
> 
>  
> 
> This is apparently a common problem.  It's happened to me twice already,
> and the third time now.  Each time it happens, it's on the "backup"
> server, so fortunately, I have total freedom to do whatever I want,
> including destroy the pool.
> 
>  
> 
> The previous two times, I googled around, basically only found "destroy
> the pool" as a solution, and I destroyed the pool.
> 
>  
> 
> This time, I would like to dedicate a little bit of time and resource to
> finding the cause of the problem, so hopefully this can be fixed for
> future users, including myself.  This time I also found "apply updates
> and repeat your attempt to destroy the snapshot"  ...  So I applied
> updates, and repeated.  But no improvement.  The OS was sol 10u6, but
> now it’s fully updated.  Problem persists.
> 
>  
> 
> I’ve also tried exporting and importing the pool.
> 
>  
> 
> Somebody on the Internet suspected the problem is somehow aftermath of
> killing a "zfs send" or receive.  This is distinctly possible, as I’m
> sure that’s happened on my systems.  But there is currently no send or
> receive being killed ... Any such occurrence is long since past, and
> even beyond reboots and such.
> 
>  
> 
> I do not use clones.  There are no clones of this snapshot anywhere, and
> there never have been.
> 
>  
> 
> I do have other snapshots, which were incrementally received based on
> this one.  But that shouldn't matter, right?
> 
>  
> 
> I have not yet called support, although we do have a support contract. 
> 
>  
> 
> Any suggestions?
> 
>  
> 
> FYI:
> 
>  
> 
> [r...@nasbackup backup-scripts]# zfs list
> 
> NAME   USED 
> AVAIL  REFER  MOUNTPOINT
> 
> rpool
> 19.3G   126G34K  /rpool
> 
> rpool/ROOT   
> 16.3G   126G21K  legacy
> 
> rpool/ROOT/nasbackup_slash   
> 16.3G   126G  16.3G  /
> 
> rpool/dump1.00G 
>  126G  1.00G  -
> 
> rpool/swap   
> 2.00G   127G  1.08G  -
> 
> storagepool   1.28T 
> 4.06T  34.4K  /storage
> 
> storagepool/nas-lyricpool 1.27T 
> 4.06T  1.13T  /storage/nas-lyricpool
> 
> storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30  
> 94.1G  -  1.07T  -
> 
> storagepool/nas-lyricp...@daily-2010-06-01-00-00-00  
> 0  -  1.13T  -
> 
> storagepool/nas-rpool-ROOT-nas_slash  8.65G 
> 4.06T  8.65G  /storage/nas-rpool-ROOT-nas_slash
> 
> storagepool/nas-rpool-root-nas_sl...@daily-2010-06-01-00-00-00   
> 0  -  8.65G  -
> 
> zfs-external1
> 1.13T   670G24K  /zfs-external1
> 
> zfs-external1/nas-lyricpool  
> 1.12T   670G  1.12T  /zfs-external1/nas-lyricpool
> 
> zfs-external1/nas-lyricp...@daily-2010-06-01-00-00-00
> 0  -  1.12T  -
> 
> zfs-external1/nas-rpool-ROOT-nas_slash   
> 8.60G   670G  8.60G  /zfs-external1/nas-rpool-ROOT-nas_slash
> 
> zfs-external1/nas-rpool-root-nas_sl...@daily-2010-06-01-00-00-00 
> 0  -  8.60G  -
> 
>  
> 
> And
> 
>  
> 
> [r...@nasbackup ~]# zfs get origin
> 
> NAME  
>PROPERTY  VALUE   SOURCE
> 
> rpool
> origin-   -
> 
> rpool/ROOT   
> origin-   -
> 
> rpool/ROOT/nasbackup_slash   
> origin-   -
> 
> rpool/dump   
> origin-   -
> 
> rpool/swap   
> origin-   -
> 
> storagepool  
> origin-   -
> 
> storagepool/nas-lyricpool
> origin-   -
> 
> storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30  
> origin-   -
> 
> storagepool/nas-lyricp...@daily-2010-06-01-00-00-00  
> origin-   -
> 
> storagepool/nas-l

[zfs-discuss] zfs send recv still running

2010-06-02 Thread Asif Iqbal
# in localhost
# zfs list | grep data
localpool/data   447G  82.4G   392G  /data
localpool/d...@now  54.4G  -   419G  -

# zfs get compressratio localpool/d...@now
NAME PROPERTY   VALUE  SOURCE
localpool/d...@now  compressratio  1.00x  -


# in remotehost
# zfs list | grep data
remotepool/data 130G   401G   130G  none

# zfs get compressratio remotepool/data
NAME   PROPERTY   VALUE  SOURCE
remotepool/data  compressratio  3.57x  -


little math:

130 * 3.57 ≈ 464, which is > 419

so why is this still running?

zfs send localpool/d...@now | ssh remotehost zfs recv remotepool/data
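
If you just want to see whether the stream is still moving, one option
(assuming the third-party 'pv' utility is installed; it is not part of a stock
Solaris install) is to drop it into the pipeline:

zfs send localpool/d...@now | pv | ssh remotehost zfs recv remotepool/data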

-- 
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] cannot destroy ... dataset already exists

2010-06-02 Thread Edward Ned Harvey
This is the problem:

[r...@nasbackup backup-scripts]# zfs destroy
storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30

cannot destroy 'storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30':
dataset already exists

 

This is apparently a common problem.  It's happened to me twice already, and
the third time now.  Each time it happens, it's on the "backup" server, so
fortunately, I have total freedom to do whatever I want, including destroy
the pool.

 

The previous two times, I googled around, basically only found "destroy the
pool" as a solution, and I destroyed the pool.

 

This time, I would like to dedicate a little bit of time and resource to
finding the cause of the problem, so hopefully this can be fixed for future
users, including myself.  This time I also found "apply updates and repeat
your attempt to destroy the snapshot"  ...  So I applied updates, and
repeated.  But no improvement.  The OS was sol 10u6, but now it's fully
updated.  Problem persists.

 

I've also tried exporting and importing the pool.

 

Somebody on the Internet suspected the problem is somehow aftermath of
killing a "zfs send" or receive.  This is distinctly possible, as I'm sure
that's happened on my systems.  But there is currently no send or receive
being killed ... Any such occurrence is long since past, and even beyond
reboots and such.

 

I do not use clones.  There are no clones of this snapshot anywhere, and
there never have been.

 

I do have other snapshots, which were incrementally received based on this
one.  But that shouldn't matter, right?

 

I have not yet called support, although we do have a support contract.  

 

Any suggestions?

 

FYI:

 

[r...@nasbackup backup-scripts]# zfs list

NAME   USED
AVAIL  REFER  MOUNTPOINT

rpool 19.3G
126G34K  /rpool

rpool/ROOT16.3G
126G21K  legacy

rpool/ROOT/nasbackup_slash16.3G
126G  16.3G  /

rpool/dump1.00G
126G  1.00G  -

rpool/swap2.00G
127G  1.08G  -

storagepool   1.28T
4.06T  34.4K  /storage

storagepool/nas-lyricpool 1.27T
4.06T  1.13T  /storage/nas-lyricpool

storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30   94.1G
-  1.07T  -

storagepool/nas-lyricp...@daily-2010-06-01-00-00-00   0
-  1.13T  -

storagepool/nas-rpool-ROOT-nas_slash  8.65G
4.06T  8.65G  /storage/nas-rpool-ROOT-nas_slash

storagepool/nas-rpool-root-nas_sl...@daily-2010-06-01-00-00-000
-  8.65G  -

zfs-external1 1.13T
670G24K  /zfs-external1

zfs-external1/nas-lyricpool   1.12T
670G  1.12T  /zfs-external1/nas-lyricpool

zfs-external1/nas-lyricp...@daily-2010-06-01-00-00-00 0
-  1.12T  -

zfs-external1/nas-rpool-ROOT-nas_slash8.60G
670G  8.60G  /zfs-external1/nas-rpool-ROOT-nas_slash

zfs-external1/nas-rpool-root-nas_sl...@daily-2010-06-01-00-00-00  0
-  8.60G  -

 

And

 

[r...@nasbackup ~]# zfs get origin

NAME  PROPERTY
VALUE   SOURCE

rpool origin
-   -

rpool/ROOTorigin
-   -

rpool/ROOT/nasbackup_slashorigin
-   -

rpool/dumporigin
-   -

rpool/swaporigin
-   -

storagepool   origin
-   -

storagepool/nas-lyricpool origin
-   -

storagepool/nas-lyricp...@nasbackup-2010-05-14-15-56-30   origin
-   -

storagepool/nas-lyricp...@daily-2010-06-01-00-00-00   origin
-   -

storagepool/nas-lyricp...@daily-2010-06-02-00-00-00   origin
-   -

storagepool/nas-rpool-ROOT-nas_slash  origin
-   -

storagepool/nas-rpool-root-nas_sl...@daily-2010-06-01-00-00-00origin
-   -

storagepool/nas-rpool-root-nas_sl...@daily-2010-06-02-00-00-00origin
-   -

zfs-external1 origin
-   -

zfs-external1/nas-lyricpool   origin
-   -

zfs-external1/nas-lyricp...@daily-2010-06-01-00-00-00 origin
-   -

zfs-external1/nas-rpool-ROOT-nas_slashorigin
-   -

zfs-external1/nas-rpool-root-nas_sl...@daily-2010-06-01-00-00-

Re: [zfs-discuss] ZFS recovery tools

2010-06-02 Thread David Magda
On Wed, June 2, 2010 02:20, Sigbjorn Lie wrote:

> I have just recovered from a ZFS crash. During the antagonizing time
> this took, I was surprised to learn how undocumented the tools and
> options for ZFS recovery we're. I managed to recover thanks to some great
> forum posts from Victor Latushkin, however without his posts I would
> still be crying at night...

For the archives, from a private exchange:

Zdb(1M) is complicated and in-flux, so asking on zfs-discuss or calling
Oracle isn't a very onerous request IMHO.

As for recovery, see zpool(1M):

> zpool import [-o mntopts] [ -o  property=value] ... [-d dir  | -c
>  cachefile] [-D] [-f] [-R root] [-F [-n]] pool | id  [newpool]
[...]
> -F
>  Recovery mode for a non-importable pool. Attempt to return
>  the pool to an importable state by discarding the last few
>  transactions. Not all damaged pools can be recovered by
>  using this option. If successful, the data from the
>  discarded transactions is irretrievably lost. This option
>  is ignored if the pool is importable or already imported.

http://docs.sun.com/app/docs/doc/819-2240/zpool-1m
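
A minimal usage sketch on a hypothetical pool (the -n flag from the synopsis
above does a dry run first):

# zpool import -F -n tank    # report whether discarding recent transactions would work
# zpool import -F tank       # actually discard them and import the pool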

This is available as of snv_128, and not in Solaris as of Update 8 (10/09):

http://bugs.opensolaris.org/view_bug.do?bug_id=6667683

This was part of PSARC 2009/479:

http://arc.opensolaris.org/caselog/PSARC/2009/479/
http://www.c0t0d0s0.org/archives/6067-PSARC-2009479-zpool-recovery-support.html
http://sparcv9.blogspot.com/2009/09/zpool-recovery-support-psarc2009479.html

Personally I'm waiting for Solaris 10u9 for a lot of these fixes and
updates [...].

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

2010-06-02 Thread Andrew Gabriel




James C. McPherson wrote:
> On 2/06/10 03:11 PM, Fred Liu wrote:
>> Fix some typos.
>>
>> #
>>
>> In fact, there is no problem for MPxIO name in technology.
>> It only matters for storage admins to remember the name.
>
> You are correct.
>
>> I think there is no way to give short aliases to these long tedious
>> MPxIO names.
>
> You are correct that we don't have aliases. However, I do not
> agree that the naming is tedious. It gives you certainty about
> the actual device that you are dealing with, without having
> to worry about whether you've cabled it right.


Might want to add a call record to

    CR 6901193 Need a command to list current usage of disks,
partitions, and slices

which includes a request for vanity naming for disks.

(Actually, vanity naming for disks should probably be brought out into
a separate RFE.)

-- 

Andrew Gabriel | Solaris Systems Architect
Email: andrew.gabr...@oracle.com
Mobile: +44 7720 598213
Oracle Pre-Sales
Guillemont Park | Minley Road | Camberley | GU17 9QG | United Kingdom

ORACLE Corporation UK Ltd is a company incorporated in England & Wales |
Company Reg. No. 1782505 | Reg. office: Oracle Parkway, Thames Valley Park,
Reading RG6 1RA

Oracle is committed to developing practices and products that help protect
the environment




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating to ZFS

2010-06-02 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Erik Trimble
>
> Step 1:  Break the mirror of A & B inside Windows 7
> Step 2:  Purchase the new C hard drive, and install it in the case.
> Step 3:  Boot to OpenSolaris
> Step 4:  Make sure you've gone out and installed the NTFS driver
>          ( http://sun.drydog.com/faq/9.html )
> Step 5:  mount drive A:    mount -F ntfs /dev/dsk/c0t0d0p0 /mnt
> Step 6:  create a "dummy" drive as a sparse file:    mkfile -n 1500G /foo
> Step 7:  create the new RaidZ array, using the dummy drive:
>          zpool create tank raidz c0t1d0 c0t2d0 /foo
> Step 8:  remove the dummy drive:    zpool offline tank /foo
> Step 9:  copy all the data from the NTFS drive to the new drive:
>          cd /mnt;  cp -rp . /tank
> Step 10: kill the NTFS partition info:    run "format" and delete everything
> Step 11: add the erased NTFS drive to the new raidZ:
>          zpool replace -f tank /foo c0t0d0
> Step 12: delete the dummy file:    rm /foo

Very clever!  I was just about to say "you can't do it; you need 3 devices"
but by faking out the raidz with a sparse file, degrading the filesystem,
and resilvering ... I just tested that on my system and it worked.  Nice
job.

The only thing I would add here is:  Instead of mkfile -n 1500G, I would
suggest making sure your sparse file is larger than your actual disk drives.
Since you said your disk drives are 1.5T, I'd make the sparse file something
like 1700G just to be sure the size of your raidz volume is only limited by
the size of your disk drives, and not accidentally limited by the size of
your sparse file.

Although, I'm not 100% sure that would matter anyway.
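
One low-risk way to check, as a sketch, is to repeat the trick at small scale
with file-backed vdevs (the paths and sizes below are just for the experiment):

  # mkfile 100M /var/tmp/d1 /var/tmp/d2
  # mkfile -n 400M /var/tmp/d3
  # zpool create -f testpool raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
  # zpool list testpool     # capacity is governed by the smallest (100M) members
  # zpool destroy testpool; rm /var/tmp/d1 /var/tmp/d2 /var/tmp/d3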

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Is it possible to disable MPxIO during OpenSolaris installation?

2010-06-02 Thread James C. McPherson

On  2/06/10 03:11 PM, Fred Liu wrote:

Fix some typos.

#

In fact, there is no problem for MPxIO name in technology.
It only matters for storage admins to remember the name.


You are correct.


I think there is no way to give short aliases to these long tedious MPxIO names.


You are correct that we don't have aliases. However, I do not
agree that the naming is tedious. It gives you certainty about
the actual device that you are dealing with, without having
to worry about whether you've cabled it right.



And I just have only one HBA card, so I don't need multipath indeed.


For SAS and FC-attached devices, we are moving (however slowly) towards
having MPxIO on all the time.

Please don't assume that turning on MPxIO requires you to have
multiple ports and/or HBAs - for the addressing scheme at least,
it does not. Failover, though, is another matter.



The simple name -- cxtxdx will be much more easier.


That naming system is rooted in parallel scsi times. It is not
appropriate for SAS and FC environments.


Furthermore, my ultimate goal is to map the disk in an MPxIO path
to the actual physical slot position. And if there is a broken HDD,
I can easily know which one to replace.

BTW, the "luxadm led_blink" may not work on commodity hardware
and only works with Sun's proprietary disk arrays.

I think it is a common situation for storage admins.

**How do you replace broken HDDs in your best practice?**


If you are running build 126 or later, then you can take advantage
of the behaviour that was added to cfgadm(1m):



$ cfgadm -lav c3 c4
Ap_Id  Receptacle   Occupant Condition 
Information

When Type Busy Phys_Id
c3 connectedconfigured   unknown
unavailable  scsi-sas n 
/devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi
c3::0,0connectedconfigured   unknownClient 
Device: /dev/dsk/c5t5000CCA00510A7CCd0s0(sd37)
unavailable  disk-pathn 
/devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::0,0
c3::dsk/c3t2d0 connectedconfigured   unknown 
ST3320620AS ST3320620AS
unavailable  disk n 
/devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::dsk/c3t2d0
c3::dsk/c3t3d0 connectedconfigured   unknown 
ST3320620AS ST3320620AS
unavailable  disk n 
/devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::dsk/c3t3d0
c3::dsk/c3t4d0 connectedconfigured   unknown 
ST3320620AS ST3320620AS
unavailable  disk n 
/devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::dsk/c3t4d0
c3::dsk/c3t6d0 connectedconfigured   unknown 
ST3320620AS ST3320620AS
unavailable  disk n 
/devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::dsk/c3t6d0

c4 connectedconfigured   unknown
unavailable  scsi-sas n 
/devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi
c4::5,0connectedconfigured   unknownClient 
Device: /dev/dsk/c5t5F001BB01248d0s0(sd38)
unavailable  disk-pathn 
/devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::5,0
c4::6,0connectedconfigured   unknownClient 
Device: /dev/dsk/c5t50014EE1007EE473d0s0(sd39)
unavailable  disk-pathn 
/devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::6,0
c4::dsk/c4t3d0 connectedconfigured   unknown 
ST3320620AS ST3320620AS
unavailable  disk n 
/devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::dsk/c4t3d0
c4::dsk/c4t7d0 connectedconfigured   unknown 
ST3320620AS ST3320620AS
unavailable  disk n 
/devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::dsk/c4t7d0




While the above is a bit unwieldy to read in an email, it does
show you the following things:

(0) I have SAS and SATA disks
(1) I have MPxIO turned on
(2) the MPxIO-capable devices are listed with both their "client"
or scsi_vhci path, and their traditional cXtYdZ name



$ cfgadm -lav c3::0,0 c4::5,0 c4::6,0
Ap_Id  Receptacle   Occupant Condition 
Information

When Type Busy Phys_Id
c3::0,0connectedconfigured   unknownClient 
Device: /dev/dsk/c5t5000CCA00510A7CCd0s0(sd37)
unavailable  disk-pathn 
/devices/p...@0,0/pci10de,3...@a/pci1000,3...@0:scsi::0,0
c4::5,0connectedconfigured   unknownClient 
Device: /dev/dsk/c5t5F001BB01248d0s0(sd38)
unavailable  disk-pathn 
/devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::5,0
c4::6,0connectedconfigured   unknownClient 
Device: /dev/dsk/c5t50014EE1007EE473d0s0(sd39)
unavailable  disk-pathn 
/devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::6,0



No need to use luxadm.



James C. McPherson
--
Senior Software Enginee

Re: [zfs-discuss] ZFS recovery tools

2010-06-02 Thread David Magda

On Jun 2, 2010, at 02:20, Sigbjorn Lie wrote:

What the hell? I don't have a support contract for my home  
machines... I don't feel like this is the right way to go for an  
open source project...


Write a letter demanding a refund. Join the OpenSolaris Governing Board.

I'm not sure Oracle's focus is now toward the community, as it is  
towards recouping the billions of dollars it paid to buy Sun. There  
are many helpful folks on this list (both in and out of Oracle) who  
will try to help you, but if you absolutely need support, you need to  
pay for it.


And as with any open source software (even things like Samba,  
OpenLDAP, etc.) there is no guarantee of support, just mailing lists  
and volunteers who do so for their own reasons. OpenSolaris without a  
contract is the exact same way: there's no guarantees. If you want  
support with your distribution, pay Red Hat or SuSE; if you want it  
with OpenLDAP, pay someone like Symas; with OpenSolaris, pay Oracle.


Personally I've run into a lot of crappy documentation on Linux and  
GNU, and had to turn to online search and trial and error. OpenSolaris  
has comparatively much better documentation.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Migrating to ZFS

2010-06-02 Thread Erik Trimble

On 6/1/2010 8:36 PM, zfsnoob4 wrote:

Hello,

I currently have a raid1 array setup on Windows 7 with a pair of 1.5TB drives. 
I don't have enough space in any other drives to make a backup of all this data 
and I really don't want to copy my ~1.1 TB of files over the network anyways.

What I want to do is get a third 1.5 TB drive and make a ZFS RaidZ array (from 
what I understand it will have two drives for data and one drive for parity, 
total capacity 1.36 * 2 TB).

I think what I need to do is get opensolaris and mount one of the drives from 
the raid1 array as an NTFS partition (because it is NTFS), and then create a 
ZFS filesystem on the new drive. Copy all the data from the NTFS drive to the 
new drive then add the other two drives to create a raidZ array making sure it 
copies the parity data from the first drive.

Is this the correct way of doing it? What commands do I need to use to create 
the raidZ array from a single ZFS drive.

Thanks.
   


Let's call your current drives A & B, and the new drive you are going to 
get C, and furthermore, assume that:


A = c0t0d0
B = c0t1d0
C = c0t2d0

I'm also assuming you have other drives which are going to hold the OS - 
that is, these drives are only for data, not the OS.



Step 1:  Break the mirror of A & B inside Windows 7
Step 2:  Purchase the new C hard drive, and install it in the case.
Step 3:  Boot to OpenSolaris
Step 4:  Make sure you've gone out and installed the NTFS driver
         ( http://sun.drydog.com/faq/9.html )
Step 5:  mount drive A:    mount -F ntfs /dev/dsk/c0t0d0p0 /mnt
Step 6:  create a "dummy" drive as a sparse file:    mkfile -n 1500G /foo
Step 7:  create the new RaidZ array, using the dummy drive:
         zpool create tank raidz c0t1d0 c0t2d0 /foo
Step 8:  remove the dummy drive:    zpool offline tank /foo
Step 9:  copy all the data from the NTFS drive to the new drive:
         cd /mnt;  cp -rp . /tank
Step 10: kill the NTFS partition info:    run "format" and delete everything
Step 11: add the erased NTFS drive to the new raidZ:
         zpool replace -f tank /foo c0t0d0
Step 12: delete the dummy file:    rm /foo

Wait for the raidz to reconstruct (resilver) - this may take quite some 
time.



Done!


If you are using the existing mirrored pair to actually boot in Windows 
7 (i.e. as the boot drives), then you can't create a raidz for 
OpenSolaris - OpenSolaris can only boot from a mirrored root pool, not a 
raidz of any kind.


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss