Re: [zfs-discuss] zfs migration question

2008-10-22 Thread Johan Hartzenberg
On Wed, Oct 22, 2008 at 2:35 AM, Dave Bevans <[EMAIL PROTECTED]> wrote:

>  Hi,
>
> I have a customer with the following question...
>
> She's trying to combine 2 ZFS 460 GB disks into one 900 GB ZFS disk. If this
> is possible, how is it done? Is there any documentation on this that I can
> provide to them?
>
>
There is no way to do this without a backup/restore.

Back up one of the zpools.
Destroy that zpool.
Add its disk to the other (remaining) zpool.  This will make the pool bigger
automatically.
Restore the data into the bigger pool.
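
For illustration only, a minimal sketch of that sequence with hypothetical pool
names pool1/pool2, disk c0t1d0, and a single filesystem, assuming you have
somewhere to hold the intermediate stream:

# zfs snapshot pool2/data@move                   (snapshot the pool being evacuated)
# zfs send pool2/data@move > /backup/pool2.zfs   (or pipe straight into zfs receive)
# zpool destroy pool2                            (frees its disk)
# zpool add pool1 c0t1d0                         (pool1 grows by the size of the disk)
# zfs receive pool1/data < /backup/pool2.zfs     (restore the data into the bigger pool)

Note that 'zpool add' with a bare disk adds it as a non-redundant top-level vdev
and cannot be undone, so double-check the device name first.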

-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke


My blog: http://initialprogramload.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration question

2008-10-22 Thread Tomas Ögren
On 21 October, 2008 - Dave Bevans sent me these 11K bytes:

> Hi,
> 
> I have a customer with the following question...
> 
> She's trying to combine 2 ZFS 460 GB disks into one 900 GB ZFS disk. If 
> this is possible, how is it done? Is there any documentation on this 
> that I can provide to them?

Do you mean 'zpool create mypool disk1 disk2', which creates mypool
consisting of the two disks disk1 and disk2 without any ZFS redundancy?

Or what's your definition of "2 ZFS 460 GB disks" and "900 GB ZFS disk"?
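
For reference, that command creates a simple striped (non-redundant) pool; a
sketch with hypothetical disk names:

# zpool create mypool c1t1d0 c1t2d0   (stripes the two disks into one pool)
# zpool list mypool                   (size is roughly the sum of both disks)

The resulting pool is as large as both disks together, but losing either disk
loses the whole pool.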

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs migration question

2008-10-21 Thread Dave Bevans

Hi,

I have a customer with the following question...

She's trying to combine 2 ZFS 460 GB disks into one 900 GB ZFS disk. If 
this is possible, how is it done? Is there any documentation on this 
that I can provide to them?

--
Regards,
Dave
--
My normal working hours are Sunday through Wednesday from 8PM to 6AM 
Eastern. If you need assistance outside of these hours, please call 
1-800-usa-4sun and request the next available engineer.





Sun Microsystems
Mailstop ubur04-206
75 Network Drive
Burlington, MA  01803

Dave Bevans - Technical Support Engineer
Phone: 1-800-USA-4SUN (800-872-4786), opt-2, (case #), press "0" for the next available engineer
Email: david.bevans@Sun.com

TSC Systems Group-OS / Hours: 8PM - 6AM EST / Sun - Wed
Submit, Check & Update Cases at the Online Support Center





This email may contain confidential and privileged material for the sole 
use of the intended recipient. Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient please 
contact the sender and delete all copies.





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Richard Elling

zpool replace == zpool attach + zpool detach

It is not a good practice to detach and then attach, because you
are vulnerable after the detach and before the attach completes.

It is a good practice to attach and then detach.  There is no
practical limit to the number of sides of a mirror in ZFS.
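
A minimal sketch of the safe order, with hypothetical pool and device names
(old side c1t2d0, new side c2t0d0):

# zpool attach tank c1t2d0 c2t0d0   (adds c2t0d0 as another side of the mirror)
# zpool status tank                 (wait here until the resilver reports completed)
# zpool detach tank c1t2d0          (only now remove the old side)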
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Mark J Musante
On Fri, 1 Jun 2007, Krzys wrote:

> bash-3.00# zpool replace mypool c1t2d0 emcpower0a
> bash-3.00# zpool status
>   pool: mypool
>  state: ONLINE
> status: One or more devices is currently being resilvered.  The pool will
>         continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
>  scrub: resilver in progress, 0.00% done, 17h50m to go
> config:
>
>         NAME            STATE     READ WRITE CKSUM
>         mypool          ONLINE       0     0     0
>           replacing     ONLINE       0     0     0
>             c1t2d0      ONLINE       0     0     0
>             emcpower0a  ONLINE       0     0     0

I don't think this is what you want.  Notice that it is in the process of
replacing c1t2d0 with emcpower0a.  Once the replacing operation is
complete, c1t2d0 will be removed from the configuration.

You've got two options.  Let's say your current mirror is c1t2d0 and
c1t3d0, and you want to replace c1t3d0 with emcpower0a.

Option one: perform a direct replace:

# zpool replace mypool c1t3d0 emcpower0a

Option two: remove c1t3d0 and add in emcpower0a:

# zpool detach mypool c1t3d0
# zpool attach mypool c1t2d0 emcpower0a

Do not mix these two options, as you did in your email: do not perform a
'detach' followed by a 'replace', or you will end up with a config you
were not expecting.


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys

OK, now it seems to be doing what I wanted:
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool detach mypool c1t3d0
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          c1t2d0    ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.00% done, 17h50m to go
config:

        NAME            STATE     READ WRITE CKSUM
        mypool          ONLINE       0     0     0
          replacing     ONLINE       0     0     0
            c1t2d0      ONLINE       0     0     0
            emcpower0a  ONLINE       0     0     0

errors: No known data errors
bash-3.00#


Thank you to everyone who helped me with this...

Chris






On Fri, 1 Jun 2007, Will Murnane wrote:


On 6/1/07, Krzys <[EMAIL PROTECTED]> wrote:

bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mypool                   68G   53.1G   14.9G    78%  ONLINE     -
mypool2                 123M   83.5K    123M     0%  ONLINE     -

Are you sure you've allocated as large a LUN as you thought initially?
Perhaps ZFS is doing something funky with it; does putting UFS on it
show a large filesystem or a small one?

Will




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys
Yeah, it does something funky that I did not expect: zpool seems to be taking
slice 0 of that EMC LUN rather than the whole device...



So when I created that LUN, I formatted the disk and it looked like this:
format> verify

Primary label contents:

Volume name = <>
ascii name  = 
pcyl= 51200
ncyl= 51198
acyl=2
nhead   =  256
nsect   =   16
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 -    63      128.00MB    (64/0/0)       262144
  1       swap    wu      64 -   127      128.00MB    (64/0/0)       262144
  2     backup    wu       0 - 51197      100.00GB    (51198/0/0) 209707008
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm     128 - 51197       99.75GB    (51070/0/0) 209182720
  7 unassigned    wm       0                0         (0/0/0)             0

That is the reason why, when I was trying to replace the other disk, zpool took
slice 0 of that disk, which was 128 MB, and treated it as the pool rather than
taking the whole disk or slice 2 or whatever it does with normal devices... I
have that system connected to an EMC CLARiiON and I am using PowerPath software
from EMC for multipathing... I will try to replace the old internal disk with
this one and see how that works.
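
As a sketch, one quick way to see which slice ZFS actually grabbed is to print
the label of the pseudo-device (the /dev/rdsk path for the PowerPath device is
an assumption here):

# prtvtoc /dev/rdsk/emcpower0a    (slice 0 shows 262144 blocks, i.e. the 128 MB ZFS ended up with)

Giving ZFS a slice that small explains the tiny pool size reported by 'zpool list'.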


thanks so much for help.

Chris


On Fri, 1 Jun 2007, Will Murnane wrote:


On 6/1/07, Krzys <[EMAIL PROTECTED]> wrote:

bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mypool                   68G   53.1G   14.9G    78%  ONLINE     -
mypool2                 123M   83.5K    123M     0%  ONLINE     -

Are you sure you've allocated as large a LUN as you thought initially?
Perhaps ZFS is doing something funky with it; does putting UFS on it
show a large filesystem or a small one?

Will




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys

OK, I think I figured out what the problem is.
What zpool does for that EMC PowerPath device is take partition 0 from the disk
and try to attach it to my pool, so when I added emcpower0a I got the
following:

bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mypool                   68G   53.1G   14.9G    78%  ONLINE     -
mypool2                 123M   83.5K    123M     0%  ONLINE     -

because my emcpower0a structure looked like this:
format> verify

Primary label contents:

Volume name = <>
ascii name  = 
pcyl= 51200
ncyl= 51198
acyl=2
nhead   =  256
nsect   =   16
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 -    63      128.00MB    (64/0/0)       262144
  1       swap    wu      64 -   127      128.00MB    (64/0/0)       262144
  2     backup    wu       0 - 51197      100.00GB    (51198/0/0) 209707008
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm     128 - 51197       99.75GB    (51070/0/0) 209182720
  7 unassigned    wm       0                0         (0/0/0)             0


So what I did was change my layout to look like this:
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 - 51197      100.00GB    (51198/0/0) 209707008
  1       swap    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 51197      100.00GB    (51198/0/0) 209707008
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0


I created a new pool and now have the following:
bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mypool                   68G   53.1G   14.9G    78%  ONLINE     -
mypool2                99.5G     80K   99.5G     0%  ONLINE     -

So now I will try the replace again... I guess zpool does treat some devices
differently, in particular the ones under EMC PowerPath control, where it uses
the first slice of the disk to create the pool rather than the whole device...


Anyway, thanks to everyone for the help; the replace should work now... I am
going to try it.


Chris



On Fri, 1 Jun 2007, Will Murnane wrote:


On 5/31/07, Krzys <[EMAIL PROTECTED]> wrote:

So I run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small

Try "zpool attach mypool emcpower0a"; see
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view .

Will




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Will Murnane

On 6/1/07, Krzys <[EMAIL PROTECTED]> wrote:

bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mypool                   68G   53.1G   14.9G    78%  ONLINE     -
mypool2                 123M   83.5K    123M     0%  ONLINE     -

Are you sure you've allocated as large a LUN as you thought initially?
Perhaps ZFS is doing something funky with it; does putting UFS on it
show a large filesystem or a small one?

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys

Nevertheless, I get the following error:
bash-3.00# zpool attach mypool emcpower0a
missing <new_device> specification
usage:
        attach [-f] <pool> <device> <new_device>

bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool attach mypool c1t2d0 emcpower0a
cannot attach emcpower0a to c1t2d0: device is too small
bash-3.00#

Is there any way to add that EMC SAN LUN to ZFS at all? It seems like emcpower0a
cannot be added in any way...


But check this out: I did try to add it as a new pool, and here is what I got:
bash-3.00# zpool create mypool2 emcpower0a


bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors

  pool: mypool2
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        mypool2       ONLINE       0     0     0
          emcpower0a  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mypool                   68G   53.1G   14.9G    78%  ONLINE     -
mypool2                 123M   83.5K    123M     0%  ONLINE     -
bash-3.00#







On Fri, 1 Jun 2007, Will Murnane wrote:


On 5/31/07, Krzys <[EMAIL PROTECTED]> wrote:

So I run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small

Try "zpool attach mypool emcpower0a"; see
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view .

Will




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-06-01 Thread Krzys
Yes, but my goal is to replace the existing internal 72 GB disk with a SAN
storage disk that is 100 GB in size... As long as I am able to detach the
old one it's going to be great... otherwise I will be stuck with one
internal disk and one SAN disk, which I would rather not have.


Regards,

Chris


On Fri, 1 Jun 2007, Will Murnane wrote:


On 5/31/07, Krzys <[EMAIL PROTECTED]> wrote:

So I run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small

Try "zpool attach mypool emcpower0a"; see
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view .

Will




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-05-31 Thread Will Murnane

On 5/31/07, Krzys <[EMAIL PROTECTED]> wrote:

So I run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small

Try "zpool attach mypool emcpower0a"; see
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qrt?a=view .

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-05-31 Thread Krzys
Hmm, I am having some problems. I followed what you suggested and here is what
I did:


bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00# zpool detach mypool c1t3d0
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          c1t2d0    ONLINE       0     0     0

errors: No known data errors


So now I have only one disk in my pool... The c1t2d0 disk is a 72 GB SAS
drive. I am trying to replace it with a 100 GB SAN LUN (emcpower0a).




bash-3.00# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c1t0d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   1. c1t1d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   2. c1t2d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   3. c1t3d0 
  /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
   4. c2t5006016041E035A4d0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   5. c2t5006016941E035A4d0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED]/SUNW,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   6. c3t5006016841E035A4d0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   7. c3t5006016141E035A4d0 
  /[EMAIL PROTECTED],70/[EMAIL PROTECTED],2/SUNW,[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
   8. emcpower0a 
  /pseudo/[EMAIL PROTECTED]
Specify disk (enter its number): ^D


So I run the replace command and I get an error:
bash-3.00# zpool replace mypool c1t2d0 emcpower0a
cannot replace c1t2d0 with emcpower0a: device is too small

Any idea what I am doing wrong? Why does it think that emcpower0a is too small?

Regards,

Chris




On Thu, 31 May 2007, Richard Elling wrote:


Krzys wrote:
Sorry to bother you, but something is not clear to me regarding this
process. OK, let's say I have two internal disks (73 GB each) and I am
mirroring them... Now I want to replace those two mirrored disks with one LUN
that is on the SAN and is around 100 GB. I do meet the requirement of
having more than 73 GB of storage, but do I need only something like 73 GB at
minimum, or do I actually need two LUNs of 73 GB or more since I have it
mirrored?


You can attach any number of devices to a mirror.

You can detach all but one of the devices from a mirror.  Obviously, when
the number is one, you don't currently have a mirror.

The resulting logical size will be equivalent to the smallest device.

My goal is simply to move the data from two mirrored disks onto one single SAN
device... Any idea if what I am planning to do is doable? Or do I need to
use zfs send and receive, update everything, and just switch when I am
done?


Or do I just add this SAN disk to the existing pool and then remove the mirror
somehow? I would just have to make sure that all data is off that disk...
Is there any option to evacuate data off that mirror?


The ZFS terminology is "attach" and "detach".  A "replace" is an attach
followed by a detach.

It is a good idea to verify that the sync has completed before detaching.
zpool status will show the current status.
-- richard




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-05-31 Thread Richard Elling

Krzys wrote:
Sorry to bother you, but something is not clear to me regarding this
process. OK, let's say I have two internal disks (73 GB each) and I am
mirroring them... Now I want to replace those two mirrored disks with one
LUN that is on the SAN and is around 100 GB. I do meet the requirement
of having more than 73 GB of storage, but do I need only something like
73 GB at minimum, or do I actually need two LUNs of 73 GB or more since I
have it mirrored?


You can attach any number of devices to a mirror.

You can detach all but one of the devices from a mirror.  Obviously, when
the number is one, you don't currently have a mirror.

The resulting logical size will be equivalent to the smallest device.

My goal is simply to move the data from two mirrored disks onto one single SAN
device... Any idea if what I am planning to do is doable? Or do I need
to use zfs send and receive, update everything, and just switch when I
am done?


Or do I just add this SAN disk to the existing pool and then remove the
mirror somehow? I would just have to make sure that all data is off that
disk... Is there any option to evacuate data off that mirror?


The ZFS terminology is "attach" and "detach".  A "replace" is an attach
followed by a detach.

It is a good idea to verify that the sync has completed before detaching.
zpool status will show the current status.
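
For example, a sketch of the attach-first sequence using the names from this
thread (assuming emcpower0a is labeled so that ZFS sees the full 100 GB LUN):

# zpool attach mypool c1t2d0 emcpower0a   (emcpower0a becomes a third side of the mirror)
# zpool status mypool                     (wait until the resilver reports completed)
# zpool detach mypool c1t2d0              (drop the first internal disk)
# zpool detach mypool c1t3d0              (drop the second; the pool now lives only on the SAN LUN)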
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-05-31 Thread Krzys
Sorry to bother you, but something is not clear to me regarding this process.
OK, let's say I have two internal disks (73 GB each) and I am mirroring them...
Now I want to replace those two mirrored disks with one LUN that is on the SAN
and is around 100 GB. I do meet the requirement of having more than 73 GB of
storage, but do I need only something like 73 GB at minimum, or do I actually
need two LUNs of 73 GB or more since I have it mirrored?


My goal is simply to move the data from two mirrored disks onto one single SAN
device... Any idea if what I am planning to do is doable? Or do I need to use
zfs send and receive, update everything, and just switch when I am done?


Or do I just add this SAN disk to the existing pool and then remove the mirror
somehow? I would just have to make sure that all data is off that disk... Is
there any option to evacuate data off that mirror?



Here is exactly what I have:
bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mypool                   68G   52.9G   15.1G    77%  ONLINE     -
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
bash-3.00#


On Tue, 29 May 2007, Cyril Plisko wrote:


On 5/29/07, Krzys <[EMAIL PROTECTED]> wrote:

Hello folks, I have a question. Currently I have a ZFS pool (mirror) on two
internal disks... I wanted to connect that server to a SAN, then add more storage
to this pool (doubling the space) and start using it. Then what I wanted to do is
just take the internal disks out of that pool and use the SAN only. Is there any
way to do that with ZFS pools? Is there any way to move data from those internal
disks to external disks?


You can "zpool replace" your disks with other disks. Provided that you have
same amount of new disks and they are of same or greater size


--
Regards,
  Cyril




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-05-29 Thread Krzys

Perfect, I will try to play with that...

Regards,

Chris


On Tue, 29 May 2007, Cyril Plisko wrote:


On 5/29/07, Krzys <[EMAIL PROTECTED]> wrote:

Hello folks, I have a question. Currently I have a ZFS pool (mirror) on two
internal disks... I wanted to connect that server to a SAN, then add more storage
to this pool (doubling the space) and start using it. Then what I wanted to do is
just take the internal disks out of that pool and use the SAN only. Is there any
way to do that with ZFS pools? Is there any way to move data from those internal
disks to external disks?


You can "zpool replace" your disks with other disks. Provided that you have
same amount of new disks and they are of same or greater size


--
Regards,
  Cyril




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs migration

2007-05-29 Thread Cyril Plisko

On 5/29/07, Krzys <[EMAIL PROTECTED]> wrote:

Hello folks, I have a question. Currently I have a ZFS pool (mirror) on two
internal disks... I wanted to connect that server to a SAN, then add more storage
to this pool (doubling the space) and start using it. Then what I wanted to do is
just take the internal disks out of that pool and use the SAN only. Is there any
way to do that with ZFS pools? Is there any way to move data from those internal
disks to external disks?


You can "zpool replace" your disks with other disks. Provided that you have
same amount of new disks and they are of same or greater size
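
As an illustration only, with a pool called mypool and hypothetical SAN LUN
names c3t0d0 and c3t1d0 standing in for the new devices:

# zpool replace mypool c1t2d0 c3t0d0   (resilvers onto the new LUN, then drops the old disk)
# zpool replace mypool c1t3d0 c3t1d0   (repeat for the second side of the mirror)

Each replace resilvers onto the new device and detaches the old one when it
finishes.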


--
Regards,
   Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs migration

2007-05-29 Thread Krzys
Hello folks, I have a question. Currently I have a ZFS pool (mirror) on two
internal disks... I wanted to connect that server to a SAN, then add more storage
to this pool (doubling the space) and start using it. Then what I wanted to do is
just take the internal disks out of that pool and use the SAN only. Is there any
way to do that with ZFS pools? Is there any way to move data from those internal
disks to external disks?


I mean, there are ways around it. I know I can make a new pool, create a snapshot
on the old one and send it over, then when I am done just bring the zone down, do
an incremental sync, and switch the zone to use the new pool. But I wanted to do
it while I have everything up... so my goal was to add another (SAN) disk to my
existing two-disk mirrored pool, move the data from the internal disks to the SAN
while everything is running, and then just take those internal disks out...
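
For reference, a minimal sketch of that send/receive fallback (pool, dataset,
and snapshot names are hypothetical):

# zfs snapshot mypool/zone1@move1                                     (initial snapshot)
# zfs send mypool/zone1@move1 | zfs receive sanpool/zone1             (bulk copy while everything runs)
# zfs snapshot mypool/zone1@move2                                     (taken after halting the zone)
# zfs send -i move1 mypool/zone1@move2 | zfs receive -F sanpool/zone1 (small incremental catch-up;
#                                                                      -F rolls back stray changes on the target)

After that, the zone can be reconfigured to use sanpool/zone1.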


Any comments or suggestions greatly appreciated.

Regards,

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss