Re: [zfs-discuss] LUN expansion choices

2012-11-14 Thread Peter Tribble
On Tue, Nov 13, 2012 at 6:16 PM, Karl Wagner k...@mouse-hole.com wrote:
 On 2012-11-13 17:42, Peter Tribble wrote:

  Given storage provisioned off a SAN (I know, but sometimes that's
   what you have to work with), what's the best way to expand a pool?
 
  Specifically, I can either grow existing LUNs, or add new LUNs.
 
  As an example, if I have 24x 2TB LUNs, and wish to double the
  size of the pool, is it better to add 24 additional 2TB LUNs, or
  get each of the existing LUNs expanded to 4TB each?

 This is only my opinion, but I would say you'd be better off expanding your
 current LUNs.

 The reason for this is balance. Currently, your data should be spread fairly
 evenly over the LUNs. If you add more, those will be empty, which will
 affect how data is written (data will try to go to those first).

 If you just expand your current LUNs, the data will remain balanced, and ZFS
 will just use the additional space.

Maybe, or maybe not. If you think in terms of metaslabs, then there
isn't any difference between creating extra metaslabs by growing an
existing LUN and creating new LUNs. With pooled storage on the
SAN back-end, there's no difference in I/O placement either.
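
For what it's worth, the mechanics are simple either way once the SAN
side is done. A sketch, with made-up pool and device names:

  # grow in place: after the SAN resizes the LUNs
  zpool set autoexpand=on tank
  zpool online -e tank c0t0d0    # or per-device, if autoexpand was off

  # or add the new LUNs as extra top-level vdevs
  zpool add tank c0t24d0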

Peripherally, this note by Adam Leventhal may be of interest:

http://dtrace.org/blogs/ahl/2012/11/08/zfs-trivia-metaslabs/
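
If you want to look at the metaslab layout on your own pool, zdb will
dump it (the pool name here is a placeholder):

  zdb -m tank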

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Karl Wagner
 

On 2012-11-13 17:42, Peter Tribble wrote:

 Given storage provisioned off a SAN (I know, but sometimes that's
 what you have to work with), what's the best way to expand a pool?

 Specifically, I can either grow existing LUNs, or add new LUNs.

 As an example, if I have 24x 2TB LUNs, and wish to double the
 size of the pool, is it better to add 24 additional 2TB LUNs, or
 get each of the existing LUNs expanded to 4TB each?

This is only my opinion, but I would say you'd be better off expanding
your current LUNs.

The reason for this is balance. Currently, your data should be spread
fairly evenly over the LUNs. If you add more, those will be empty, which
will affect how data is written (data will try to go to those first).

If you just expand your current LUNs, the data will remain balanced, and
ZFS will just use the additional space.

I _think_ that's how it would work. Others here will be able to give a
more definitive answer.


Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Brian Wilson
Not sure if this will make it to the list, but I'll try...

On 11/13/12, Peter Tribble wrote:
 Given storage provisioned off a SAN (I know, but sometimes that's
 what you have to work with), what's the best way to expand a pool?
 
 Specifically, I can either grow existing LUNs, or add new LUNs.
 
 As an example, if I have 24x 2TB LUNs, and wish to double the
 size of the pool, is it better to add 24 additional 2TB LUNs, or
 get each of the existing LUNs expanded to 4TB each?

The thing I've found about growing LUNs is to evaluate whether it saves
you time and effort in your setup to make it work. So, first I figure
out if it's possible. Then I figure out how big a PITA it is compared to
the PITA of allocating new LUNs. For example, I've got one SAN backend
here where it's easy as pie (lun resize blah and then OS steps) and
another backend where it requires a major undertaking (create temporary
horcm files, destroy mirrors, run special command to resize mirror,
wait, run special command to resize source, wait, recreate mirrors,
delete temporary horcms, etc., and *then* the OS steps).

What I love about ZFS is that it can handle either approach.

So it depends on your setup. In your case, if it's at all painful to
grow the LUNs, what I'd probably do is allocate new 4TB LUNs and replace
your 2TB LUNs with them one at a time with zpool replace, waiting for
the resilver to finish each time. With autoexpansion on, you should get
the additional capacity as soon as the resilver for each one is done,
and each old 2TB LUN should be reclaimable as soon as it's resilvered
out.
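
Roughly, per LUN, something like this (all names are placeholders):

  zpool set autoexpand=on tank       # once, up front
  zpool replace tank c0t0d0 c5t0d0   # old 2TB LUN -> new 4TB LUN
  zpool status tank                  # wait for the resilver before the next one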

That said, I'm not aware of any deeper implications of doing that -
balancing the data out, as Karl mentioned, being one example.

Cheers,
Brian

 

-- 
Brian Wilson, Solaris SE, UW-Madison DoIT
Room 3114 CSS 608-263-8047
brian.wilson(a)doit.wisc.edu
'I try to save a life a day. Usually it's my own.' - John Crichton


Re: [zfs-discuss] LUN expansion choices

2012-11-13 Thread Fajar A. Nugraha
On Wed, Nov 14, 2012 at 1:35 AM, Brian Wilson
brian.wil...@doit.wisc.edu wrote:
 So it depends on your setup. In your case if it's at all painful to grow the 
 LUNs, what I'd probably do is allocate new 4TB LUNs - and replace your 2TB 
 LUNs with them one at a time with zpool replace, and wait for the resilver to 
 finish each time. With autoexpansion on,

Yup, that's the gotcha. AFAIK autoexpand is off by default. You should
be able to use zpool online -e to force the expansion though.
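
i.e. something like this, with made-up pool/device names:

  zpool get autoexpand tank     # shows off unless you've set it
  zpool online -e tank c0t0d0   # expand this device immediately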

 you should get the additional capacity as soon as the resilver for each one 
 is done, and the old 2TB LUNs should be reclaimable as soon as it's 
 resilvered out.

Minor correction: the additional capacity is only usable after a
top-level vdev is completely replaced. In the case of a stripe of
mirrors, that's as soon as all the devices in one mirror have been
replaced; in the case of raidzX, it's when all the devices in the raidz
vdev have been replaced.
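
For example, with a mirror pair the extra space only appears once both
sides have been swapped (names made up):

  zpool replace tank c0t0d0 c5t0d0   # first side: no new space yet
  zpool replace tank c0t1d0 c5t1d0   # second side: mirror-0 now grows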

-- 
Fajar


Re: [zfs-discuss] LUN expansion

2009-06-11 Thread James Hess
 What you could do is to write a program which calls 
 efi_use_whole_disk(3EXT) to re-write the label for you. Once you have a 
 new label you will be able to export/import the pool
Awesome..

Worked for me, anyway. .c file attached, although I did a zpool export
before opening the device and calling that function.

I'm generally not one to mess with labels on a live filesystem..
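
For anyone repeating this, the sequence was roughly as follows (device
and pool names are placeholders, and "uwd" is just what I called the
compiled program, built with something like cc -o uwd uwd.c -lefi):

  zpool export tank
  ./uwd /dev/rdsk/c0t0d0    # calls efi_use_whole_disk() on the raw device
  zpool import tank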

uwd.c
Description: Binary data


Re: [zfs-discuss] LUN expansion

2009-06-08 Thread George Wilson

Leonid Zamdborg wrote:

George,

Is there a reasonably straightforward way of doing this partition table edit 
with existing tools that won't clobber my data?  I'm very new to ZFS, and 
didn't want to start experimenting with a live machine.
  

Leonid,

What you could do is to write a program which calls 
efi_use_whole_disk(3EXT) to re-write the label for you. Once you have a 
new label you will be able to export/import the pool and it will pick up 
the new size.


BTW, the LUN expansion project was just integrated today.

Thanks,
George


Re: [zfs-discuss] LUN expansion

2009-06-08 Thread A Darren Dunham
On Sun, Jun 07, 2009 at 10:38:29AM -0700, Leonid Zamdborg wrote:
 Out of curiosity, would destroying the zpool and then importing the
 destroyed pool have the effect of recognizing the size change?  Or
 does 'destroying' a pool simply label a pool as 'destroyed' and make
 no other changes...

It would be unnecessary.  ZFS can handle size increases just fine
without any more than an export/import in most cases.

The problem is that the OS doesn't always make it so simple.  The label
on the disk needs to be changed to reflect the correct size of the LUN,
then any slice used on the disk needs to be changed to see the
increase.  

Destroying the zpool doesn't get the label rewritten.

You can destroy the label today, create a new label, then make slice 0
start at the same location, but encompass the entire disk.  When done,
ZFS should import and see the new space.
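
A rough sketch of that, assuming an EFI-labeled whole disk and with
names as placeholders (check the slice 0 start sector against your own
prtvtoc output first):

  zpool export tank
  format -e c0t0d0    # expert mode: relabel, then recreate slice 0 at the
                      # same starting sector but spanning the whole disk
  zpool import tank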

-- 
Darren


Re: [zfs-discuss] LUN expansion

2009-06-04 Thread George Wilson

Leonid,

I will be integrating this functionality within the next week:

PSARC 2008/353 zpool autoexpand property
6475340 when lun expands, zfs should expand too

Unfortunately, they won't help you until they get pushed to OpenSolaris. 
The problem you're facing is that the partition table needs to be 
expanded to use the newly created space. This all happens automatically 
with my code changes, but if you want to do this now you'll have to 
change the partition table and export/import the pool.


Your other option is to wait till these bits show up in OpenSolaris.

Thanks,
George

Leonid Zamdborg wrote:

Hi,

I have a problem with expanding a zpool to reflect a change in the underlying 
hardware LUN.  I've created a zpool on top of a 3Ware hardware RAID volume, 
with a capacity of 2.7TB.  I've since added disks to the hardware volume, 
expanding the capacity of the volume to 10TB.  This change in capacity shows up 
in format:

0. c0t0d0 <AMCC-9650SE-16M DISK-4.06-10.00TB>
/p...@0,0/pci10de,3...@e/pci13c1,1...@0/s...@0,0

When I do a prtvtoc /dev/dsk/c0t0d0, I get:

* /dev/dsk/c0t0d0 partition map
*
* Dimensions:
* 512 bytes/sector
* 21484142592 sectors
* 5859311549 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector    Count     Sector
*           34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector    Count     Sector   Mount Directory
       0      4    00         256  5859294943  5859295198
       8     11    00  5859295199       16384  5859311582

The new capacity, unfortunately, shows up as inaccessible.  I've tried exporting and 
importing the zpool, but the capacity is still not recognized.  I kept seeing things 
online about Dynamic LUN Expansion, but how do I do this?




Re: [zfs-discuss] LUN expansion

2009-06-04 Thread Leonid Zamdborg
 The problem you're facing is that the partition table needs to be
 expanded to use the newly created space. This all happens automatically
 with my code changes but if you want to do this you'll have to change
 the partition table and export/import the pool.

George,

Is there a reasonably straightforward way of doing this partition table edit 
with existing tools that won't clobber my data?  I'm very new to ZFS, and 
didn't want to start experimenting with a live machine.