[zfs-discuss] zfs recv hangs machine

2009-06-07 Thread Leonid Zamdborg
I'm running OpenSolaris 2009.06, and when I attempt to restore a ZFS snapshot, 
the machine hangs in an odd fashion.  

I create a backup of fs1 (roughly 15GB):
zfs send -R tank/fs1@1 | gzip > /backups/test_1.gz

I create a new zpool to accept the backup:
zpool create testpool testdev1

Then I attempt to restore the backup to the new pool:
gzcat /backups/test_1.gz | zfs recv -d testpool

Somewhere around the 6GB mark, the xterm running the restore hangs, and then the 
whole console freezes.  If I try to SSH in, I get a username and password prompt, 
but the session hangs after I enter the password.  The machine still responds to 
ping, but none of the CIFS shares are accessible while it's in this state.  A 
power-cycle resolves it, and upon logging back in, the backup has obviously not 
been restored.

What on earth is going on?
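(A sanity check worth running before digging deeper: `gzip -t` verifies the 
stream's CRC, which catches a truncated or corrupted backup file before you sit 
through another long recv.  Sketched below with stand-in data; for the real file 
that would be `gzip -t /backups/test_1.gz`.)

```shell
# Stand-in data; substitute /backups/test_1.gz for the real check.
f=$(mktemp)
dd if=/dev/urandom bs=1k count=64 2>/dev/null | gzip > "$f"
if gzip -t "$f"; then echo "stream OK"; else echo "stream corrupt"; fi
rm -f "$f"
```

If the stream itself checks out, that points the finger at the recv side rather 
than the backup file.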
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] LUN expansion

2009-06-07 Thread Leonid Zamdborg
Out of curiosity, would destroying the zpool and then importing the destroyed 
pool have the effect of recognizing the size change?  Or does 'destroying' a 
pool simply label a pool as 'destroyed' and make no other changes...


Re: [zfs-discuss] LUN expansion

2009-06-04 Thread Leonid Zamdborg
> The problem you're facing is that the partition table needs to be
> expanded to use the newly created space. This all happens automatically
> with my code changes, but if you want to do this you'll have to change
> the partition table and export/import the pool.

George,

Is there a reasonably straightforward way of doing this partition table edit 
with existing tools that won't clobber my data?  I'm very new to ZFS, and don't 
want to start experimenting on a live machine.


Re: [zfs-discuss] LUN expansion

2009-06-03 Thread Leonid Zamdborg
I'm running 2008.11.


[zfs-discuss] LUN expansion

2009-06-03 Thread Leonid Zamdborg
Hi,

I have a problem with expanding a zpool to reflect a change in the underlying 
hardware LUN.  I've created a zpool on top of a 3Ware hardware RAID volume, 
with a capacity of 2.7TB.  I've since added disks to the hardware volume, 
expanding the capacity of the volume to 10TB.  This change in capacity shows up 
in format:

0. c0t0d0 
/p...@0,0/pci10de,3...@e/pci13c1,1...@0/s...@0,0

When I do a prtvtoc /dev/dsk/c0t0d0, I get:

* /dev/dsk/c0t0d0 partition map
*
* Dimensions:
* 512 bytes/sector
* 21484142592 sectors
* 5859311549 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*            First      Sector        Last
*           Sector       Count      Sector
*               34         222         255
*
*                                   First      Sector        Last
* Partition  Tag  Flags            Sector       Count      Sector  Mount Directory
        0     4    00                 256  5859294943  5859295198
        8    11    00          5859295199       16384  5859311582

The new capacity, unfortunately, shows up as inaccessible.  I've tried 
exporting and importing the zpool, but the capacity is still not recognized.  I 
kept seeing things online about "Dynamic LUN Expansion", but how do I do this?
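(The prtvtoc numbers above tell the story directly.  A quick back-of-envelope 
conversion, assuming the 512-byte sectors the header reports:)

```shell
total=21484142592        # "sectors": the full device as the controller reports it
accessible=5859311549    # "accessible sectors": what the current label exposes
awk -v t="$total" -v a="$accessible" 'BEGIN {
    printf "device size: %.1f TiB\n", t * 512 / 2^40
    printf "label sees:  %.1f TiB\n", a * 512 / 2^40
}'
```

That works out to roughly 10.0 TiB on the device but only about 2.7 TiB behind 
the label -- exactly the original pool size, confirming it's the partition table 
rather than the pool that hasn't grown.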