Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-14 Thread sol
Here it is:

# pstack core.format1
core 'core.format1' of 3351:    format
-  lwp# 1 / thread# 1  
 0806de73 can_efi_disk_be_expanded (0, 1, 0, ) + 7
 08066a0e init_globals (8778708, 0, f416c338, 8068a38) + 4c2
 08068a41 c_disk   (4, 806f250, 0, 0, 0, 0) + 48d
 0806626b main     (1, f416c3b0, f416c3b8, f416c36c) + 18b
 0805803d _start   (1, f416c47c, 0, f416c483, f416c48a, f416c497) + 7d
-  lwp# 2 / thread# 2  
 eed690b1 __door_return (0, 0, 0, 0) + 21
 eed50668 door_create_func (0, eee02000, eea1efe8, eed643e9) + 32
 eed6443c _thrp_setup (ee910240) + 9d
 eed646e0 _lwp_start (ee910240, 0, 0, 0, 0, 0)
-  lwp# 3 / thread# 3  
 eed6471b __lwp_park (8780880, 8780890) + b
 eed5e0d3 cond_wait_queue (8780880, 8780890, 0, eed5e5f0) + 63
 eed5e668 __cond_wait (8780880, 8780890, ee90ef88, eed5e6b1) + 89
 eed5e6bf cond_wait (8780880, 8780890, 208, eea740ad) + 27
 eea740f8 subscriber_event_handler (8778dd0, eee02000, ee90efe8, eed643e9) + 5c
 eed6443c _thrp_setup (ee910a40) + 9d
 eed646e0 _lwp_start (ee910a40, 0, 0, 0, 0, 0)





 From: John D Groenveld jdg...@elvis.arl.psu.edu
# pstack core
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatibility

2012-12-14 Thread Tomas Forsman
On 13 December, 2012 - Jan Owoc sent me these 1,0K bytes:

 Hi,
 
 On Thu, Dec 13, 2012 at 9:14 AM, sol a...@yahoo.com wrote:
  Hi
 
  I've just tried to use illumos (151a5) and import a pool created on solaris
  (11.1) but it failed with an error about the pool being incompatible.
 
  Are we now at the stage where the two prongs of the zfs fork are pointing in
  incompatible directions?
 
 Yes, that is correct. The last version of Solaris with source code
 used zpool version 28. This is the last version that is readable by
 non-Solaris operating systems: FreeBSD, GNU/Linux, but also
 OpenIndiana. The filesystem, zfs, is technically at the same
 version, but you can't access it if you can't access the pool :-).

zfs version is bumped to 6 too in s11.1:
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and SMB credentials support
 4   userquota, groupquota properties
 5   System attributes
 6   Multilevel file system support

Pool version is upped as well:
 29  RAID-Z/mirror hybrid allocator
 30  Encryption
 31  Improved 'zfs list' performance
 32  One MB blocksize
 33  Improved share support
 34  Sharing with inheritance
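
(For anyone checking their own boxes: the listings above are what the stock
version queries print; 'tank' and 'tank/fs' below are placeholder names.)

# zfs upgrade -v        (supported filesystem versions)
# zpool upgrade -v      (supported pool versions)
# zpool get version tank
# zfs get version tank/fs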

 If you want to access the data now, your only option is to use Solaris
 to read it, and copy it over (eg. with zfs send | recv) onto a pool
 created with version 28.
 
 Jan
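
(A hedged sketch of that send/receive route on the Solaris 11.1 side -- the
disk, pool and dataset names below are placeholders. One caveat, per Bob
Netherton later in this thread: a received filesystem keeps its original zfs
version, so this only helps if the source datasets are still at filesystem
version 5 or lower; otherwise recreate the datasets at the old version and
copy the data with rsync instead.)

# zpool create -o version=28 pool28 c0t9d0
# zfs snapshot -r tank/data@move
# zfs send -R tank/data@move | zfs recv -vFd pool28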


/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Fred Liu
 
 BTW, anyone played NDMP in solaris? Or is it feasible to transfer snapshot via
 NDMP protocol?

I've heard you could, but I've never done it.  Sorry I'm not much help, except
as a cheerleader.  You can do it!  I think you can!  Don't give up! heheheheh
Please post back whatever you find, or if you have to figure it out for 
yourself, then blog about it and post that.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of sol
 
 I added a 3TB Seagate disk (ST3000DM001) and ran the 'format' command but
 it crashed and dumped core.
 
 However the zpool 'create' command managed to create a pool on the whole
 disk (2.68 TB space).
 
 I hope that's only a problem with the format command and not with zfs or
 any other part of the kernel.

Suspicion and conjecture only:  I think format uses an fdisk label, which has a
2T limit.

Normally it's advised to use the whole disk directly via zpool anyway, so 
hopefully that's a good solution for you.
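
(If it helps, a minimal sketch of the whole-disk route -- the device name is a
placeholder for whatever format or cfgadm reports. Given a bare disk, zpool
writes an EFI label itself, which should sidestep any 2T fdisk limitation.)

# zpool create tank c0t1d0
# zpool list tank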
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatibility

2012-12-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Bob Netherton
 
 At this point, the only thing would be to use 11.1 to create a new pool at 
 151's
 version (-o version=) and top level dataset (-O version=).   Recreate the file
 system hierarchy and do something like an rsync.  I don't think there is
 anything more elegant, I'm afraid.
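
(For concreteness, a hedged sketch of what Bob describes -- the disk, pool and
path names are placeholders; zpool version 28 / zfs version 5 are the versions
the 151a-era releases understand, and whether child datasets also need an
explicit -o version= is an assumption worth verifying.)

# zpool create -o version=28 -O version=5 pool151 c0t2d0
# zfs create -o version=5 pool151/data
# rsync -a /tank/data/ /pool151/data/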

Is that right?  You can't use zfs send | zfs receive to send from a newer 
version and receive on an older version?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] The format command crashes on 3TB disk but zpool create ok

2012-12-14 Thread Cindy Swearingen

Hey Sol,

Can you send me the core file, please?

I would like to file a bug for this problem.

Thanks, Cindy

On 12/14/12 02:21, sol wrote:

Here it is:

# pstack core.format1
core 'core.format1' of 3351: format
- lwp# 1 / thread# 1 
0806de73 can_efi_disk_be_expanded (0, 1, 0, ) + 7
08066a0e init_globals (8778708, 0, f416c338, 8068a38) + 4c2
08068a41 c_disk (4, 806f250, 0, 0, 0, 0) + 48d
0806626b main (1, f416c3b0, f416c3b8, f416c36c) + 18b
0805803d _start (1, f416c47c, 0, f416c483, f416c48a, f416c497) + 7d
- lwp# 2 / thread# 2 
eed690b1 __door_return (0, 0, 0, 0) + 21
eed50668 door_create_func (0, eee02000, eea1efe8, eed643e9) + 32
eed6443c _thrp_setup (ee910240) + 9d
eed646e0 _lwp_start (ee910240, 0, 0, 0, 0, 0)
- lwp# 3 / thread# 3 
eed6471b __lwp_park (8780880, 8780890) + b
eed5e0d3 cond_wait_queue (8780880, 8780890, 0, eed5e5f0) + 63
eed5e668 __cond_wait (8780880, 8780890, ee90ef88, eed5e6b1) + 89
eed5e6bf cond_wait (8780880, 8780890, 208, eea740ad) + 27
eea740f8 subscriber_event_handler (8778dd0, eee02000, ee90efe8, eed643e9) + 5c
eed6443c _thrp_setup (ee910a40) + 9d
eed646e0 _lwp_start (ee910a40, 0, 0, 0, 0, 0)



*From:* John D Groenveld jdg...@elvis.arl.psu.edu
# pstack core




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-14 Thread Palmer, Trey
We have found mbuffer to be the fastest solution.   Our rates for large 
transfers on 10GbE are:

280MB/s   mbuffer
220MB/s   rsh
180MB/s   HPN-ssh unencrypted
 60MB/s   standard ssh

The tradeoff is that mbuffer is a little more complicated to script;  rsh is, well,
you know;  and hpn-ssh requires rebuilding ssh and (probably) maintaining a
second copy of it.

 -- Trey Palmer

From: zfs-discuss-boun...@opensolaris.org [zfs-discuss-boun...@opensolaris.org] 
on behalf of Fred Liu [fred_...@issi.com]
Sent: Thursday, December 13, 2012 11:23 PM
To: Freddie Cash
Cc: zfs-discuss
Subject: Re: [zfs-discuss] any more efficient way to transfer snapshot between 
two hosts than ssh tunnel?

Add the HPN patches to OpenSSH and enable the NONE cipher.  We can saturate a
gigabit link (980 Mbps) between two FreeBSD hosts using that.
Without it, we were only able to hit ~480 Mbps on a good day.
If you want 0 overhead, there's always netcat. :)

980 Mbps is awesome! I am thinking of running two ssh services -- one normal and
one with HPN patches only for the backup job.
But I'm not sure they can work before I try them. I will also try netcat.

Many thanks.

Fred
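
(For reference, a minimal sketch of the zero-overhead netcat option mentioned
above -- the port, host and snapshot names are placeholders, and netcat flag
syntax varies between builds; some want 'nc -l -p 9090'. Receiver first, then
sender:)

# nc -l 9090 | zfs receive tank/backup
# zfs send tank/data@snap1 | nc recvhost 9090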
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-14 Thread Fred Liu

 
 I've heard you could, but I've never done it.  Sorry I'm not much help,
 except as a cheer leader.  You can do it!  I think you can!  Don't give
 up! heheheheh
 Please post back whatever you find, or if you have to figure it out for
 yourself, then blog about it and post that.


Aha! Gotcha! I will give it a try.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-14 Thread Fred Liu
Post in the list.

 -Original Message-
 From: Fred Liu
 Sent: Friday, December 14, 2012 23:41
 To: 'real-men-dont-cl...@gmx.net'
 Subject: RE: [zfs-discuss] any more efficient way to transfer snapshot
 between two hosts than ssh tunnel?
 
 
 
 
  Hi Fred,
 
  I played with zfs send/receive some time ago. One important thing I
  learned was that netcat is not the first choice to use.
  There is a tool called mbuffer out there. mbuffer works similarly to
  netcat but allows a specific buffer size and block size.
  From various resources I found out that the best buffer and block sizes
  for zfs send/receive seem to be 1GB for the buffer with a block size of
  131072.
  Replacing netcat with mbuffer dramatically increases the throughput.
 
 
  The resulting commands are like:
 
  ssh -f $REMOTESRV /opt/csw/bin/mbuffer -q -I $PORT -m 1G -s 131072 | zfs receive -vFd $REMOTEPOOL

  zfs send $CURRENTLOCAL | /opt/csw/bin/mbuffer -q -O $REMOTESRV:$PORT -m 1G -s 131072 > /dev/null
 
 
  cu
 
 
 Carsten,
 
 Thank you so much for the sharing and I will try it.
 
 Fred
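
(If snapshots get shipped regularly, an incremental send through the same
mbuffer pipe keeps each transfer small; a minimal sketch, assuming @snap1 has
already been received on the remote side and the snapshot names are
placeholders. Receiver first, then sender -- only the blocks changed between
snap1 and snap2 cross the wire:)

# /opt/csw/bin/mbuffer -q -I $PORT -m 1G -s 131072 | zfs receive -vFd $REMOTEPOOL
# zfs send -i tank/data@snap1 tank/data@snap2 | /opt/csw/bin/mbuffer -q -O $REMOTESRV:$PORT -m 1G -s 131072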

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-14 Thread Fred Liu

 
 We have found mbuffer to be the fastest solution.   Our rates for large
 transfers on 10GbE are:
 
 280MB/s   mbuffer
 220MB/s   rsh
 180MB/s   HPN-ssh unencrypted
  60MB/s   standard ssh
 
 The tradeoff is that mbuffer is a little more complicated to script;  rsh is,
 well, you know;  and hpn-ssh requires rebuilding ssh and (probably)
 maintaining a second copy of it.
 
  -- Trey Palmer
 

In a 10GbE env, even 280MB/s is not a very impressive result. Maybe an
alternative could be a two-step approach: putting the snapshot streams on
NFS/iSCSI storage and receiving them locally.
But that is not perfect.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] any more efficient way to transfer snapshot between two hosts than ssh tunnel?

2012-12-14 Thread Eric D. Mudama

On Fri, Dec 14 at  9:29, Fred Liu wrote:




We have found mbuffer to be the fastest solution.   Our rates for large
transfers on 10GbE are:

280MB/s   mbuffer
220MB/s   rsh
180MB/s   HPN-ssh unencrypted
 60MB/s   standard ssh

The tradeoff is that mbuffer is a little more complicated to script; rsh is,
well, you know; and hpn-ssh requires rebuilding ssh and (probably)
maintaining a second copy of it.

 -- Trey Palmer



In a 10GbE env, even 280MB/s is not a very impressive result. Maybe an
alternative could be a two-step approach: putting the snapshot streams on
NFS/iSCSI storage and receiving them locally.
But that is not perfect.


Even with infinite wire speed, you're bound by the ability of the
source server to generate the snapshot stream and the ability of the
destination server to write the snapshots to the media.

Our little servers in-house using ZFS don't read/write that fast when
pulling snapshot contents off the disks, since they're essentially
random access on a server that's been creating/deleting snapshots for
a long time.
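
(One hedged way to tell whether the source or the wire is the limit: time a
send that gets thrown away locally, with no network involved -- mbuffer's
status output shows the read rate. The snapshot name is a placeholder.)

# zfs send tank/data@snap1 | /opt/csw/bin/mbuffer -m 1G -s 131072 > /dev/null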

--eric


--
Eric D. Mudama
edmud...@bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] S11 vs illumos zfs compatibility

2012-12-14 Thread bob netherton
On 12/14/12 10:07 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) 
wrote:

Is that right?  You can't use zfs send | zfs receive to send from a newer 
version and receive on an older version?



No.  You can, with recv, override any property in the sending stream that can be
set from the command line (i.e., a writable one).  Version is not one of those
properties.  It only gets changed, in an upward direction, when you do a zfs
upgrade.


ie:

#  zfs get version repo/support
NAME  PROPERTY  VALUESOURCE
repo/support  version   5-


# zfs send repo/support@cpu-0412 | zfs recv -o version=4 repo/test
cannot receive: cannot override received version



You can send a version 6 file system into a version 28 pool, but it will still 
be a version 6 file system.



Bob



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2012-12-14 Thread Jamie Krier
I have removed all L2arc devices as a precaution.  Has anyone seen this
error with no L2arc device configured?


On Thu, Dec 13, 2012 at 9:03 AM, Bob Friesenhahn 
bfrie...@simple.dallas.tx.us wrote:

 On Wed, 12 Dec 2012, Jamie Krier wrote:



 I am thinking about switching to an Illumos distro, but wondering if this
 problem may be present there
 as well.


 I believe that Illumos was forked before this new virtual memory sub-system
 was added to Solaris.  There have not been such reports on Illumos or
 OpenIndiana mailing lists and I don't recall seeing this issue in the bug
 trackers.

 Illumos is not so good at dealing with huge memory systems, but perhaps it
 is more stable as well.

 Bob
 --
 Bob Friesenhahn
 bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
 GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss