Re: [zfs-discuss] zfs send & tape autoloaders?

2011-02-15 Thread David Strom

Up to the moderator whether this will add anything:

I dedicated the 2nd NICs on 2 V440s to transport the 9.5TB ZFS filesystem between 
SANs, configured a private subnet, & allowed rsh on the receiving V440.


command:  zfs send | (rsh receiving-host zfs receive ...)
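
Spelled out, the pipeline looked roughly like the following (the dataset,
snapshot and host names here are placeholders, not the ones actually used):

  zfs snapshot tank/data@migrate
  zfs send tank/data@migrate | rsh recv-priv "zfs receive -F newtank/data"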

It took a whole week (7 days) and brought the receiving host's networking 
down to the point of being unusable.  I could not ssh in via the first NIC, as 
the host would not respond before timing out.  Some Oracle db connections 
stayed up, but were horribly slow.  This was Gigabit Ethernet on a nice fast 
Cisco 4006 switch, Solaris 10 update 5 sending to Solaris 10 update 3, both V440s.


I'm not going to do this ever again, I hope, so I'm not concerned with 
the why or how, but it was pretty bad.  Seems like a zfsdump would be a 
good thing.


--
David Strom


On 1/13/2011 11:46 AM, David Magda wrote:

On Thu, January 13, 2011 09:00, David Strom wrote:

Moving to a new SAN; the old and new LUNs will not be accessible at the same time.

Thanks for the several replies I've received; it sounds like the dd-to-tape
mechanism is broken for zfs send, unless someone knows otherwise or has
some trick?

I'm just going to try a tar to tape (maybe using dd) then, as I
don't have any extended attributes/ACLs.  I would appreciate any
suggestions for block sizes for an LTO5 tape drive writing to LTO4 tapes
(what I have).

I might send it across the (Gigabit Ethernet) network to a server that's
already on the new SAN, but I was trying to avoid bogging down the
network or the other server's NIC.

I've seen examples online for sending via the network; it involves piping zfs
send over ssh to zfs receive, right?  Could I maybe use rsh, if I enable
it temporarily between the two hosts?


If you don't already have a backup infrastructure (remember: RAID !=
backup), this may be a good opportunity. Something like Amanda or Bacula
is gratis, and it could be useful for other circumstances.

If this is a one-off it may not be worth it, but having important data
without having (offline) backups is usually tempting fate.

If you're just going to go to tape, then suntar/gnutar/star can write
directly to it (or via rmt over the network), and there's no sense
necessarily going through dd; 'tar' is short for TApe aRchiver after all.

(However, this is getting a bit OT for ZFS and heading towards general
sysadmin territory.)


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-02-15 Thread Ian Collins

 On 02/16/11 09:50 AM, David Strom wrote:

Up to the moderator whether this will add anything:

I dedicated the 2nd NICs on 2 V440s to transport the 9.5TB ZFS filesystem 
between SANs, configured a private subnet, & allowed rsh on the receiving V440.


command:  zfs send | (rsh receiving-host zfs receive ...)

It took a whole week (7 days) and brought the receiving host's networking 
down to the point of being unusable.  I could not ssh in via the first NIC, 
as the host would not respond before timing out.  Some Oracle db connections 
stayed up, but were horribly slow.  This was Gigabit Ethernet on a nice fast 
Cisco 4006 switch, Solaris 10 update 5 sending to Solaris 10 update 3, both V440s.


You were lucky to get away with that; the sending filesystems' versions 
must have been old enough to be received by U3.
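
For anyone who needs to check that ahead of time, something along these 
lines should show what each host supports (on older updates only the zpool 
command may exist):

  zpool upgrade -v     # pool versions this host supports
  zfs upgrade -v       # filesystem/stream versions this host supports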


ZFS has come a long way since those releases; we had a lot of lock-up 
problems with update 6, but none now.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-14 Thread Brandon High
On Thu, Jan 13, 2011 at 8:09 AM, Stephan Budach stephan.bud...@jvm.de wrote:
 Actually mbuffer does a great job for that, too. Whenever I am using mbuffer
 I am achieving much higher throughput than using ssh.

Agreed, mbuffer seems to be required to get decent throughput. Using
it on both ends of an SSH pipe (or at least at the sending side) helps
smooth out performance more than you'd expect.
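
A rough sketch of that arrangement; the host, dataset and buffer sizes below
are purely placeholders:

  zfs send tank/data@snap | mbuffer -q -s 128k -m 1G | \
    ssh recv-host 'mbuffer -q -s 128k -m 1G | zfs receive -F newtank/data'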

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-13 Thread Edward Ned Harvey
 From: Richard Elling [mailto:richard.ell...@gmail.com]
 
  This means the current probability of any sha256 collision in all of the
  data in the whole world, using a ridiculously small block size, assuming all
 
 ... it doesn't matter. Other posters have found collisions and a collision
 without verify means silent data corruption.  Do yourself a favor, enable
 verify.
  -- richard

Somebody has found sha256 collisions?  Perhaps it should be published to let
the world know.

http://en.wikipedia.org/wiki/SHA-2 says none have ever been found for either
SHA-1 (160-bit) or SHA-2 (256- or 512-bit).

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-13 Thread David Strom

Moving to a new SAN; the old and new LUNs will not be accessible at the same time.

Thanks for the several replies I've received; it sounds like the dd-to-tape 
mechanism is broken for zfs send, unless someone knows otherwise or has 
some trick?


I'm just going to try a tar to tape (maybe using dd) then, as I 
don't have any extended attributes/ACLs.  I would appreciate any 
suggestions for block sizes for an LTO5 tape drive writing to LTO4 tapes 
(what I have).


I might send it across the (Gigabit Ethernet) network to a server that's 
already on the new SAN, but I was trying to avoid bogging down the 
network or the other server's NIC.


I've seen examples online for sending via the network; it involves piping zfs 
send over ssh to zfs receive, right?  Could I maybe use rsh, if I enable 
it temporarily between the two hosts?


Thanks again, all.

--
David Strom

On 1/11/2011 11:43 PM, Ian Collins wrote:

On 01/12/11 04:15 AM, David Strom wrote:

I've used several tape autoloaders during my professional life. I
recall that we can use ufsdump or tar or dd with at least some
autoloaders where the autoloader can be set to automatically eject a
tape when it's full & load the next one. Has always worked OK whenever
I tried it.

I'm planning to try this with a new Quantum Superloader 3 with LTO5
tape drives and zfs send. I need to migrate a Solaris 10 host on a
V440 to a new SAN. There is a 10 TB zfs pool & filesystem that is
comprised of 3 LUNs of different sizes put in the zfs pool, and it's
almost full. Rather than copying the various sized LUNs from the old
SAN storage unit to the new one & getting ZFS to recognize the pool, I
thought it would be cleaner to dump the zfs filesystem to the tape
autoloader & restore it to a 10TB LUN. The users can live without this
zfs filesystem for a few days.



Why can't you just send directly to the new LUN? Create a new pool, send
the data, export the old pool and rename.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-13 Thread Stephan Budach

Am 13.01.11 15:00, schrieb David Strom:

Moving to a new SAN; the old and new LUNs will not be accessible at the same time.

Thanks for the several replies I've received; it sounds like the dd-to-tape 
mechanism is broken for zfs send, unless someone knows otherwise 
or has some trick?


I'm just going to try a tar to tape (maybe using dd) then, as I 
don't have any extended attributes/ACLs.  I would appreciate any 
suggestions for block sizes for an LTO5 tape drive writing to LTO4 tapes 
(what I have).


I might send it across the (Gigabit Ethernet) network to a server that's 
already on the new SAN, but I was trying to avoid bogging down the 
network or the other server's NIC.


I've seen examples online for sending via the network; it involves piping zfs 
send over ssh to zfs receive, right?  Could I maybe use rsh, if I 
enable it temporarily between the two hosts?

Actually mbuffer does a great job for that, too.  Whenever I am using 
mbuffer I am achieving much higher throughput than using ssh.
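
One way I've seen this done (the hosts, port and sizes below are only
placeholders) is to skip ssh entirely and use mbuffer's own network mode;
like rsh, the stream is unencrypted, so only do it on a trusted link:

  # on the receiving host, start the listener first
  mbuffer -q -I 9090 -s 128k -m 1G | zfs receive -F newtank/data

  # on the sending host
  zfs send tank/data@snap | mbuffer -q -s 128k -m 1G -O recv-host:9090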


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-13 Thread David Magda
On Thu, January 13, 2011 09:00, David Strom wrote:
 Moving to a new SAN; the old and new LUNs will not be accessible at the same time.

 Thanks for the several replies I've received; it sounds like the dd-to-tape
 mechanism is broken for zfs send, unless someone knows otherwise or has
 some trick?

 I'm just going to try a tar to tape (maybe using dd) then, as I
 don't have any extended attributes/ACLs.  I would appreciate any
 suggestions for block sizes for an LTO5 tape drive writing to LTO4 tapes
 (what I have).

 I might send it across the (Gigabit Ethernet) network to a server that's
 already on the new SAN, but I was trying to avoid bogging down the
 network or the other server's NIC.

 I've seen examples online for sending via the network; it involves piping zfs
 send over ssh to zfs receive, right?  Could I maybe use rsh, if I enable
 it temporarily between the two hosts?

If you don't already have a backup infrastructure (remember: RAID !=
backup), this may be a good opportunity. Something like Amanda or Bacula
is gratis, and it could be useful for other circumstances.

If this is a one-off it may not be worth it, but having important data
without having (offline) backups is usually tempting fate.

If you're just going to go to tape, then suntar/gnutar/star can write
directly to it (or via rmt over the network), and there's no sense
necessarily going through dd; 'tar' is short for TApe aRchiver after all.
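
For example, something along these lines; the tape device name and blocking
factor are only guesses, so adjust them for your drive:

  # gtar with 256 KiB records (-b counts 512-byte units), non-rewinding device
  gtar -cv -b 512 -f /dev/rmt/0n /pool/filesystem

  # and to read it back on the destination host
  gtar -xv -b 512 -f /dev/rmt/0n -C /newpool/filesystem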

(However, this is getting a bit OT for ZFS and heading towards general
sysadmin territory.)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-12 Thread Richard Elling
On Jan 11, 2011, at 8:51 PM, Edward Ned Harvey wrote:

 heheheh, ok, I'll stop after this.   ;-)  Sorry for going on so long, but it
 was fun.
 
 In 2007, IDC estimated the size of the digital universe in 2010 would be 1
 zettabyte.  (10^21 bytes)  This would be 2.5*10^18 blocks of 4000 bytes.
 http://www.emc.com/collateral/analyst-reports/expanding-digital-idc-white-paper.pdf
 
 This means the current probability of any sha256 collision in all of the
 data in the whole world, using a ridiculously small block size, assuming all
 of the data is unique and therefore maximizing the probability of
 collision...   2.69 * 10^-41   ...  hehehehehhe   :-)
 
 It's not as unlikely as randomly picking a single atom in the whole planet,
 but still...

... it doesn't matter. Other posters have found collisions and a collision 
without
verify means silent data corruption.  Do yourself a favor, enable verify.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs send & tape autoloaders?

2011-01-11 Thread David Strom
I've used several tape autoloaders during my professional life.  I 
recall that we can use ufsdump or tar or dd with at least some 
autoloaders where the autoloader can be set to automatically eject a 
tape when it's full & load the next one.  Has always worked OK whenever 
I tried it.


I'm planning to try this with a new Quantum Superloader 3 with LTO5 tape 
drives and zfs send.  I need to migrate a Solaris 10 host on a V440 to a 
new SAN.  There is a 10 TB zfs pool & filesystem that is comprised of 3 
LUNs of different sizes put in the zfs pool, and it's almost full. 
Rather than copying the various sized LUNs from the old SAN storage unit 
to the new one & getting ZFS to recognize the pool, I thought it would 
be cleaner to dump the zfs filesystem to the tape autoloader & restore 
it to a 10TB LUN.  The users can live without this zfs filesystem for a 
few days.


So, has anyone had any experience with piping a zfs send through dd (so 
as to set the output blocksize for the tape drive) to a tape autoloader 
in autoload mode?
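
For reference, the kind of pipeline I mean; the snapshot name, tape device
and block size here are only examples:

  zfs snapshot pool/fs@totape
  zfs send pool/fs@totape | dd of=/dev/rmt/0n obs=256k

  # and, later, the restore direction
  dd if=/dev/rmt/0n ibs=256k | zfs receive -F newpool/fs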


Hope this makes sense?

TIA.
--
David Strom
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-11 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of David Strom
 
 So, has anyone had any experience with piping a zfs send through dd (so
 as to set the output blocksize for the tape drive) to a tape autoloader
 in autoload mode?

Yes.  I've had terrible experience doing that.  For whatever reason, dd
performance with zfs send was beyond horrible, although dd with other things
was fine, and zfs send with other things was fine.  I logged a support call
with Sun, we worked on it some, reached a sneaking suspicion that there was
a bug in dd, and they told me dd is not supported.  I eventually gave up
doing this, but I suspect putting a memory buffer in between the zfs send
and the dd might help.  Would that be buffer, or mbuffer?  Check the man pages.
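
Something like this is what I have in mind; the buffer size, block size and
device name are just placeholders:

  zfs send pool/fs@snap | mbuffer -q -s 256k -m 2G | dd of=/dev/rmt/0n obs=256k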

Can you connect to the old & new SAN simultaneously?  It would be immensely
preferable (and faster) if you could send directly into the receive on the
other.

Also ... 10TB isn't amazingly huge these days.  Maybe you should consider an
esata card or two, and a few external 2T drive enclosures.  If you're only
using it for a temporary copy of your main pool, you can probably survive
without any redundancy.  It would cost around $600 to $800 but I'll tell you
what, it's going to be a LOT faster than the tape drive.  I think it will
save you a day or two.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-11 Thread Ian Collins

 On 01/12/11 04:15 AM, David Strom wrote:
I've used several tape autoloaders during my professional life.  I 
recall that we can use ufsdump or tar or dd with at least some 
autoloaders where the autoloader can be set to automatically eject a 
tape when it's full & load the next one.  Has always worked OK 
whenever I tried it.


I'm planning to try this with a new Quantum Superloader 3 with LTO5 
tape drives and zfs send.  I need to migrate a Solaris 10 host on a 
V440 to a new SAN.  There is a 10 TB zfs pool & filesystem that is 
comprised of 3 LUNs of different sizes put in the zfs pool, and it's 
almost full. Rather than copying the various sized LUNs from the old 
SAN storage unit to the new one & getting ZFS to recognize the pool, I 
thought it would be cleaner to dump the zfs filesystem to the tape 
autoloader & restore it to a 10TB LUN.  The users can live without 
this zfs filesystem for a few days.




Why can't you just send directly to the new LUN?  Create a new pool, 
send the data, export the old pool and rename.
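
Roughly like this; the pool, dataset and device names are placeholders for
whatever your setup actually uses:

  zpool create newpool cXtYdZ                 # cXtYdZ = the new 10TB LUN
  zfs snapshot tank/fs@migrate
  zfs send tank/fs@migrate | zfs receive -F newpool/fs
  zpool export tank
  zpool export newpool
  zpool import newpool tank                   # bring it back under the old name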


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send & tape autoloaders?

2011-01-11 Thread Edward Ned Harvey
heheheh, ok, I'll stop after this.   ;-)  Sorry for going on so long, but it
was fun.

In 2007, IDC estimated the size of the digital universe in 2010 would be 1
zettabyte.  (10^21 bytes)  This would be 2.5*10^18 blocks of 4000 bytes.
http://www.emc.com/collateral/analyst-reports/expanding-digital-idc-white-paper.pdf

This means the current probability of any sha256 collision in all of the
data in the whole world, using a ridiculously small block size, assuming all
of the data is unique and therefore maximizing the probability of
collision...   2.69 * 10^-41   ...  hehehehehhe   :-)

It's not as unlikely as randomly picking a single atom in the whole planet,
but still...
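
As a rough sanity check of that number, the usual birthday-bound
approximation (assuming every block is unique and sha256 behaves as a
uniform 256-bit function) gives

  p \approx \frac{n(n-1)}{2 \cdot 2^{256}}
    \approx \frac{(2.5 \times 10^{18})^2}{2^{257}}
    \approx 2.7 \times 10^{-41}

which matches the figure above.  (Strictly, 10^21 bytes in 4000-byte blocks
is 2.5*10^17 blocks, not 2.5*10^18, which would make the bound another
factor of ~100 smaller; the conclusion only gets stronger.)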

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss