Re: [Veritas-bu] SSH-style encryption of data transfer from client to server

2012-01-10 Thread David Magda
On Jan 10, 2012, at 17:53, Andrew Stueve wrote:

 On Fri, January 6, 2012 10:31, Rosie Cleary wrote:
 
 I ran a test recently and found that NetBackup transfers data from the backup client to the server in clear text. I would prefer to secure the network traffic without encrypting the resulting backup; do you know of any options to do this?
 
 Enable the NetBackup Encryption option?

The key words are "without encrypting the resulting backup".

Encrypting at the client means that you lose dedupe and tape compression capabilities. Using the Media Server Encryption Option (MSEO) doesn't help with the client-server traffic (and you lose dedupe/compression again).

I replied (accidentally off-list) to the enquiry by suggesting IPsec (or any type of VPN-like solution, really). What goes over the wire is ciphertext, but it's all cleartext to the NetBackup server after the kernel is done decrypting it.
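
For the record, a minimal sketch of such a policy, using Solaris ipsecconf(1M) syntax (the addresses are invented, and the port shown is the classic bpcd one, so adjust for your NetBackup version; on Linux you'd use Openswan or the like instead):

  # /etc/inet/ipsecinit.conf on the client (illustrative)
  {laddr 192.168.10.5 raddr 192.168.10.20 ulp tcp rport 13782} ipsec {encr_algs aes encr_auth_algs sha1 sa shared}

The media server needs a matching policy, and IKE (or manual keying) still has to be set up separately.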




Re: [Veritas-bu] Tape Library

2011-08-03 Thread David Magda
On Aug 3, 2011, at 03:44, WALLEBROEK Bart wrote:

 In the end IBM, Quantum and Spectra were the only vendors left that were any good (HP and Sun fell off quite fast).
 We chose Spectra (although it was the more expensive one) because their robots are far more technologically advanced than the others, and also because of the (very) small footprint and low power consumption (important in these Green IT days).

Can you explain why HP and Sun "fell off quite fast"?

I've dealt with STKs in a couple of places and never really had any issues with them (the last being 'just' an SL500). At my current place there's a Quantum i2000 that's a bit troublesome, but as I've only been here a short time, I'm not sure if that's the library's fault or something else in the environment.



[Veritas-bu] multipathing with tapes on Linux?

2011-07-27 Thread David Magda
Hey,

Anyone know if Linux (specifically RHEL 5.x) supports multi-pathing to tapes? Is it worth setting up a media server with a 10 GigE interface on one side and two (or more) FC connections on the other?

AFAICT, device-mapper-multipath only supports MPIO for block devices. Is this assessment correct?
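
To illustrate the assessment (output invented; assumes lsscsi is installed, and device names will differ): a drive reached over two FC paths shows up as two independent st nodes, and multipath -ll only ever lists dm block devices.

  # lsscsi -g | grep tape
  [1:0:0:0]   tape    HP   Ultrium 4-SCSI   /dev/st0   /dev/sg3
  [2:0:0:0]   tape    HP   Ultrium 4-SCSI   /dev/st1   /dev/sg4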

Thanks for any info.

Regards,
David




Re: [Veritas-bu] multipathing with tapes on Linux?

2011-07-27 Thread David Magda
On Wed, July 27, 2011 17:03, Len Boyle wrote:
 David,

 I think the answer depends on what you mean by multi-pathing, and the
 maker of the tape drive.

 The new IBM tape drives include two ports, but I think that it is only for failover. And I suspect that the support would only be in the IBM tape driver, which was released with its source code the last I looked.

 I also suspect that you do not need multi-pathing to feed the tape drives. The native speed of an LTO-5 tape drive is greater than that of a GigE card, and if you send data that can be compressed 2:1 or 3:1, even more so.

 The hard part is getting the data off the disk fast enough to drive the
 tape drive.

We have sixteen LTO-4 drives that we want to drive with as few media servers as possible, for support reasons. If we could get a few servers with 10 GigE on one side and multiple HBAs (or a single multiport HBA) on the other, then we could attempt to saturate the 10 GigE and have 8 or 12 Gbps of FC on the back end going to multiple drives in a library.
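
Rough napkin math, ignoring compression:

  10 GigE in:      ~1.25 GB/s line rate, call it ~1 GB/s in practice
  LTO-4 native:    120 MB/s per drive
  drives needed:   ~1000 / 120 = 8-9 streaming drives per 10 GigE
  FC out:          2 x 4 Gb/s ports = ~800 MB/s per dual-port HBA

So two dual-port 4 Gb HBAs on the back end would roughly match one 10 GigE on the front.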

If multi-pathing is not supported, another option would be to configure FC zoning such that drives 0-3 are visible only to HBA0-port0, drives 4-7 to HBA0-port1, drives 8-11 to HBA1-port0, and drives 12-15 to HBA1-port1.
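
On a Brocade switch that could be expressed something like this (alias names invented; Cisco MDS syntax differs):

  zonecreate "ms1_hba0p0_drv00_03", "ms1_hba0p0; drv00; drv01; drv02; drv03"
  zonecreate "ms1_hba0p1_drv04_07", "ms1_hba0p1; drv04; drv05; drv06; drv07"
  cfgadd "prod_cfg", "ms1_hba0p0_drv00_03; ms1_hba0p1_drv04_07"
  cfgenable "prod_cfg"

...and similarly for the two ports on HBA1.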

We'd prefer to have normal multi-pathing if possible, but spreading the
tapes to different HBA ports is another possibility.

We're not concerned with multi-pathing to the drives per se, but with utilizing as much bandwidth as possible from the clients to the media server, and from the media server to the library/drives.



Re: [Veritas-bu] multipathing with tapes on Linux?

2011-07-27 Thread David Magda
Thanks. I guess we'll look at other options.

On Jul 27, 2011, at 17:49, Alexander Leikin wrote:

 Hi David,
 
 There is no multi-pathing for tape drives.
 
 Regards,
 Alex



Re: [Veritas-bu] multipathing with tapes on Linux?

2011-07-27 Thread David Magda
On Jul 27, 2011, at 17:36, Stier, Matthew wrote:

 Creating zones or VLANs is easily done on any modern managed switch (FC or Ethernet).
 
 A concern I have is the I/O bus of the system you are using. Even the PCIe bus has bandwidth limits, and depending on how the system is designed, even a single-port HBA may have to share bandwidth with adjacent slots.
 
 With some detail as to the environment, you could probably get better
 help.
 
 What system is your media server?
 
 What vendor do you use for your SAN?
 
 Are you writing D2T or D2D2T?

The hardware in question is an HP DL380 G5, with the 10 GigE NIC in one PCIe slot and dual-ported HBAs in two other slots. Each card is in one of the full-size slots, indicated by the number 12 in the following diagram:

http://h18000.www1.hp.com/products/quickspecs/12477_na/12477_na.html

These slots have separate bus numbers (14, 23, 19), so I'm guessing that they're generally independent of each other. Slot 3 (bus 14) is listed as having x4 bus bandwidth, and the other two are x8. Since each PCI Express (v1) lane is 2 Gb/s, that means we have 8, 16, and 16 Gb/s available.

http://en.wikipedia.org/wiki/PCI_Express

We won't be using this to back up any SANs; it will mostly be stand-alone clients. For our storage systems (mostly BlueArc and Isilon) we're doing NDMP over FC. Currently we're doing D2T.


This architecture was set up over two years ago, and we've grown our cluster 
storage by about 2-3x since then, so it's high time we looked at it again.



Re: [Veritas-bu] Equivalent Solaris mailing list.

2011-05-17 Thread David Magda

On May 17, 2011, at 07:38, scott.geo...@parker.com wrote:

 Is there one?
 
 If you can handle the nuisance, IT Toolbox has a fairly busy Solaris 
 group.  My ISP cut off direct access to newsgroups, but I think that 
 comp.unix.solaris still gets some activity.  Darren Dunham used to be a 
 regular on comp.unix.solaris.  He may have better recommendations as well.

The OP may also want to check out Sun Managers (please read the FAQ first):

http://sunmanagers.org/

There's also Server Fault (and the other Stack Exchange sites):

http://serverfault.com/
http://stackexchange.com/about



Re: [Veritas-bu] SUN TRUNKING SW WITH NETBACKUP

2011-04-21 Thread David Magda
On Thu, April 21, 2011 13:18, Asiye Yigit wrote:
 Hello;
 
 Yes, after some research I found link aggregation support on the newer hardware running Solaris.
 
 So I think we will use link aggregation.
 
 Do you know of any issues with NetBackup while using link aggregation?

You may want to make sure that the load spreading algorithm uses at least
Layer 4 (TCP and UDP port numbers). If you stick with only L2 (MAC) or L3
(IP), then the connections probably won't be well-distributed over the
various NICs.

This has to be done in both the switch and server configuration. See
dladm(1M) for details.
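
For example, on Solaris 10 (interface names invented), with the switch side also set to hash on L4 (on a Cisco switch, something like "port-channel load-balance src-dst-port"):

  # aggregate four NICs under key 1, spreading load by TCP/UDP port
  dladm create-aggr -P L4 -d e1000g0 -d e1000g1 -d e1000g2 -d e1000g3 1
  dladm show-aggr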




Re: [Veritas-bu] STK SL8500 Library Console Software....

2011-04-15 Thread David Magda
On Fri, April 15, 2011 11:13, Dennis Peacock wrote:
 Just wondering if anyone here has any tips on how to get information out of the SLConsole software without having to actually log in to the GUI. I'd like to be able to do command-line scripting to get the info I need out of it.
 
 Anybody have any tips, pointers, hacks, or even a web link?

Another good venue to ask may be Sun Managers:

http://sunmanagers.org/




Re: [Veritas-bu] D2D or Tape Libraries

2010-10-07 Thread David Magda
On Thu, October 7, 2010 08:21, Lightner, Jeff wrote:
[...]
 Also, with permanent storage on disk as opposed to tape, you always take the risk that the remote storage might die and kill all your backups. Tapes can degrade, but you are far less likely to lose all your offsite tapes in one fell swoop.

Tapes also don't draw power or generate heat while they're sitting in a slot. If you move things around for off-siting, tapes can also take a few bumps a lot better than hard drive heads can.

Also, IIRC, SATA disks have an unrecoverable bit error rate of about 1 in 10^15, SAS/FC disks are at 1 in 10^16, and LTO tapes are at 1 in 10^17.

Regardless of which medium the OP goes with, there should probably be at least two copies of the data: one copy on a D2D unit and a second on another D2D unit; or one on D2D and another on tape; or a copy on two different tapes.


I think the general best practice is that new deployments should first go to disk (D2D), and then, if you need to keep data for more than x weeks, you clone it to tape. The value of x will be different for each organization. This is mostly because modern tape drives are so fast that trying to go from the client straight to tape (LTO-4 at 120 MB/s native) is very hard to do without shoe-shining.
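
The cloning step can be a scripted bpduplicate, or on 6.5 and later a Storage Lifecycle Policy that stages to disk and duplicates to tape on its own schedule. An illustrative bpduplicate (the backup ID, storage unit, and pool names are all invented):

  bpduplicate -backupid client01_1286400000 -dstunit tape_stu -dp Offsite_Tapes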




Re: [Veritas-bu] Greenplum

2010-07-22 Thread David Magda
Isn't Greenplum's database based on PostgreSQL?

If so, while it's not officially supported by NBU, you can probably leverage whatever you can find online for PostgreSQL: post- and pre-scripts as mentioned, but also using WAL files to achieve PITR if you don't want to do straight dumps.
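
For example, assuming the Greenplum build exposes the stock PostgreSQL knobs (check your version; archive_mode only appeared in PostgreSQL 8.3), WAL archiving is a couple of lines in postgresql.conf:

  # postgresql.conf (illustrative path)
  archive_mode = on
  archive_command = 'cp %p /backup/wal_archive/%f'

Take a base backup between pg_start_backup() and pg_stop_backup(), have NBU pick up the archived WAL directory, and you can replay to an arbitrary point in time.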

On Jul 22, 2010, at 02:36, JC Cheney wrote:

 The simple way would be to dump the data to disk with a pre-backup script and then remove it with a post-backup script.
 
 If you wanted to get fancy, you could look at using pipes to connect the Greenplum database dump command to bpbkar. Although a bit more complex to set up in the first instance, it does give better flexibility in the long run and avoids the need for all that extra disk space...
 
 Another option would be to quiesce the database and then perform a regular backup, or maybe split off a mirror and then back that up.


 -Original Message-
 From: veritas-bu-boun...@mailman.eng.auburn.edu
 [mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of
 ccosta@gmail.com
 Sent: 21 July 2010 15:57
 To: VERITAS-BU@mailman.eng.auburn.edu
 Subject: [Veritas-bu] Greenplum


 I do apologize for this question, as it is not an NBU-specific question.
 
 However, I was wondering if anybody in this community has Greenplum databases in their environment and, if they do, how they are securing this data.
 
 NBU officially does not support this type of DB, and I was looking for direction as to how to secure this data.



Re: [Veritas-bu] isilon backup accelerator

2010-06-09 Thread David Magda
On Jun 9, 2010, at 19:27, A Darren Dunham wrote:

 NFS cannot carry the NTFS ACLs though. So conceivably you can do all CIFS backups and get all security structures. (I do NDMP and have mainly UNIX servers, so it's not something I've tried to test.)

Well, NFSv4 does NTFS-style ACLs; see Section 5.11 of the NFSv4 spec (RFC 3530). There's also a draft on mapping between NFSv4 and draft-POSIX ACLs (draft-ietf-nfsv4-acl-mapping-05).

I don't know of many systems that can show both, though. I believe OpenSolaris can do UID-SID mapping and such with ACLs if you're exporting a ZFS file system over both NFS and CIFS/SMB.
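
On ZFS the NFSv4-style ACL is visible and editable from the Solaris side, e.g. (path and user invented):

  ls -V /tank/share/report.doc
  chmod A+user:alice:read_data/write_data:allow /tank/share/report.doc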

NetApp should be able to handle it if you use mixed qtrees:

http://www.netapp.com/us/communities/tech-ontap/nfsv4-0408.html



Re: [Veritas-bu] NetBackup media server and Sun Coolthread Servers

2010-06-04 Thread David Magda
On Fri, June 4, 2010 08:03, Asiye Yigit wrote:

 Do you have any experience with NetBackup media servers on Sun CoolThreads servers?
 
 I am wondering how well NetBackup performs on CoolThreads servers.
 
 Which do you recommend between the Mx000 and CoolThreads servers?

All of our recent NetBackup server purchases have been T5120s. They run
just fine. We aggregate the four GigE NICs via dladm(1M) for extra
bandwidth (round-robin at the L4 layer). Plug in dual HBAs and you'll have
plenty of bandwidth both in and out.

The CPU comes with dual 10 GigE on-die if you really want network bandwidth, but you'll be taking up a PCIe slot for each interface. So if you want many HBAs (and not just one dual-ported HBA) you may need a T5220, which has more PCIe slots.

Personally, I think the T-series would be better than the M-series, as you want parallelism for multiple I/O streams, and that is exactly what the CoolThreads servers were designed to handle. AFAIK, the M-series is more about single-thread performance (we have a few for some of our Oracle DB stuff).




Re: [Veritas-bu] Retaining Date for 20 years+

2010-05-19 Thread David Magda
On May 19, 2010, at 02:39, WEAVER, Simon (external) wrote:

 Thanks for this. Yes, this is one method, but what about a backup solution that is now 20 years out of date, with no media, no server to restore to, and a format unknown to today's backup systems?
 
 What would you do then? :-)
 The client does not seem bothered, and is happy to destroy the data. But if you are a banking client or someone that needs access to 20+ year old data, then surely your planning has to account for this? Or maybe another solution?

Whatever component is missing (media, drive, computer, etc.) will have to be found on eBay.

Once you have the missing component(s), you have to restore the data. Once that is done, you transfer it to a machine that has a more up-to-date backup client and bring it into your regular backup system.

Of course, archiving is different from backup (or so it's repeatedly said). I've never had to deal with it, so I can't really speak to the differences much.


Re: [Veritas-bu] per-directory policies a la NetWorker?

2010-04-07 Thread David Magda

On Apr 6, 2010, at 21:21, David Magda wrote:

 NetWorker has a feature where you can put a .nsr file in a directory, and then during backup its contents will be parsed so that you can treat particular files or directories in a special way

Thanks to Jeff and Christophe. /usr/openv/netbackup/exclude_list.* should be sufficient. (Though things like skip, null, logasm, and swapasm would be handy. Oh well.)
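
For anyone finding this in the archives: the file is just one pattern per line, and per-policy variants exist (the entries below are only examples):

  # /usr/openv/netbackup/exclude_list on the client
  /tmp
  /var/tmp
  core
  *.o
  # per-policy: exclude_list.<policy> or exclude_list.<policy>.<schedule>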


[Veritas-bu] per-directory policies a la NetWorker?

2010-04-06 Thread David Magda
NetWorker has a feature where you can put a .nsr file in a directory, and then during backup its contents will be parsed so that you can treat particular files or directories in a special way. An example from the man page:

 Having a /usr/src/.nsr file containing:
   +skip: errs *.o
   +compressasm: .
 would cause all files (or directories) in /usr/src named errs or *.o (and anything contained within them) to be skipped. In addition, all other files contained in /usr/src will be compressed during save and will be set up for automatic decompression on recover.

http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?db=man&fname=4%20nsr
http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?db=man&fname=uasm

Is there a similar feature in NetBackup 6.5 and above? I know there's  
a $HOME/bp.conf, but is there something that can be dropped in an  
arbitrary directory?

Thanks for any info.




Re: [Veritas-bu] Sun/StorageTek based LTO4 encryption

2010-03-02 Thread David Magda
On Mar 2, 2010, at 10:26, Shekel Tal wrote:

 It takes roughly 73 clock cycles on a Windows or Linux server, or about 87 clock cycles on a UNIX server, to perform MSEO compression/encryption per BYTE of data backed up. Backing up 100 MB/sec of data through a Solaris media server requires 8.7 GHz of CPU processing for MSEO alone, plus whatever processing is needed for other tasks. To move 200 MB/sec through the media server would require 17.4 GHz of CPU for MSEO.

The smallest SPARC server that Sun/Oracle sells is the T5120. Those have built-in encryption right on the CPU die, which I would hope Symantec would take advantage of by linking against libpkcs11.so.

Benchmarks have a single UltraSPARC-T2 doing 38.9 Gbit/s of AES-128:

http://blogs.sun.com/bmseer/entry/ultra_fast_cryptography_on_the
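
On a T-series box you can confirm the on-die provider that libpkcs11.so would dispatch to (n2cp is the UltraSPARC T2 crypto driver, ncp the T1's):

  cryptoadm list
  # kernel hardware providers should include n2cp/0 (T2) or ncp/0 (T1)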

Of course, Fujitsu sells the M3000 with SPARC64 processors, but those don't have on-die crypto AFAIK.

 So it's a very CPU-intensive process. You would probably require a large multicore system, depending on your throughput requirements.

It's kind of hard to find a server that is /not/ multi-core nowadays. :)
