Re: [Veritas-bu] NDMP

2009-09-03 Thread Nick Majeran
server_param server_x -f PAX -l will show you the settings for PAX.

readWriteBlockSizeInKB is the parameter you want to change to 256 KB.  This
will require a datamover reboot, but you could always fail over and
fail back the datamover in question to minimize any downtime.  CIFS clients
will probably need to reconnect; NFS clients should recover.

There is also a bufsz parameter under the NDMP facility, but unlike
readWriteBlockSizeInKB it does not affect the block size written to tape.

You may also need to touch SIZE_DATA_BUFFERS_NDMP, depending on your version
of NBU.
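
For reference, the change itself looks roughly like this (the mover name
and values are only examples, and the exact syntax may vary with your DART
and NBU versions):

# on the Celerra Control Station:
server_param server_2 -facility PAX -info readWriteBlockSizeInKB
server_param server_2 -facility PAX -modify readWriteBlockSizeInKB -value 256
# takes effect only after the Data Mover reboot (or fail-over/fail-back)

# on the NetBackup media server, match the NDMP block size (262144 bytes = 256 KB):
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_NDMP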

-- nick


>
> You need to get the latest version of EMC's document "Configuring NDMP
> Backups on Celerra" from PowerLink.  I think a number of the suggestions
> that I've seen only apply to 'remote NDMP', where the tapes are on a
> 'normal' media server with the ndmpmoveragent running and the data comes
> from the disk mover on the Celerra to the tape mover on the media server.
>
> If you have tapes connected directly to the Celerra, you must follow the
> EMC manuals, and you have quite a bit of control.  I'm not sure you can
> increase the buffer size; it may depend on your DART version.  We've gotten
> rid of our Celerras, so the information I have is quite old.
>
> However there is a PAX parameter:
>
> name = readWriteBlockSizeInKB
> facility_name = PAX
> default_value = 64
> current_value = 64
> configured_value =
> user_action = reboot DataMover
> change_effective = reboot DataMover
> range = (64,256)
> description = Maximum allowed PAX buffer size for NDMP read/write in
> kilobytes
>
> I'm not absolutely certain that this is the tape block size, as it is
> described as "Sets the maximum allowed PAX (portable archive interchange)
> buffer size in KB for NDMP read/write operations."  It definitely will
> change the buffer size between the various threads handling NDMP on the
> Celerras.  However, you can use the server_pax -stats command to
> see what it is actually using; one of the examples shows NASW (the thread
> that writes to tape):
>
> $ server_pax  -stats -verbose
>
> .
>  NASW STATS 
> nasw00 BACKUP (in progress)
> Session Total Time: 00:00:08 (h:min:sec)
> Session Idle Time: 00:00:00 (h:min:sec)
> KB Tranferred: 98406 Block Size: 64512 (63 KB)
> Average Transfer Rate: 12 MB/Sec 42 GB/Hour
> Average Burst Transfer: 12 MB/Sec 42 GB/Hour
>
> See the 'block size'.
>
> There are many tuning parameters, and the manual has whole flowcharts for
> tuning backups & restores.  But you do need to be able to reboot the data
> mover, so you need that failover window.
>
> William D L Brown
>
> > > On that same subject, anyone have any suggestions for tuning an
> > older EMC Celerra environment?  I would like to get my tape buffers
> > up to 256K but can't make heads or tails of the configuration
> > parameters.  I don't have much chance to play with the parameter
> > since the documentation indicates a reboot of the data movers is
> > required.  These are 24x7 devices.
>
>
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] In-File Delta Technology

2009-06-08 Thread Nick Majeran
Most modern OSen should have a capable fs abstraction layer, like VFS,
which should shield the client from having to know too much about the
underlying file system, I would assume.  Plus, rsync can do it, as long as
you aren't using the --whole-file option.
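
A quick illustration of the rsync point (the host and paths are made up):
for a remote copy rsync uses its delta algorithm by default, so only the
changed blocks of a big file go over the wire, while --whole-file turns
that off:

rsync -av --inplace /data/bigfile.db backuphost:/backups/     # delta transfer (default for remote copies)
rsync -av --whole-file /data/bigfile.db backuphost:/backups/  # always sends the entire file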

-- nick

The quick answer is that it's too hard.

In order to do an in-file incremental, you'd have to have intimate knowledge
of each kind of filesystem structure (ufs, vxfs, ext, ext3, etc.).

By doing it at the file level, rather than the block level, the OS takes
care of all that for you.
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] bulk import of images

2008-06-13 Thread Nick Majeran
I just completed one that I wrote in Python, which does both phase 1
and phase 2 imports.
Both phases are multi-processed; the script spawns concurrent processes for
each instance of bpimport (and other commands) and lets NetBackup manage the
tape drive resources.

It also uses a best-fit approach, where images (after phase 1) are
logically grouped into batches based on the number of slots in
your library.  For example, if you have 250 tapes to import and
only a 100-slot library, the script will minimize the number of
loads and unloads so that the library stays as close to full capacity as
possible.

It works okay, but I haven't done extensive testing; maybe four or
five runs and I'm still finding bugs.
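
Conceptually, each batch boils down to something like this (a simplified
shell sketch rather than the actual script; the media IDs and log paths are
made up, and options may differ slightly by NBU version):

# phase 1: read the media headers for one batch of tapes (batch sized to the library)
for media in IMP001 IMP002 IMP003; do
    bpimport -create_db_info -id $media -L /tmp/p1_$media.log &
done
wait

# phase 2: import the images that phase 1 catalogued
bpimport -L /tmp/p2.log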

-- nick

> Folks,
>
>
>
> Anybody out there successfully automated a large volume import of tape
> media into NetBackup?  I am curious as to how you controlled the
> processing, given the potential conflict in using available tape drives,
> for example, and how you validated the success of the phases of the
> import.  We have several hundred tapes to process (800+). I am also
> looking for indications as to how long a phase 1 and phase 2 import will
> take on LTO-1 media with close to 2:1 compression (190+ GB).  We have
> multiplexed media and 2 GB fragment sizes which I imagine will slow
> things down too.
>
>
>
> Regards,
>
>
>
> Paul Esson
> Redstor Limited
>
> Direct:   +44 (0) 1224 595381
> Mobile:  +44 (0) 7766 906514
> E-Mail:  [EMAIL PROTECTED]
> Web:www.redstor.com
>
> REDSTOR LIMITED
> Torridon House
> 73-75 Regent Quay
> Aberdeen
> UK
> AB11 5AR
>
> Disclaimer:
> The information included in this e-mail is of a confidential nature and
> is intended only for the addressee.  If you are not the intended
> addressee, any disclosure, copying or distribution by you is prohibited
> and may be unlawful.  Disclosure to any party other than the addressee,
> whether inadvertent or otherwise is not intended to waive privilege or
> confidentiality.
>
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] ndmp cross-platform restore compatibility

2008-05-13 Thread Nick Majeran
As you correctly surmised, it's not possible to do cross-platform
restores: NDMP just provides the transport, and each vendor
is free to format the data any way they like.

NetApp uses dump, the Celerra can do dump or tar via PAX.


>  should an ndmp backup performed on one platform be able to be restored
>  to another platform?  example: can an ndmp backup done on a netapp be
>  restored to an emc celerra nas device?  or is ndmp just a protocol for
>  backup software to initiate/control backups, and the actual "stream" of
>  data going from a volume to tape is proprietary to each os/vendor.  each
>  os/vendor could then only restore from their own backups, but they would
>  at least be using ndmp common commands from the backup software.
>
>  does anyone have any success stories on cross platform restores?
>
>  thanks,
>  jerald
>  
>  Confidentiality Note: The information contained in this
>  message, and any attachments, may contain confidential
>  and/or privileged material.  It is intended solely for the
>  person(s) or entity to which it is addressed.  Any review,
>  retransmission, dissemination, or taking of any action in
>  reliance upon this information by persons or entities other
>  than the intended recipient(s) is prohibited.  If you received
>  this in error, please contact the sender and delete the
>  material from any computer.
>  
>
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Netbackup Manual Backup - can't see restore

2008-05-02 Thread Nick Majeran
Hmm, I wonder what Dave Cutler would have to say about the below?

I'm a *nix dweeb, but...I used VMS my senior year of High School,
where we had one of the first Alphas running VMS and I could gweep
away on a VT220 terminal.  It certainly was an interesting piece of
software.

-- nick



On Fri, May 2, 2008 at 8:48 AM, Jeff Lightner <[EMAIL PROTECTED]> wrote:

> Back when I was young and dumb I thought MS-DOS was a great thing but then
> I got to work on REAL operating systems like UNIX/Linux and learned the
> difference.
>

I have an RHCE so I have some experience in what you're optimistically
calling a real operating system.  I will state that Linux is not yet on par
with REAL operating systems like OpenVMS (or MVS).  I've been managing
OpenVMS systems and clusters for about 25 years.  My current cluster uptime
is pushing 9 years - that's 9 years of continuous availability in an
active/active multi-node, multi-data center cluster.  During those 9 years,
every one of the original systems has been replaced and the replacements
included architectural changes from 32-bit VAX to 64-bit Alpha.  The
entire storage system was swapped out, including the boot disks.  We've had
multiple single data center outages.  Individual systems have been rebooted
multiple times, operating system upgrades were done, but in all that time,
the core applications have *NEVER* gone down.  And I don't even hold the
record for uptime - that goes to Irish Rail where the cluster was up for
over 17 years.  Search HP's web site for their disaster recovery video where
they blow up a data center (natural gas and a spark - and I do mean "blow
up"!) and see how long it takes each environment to come back online.

Read 'em and weep, you Unix guys :-)

Tybalt> write sys$output f$getsyi("cluster_ftime")
30-MAY-1999 07:28:07.05

Now that I've shown you what a real operating system is, can we get back to
discussing NetBackup?

  .../Ed
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Find tape performance

2008-05-01 Thread Nick Majeran
Sadly, the standard Linux tools sar and iostat do not report on
tape devices.  You'll need to use systemtap to gather that info, and I
think there are a few systemtap scripts floating around which will
report on that, IIRC.
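
The general idea is to hook the st driver's write path; something along
these lines might work (an untested sketch -- it needs kernel debuginfo for
the st module, and the probe point can vary by kernel version):

stap -e '
  global bytes
  probe module("st").function("st_write") { bytes += $count }
  probe timer.s(10) {
    printf("tape writes (all drives): %d MB/s\n", bytes / 10 / 1048576)
    bytes = 0
  }'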

-- nick

>
>  I would like to track the performance of my tape drives to see what the
>  throughput is.  I am running Netbackup 5.1 for Linux (Suse) with 12 LTO3
>  drives and I want to make sure I am using them to the max.  I backup
>  around 40TB/day, and all the drives are used constantly, but I want to
>  look for bottlenecks (if there are any).
>
>
>
>  I am sure someone has done this before, I am just looking for a starting
>  point.
>
>  Thanks in advance.
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Veritas-bu Digest, Vol 24, Issue 68

2008-04-29 Thread Nick Majeran
That's not exactly true; we didn't see issues with an inordinate
number of bpdbm processes on MP5 until the corrupt images had
expired, were ready to be pruned from the catalog, and could not
be processed.  In our case, it was between three and six months
after those images were initially written.

In our case, we had another process fail (hot catalog backups) which
led us to the corrupt images and the large number of bpdbm problems.
Now, I'm not trying to say that is his issue; however, it did happen
to us on a select few images that were written with MP5.

-- nick


>  So this suggests that 6.0MP5 is not the cause.  Something else changed, and
>  Symantec can't tell what that was.  You might know, but probably not.  Many
>  times it's what is happening on the clients that hurt you and of course the
>  user community never admits to anything.
>
>  A lot of processes by itself isn't a problem.  It's what they're doing
>  that's the problem...  Many of those could be idle.
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] bpdbm acting wacky -- NB 6.0 MP5

2008-04-25 Thread Nick Majeran
MP5 had a ton of bugs (not really sure how it made it out of the
door), but I would wager that most of those have been resolved in MP6.
 That's what we are running now, and we haven't had many issues.
Compared to some of the folks on here, our environment is probably
mid-sized, as we back up about 750 TB a month.

-- nick


On Fri, Apr 25, 2008 at 1:27 PM, Koping Wang <[EMAIL PROTECTED]> wrote:
> Does anybody have any bad experience with 6.0 MP6? I have heard too many bad
>  things about MP5. I am on MP4 and thinking about upgrading to MP6.
>  Sun E450, Solaris 9, NBU6.0 MP4
>
>  Thanks
>  Koping
>
>  
>
>  Message: 1
>  Date: Fri, 25 Apr 2008 11:42:15 -0500
>  From: "Nick Majeran" <[EMAIL PROTECTED]>
>  Subject: Re: [Veritas-bu] Veritas-bu Digest, Vol 24, Issue 61
>  To: veritas-bu@mailman.eng.auburn.edu, [EMAIL PROTECTED]
>  Message-ID:
> <[EMAIL PROTECTED]>
>  Content-Type: text/plain; charset=ISO-8859-1
>
>  We just experienced a rash of these on MP5 -- basically, there were some
>  bugs in MP5 which caused intermittent image corruption.  bpdbm will hang
>  during cleanup or database backups, and database backups failed with
>  status 41.  Fun times, let me assure you!
>
>  We ran through about 5 corrupted images in a week (on images that were
>  expired), and then upgraded to MP6.
>
>  What we had to do was run an lsof, see exactly which files bpdbm was hung
>  up on, kill those bpdbm pids, and delete the files in question.
>  Everything was clean after that.
>
>  hth,
>
>  >  Solaris 10, Netbackup 6.0 MP5 Master server.
>  >
>  >  My bpdbm process is acting wacky (I think...I've never noticed this
>  > behavior before).  I've actually got 14 bpdbm processes running, but
>  > also 38 active jobs currently.  The logs in my netbackup/logs/bpdbm
>  > directory are very large...around 2 gigs per day's log
>  > file.  I'm seeing such information in there as:
>  >
>  >  image_by_file: processing file
>  >  /usr/openv/netbackup/db/images//115700/-Oracle-Backup_1157475027_UBAK
>  >  expdate: no match for
>  >  /usr/openv/netbackup/db/images//1203000/-Oracle-Backup_1203468053_INCR
>  >
>  >  my bp.conf file has:
>  >  VERBOSE = 1
>  >  ENABLE_ROBUST_LOGGING = NO
>  >
>  >  But the thing is, I've noticed some information in my bpdbm logs
>  > talking about Informix backups that we haven't done in almost 2 years
>  > since we've moved to Oracle.  The backups are long since expired...so
>  > why is Netbackup processing those files?
>  >
>  >  On my master server, I'm running Solaris 10 on a v440 w/ 16 gigs of RAM,
>  >  4 CPU's running @ 1593 Mhz.  I do have a large netbackup domain...60
>  >  Media & SAN Media servers, ~30 clients...but my Master server sees
>  >  constant 100% cpu utilization.  The Server slows down, and locks up.
>  >  Could this be related to bpdbm checking all the files in the catalog,
>  >  and spawning so many bpdbm processes?
>  >
>  >  --
>  >  Mike Sponsler
>  >  [EMAIL PROTECTED]
>
>
>  --
>
>  ___
>  Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
>  http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
>
>
>  End of Veritas-bu Digest, Vol 24, Issue 62
>  **
>
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Veritas-bu Digest, Vol 24, Issue 61

2008-04-25 Thread Nick Majeran
We just experienced a rash of these on MP5 -- basically, there were
some bugs in MP5 which caused intermittent image corruption.  bpdbm
will hang during cleanup or database backups, and database backups
failed with status 41.  Fun times, let me assure you!

We ran through about 5 corrupted images in a week (on images that were
expired), and then upgraded to MP6.

What we had to do was run an lsof, see exactly which files bpdbm was hung
up on, kill those bpdbm pids, and delete the files in question.
Everything was clean after that.
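
Roughly, the cleanup looked like this (the pid and image path below are
examples only -- double-check before you rm anything):

# see which image files the hung bpdbm processes have open
lsof -c bpdbm | grep /usr/openv/netbackup/db/images

# kill the hung bpdbm pid(s) identified above
kill 12345

# then remove the corrupt image file(s) bpdbm was stuck on
rm /usr/openv/netbackup/db/images/CLIENT/1203000000/CLIENT_1203468053_INCR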

hth,
nick


>  Solaris 10, Netbackup 6.0 MP5 Master server.
>
>  My bpdbm process is acting wacky (I think...I've never noticed this
>  behavior before).  I've actually got 14 bpdbm processes running, but
>  also 38 active jobs currently.  The logs in my netbackup/logs/bpdbm directory
>  are very large...around 2 gigs per day's log file.  I'm seeing such
>  information in there as:
>
>  image_by_file: processing file
>  /usr/openv/netbackup/db/images//115700/-Oracle-Backup_1157475027_UBAK
>  expdate: no match for
>  /usr/openv/netbackup/db/images//1203000/-Oracle-Backup_1203468053_INCR
>
>  my bp.conf file has:
>  VERBOSE = 1
>  ENABLE_ROBUST_LOGGING = NO
>
>  But the thing is, I've noticed some information in my bpdbm logs talking
>  about Informix backups that we haven't done in almost 2 years since
>  we've moved to Oracle.  The backups are long since expired...so why is
>  Netbackup processing those files?
>
>  On my master server, I'm running Solaris 10 on a v440 w/ 16 gigs of RAM,
>  4 CPU's running @ 1593 Mhz.  I do have a large netbackup domain...60
>  Media & SAN Media servers, ~30 clients...but my Master server sees
>  constant 100% cpu utilization.  The Server slows down, and locks up.
>  Could this be related to bpdbm checking all the files in the catalog,
>  and spawning so many bpdbm processes?
>
>  --
>  Mike Sponsler
>  [EMAIL PROTECTED]
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Type 84 errors on NDMP backups on 6.0MP5 (JAJA (Jamie Jamison))

2008-02-12 Thread Nick Majeran
Yes, we see this error all the time.  I always thought it was a
block-size mismatch between NDMP and non-NDMP backups sharing the same
volume pool and retention, i.e., non-NDMP backups use a block size of
256 KB, while NDMP (waiting on NAS reboots before I move to 256 KB) uses
63 KB.

If it's a bug fixed in MP6, I don't think there will be too much longer
of a wait, as I saw some MP6 files on the ftp site, although not the
main releases.  Any info from Symantec as to when that bug will be
fixed?

-- nick

>
> I upgraded to 6.0MP5 a few weeks ago to fix the notorious pempersist
> problem where NetBackup refused to run any of my scheduled policies or
> let me manually start backups. I had been running 6.0MP4 because MP5 had
> a bug that caused all of my NDMP backups to fail and to get things
> running I had to install MP5 and then install some super duper secret
> binaries for bptm and bpdbm. This fixed the problem with jobs not
> scheduling although I am still seeing the occasional type 41 error on my
> hot catalog backups, which is a symptom of the pempersist problem.
>
> This weekend I started getting type 84 errors on some of my NDMP backups
> with the error message:
>
> Error bptm(pid=13699) FREEZING media id XX, too many data blocks
> written, check tape/driver block size configuration
>
> error. Searching for this on Google produced the following web page:
>
> http://seer.entsupport.symantec.com/docs/295172.htm
>
> ETrack: 117380
> Description: NDMP backup using TIR - positioning error - bptm does not
> advance expected_block_pos[TWIN_INDEX] if bytes_this_buf == 0
>
> Has anyone else had any experience with this? I'm becoming increasingly
> frustrated with NetBackup 6.0. There are nice new features that I love,
> such as hot catalog backups and the ability to queue vault jobs, but for
> every feature I like there's a bug that I really hate, such as the NDMP
> problems in 6.0MP5, the pempersist problem in every 6.0 release and now
> this. It's especially annoying since I'm not using TIR in any of my NDMP
> policies. Indeed as far as I can tell it's not even an option for an
> NDMP policy type backup.
>
> Looking at the webpage listed above is depressing since the page was
> apparently last updated on the 23rd of January, 2008, yet contains this
> sentence
>
> "This issue is currently being considered by Symantec Corporation to be
> addressed in a forthcoming Maintenance Pack or version of the product.
> The fix for this issue is expected to be released in the fourth quarter
> of 2007."
>
> I have this nightmare that I'm going to have to restore some crucial bit
> of corporate data and I'm not going to be able to. Then Symantec will
> post an eTrack notice saying "Oh yeah, we found this bug in the version
> of NetBackup that you're running that causes it to expire all of your
> backup images, run 'rm -rf' on all of your disk based storage units,
> relabel all of the tapes in your library and then overwrite your catalog
> with zeros. Don't worry though, we're working on a fix that should be
> out at some date that's well in the past."
>
> Jamie Jamison
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] catalog in nfs mount point

2008-01-24 Thread Nick Majeran
I know y'all will think it's crazy and negligent, but FWIW, on our
legacy (5.1MP5), restore-only environment with a 600 GB catalog, NFS
works pretty well.

-- nick

> I'll add the voice of experience.  I tried it and ended up backing it
> out rapidly.  Netbackup (v4 probably) didn't like NFS.
>
> 
>
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Dominik
> Pietrzykowski
> Sent: Wednesday, January 23, 2008 2:01 PM
> To: [EMAIL PROTECTED]; veritas-bu@mailman.eng.auburn.edu
> Cc: [EMAIL PROTECTED]
> Subject: Re: [Veritas-bu] catalog in nfs mount point
>
>
>
>
> The final nail on that coffin 
>
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, 23 January 2008 9:20 PM
> To: veritas-bu@mailman.eng.auburn.edu
> Cc: [EMAIL PROTECTED]
> Subject: Re: [Veritas-bu] catalog in nfs mount point
>
> > Anybody put their NetBackup Catalog in nfs mount
> point from NetApp?
> > Good or Bad ? What need to watch out ?
> > best practices setting ?
>
> From the NetBackup 6.5 Installation Guide for
> Unix, p. 14:
>
> Symantec does not support installation of
> NetBackup in an NFS-mounted directory. File
> locking in NFS-mounted file systems can be
> unreliable.
>
> p. 30
>
> Note: Symantec does not support installing
> the EMM on an NFS-mount.
>
> From the NetBackup 6.5 Admin Guide for Windows, p.
> 347, discussing catalog backup to disk:
>
> The NetBackup binary image catalog is more
> sensitive to the location of the catalog.
> Catalog backups that are stored on a remote
> file system may have critical performance
> issues. NetBackup does not support saving
> catalogs to a remote file system such as
> NFS or CIFS.
>
> p. 349, regarding relocating the image catalog:
>
> Note: NetBackup does not support saving the
> catalog to a remote file system. Therefore,
> Symantec advises against moving the image
> catalog to a remote file system such as NFS
> or CIFS.
>
>
> ___
> Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Restoring NDMP backups over the network

2008-01-23 Thread Nick Majeran
Yep, that's it, at least from my understanding.  We used to be an all
NetApp shop here, and have since switched to all EMC Celerra.
We have to keep a small NetApp around just for restores of this data,
which do occur occasionally.
AFAIK, Netbackup doesn't actually write anything Netbackup-specific
to the headers of NDMP tapes, as the tape drives are
controlled by the filer during the backup / restore.  I
believe all that Netbackup does is just grab metadata over the network
and create an image based on that.

HTH,
nick

> I understand the way things are written to tape much better now and can
> see that we probably would not be able to restore a tape to a non-ndmp
> system such as a hpux server.
>
> The solution we are looking at would only be needed until the data
> retention period for existing backups expire.  We would keep NetBackup
> around during that time.  Once everything expires, the server and tape
> library would find other things to do.
>
> What I'm taking from this is that even if I want to restore back to a
> filer, unless the data is being read off of tape by a filer that
> understands its version of dump, restores won't work.  That means we
> will need to keep the tape library connected to a filer for restores
> until everything expires.
>
> Am I getting closer :-)
>
> Jeff
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] catalog in nfs mount point

2008-01-23 Thread Nick Majeran
While it's certainly not supported, we have our legacy (restores only)
catalog, which is currently around 600 GB, on NFS, and it's actually
faster than the really old SAN disk on which it used to reside.
Everything seems to work fine, but I don't know how much I would trust
using NFS for a regular catalog.

-- nick

> > Anybody put their NetBackup Catalog in nfs mount
> point from NetApp?
> > Good or Bad ? What need to watch out ?
> > best practices setting ?
>
> From the NetBackup 6.5 Installation Guide for
> Unix, p. 14:
>
> Symantec does not support installation of
> NetBackup in an NFS-mounted directory. File
> locking in NFS-mounted file systems can be
> unreliable.
>
> p. 30
>
> Note: Symantec does not support installing
> the EMM on an NFS-mount.
>
> From the NetBackup 6.5 Admin Guide for Windows, p.
> 347, discussing catalog backup to disk:
>
> The NetBackup binary image catalog is more
> sensitive to the location of the catalog.
> Catalog backups that are stored on a remote
> file system may have critical performance
> issues. NetBackup does not support saving
> catalogs to a remote file system such as
> NFS or CIFS.
>
> p. 349, regarding relocating the image catalog:
>
> Note: NetBackup does not support saving the
> catalog to a remote file system. Therefore,
> Symantec advises against moving the image
> catalog to a remote file system such as NFS
> or CIFS.
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Restoring NDMP backups over the network (A Darren Dunham)

2008-01-23 Thread Nick Majeran
Right, NetApp writes in dump, the Celerra will do either dump or tar,
and who knows, you may even find some vendors using cpio.

One important thing to remember is that the backup format for NDMP
is strictly platform dependent, so you can't restore your EMC NDMP
dumps to a NetApp or vice versa.  You also shouldn't try to restore
to a local disk, as this won't work pre-6.0, and is very, very slow
(if it works at all) post 6.0.

-- nick


> On Tue, Jan 22, 2008 at 07:57:01PM -0700, Jeff Cleverley wrote:
> > 1.  Are all NDMP backups classified as dumps, or should we still be able
> > to restore selected files to another location either on a filer or
> > another file system?  The backups are configured for the file system
> > level backups, but of type NDMP.
>
> NDMP as used today doesn't specify a file format, but leaves it up to
> the device.  For your purposes though, the answer is yes.  All NDMP
> backups of a Network Appliance filer will use a dump serialization.
> (Other devices can use very different formats).
>
> > 2.  Are there any known gotchas that will either require us to connect
> > the library to the new filer or prevent us from doing a standard restore?
>
> While the stream is a pretty standard netapp dump, NBU will write extra
> information to the tape to identify it.  I don't think you can just hook
> up an tape written via NBU, type 'restore' and expect it to read it
> properly.  At a minimum, I'd think you'd have to forward the tape to the
> correct file, then skip past the NBU header.  I'm not sure if you could
> do that step again on later volumes if the image spans tapes.
>
> This forum post talks about steps to use 'ufsrestore' on a Solaris
> host.  I would assume similar steps would be required to use 'restore'
> on a filer.
>
> https://forums.symantec.com/syment/board/message?board.id=21&thread.id=36270
>
> --
> Darren Dunham   [EMAIL PROTECTED]
> Senior Technical Consultant TAOShttp://www.taos.com/
> Got some Dr Pepper?   San Francisco, CA bay area
>  < This line left intentionally blank to confuse you. >
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Some info on my experiences with 10GbE

2007-10-19 Thread Nick Majeran
Regarding the tape drives and compression -- this is the part that confuses me.

I can max out an LTO-3 drive at its native write speed of 80MB/s with no
problem using pre-compressed data (compressed Sybase dbdumps), even
with a measly 64kb block size.  This is using direct NDMP with 2 Gb/s
fc IBM LTO-3 drives.

Using contrived data, i.e. large files dd'ed from /dev/zero or
hpcreatedata, I have in the past maxed out 2 Gb/s LTO-3 drives at
approximately 170 MB/s, as you claim above.  However, this was using
256kb block sizes.  I have read reports where 2 Gb/s LTO-3 drives can
be pushed to 220-230 MB/s using the maximum block size supported by
LTO-3 (2 MB) and contrived data.

Now, if compression is done at the drive, I would think that with a 2
Gb/s interface, it should be able to receive data at roughly 170 MB/s,
but since the drive natively spins at 80 MB/s, it would compress that
data, 4x, as you claim, to get that 240 MB/s top end.  But, in my
mind, using 2 Gb/s or 4 Gb/s shouldn't make a bit of difference for a
drive that natively writes at 80 MB/s.

Does anyone else have experience with this?

Also, I've seen LTO-3 tapes in our environment marked as "FULL" by
Netbackup with close to 2 TB of data on them.

-- nick



Yep, I'm using jumbo frames.  The performance was around 50% lower
without it.  I'm not currently using any switches for 10GbE, the servers
are connected directly together.

Re 4Gb vs 2Gb tape drives - since the data is compressed at the drive,
we still need to be able to transfer the data to the drives as fast as
possible.  The highest throughput we've been able to get with a single
2Gb fibre HBA is about 190MB/s (using multiple 2Gb disk-subsystem ports
zoned to a single HBA port).  The highest throughput we've gotten with a
single 2Gb tape drive is 170MB/s.  Since this is near the peak we can
get with 2Gb, I assume that the 2Gb interface on the tape drive is
what's limiting our throughput.

Also, we get about 4x compression of this data on the tapes (~1600GB on
an LTO3 tape).  So, with 265MB/s at 4x compression, the physical write
speed of the drive is probably somewhere around 65MB/s (265/4).  Since
the tape compression ratio has remained the same with both 2Gb and 4Gb
drives, I'd guess that the physical drive speeds with the 2Gb drives
were probably closer to 40MB/s (170/4)...

-devon
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Some info on my experiences with 10GbE

2007-10-18 Thread Nick Majeran
Devon, just a few more questions:

So you *are* using jumbo frames?  I saw that it was enabled in ndd,
but you haven't mentioned it outright.

Also, what network switching equipment are you using for these tests?

Also, I'm curious, how is it that 4Gb/s LTO-3 drives can write
"faster" than 2 Gb/s with contrived data?  It seems like it shouldn't
make a difference, since the data stream is compressed at the drive.

thanks!

-- nick



We've been pretty happy with the T2000's.

The tape library is an IBM 3584, the tape drives are IBM's 4Gb FC LTO-3
drives, there's a dedicated 4Gb HBA for each drive, and everything is
connected to 4Gb McData switches.

We used to have IBM's 2Gb FC LTO-3 drives, and with those the peak
performance was around 165MB/s per drive.  These 4Gb drives peak at
around 265MB/s per drive, though with all 3 tape drives active, we see
throughput closer to 220MB/s per drive...I'm guessing we're bottlenecked
by the ports on our disk subsystem at the moment, but since performance
is more than acceptable we're not looking to tune this any further - at
least not until our LTO-4 drives are installed next month ;).

-devon
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Mixing block sizes on LTO Media ... Is it bad ?

2007-10-12 Thread Nick Majeran
It's a bad idea to mix block sizes, especially if you share media pools.
You will see status 84 errors with a message like this:

Error bptm (pid=29350) FREEZING media id xxx123, too many data blocks
written, check tape/driver block size configuration

So, if you are using SSO, and all of your media servers aren't at the
same block size, that could be bad.  Or if you have NDMP hosts that
write at a different block size than your media servers, this problem
will crop up as well.

When you change SIZE_DATA_BUFFERS, it is a global change.  All data
written to tape will be written in sizeof(SIZE_DATA_BUFFERS) bytes.
It's like using tar to write directly to tape: you can specify a
blocking factor (you can do this when writing to a regular file as
well), and that's exactly what this setting does.
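
To make that concrete (values below are only examples; 262144 bytes = 256 KB):

# on the media server -- affects everything that server writes to tape:
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
echo 16     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS

# the tar analogy: a blocking factor of 512 x 512-byte blocks = 256 KB records
tar -c -b 512 -f /dev/rmt/0cbn /data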

-- nick

Hi all,

We are looking at tuning SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS to keep our
LTO{2,3} drives streaming, however, an interesting question has come up within
our storage group.

Since we interleave blocks from various hosts (grouped by "windows", "unix",
"Exchange", "Oracle", "Sharepoint", etc etc) and the block size will potentially
differ when we implement the "SIZE_DATA_BUFFERS" tuning parameter, will this
cause issues when it comes to doing restores (reading the tapes) ? i.e. some
blocks on the tape will be 64KB and other blocks will be 256KB.

What are the implications of the aforementioned and is it bad practice ? And is
there an authoritative method to deal with mixed block sizes written to tape ?

 -aW
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] sso question

2007-10-09 Thread Nick Majeran
You can load tapes from the master because it has the robot control
and just loads the tape into a position in the library.  You can't
unload them, because the master server can't send the mt commands to
the tape drive to unload the tape that the robot can then grab.

Unless you know that you can feed 3 LTO-3 drives @ 80 MB/s per HBA, I don't
think a little over-subscription is going to kill you.
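
In other words, the workaround you already found is about as good as it
gets without adding the path (the device file and drive/slot numbers are
just the ones from your example):

# on a media server that has a device path to drive 7:
mt -f /dev/rmt/2 offline      # rewind and unload the tape in the drive

# then from the robot-control host, inside robtest:
m d7 s6                       # move the tape from drive 7 back to slot 6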

-- nick


guys i have master server running sol8 connected to L700.

L700 has 7 drives in it.

Master can see 6 of them through fiber

Med1 can see first 3 drives shared
Med2 can see first 3 drives shared

Med3 just added has access to drives 5,6,7 and can see them fine.

I have just loaded a tape on the master server ( robot control ) and put
a tape into drive 7. When I came to do an unload d7 so I could move the
tape back (m d7 s6) it wouldn't do it. I then realized that's because it
does unloads on rmt paths and I haven't added drive 7 to the master server.

Didn't add it for 2 reasons: 1. it's recommended to have only 3 drives per
2gb HBA ( which is why it has 6 drives in total ). Plus if I added drive
7 I would need another shared license I guess.

Question: Is there something wrong with my setup then? Should the
master have device paths set up for every drive in the robot? I know it
can put tapes in but it can't take them out. What I was able to do was a
mt -f /dev/rmt/2 offline from Med3 and then moved it using robtest from
the master.
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] NetBackup on Linux (RH4) vs. NetBackup on Solaria

2007-10-05 Thread Nick Majeran
Alexsandr,

We run our entire Netbackup (6.0MP5) environment on RedHat Linux
(AS4U5) on Dell hardware.  We back up about 500 TB a month, which is
about 75% direct NDMP.

Most issues we have with Netbackup are generally Symantec / Veritas
bugs rather than OS problems.  Linux is solid and fast.

As for some of the other responses I've read here about Solaris
hardware being better and RISC processors being "safer", I'd say
that was true 10 years ago, but it certainly isn't now.

Solaris 10 is wonderful, from what I've read and my limited use of it.
But I certainly can't say that Sun HW is better than Dell; in fact,
some of my recent experiences would point to the exact opposite
conclusion.

While Intel and AMD are still "CISC" processors, they decode all of
the CISC macro-ops into RISC-like micro-ops.  Over the years, the
amount of die space needed for that translation hardware has become
negligible.

-- nick
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] Veritas-bu Digest, Vol 17, Issue 60

2007-09-26 Thread Nick Majeran
If you do any NDMP backups at all, do *not* go to MP5.  MP5 introduces
a number of nasty bugs.  If you decide to stick with 6.0, wait for MP6
in December (or so they say).

-- nick


I am running NBU 6.0 MP4 on Windows, looking to upgrade
to 6.0MP5 or 6.5.
I was wondering if any of you have any insights on
which is the better and/or less painful way to go?
Thx
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] 10GB network + perf questions etc

2007-08-24 Thread Nick Majeran
RedHat AS4 x86_64

On 8/24/07, Len Boyle <[EMAIL PROTECTED]> wrote:
> Hello Nick
>
> What OS are you using on the 6850?
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Nick
> Majeran
> Sent: Friday, August 24, 2007 3:23 PM
> To: veritas-bu@mailman.eng.auburn.edu; Curtis Preston
> Subject: Re: [Veritas-bu] 10GB network + perf questions etc
>
> On one of our media servers, I can receive 250 MB/s into my 6850 with
> 4 GigE connections bonded into two LACP bonds, stream that out to six
> fc LTO-3 tape drives, and the box is plenty usable.  We are upgrading
> our network here, and I'd be surprised if I couldn't get closer to
> 300-350MB/s in the future.  The E6900 is still PCI-X, correct?  PCI-e
> is a must to do anything that I/O intensive, or so I would think.
> But, I'm probably not as smart as those Sun d00ds.
> ;-)
>
> -- nick
>
> > --
> >
> > Message: 4
> > Date: Fri, 24 Aug 2007 01:42:33 -0400
> > From: "Curtis Preston" <[EMAIL PROTECTED]>
> > Subject: Re: [Veritas-bu] 10GB network + perf questions etc
> > To: "Dominik Pietrzykowski" <[EMAIL PROTECTED]>,
> > 
> > Message-ID:
> >
> <[EMAIL PROTECTED]>
> > Content-Type: text/plain;   charset="us-ascii"
> >
> > Check out my blog entry on this topic:
> > http://www.backupcentral.com/content/view/133/47/
> >
> > I just spent some time today with some REALLY smart folks who were
> using
> > Intel's 10 GbE NICs with Sun 6900s and Solaris 10.  They can do about
> > 250 MB/s and have the box still function.  They can get it up to 400
> > MB/s, but when they do that, the box won't respond.  They couldn't run
> > top, they can't login, they can't run ps, etc -- NOTHING.
> >
> > So I'm thinking that with Solaris, you're not going to get anywhere
> near
> > 10,000 Mb/s (1200 MB/s).  Maybe with Linux or Windows and a TOE (TCP
> > offload engine) NIC, you might have a chance.  (I only say those OSs
> > because that's where they're making TOE NICs.)  One vendor replied to
> my
> > blog post and I'm looking into it.
> >
> > ---
> > W. Curtis Preston
> > Backup Blog @ www.backupcentral.com
> > VP Data Protection, GlassHouse Technologies
> >
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of
> Dominik
> > Pietrzykowski
> > Sent: Monday, August 13, 2007 4:42 PM
> > To: veritas-bu@mailman.eng.auburn.edu
> > Subject: [Veritas-bu] 10GB network + perf questions etc
> >
> >
> >
> >
> > Hi Group,
> >
> > Just curious to know if anyone is using 10GB network (fibre/copper, is
> > copper available yet at 10GB ???) on their media servers or clients
> and
> > what
> > sort of data transfer rates are they getting ???
> >
> > I'm waiting for my 10GB blade for the switch to come from the US, it's
> > taking ages 
> >
> > Also, are you using Solaris or windows on these ???
> >
> > What's the best you've seen on 100MB and 1GB copper/fibre networks ???
> >
> > What data rates are you seeing via HBAs 2/4GB 
> >
> > Are you using standard backup method or flashbackup ?
> >
> > I've played around with flash and on a server with about 4.7TB of
> small
> > files, I have gone from 10MB/s -> 70MB/s. It's a V490
> >
> > Thanks in advance,
> >
> > Dominik
> >
> >
> > ___
> > Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
> > http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
> >
> >
> >
> > --
> >
> > Message: 5
> > Date: Fri, 24 Aug 2007 03:07:27 -0700
> > From: "Peter Marelas" <[EMAIL PROTECTED]>
> > Subject: Re: [Veritas-bu] 10GB network + perf questions etc
> > To: "Curtis Preston" <[EMAIL PROTECTED]>, "Dominik
> > Pietrzykowski" <[EMAIL PROTECTED]>,
> > 
> > Message-ID:
> >
> <[EMAIL PROTECTED]
> itas.com>
> >
> > Content-Type: text/plain;   charset="us-ascii"
> >
> > If it's the card I'm thinking about the TCP checksum is already
> offload
> > to hardware. In fact most gigaswift cards support this today with 1
> Gbit
> > technology.
> >
> > I think a great

Re: [Veritas-bu] 10GB network + perf questions etc

2007-08-24 Thread Nick Majeran
On one of our media servers, I can receive 250 MB/s into my 6850 with
4 GigE connections bonded into two LACP bonds, stream that out to six
fc LTO-3 tape drives, and the box is plenty usable.  We are upgrading
our network here, and I'd be surprised if I couldn't get closer to
300-350MB/s in the future.  The E6900 is still PCI-X, correct?  PCI-e
is a must to do anything that I/O intensive, or so I would think.
But, I'm probably not as smart as those Sun d00ds.
;-)
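
For anyone curious, an 802.3ad (LACP) bond on RHEL 4 looks roughly like the
sketch below; this is a generic sketch rather than our exact config, and the
interface names and address are made up:

# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.10.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth2   (repeat for each slave NIC)
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none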

-- nick

> --
>
> Message: 4
> Date: Fri, 24 Aug 2007 01:42:33 -0400
> From: "Curtis Preston" <[EMAIL PROTECTED]>
> Subject: Re: [Veritas-bu] 10GB network + perf questions etc
> To: "Dominik Pietrzykowski" <[EMAIL PROTECTED]>,
> 
> Message-ID:
> <[EMAIL PROTECTED]>
> Content-Type: text/plain;   charset="us-ascii"
>
> Check out my blog entry on this topic:
> http://www.backupcentral.com/content/view/133/47/
>
> I just spent some time today with some REALLY smart folks who were using
> Intel's 10 GbE NICs with Sun 6900s and Solaris 10.  They can do about
> 250 MB/s and have the box still function.  They can get it up to 400
> MB/s, but when they do that, the box won't respond.  They couldn't run
> top, they can't login, they can't run ps, etc -- NOTHING.
>
> So I'm thinking that with Solaris, you're not going to get anywhere near
> 10,000 Mb/s (1200 MB/s).  Maybe with Linux or Windows and a TOE (TCP
> offload engine) NIC, you might have a chance.  (I only say those OSs
> because that's where they're making TOE NICs.)  One vendor replied to my
> blog post and I'm looking into it.
>
> ---
> W. Curtis Preston
> Backup Blog @ www.backupcentral.com
> VP Data Protection, GlassHouse Technologies
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Dominik
> Pietrzykowski
> Sent: Monday, August 13, 2007 4:42 PM
> To: veritas-bu@mailman.eng.auburn.edu
> Subject: [Veritas-bu] 10GB network + perf questions etc
>
>
>
>
> Hi Group,
>
> Just curious to know if anyone is using 10GB network (fibre/copper, is
> copper available yet at 10GB ???) on their media servers or clients and
> what
> sort of data transfer rates are they getting ???
>
> I'm waiting for my 10GB blade for the switch to come from the US, it's
> taking ages 
>
> Also, are you using Solaris or windows on these ???
>
> What's the best you've seen on 100MB and 1GB copper/fibre networks ???
>
> What data rates are you seeing via HBAs 2/4GB 
>
> Are you using standard backup method or flashbackup ?
>
> I've played around with flash and on a server with about 4.7TB of small
> files, I have gone from 10MB/s -> 70MB/s. It's a V490
>
> Thanks in advance,
>
> Dominik
>
>
> ___
> Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
>
>
>
> --
>
> Message: 5
> Date: Fri, 24 Aug 2007 03:07:27 -0700
> From: "Peter Marelas" <[EMAIL PROTECTED]>
> Subject: Re: [Veritas-bu] 10GB network + perf questions etc
> To: "Curtis Preston" <[EMAIL PROTECTED]>, "Dominik
> Pietrzykowski" <[EMAIL PROTECTED]>,
> 
> Message-ID:
> <[EMAIL PROTECTED]>
>
> Content-Type: text/plain;   charset="us-ascii"
>
> If it's the card I'm thinking about the TCP checksum is already offload
> to hardware. In fact most gigaswift cards support this today with 1 Gbit
> technology.
>
> I think a great deal of tuning would be required to achieve anywhere
> near 10 GB w/ a single TCP stream.
>
> Regards
> Peter Marelas
>
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Curtis
> Preston
> Sent: Friday, 24 August 2007 3:43 PM
> To: Dominik Pietrzykowski; veritas-bu@mailman.eng.auburn.edu
> Subject: Re: [Veritas-bu] 10GB network + perf questions etc
>
> Check out my blog entry on this topic:
> http://www.backupcentral.com/content/view/133/47/
>
> I just spent some time today with some REALLY smart folks who were using
> Intel's 10 GbE NICs with Sun 6900s and Solaris 10.  They can do about
> 250 MB/s and have the box still function.  They can get it up to 400
> MB/s, but when they do that, the box won't respond.  They couldn't run
> top, they can't login, they can't run ps, etc -- NOTHING.
>
> So I'm thinking that with Solaris, you're not going to get anywhere near
> 10,000 Mb/s (1200 MB/s).  Maybe with Linux or Windows and a TOE (TCP
> offload engine) NIC, you might have a chance.  (I only say those OSs
> because that's where they're making TOE NICs.)  One vendor replied to my
> blog post and I'm looking into it.
>
> ---
> W. Curtis Preston
> Backup Blog @ www.backupcentral.com
> VP Data Protection, GlassHouse Technologies
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Dominik
> Pietrzykowski
> Sent: Monday, August 13, 2007 4:42 PM
> To: veritas-bu@mailman.eng.auburn.edu
> Subject: [Veritas-bu] 

Re: [Veritas-bu] Error bptm(pid=3956) SCSI RESERVE failed

2007-07-14 Thread Nick Majeran
Are you doing regular SSO or NDMP SSO?
What OS is the host?

The first thing to try would be to reset the drives.  Failing that,
next try running vmoprcmd -crawlreleasebyname.  Failing both of those
things, reboot your I/O blades in your i2000.

We see this issue quite a bit with our i2000 and NDMP SSO.  Rebooting
the I/O blades always fixes the issue, but it also usually requires a
host reboot as well.
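
For reference, those commands look roughly like this (the drive name is
taken from the log below and is just an example):

# release stale SCSI reservations on all paths for a drive:
vmoprcmd -crawlreleasebyname IBMULTRIUM-TD28

# or down/up the drive to reset it:
vmoprcmd -downbyname IBMULTRIUM-TD28
vmoprcmd -upbyname IBMULTRIUM-TD28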

-- nick


> Starting 3 days ago I have been seeing a lot of the following error on one of 
> my media servers.  Has anyone seen this before and do they have a possible 
> cause or solution?
>
> NBU6.0 MP4
> Adic Scalar I2000 library
> 14 SSO drives
>
>
> 7/13/2007 9:26:33 AM - begin Duplicate
> 7/13/2007 9:26:34 AM - requesting resource SCACIBU01-hcart2-robot-tld-0
> 7/13/2007 9:26:34 AM - requesting resource L00882
> 7/13/2007 9:26:34 AM - reserving resource L00882
> 7/13/2007 9:26:34 AM - reserving resource 000418
> 7/13/2007 9:26:34 AM - reserving resource L00723
> 7/13/2007 9:31:48 AM - reserved resource L00882
> 7/13/2007 9:31:48 AM - reserved resource 000418
> 7/13/2007 9:31:48 AM - reserved resource L00723
> 7/13/2007 9:31:48 AM - granted resource L00882
> 7/13/2007 9:31:48 AM - granted resource IBMULTRIUM-TD23
> 7/13/2007 9:31:48 AM - granted resource 000378
> 7/13/2007 9:31:48 AM - granted resource IBMULTRIUM-TD28
> 7/13/2007 9:31:48 AM - granted resource SCACIBU01-hcart2-robot-tld-0
> 7/13/2007 9:31:28 AM - started process bptm (3956)
> 7/13/2007 9:31:34 AM - started process bptm (3956)
> 7/13/2007 9:31:34 AM - mounting 000378
> 7/13/2007 9:31:34 AM - started process bptm (5180)
> 7/13/2007 9:31:35 AM - started process bptm (4152)
> 7/13/2007 9:31:37 AM - started process bptm (5736)
> 7/13/2007 9:31:39 AM - started process bptm (5180)
> 7/13/2007 9:31:39 AM - mounting L00882
> 7/13/2007 9:32:48 AM - mounted; mount time: 00:01:09
> 7/13/2007 9:32:54 AM - positioning L00882 to file 206
> 7/13/2007 9:33:18 AM - positioned L00882; position time: 00:00:24
> 7/13/2007 9:33:24 AM - begin reading
> 7/13/2007 9:33:24 AM - end reading; read time: 00:00:00
> 7/13/2007 9:33:25 AM - positioning L00882 to file 207
> 7/13/2007 9:33:25 AM - positioned L00882; position time: 00:00:00
> 7/13/2007 9:33:26 AM - begin reading
> 7/13/2007 9:33:39 AM - Error bptm(pid=3956) SCSI RESERVE failed (reserve unit 
> scsi command failed)
> 7/13/2007 9:33:40 AM - Warning bptm(pid=3956) media id 000378 load operation 
> reported an error
> 7/13/2007 9:34:40 AM - current media 000378 complete, requesting next media 
> Any
> 7/13/2007 9:34:19 AM - started process bptm (3956)
> 7/13/2007 9:34:19 AM - mounting 000378
> 7/13/2007 9:35:14 AM - granted resource 000378
> 7/13/2007 9:35:14 AM - granted resource IBMULTRIUM-TD28
> 7/13/2007 9:35:14 AM - granted resource SCACIBU01-hcart2-robot-tld-0
> 7/13/2007 9:36:06 AM - Error bptm(pid=3956) SCSI RESERVE failed (reserve unit 
> scsi command failed)
> 7/13/2007 9:36:07 AM - Warning bptm(pid=3956) media id 000378 load operation 
> reported an error
> 7/13/2007 9:37:06 AM - current media 000378 complete, requesting next media 
> Any
> 7/13/2007 9:37:13 AM - started process bptm (3956)
> 7/13/2007 9:37:13 AM - mounting 000378
> 7/13/2007 9:38:10 AM - granted resource 000378
> 7/13/2007 9:38:10 AM - granted resource IBMULTRIUM-TD28
> 7/13/2007 9:38:10 AM - granted resource SCACIBU01-hcart2-robot-tld-0
> 7/13/2007 9:39:13 AM - Error bptm(pid=3956) SCSI RESERVE failed (reserve unit 
> scsi command failed)
> 7/13/2007 9:39:14 AM - Warning bptm(pid=3956) media id 000378 load operation 
> reported an error
> 7/13/2007 9:40:13 AM - current media 000378 complete, requesting next media 
> Any
> 7/13/2007 9:40:53 AM - granted resource 000378
> 7/13/2007 9:40:53 AM - granted resource IBMULTRIUM-TD28
> 7/13/2007 9:40:53 AM - granted resource SCACIBU01-hcart2-robot-tld-0
> 7/13/2007 9:39:55 AM - started process bptm (3956)
> 7/13/2007 9:39:55 AM - mounting 000378
> 7/13/2007 9:41:49 AM - Error bptm(pid=3956) SCSI RESERVE failed (reserve unit 
> scsi command failed)
> 7/13/2007 9:41:50 AM - Warning bptm(pid=3956) media id 000378 load operation 
> reported an error
> 7/13/2007 9:42:50 AM - current media 000378 complete, requesting next media 
> Any
> 7/13/2007 9:43:22 AM - granted resource 000378
> 7/13/2007 9:43:22 AM - granted resource IBMULTRIUM-TD28
> 7/13/2007 9:43:22 AM - granted resource SCACIBU01-hcart2-robot-tld-0
> 7/13/2007 9:42:26 AM - started process bptm (3956)
> 7/13/2007 9:42:26 AM - mounting 000378
> 7/13/2007 9:44:25 AM - Error bptm(pid=3956) SCSI RESERVE failed (reserve unit 
> scsi command failed)
> 7/13/2007 9:44:26 AM - Warning bptm(pid=3956) media id 000378 load operation 
> reported an error
> 7/13/2007 9:45:21 AM - current media 000378 complete, requesting next media 
> Any
> 7/13/2007 9:45:02 AM - started process bptm (3956)
> 7/13/2007 9:45:02 AM - mounting 000378
> 7/13/2007 9:45:37 AM - granted resource 000378
> 7/13/2007 9:45:37 AM - granted resource IBMULTRIUM-TD28
> 7/13/2007

Re: [Veritas-bu] Enterprise Vault backup - increase throughput ....

2007-07-11 Thread Nick Majeran
We have a similar situation, only our EV is bigger, and with more
files.  We've tried FlashBackup, and that only gets us to around 7
MB/s.  What we do right now is take a BCV once a day, mount it on a
separate host, and back that up as a raw, block-level backup.  Now we
see between 20 and 35 MB/s throughput; however, we have to back up the
whole partition (1 TB), even though only 750 GB of it is used.

You've explored all of the options there are, really.  That's the one
we've found that works best -- although recovery is a
serious nightmare.

-- nick

> OK, seems we have come to a solution that I thought I would share.
>
> According to our Messaging admin, there are several EV partitions that
> he has "closed", which is to say will no longer be accepting new writes
> and only reads. So I am effectvley backing up the same unchanged data
> over and over.
> Thus, I have been able to exclude about 5 partitions from that server
> and simply bpexpdate the tapes with the last full backup of those
> partitions for 7 years (our retention). This means I am backing up only
> changing partitions, hence smaller amounts of data, which should bring
> our backup window to within a 20hr period.
>
>
> 
>
> From: WEAVER, Simon (external) [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, July 11, 2007 2:07 AM
> To: Kennedy, Cameron; veritas-bu@mailman.eng.auburn.edu
> Subject: RE: [Veritas-bu] Enterprise Vault backup - increase throughput
> 
>
>
> Hi
> I am in a similar boat, and although the backups go on for a while, I am
> going to implement a SAN Media Server to help with this.
>
> As a rule I turn off any AV or disk I/O software on this box during the
> backup and ensure tracker.exe is not running.
>
> HTH a bit
>
>
>
> Regards
>
> Simon Weaver
> 3rd Line Technical Support
> Windows Domain Administrator
>
> EADS Astrium Limited, B23AA IM (DCS)
> Anchorage Road, Portsmouth, PO3 5PU
>
> Email: [EMAIL PROTECTED]
> 
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Kennedy,
> Cameron
> Sent: Tuesday, July 10, 2007 1:27 PM
> To: veritas-bu@mailman.eng.auburn.edu
> Subject: [Veritas-bu] Enterprise Vault backup - increase
> throughput 
>
>
> Hi,
> Anyone have some experiences to share about getting maximum
> throughput from their EV backups?
>
> We have Netbackup 5.3mp4 running on AIX with 1 AIX media server,
>
> We have Enterprise Vault (Windows 2003 NTFS) with a GB backup
> LAN.
> I am struggling to get greater than 2500KB/sec backup speed from
> our vault stores partition.
> I know that the greatest issue is that this partition has many
> tiny files and that this adds overhead in backing them up.
>
> Example: We have right now 606GB consisting of ~983000 files on
> this partition alone.
>
> I get a throughput of ~2500KB/s according to the Netbackup
> console but I would say it is a little lower based on my math.
>
> I have turned off VSP caching, which did help by ~1500KB/s.
>
> We have turned on the EV collector which should go about
> collecting the small files into larger CAB files, however, I am still
> seeing long delays in the time it takes for backups.
> During testing, I have found that I can kick off a new stream
> and backup another partition on the same server and get a throughput of
> 10,000KB/s which leads me to believe it is not a network related setup
> issue.
>
> Is there anything else that someone has done to increase
> throughput, or can anyone share what type of performance they are
> getting?
> I am considering a flashcopy scenario of the SAN disk associated
> with this server which won't fix the issue, but at least move it from the
> production server to an offline situation.
> I am also considering, doing a raw partition backup (haven't
> used this method before but read about it in the admin guide), knowing
> that only a restore of the entire partition would be an option in a DR
> situation.
>
> thanks for any advice.
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] VTL with NDMP

2007-07-05 Thread Nick Majeran
Yes, I've found the same; with direct NDMP, the SIZE_DATA_BUFFERS_NDMP
must match; three-way NDMP, however, is less picky.

On 7/5/07, Adams, Dwayne <[EMAIL PROTECTED]> wrote:
> I did some testing with SIZE_DATA_BUFFERS_NDMP using direct-attached tape and
> I think the setting is not used by my filer, but the buffer size you
> set must match the buffer size in the filer configuration.  So there is
> a connection between this setting and the local filer buffer size.  If
> they don't match I get I/O errors.
>
> Dwayne Adams
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Curtis
> Preston
> Sent: Thursday, July 05, 2007 11:32 AM
> To: Nick Majeran; Dellaripa, Rich
> Cc: Rajmund Siwik; veritas-bu@mailman.eng.auburn.edu
> Subject: Re: [Veritas-bu] VTL with NDMP
>
> I _BELIEVE_ that SIZE_DATA_BUFFERS_NDMP applies only to 3rd-party NDMP
> backups that are sent to a NetBackup server.  If you're doing filer-to-self
> or filer-to-filer backups (3rd-party backups to another filer),
> then I don't think that SIZE_DATA_BUFFERS_NDMP applies.  When NetBackup
> is telling a filer to dump to itself or to another filer, I believe the
> only thing it does is begin the process and then stand back and wait for
> it to finish, and for the NDMP server to tell the NetBackup server what it
> backed up.
>
> ---
> W. Curtis Preston
> Backup Blog @ www.backupcentral.com
> VP Data Protection, GlassHouse Technologies
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Nick
> Majeran
> Sent: Thursday, July 05, 2007 7:10 PM
> To: Dellaripa, Rich
> Cc: Rajmund Siwik; veritas-bu@mailman.eng.auburn.edu
> Subject: Re: [Veritas-bu] VTL with NDMP
>
> Again, let me reiterate: I have not attempted to tune NDMP backups
> from a NetApp filer to LTO-3 tape.  All of my tests were conducted
> from EMC Celerra NSX NAS devices.
>
> However, one thing you should be able to tune, and which is quite
> easy, is SIZE_DATA_BUFFERS_NDMP.  By default, NetBackup uses a 63k
> block size for NDMP-based backups.  Obviously, increasing that from
> the default to something more practical, say 256k, increases
> speed writing to tape.  LTO-3 supports a maximum block size of
> 2097152 bytes.
>
> On a Celerra, you can configure the PAX parameters below:
>
> param_name   facility  default current   configured
> paxWriteBuff PAX 64 64
> dump PAX  0  0
> allowVLCRestoreToUFS PAX  0  0
> checkUtf8Filenames   PAX  1  1
> paxStatBuff  PAX128128
> readWriteBlockSizeInKB   PAX 64 64
> filter.numFileFilter PAX  5  5
> paxReadBuff  PAX 64 64
> filter.numDirFilter  PAX  5  5
> noFileStreamsPAX  0  0
> nFTSThreads  PAX  8  8
> scanOnRestorePAX  1  1
> filter.caseSensitive PAX  1  1
> nPrefetchPAX  8  8
> nRestore PAX 16 16
> writeToArch  PAX  1  1
> writeToTape  PAX  1  1
> nThread  PAX 64 64
>
> So, there are quite a few parameters to configure from an NDMP
> standpoint on a Celerra.
>
> -- nick
>
> On 7/5/07, Dellaripa, Rich <[EMAIL PROTECTED]> wrote:
> > I second that inquiry...it was my understanding there wasn't much you
> > could do to tune NDMP backups.
> >
> > ---Rich Dellaripa
> >
> > -Original Message-
> > From: Rajmund Siwik [mailto:[EMAIL PROTECTED]
> > Sent: Tuesday, July 03, 2007 4:35 PM
> > To: Nick Majeran; veritas-bu@mailman.eng.auburn.edu
> > Subject: Re: [Veritas-bu] VTL with NDMP
> >
> What kind of tuning are you doing on NetApps to get that speed?
> >
> > >
> > > I have newer and older Net App boxes, assuming a 2GB san, what
> > performance can I expect using NDMP?
> > >
> > Out of the box, untuned, 50-80 MB/s to LTO-3.  Tuned, 70-160 MB/s.
> >
> >
> > -- nick
> >
> > >
> > > Thanks in advance!
> > >
> > > Paul Kenny
> > >
> > >
> >

Re: [Veritas-bu] VTL with NDMP

2007-07-05 Thread Nick Majeran
Again, let me reiterate: I have not attempted to tune NDMP backups
from a NetApp filer to LTO-3 tape.  All of my tests were conducted
from EMC Celerra NSX NAS devices.

However, one thing you should be able to tune, and which is quite
easy, is SIZE_DATA_BUFFERS_NDMP.  By default, NetBackup uses a 63k
block size for NDMP-based backups.  Obviously, increasing that from
the default to something more practical, say 256k, increases
speed writing to tape.  LTO-3 supports a maximum block size of
2097152 bytes.
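
For reference, a minimal sketch of one way to raise that on the media server
side (the value is in bytes, the path assumes a default UNIX install of
NetBackup, and for direct-attached NDMP the value should match whatever block
size the NDMP host itself is configured to write):

echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_NDMP

New jobs should pick the touch file up without a daemon restart, but test a
small backup and a restore before relying on it.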

On a Celerra, you can configure the PAX parameters below:

param_name   facility  default current   configured
paxWriteBuff PAX 64 64
dump PAX  0  0
allowVLCRestoreToUFS PAX  0  0
checkUtf8Filenames   PAX  1  1
paxStatBuff  PAX128128
readWriteBlockSizeInKB   PAX 64 64
filter.numFileFilter PAX  5  5
paxReadBuff  PAX 64 64
filter.numDirFilter  PAX  5  5
noFileStreamsPAX  0  0
nFTSThreads  PAX  8  8
scanOnRestorePAX  1  1
filter.caseSensitive PAX  1  1
nPrefetchPAX  8  8
nRestore PAX 16 16
writeToArch  PAX  1  1
writeToTape  PAX  1  1
nThread  PAX 64 64

So, there are quite a few parameters to configure from an NDMP
standpoint on a Celerra.
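
If readWriteBlockSizeInKB is the one you end up chasing, a rough sketch of
the commands (server_2 is a placeholder data mover name, and this assumes the
usual DART server_param syntax; double-check it against your release):

server_param server_2 -facility PAX -info readWriteBlockSizeInKB -verbose
server_param server_2 -facility PAX -modify readWriteBlockSizeInKB -value 256

The -info output should tell you the valid range and whether the change only
takes effect after a data mover reboot, so check that before scheduling
anything.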

-- nick

On 7/5/07, Dellaripa, Rich <[EMAIL PROTECTED]> wrote:
> I second that inquiry...it was my understanding there wasn't much you
> could do to tune NDMP backups.
>
> ---Rich Dellaripa
>
> -Original Message-
> From: Rajmund Siwik [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, July 03, 2007 4:35 PM
> To: Nick Majeran; veritas-bu@mailman.eng.auburn.edu
> Subject: Re: [Veritas-bu] VTL with NDMP
>
> What kind of tuning are you doing on NetApps to get that speed?
>
> >
> > I have newer and older Net App boxes, assuming a 2GB san, what
> performance can I expect using NDMP?
> >
> Out of the box, untuned, 50-80 MB/s to LTO-3.  Tuned, 70-160 MB/s.
>
>
> -- nick
>
> >
> > Thanks in advance!
> >
> > Paul Kenny
> >
> >
> ___
> Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] VTL with NDMP

2007-07-03 Thread Nick Majeran
We haven't tested at all with NetApp, unfortunately. EMC Celerra with
DMX disk on the back has been our NDMP mule, along with NBU 6 NDMP SSO
and fc LTO-3 as the target. We tested a few VTL vendors (who shall
remain unnamed), and found similar results.

On 7/3/07, Rajmund Siwik <[EMAIL PROTECTED]> wrote:
> What kind of tuning are you doing on NetApps to get that speed?
>
> >
> > I have newer and older Net App boxes, assuming a 2GB san, what
> performance can I expect using NDMP?
> >
> Out of the box, untuned, 50-80 MB/s to LTO-3.  Tuned, 70-160 MB/s.
>
>
> -- nick
>
> >
> > Thanks in advance!
> >
> > Paul Kenny
> >
> >
> ___
> Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] VTL with NDMP

2007-07-03 Thread Nick Majeran
> Message: 9
> Date: Tue, 03 Jul 2007 08:18:23 -0700
> From: Kenny <[EMAIL PROTECTED]>
> Subject: [Veritas-bu]  VTL with NDMP
> To: VERITAS-BU@mailman.eng.auburn.edu
> Message-ID: <[EMAIL PROTECTED]>
>
>
> I am doing some research to see if a VTL will help my NDMP performance.
>
> Currently I back up my NetApp environment over the LAN. I am looking to 
> improve backup and restore performance.

So you are running three-way NDMP right now?  Or are you backing up over NFS?

> I have an LTO-3 tape library that can be fibre-attached to the NetApp filer.
> My concern is that the NetApp filer will not be able to stream to the tape
> drives and I will kill my performance, since the tape will have to do lots of
> re-positioning and starting and stopping.
>
> I was thinking a VTL would be able to match the speed of the NetApp box,
> and thus the performance would be optimized.

LTO-3:  80 MB/s native, 160 MB/s compressed.  That's pretty fast.
VTL:  100-180 MB/s.  Make sure your VTL vendor is doing compression in
hardware; otherwise, turning on compression can reduce your speeds by
50%.

We have a very large NDMP (EMC Celerra -> LTO-3) shop here, and I can
tell you that very rarely does the backend disk have the horsepower to
push LTO-3 much faster than 95 MB/s with real-world tests.  With
contrived data on a stand-alone Celerra we saw less than a 5%
difference between LTO-3 and the VTLs that we were testing. YMMV, of
course, but we didn't see a large enough performance advantage to move
to VTL.  Not only that, but if you are backing up data with long
retention periods, be prepared to buy a whole lot of disk for your
VTL, unless you will archive off to tape from your VTL.

> Questions:
>
> I have newer and older NetApp boxes, assuming a 2 Gb SAN, what performance
> can I expect using NDMP?
>
Out of the box, untuned, 50-80 MB/s to LTO-3.  Tuned, 70-160 MB/s.

From my experience, VTLs don't really need much tuning, whereas tape does.
They are fast out of the box, but tape can be almost as fast (within
5%) when tuned properly.

> Do you see an advantage using VTL in place of multiple LTO-3 drives?

Depends on your data.  If you are backing up mostly data that is
already compressed, you will be limited to the native speed of LTO-3,
which is 80 MB/s.  A VTL will be faster in that case.  However, if
your data is mostly uncompressed, then tape is just as fast as VTL.

Also, if you have 2 Gb/s coming out of your NetApp, and 2 Gb/s coming
into your VTL, the most you are going to see is 240 MB/s.  That's
three LTO-3 drives.  Sure, you can create 16 virtual drives on your
VTL, but that's 500% oversubscribed on throughput.
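
(Back-of-envelope, assuming 2 Gb/s FC yields very roughly 200-240 MB/s of
usable payload: 16 virtual drives x 80 MB/s native is about 1280 MB/s of
nominal drive bandwidth, or roughly five to six times what a single 2 Gb/s
path can actually deliver.)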

>
> How many streams are possible (using NBU 6.x)? Does the number of streams help
> in a VTL environment? How about a tape environment?

A NetApp filer supports 16 concurrent NDMP sessions.  NDMP streams
cannot be multiplexed onto a single drive, but you can certainly run
multiple streams to individual drives concurrently.
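
From the NetBackup side, a minimal sketch of what that looks like (an NDMP
policy type with hypothetical volume names; as far as I know, each entry in
an NDMP policy's backup selections runs as its own job, so each can go to
its own drive):

/vol/vol1
/vol/vol2
/vol/vol3

With enough drives and NDMP sessions available, those run in parallel rather
than being multiplexed onto one tape.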

>
> Any recommendations on performance tuning with NBU 6 and NDMP?

This is more of an issue with tape than VTL.  With tape,
SIZE_DATA_BUFFERS_NDMP is your friend.

HTH.

-- nick

>
> Thanks in advance!
>
> Paul Kenny
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


[Veritas-bu] NDMP SSO and SCSI Reserve / Release Issues.

2007-05-18 Thread Nick Majeran
Does anyone have any experience turning off SCSI Reserve / Release in
NBU 6.0 with NDMP SSO?  I have 24 LTO-3 drives shared between a local
Linux host (NBU 6.0MP4) and 11 Celerra data movers, which are
configured to *use* SCSI reserve / release by default.  If a data
mover panics and fails over while holding some SCSI reservations,
those particular drives become useless to me (SCSI reservation
conflicts on my host) until I reboot the robot and the host.  Running
a reset from NetBackup fails, as it thinks the drive is in use, and if
I run it from the command line on the generic device, it clears up the
reservation conflict, but the local device paths are still hosed.

Do I still need SCSI Reserve / Release if NetBackup is the drive
broker in this case?  Looking through some of the Celerra
documentation, it says to disable reserve / release if doing dynamic
drive sharing in ARCserve and CommVault; I'm wondering if NBU would be
similar.
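
For what it's worth, you can list the NDMP facility parameters on a data
mover to see whether your DART release exposes a reserve/release setting at
all (server_2 is just a placeholder data mover name; I'm not naming the
exact parameter because it seems to vary by release):

server_param server_2 -facility NDMP -list

If a reserve-related parameter shows up, its -info output should say whether
changing it requires a data mover reboot.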

EMC and the robot vendor both say to turn SCSI reserve / release off;
Veritas says to leave it on.  Anyone out there with any experience with
these issues?

thanks.
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu