server_param server_x -f PAX -l will show you the settings for PAX.
readWriteBlockSizeInKB is the parameter you want to change to 256 KB. This
will require a data mover reboot, but you could always fail over and
fail back the data mover in question to minimize any downtime. CIFS clients
will
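On the Celerra control station, the listing and change above might look something like this (a sketch only: server_2 is a placeholder data mover name, and the -modify/-value syntax should be verified against your DART release):

```shell
# List the PAX facility parameters, as in the post above.
# Assumption: server_2 is the data mover in question.
server_param server_2 -facility PAX -list
server_param server_2 -facility PAX -info readWriteBlockSizeInKB
server_param server_2 -facility PAX -modify readWriteBlockSizeInKB -value 256
# The change takes effect after the data mover reboots; a standby
# failover/failback minimizes the outage:
server_standby server_2 -activate mover
server_standby server_2 -restore mover
```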
Most modern OSen should have a capable filesystem abstraction layer, like
VFS, which should keep the client from having to know too much about the
underlying file system, I would assume. Plus, rsync can do it, as long as
you aren't using the --whole-file option.
-- nick
The quick answer is that it's
I just completed one that I wrote in Python, which does both phase 1
and phase 2 imports.
Both phases are multi-processed; it spawns concurrent processes for
each instance of bpimport (and other commands) and lets NetBackup manage
the tape drive resources.
It also works on a best-fit approach, where
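A hedged sketch of what such a two-phase, multi-process import driver could look like. Assumptions beyond the post: the media IDs are already known, bpimport is on the PATH, and the standard two-phase flags (-create_db_info for phase 1) apply; the real tool would add media discovery, scheduling, and error handling.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

def build_import_cmd(media_id: str, phase: int) -> list[str]:
    """Return the argv for one import phase of a single piece of media."""
    if phase == 1:
        # Phase 1 reads the media and rebuilds catalog header information.
        return ["bpimport", "-create_db_info", "-id", media_id]
    # Phase 2 imports the actual images cataloged in phase 1.
    return ["bpimport", "-id", media_id]

def _run_one(task: tuple[str, int]) -> int:
    media_id, phase = task
    return subprocess.run(build_import_cmd(media_id, phase)).returncode

def run_phase(media_ids: list[str], phase: int, workers: int = 4) -> list[int]:
    """Spawn one bpimport per media ID; NetBackup arbitrates the tape drives."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(_run_one, [(m, phase) for m in media_ids]))
```

Capping the pool size at roughly the number of available drives keeps NetBackup's resource broker, rather than the OS scheduler, in charge of drive contention.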
Hmm, I wonder what Dave Cutler would have to say about the below?
I'm a *nix dweeb, but... I used VMS my senior year of high school,
where we had one of the first Alphas running VMS and I could gweep
away on a VT220 terminal. It certainly was an interesting piece of
software.
-- nick
On Fri,
Sadly, the standard Linux tools sar and iostat do not report on
tape devices. You'll need to use SystemTap to gather that info, and I
think there are a few SystemTap scripts floating around which will
report on that, IIRC.
-- nick
I would like to track the performance of my tape drives to
That's not exactly true; we didn't see issues with an inordinate
number of bpdbm processes on MP5 until the corrupt images had
expired, were ready to be pruned from the catalog, and couldn't be
processed. In our case, it was between three and six months
after those images were
We just experienced a rash of these on MP5 -- basically, there were
some bugs in MP5 which caused intermittent image corruption. bpdbm
would hang during cleanup or database backups, and database backups
failed with status 41. Fun times, let me assure you!
We ran through about 5 corrupted images
Message: 1
Date: Fri, 25 Apr 2008 11:42:15 -0500
From: Nick Majeran [EMAIL PROTECTED]
Subject: Re: [Veritas-bu] Veritas-bu Digest, Vol 24, Issue 61
To: veritas-bu@mailman.eng.auburn.edu, [EMAIL PROTECTED]
Message-ID
Yes, we see this error all the time. I always thought it was a
block-size mismatch between NDMP and non-NDMP backups sharing the same
volume pool and retention; i.e., non-NDMP backups use a block size of
256 KB, while NDMP (waiting on NAS reboots before I move to 256 KB) uses
63 KB.
If it's a bug
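If the mismatch theory holds, the usual place to align the non-NDMP and NDMP sizes on the media server side is NetBackup's touch files (a sketch; the path assumes a default UNIX install, and the values, in bytes, are examples matching the sizes discussed above):

```shell
# NetBackup reads tape block sizes, in bytes, from these touch files.
# Assumption: default install path; 262144 = 256 KB, 65536 = 64 KB.
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
echo 65536  > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_NDMP
```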
I know y'all will think it's crazy and negligent, but FWIW, on our
legacy (5.1MP5), restore-only environment with a 600 GB catalog, NFS
works pretty well.
-- nick
I'll add the voice of experience: I tried it and ended up backing it
out rapidly. NetBackup (v4, probably) didn't like NFS.
Right, NetApp writes in dump, the Celerra will do either dump or tar,
and who knows, you may even find some vendors using cpio.
One important thing to remember is that the backup format for NDMP
is strictly platform-dependent, so you can't restore your EMC NDMP
dumps to a NetApp or vice versa.
While it's certainly not supported, we have our legacy (restores only)
catalog, which is currently around 600 GB, on NFS, and it's actually
faster than the really old SAN disk on which it used to reside.
Everything seems to work fine, but I don't know how much I would trust
using NFS for a regular
Yep, that's it, at least from my understanding. We used to be an all
NetApp shop here, and have since switched to all EMC Celerra.
We have to keep a small NetApp around just for restores of this data,
which do occur occasionally.
AFAIK, I don't think Netbackup actually writes anything Netbackup
Regarding the tape drives and compression -- this is the part that confuses me.
I can max out an LTO-3 drive at its native write speed of 80 MB/s with no
problem using pre-compressed data (compressed Sybase dbdumps), even
with a measly 64 KB block size. This is using direct NDMP with 2 Gb/s
fc IBM
Devon, just a few more questions:
So you *are* using jumbo frames? I saw that it was enabled in ndd,
but you haven't mentioned it outright.
Also, what network switching equipment are you using for these tests?
Also, I'm curious, how is it that 4Gb/s LTO-3 drives can write
faster than 2 Gb/s
Alexsandr,
We run our entire NetBackup (6.0MP5) environment on RedHat Linux
(AS4U5) on Dell hardware. We back up about 500 TB a month, of which
about 75% is direct NDMP.
Most issues we have with NetBackup are Symantec / Veritas
bugs rather than OS problems. Linux is solid and fast.
As
If you do any NDMP backups at all, do *not* go to MP5. MP5 introduces
a number of nasty bugs. If you decide to stick with 6.0, wait for MP6
in December (or so they say).
-- nick
I am running NBU 6.0 MP4 on Windows, looking to upgrade
to 6.0MP5 or 6.5
I was wondering if any of you have any
On one of our media servers, I can receive 250 MB/s into my 6850 with
4 GigE connections bonded into two LACP bonds, stream that out to six
fc LTO-3 tape drives, and the box is plenty usable. We are upgrading
our network here, and I'd be surprised if I couldn't get closer to
300-350MB/s in the
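The numbers in that post can be sanity-checked with a little arithmetic. A sketch, with assumptions stated in the comments: GigE is taken as roughly 125 MB/s raw, LTO-3 native as about 80 MB/s, and the 50% link efficiency is simply fitted to the observed 250 MB/s rather than measured.

```python
# Assumed round numbers, not measurements from the post.
GIGE_MB_S = 125.0         # ~1 Gb/s expressed in MB/s, ignoring overhead
LTO3_NATIVE_MB_S = 80.0   # LTO-3 native (uncompressed) write speed

def bonded_throughput(links: int, efficiency: float = 0.5) -> float:
    """Usable MB/s from bonded GigE links at a given efficiency."""
    return links * GIGE_MB_S * efficiency

def per_drive_share(total_mb_s: float, drives: int) -> float:
    """MB/s each tape drive sees if the stream is split evenly."""
    return total_mb_s / drives

# Four bonded GigE links at ~50% efficiency match the observed 250 MB/s;
# split across six LTO-3 drives that is under 42 MB/s per drive, well
# below the 80 MB/s native rate -- the network, not the drives, is the
# ceiling in this setup, which is why the planned upgrade should help.
```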
RedHat AS4 x86_64
On 8/24/07, Len Boyle [EMAIL PROTECTED] wrote:
Hello Nick
What OS are you using on the 6850?
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Nick
Majeran
Sent: Friday, August 24, 2007 3:23 PM
To: veritas-bu
Are you doing regular SSO or NDMP SSO?
What OS is the host?
The first thing to try would be to reset the drives. Failing that,
next try running vmoprcmd -crawlreleasebyname. Failing both of those
things, reboot your I/O blades in your i2000.
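The escalation path above might look like this from the media server (a sketch; drive001 is an example drive name, so list the real names first):

```shell
# Sketch of the escalation above. Assumption: drive001 is a placeholder;
# list the configured drive names with tpconfig first.
tpconfig -d                                # list configured drives
vmoprcmd -downbyname drive001              # take the drive down...
vmoprcmd -upbyname drive001                # ...and bring it back (reset)
vmoprcmd -crawlreleasebyname drive001      # break a stale SCSI reservation
# Last resort: reboot the I/O blades in the i2000 from the library console.
```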
We see this issue quite a bit with our i2000 and
We have a similar situation, only our EV is bigger, and with more
files. We've tried FlashBackup, and that only gets us to around 7
MB/s. What we do right now is take a BCV once a day, mount it on a
separate host, and back that up as a raw, block-level backup. Now we
see between 20 and 35 MB/s.
From: Rajmund Siwik [mailto:[EMAIL PROTECTED]
Sent: Tuesday, July 03, 2007 4:35 PM
To: Nick Majeran; veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] VTL with NDMP
What kind of tuning are you doing on NetApps to get that speed?
I have newer and older NetApp boxes, assuming a 2 Gb SAN
] On Behalf Of Curtis
Preston
Sent: Thursday, July 05, 2007 11:32 AM
To: Nick Majeran; Dellaripa, Rich
Cc: Rajmund Siwik; veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] VTL with NDMP
I _BELIEVE_ that SIZE_DATA_BUFFERS_NDMP applies only to 3rd party NDMP
backups that are sent
Message: 9
Date: Tue, 03 Jul 2007 08:18:23 -0700
From: Kenny [EMAIL PROTECTED]
Subject: [Veritas-bu] VTL with NDMP
To: VERITAS-BU@mailman.eng.auburn.edu
Message-ID: [EMAIL PROTECTED]
I am doing some research to see if a VTL will help my NDMP performance.
Currently I back up my NetApp
We haven't tested at all with NetApp, unfortunately. EMC Celerra with
DMX disk on the back has been our NDMP mule, along with NBU 6 NDMP SSO
and fc LTO-3 as the target. We tested a few VTL vendors (who shall
remain unnamed), and found similar results.
On 7/3/07, Rajmund Siwik [EMAIL PROTECTED]
Does anyone have any experience turning off SCSI Reserve / Release in
NBU 6.0 with NDMP SSO? I have 24 LTO-3 drives sharing between a local
Linux host (NBU 6.0MP4) and 11 Celerra data movers, which are
configured to *use* SCSI reserve / release by default. If a data
mover panics and fails over