Well the narrow range applies only at the FT Media Server, which to us
'belongs to IT', so we can choose. It happens to fit what we run anyhow.
The SAN Client can be pretty much anything.
The win, we think, will be the administration - it really is just the
Client kit, so upgrades are much
I've written a couple of scripts to help with this, which would work I
think for both Vaulting and other duplications like SLPs. We found that
with D2D2T using AdvancedDisk staging, RMAN thinks the backup is on
something like @6. Restores work if the tape it is actually on is
still
Simon, I think that some of the replies on the list are not quite correct.
We are implementing the SAN Client/FT Media Server for a production data
centre, and I've spent several months working on it in our test labs.
It's true that the documentation is slightly confusing, especially if you
I can only comment on the library *with* the IO blades as that is how
we've always run it. So I really cannot say whether the claims that their
buffering evens out data rates to the drives stack up. Nor can I say if
they are a bottleneck, though we are about to use 4 x LTO4 on each 4Gb
On a similar track, has anyone experience of using staging disk (not DSSU)
with lifecycle policies with RMAN?
The question I'm trying to answer (about to test) is whether, if the
backup phase of the SLP is to a disk STU, the backint will report to RMAN
where the backup has landed on disk.
The
I think our problem is that although I agree that what you say works, we
don't keep an onsite copy of the backup. So although NetBackup will work
out where the primary backup image is when a restore is initiated from
RMAN, in our case that will just cause an operator request to get the
tape and
I don't think you can avoid the situation that when all options are
exhausted, the job will (a) retry and then (b) fail.
You can configure how many retries there are, and I think the time it
waits, but in the end if the retries are exhausted I believe that it will
fail the job. We are setting
I agree with the principle of what you say, though it is only practical if
your storage units are configured in a way that allows it. It would be
pretty easy and sensible, for example, with Oracle servers that have a SAN
Media Server, and hence a Storage Unit that is only used to back up the
If you are on NBU 6 consider a couple of possibilities.
There are some things that require you to have both short and
fully-qualified names in the EMM database. I could imagine a database
error if that kind of lookup failed. There is a command to add aliases
for media servers.
nbemmcmd
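For what it's worth, a sketch of the sort of alias command I mean - the
hostnames are made-up examples, and you should check the nbemmcmd syntax
against your own 6.x commands manual before running anything:

```shell
# Hedged sketch for NBU 6.x: add a short-name alias for a media server in
# the EMM database.  Hostnames are hypothetical examples.
NBEMMCMD=/usr/openv/netbackup/bin/admincmd/nbemmcmd
SHORT=mediasvr1
FQDN=mediasvr1.example.com
# Echoed as a dry run - drop the echo to actually run it.
echo "$NBEMMCMD -machinealias -addalias -alias $SHORT -machinename $FQDN -machinetype media"
```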
What are people using for the fragment size on LTO4 (or I guess LTO3) with
NBU 6.x?
I ask because the default is 1TB, i.e. more or less don't fragment.
The argument for a large fragment is that the backup doesn't have to stop
so often as it does briefly at the end of each fragment to update
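As a back-of-envelope illustration - my numbers, not a recommendation -
the count of boundary pauses is just backup size over fragment size:

```shell
# Illustrative arithmetic: how many fragment boundaries (and hence brief
# pauses) a 1 TB backup crosses at a given fragment size.
BACKUP_GB=1024          # total backup size, GB (example value)
FRAG_GB=2               # fragment size, GB (example value)
BOUNDARIES=$((BACKUP_GB / FRAG_GB - 1))
echo "A ${BACKUP_GB} GB backup in ${FRAG_GB} GB fragments pauses ${BOUNDARIES} times"
```

With the 1TB default the same backup pauses zero times, which is the
argument for leaving it alone.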
You need to get from EMC PowerLink the latest version of their document
Configuring NDMP Backups on Celerra. I think a number of the suggestions
that I've seen only apply to 'remote NDMP' where the tapes are on a
'normal' media server with the ndmpmoveragent running, and the data comes
from
You could look at:
http://blogs.sun.com/ValdisFilks/category/Technical
Improving I/O throughput for T2000 servers
You can also run CoolTuner, possibly, depending on the rules in your
environment - or look at other 'Cool Tools'.
However, don't run the PDDO plug-in on T2000 - the lack of
On a Windows 2003 SAN Client, using 'vxlogview -p 51216 -o 200' after
running a backup job, I've clearly got some major problem...but has anyone
ever come across any description of what the counters reported mean?
21/08/2009 13:37:58.413 [Counters]C O U N T E R S
Well NOM can do this for you if your servers are 6.x. It has a couple of
built-in reports, one for 'drives in use' (as in currently) and one for
'drive usage' for which you set a time frame, like 'last 24 hours' or
'last 7 days'.
I agree that I've seen people use DIY with vmoprcmd triggered
I think you are correct. I also found that you cannot add the special
barcode rules NONE and DEFAULT through the GUI any more, as the angle
brackets upset the new filters on what is allowed in the CLI commands that
it creates.
William D L Brown
veritas-bu-boun...@mailman.eng.auburn.edu
We did a similar exercise a few years ago to consider a successor to the
L700. I looked in detail at a number of libraries available at the time,
and we chose what was then the ADIC Scalar i2000 - now Quantum. We had
collated requirements and scored them before asking for information from
I'm having a problem with the SAN Client on RHEL5 x64, has anyone on the
list got this working? I've managed to get it to work on Solaris and AIX,
but on RHEL5 it cannot find the devices.
The OS is great, it finds the ARCHIVE Python devices as soon as they are
zoned in, creates the 'sg'
I'll admit straight up that no, I have no *experience* of this. However
we did have long discussions with Sun and Symantec about this, as Sun
raised it as a possible issue as the Sybase ASA is single threaded.
We had decided that on balance the T6320 (which is the same as the T5220
but in a
I have a question about the 5.1 to 6.5 upgrade, relating to the use of
non-reserved ports.
This is what the 5.1 manual says:
Accept Connections on Non-reserved Ports
The Accept Connections on Non-reserved Ports property specifies that the
NetBackup client service (bpcd) can accept remote
I'm sure it says in the manual that you can mix MSEO and non-MSEO on a
single tape. You may think that is going to create a problem as you'll
have to treat all your media as if it contains encrypted data. I know the
idea of encryption is so you don't lose data when you lose media, but it's
I was talking to Bill Bolton from Brocade at a SNIA Academy day in London,
asking about how to get an LKM that was not tied to one product - we might
want to use the same LKM for Brocade encryption switches for example - we
don't want a row of LKMs.
I got this info:
[There is an...]
Well NDMP logging is done differently, so you may want to search for the
technotes for that - it will likely give more information. However, I've
heard that it can produce an enormous amount of logging.
I've not tried remote NDMP any time recently, so I can't claim real-world
experience. I
I don't think it will. NOM 6.5.4 introduces the concept of a read-only
user. However NOM has no mechanism that I've seen to limit what a user
can see. All that I think you can limit via NBAC is what the NOM server
can see.
The private/public server groups don't have any means to say who can
We are only just starting to do this, but we did some testing and our plan
is:
Very small servers:
Install NetBackup Standard Client in the Global Zone and backup all file
systems from there - can only be done if there are no applications like
Oracle in the non-global zones.
Medium servers:
You could look at bpstsinfo -comparedbandstu but that only applies to
OpenStorage and I think in fact possibly a subset of those.
William D L Brown
veritas-bu-boun...@mailman.eng.auburn.edu wrote on 09/06/2009 17:59:17:
Is there a way in 6.5.x to verify all the pieces of an image exist
Has anyone tried adjusting the Sybase ASA cache sizes for either NBU or
NOM?
The 6.5.2 document updates introduce the dbadm tool, which amongst other
things allows you to change the cache settings in the
/usr/openv/var/global/server.conf file for NBU.
NOM has a similar file, documented in
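For illustration only - the switches below are standard Sybase ASA cache
options, but the values are placeholders rather than a tuning
recommendation; check what your own server.conf actually contains first:

```
# Illustrative server.conf cache switches - placeholder values only
-c 200M        initial cache size
-ch 512M       upper limit the cache may grow to
```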
I'm fairly certain that does not apply if you create an ndmp user account
on a NetApp filer. The documents have said that for years and we've
managed fine with a user-defined account. However, we are not using NBU
6.
We did have to limit the NDMP password length to 8 characters, which was
Is there a way to perform incremental backups using in-file delta
technology (backup only changes within a file)? I've seen a lot of
features/options for Netbackup but I have never seen this... If indeed
there isn't... why isn't this type of ESSENTIAL technology part of
Netbackup? As
You said:
I have four policy running every day from 5 PM to next day 7 AM .some
servers done the backup sucessful and the other not completed ,on the next
day the the server which was not copmleted yesterday completed on this day
etc..
OK, that sounds like your jobs are just overrunning,
Assuming that you are not using VxSS, did you restart bprd and bpdbm after
you added the SERVER= entry to the bp.conf? It is not enough to just
re-read the configuration.
Check for active backups using:
/usr/openv/netbackup/bin/bpps -a
To stop bpdbm:
/usr/openv/netbackup/bin/bpdbm -terminate
Do you have an entry:
CONNECT_OPTIONS = localhost 1 0 2
in the bp.conf on your Master Server? See page 50/51 of the Performance
Tuning Guide. It does refer to when the server is busy, which yours is
not. As I read it, this causes internal connections not to be funnelled
through vnetd but
http://seer.entsupport.symantec.com/docs/307083.htm
William D L Brown
Kathryn Hemness kfhemn...@ucdavis.edu
Sent by: cc...@gryffindor.ucdavis.edu
29-May-2009 18:44
To
veritas-bu@mailman.eng.auburn.edu
cc
william.d.br...@gsk.com
Subject
Re: [Veritas-bu] NB 6.5.3 slow response for
I looked in a lot of detail last year at VTLs and disk backup solutions.
They all have pros and cons - a lot depends as someone else pointed out on
what you are trying to fit it into.
If you have to create long term retention tapes the Avamar has a real
problem - it is not what it was designed
I'm trying to set up NOM on a Windows server.
I followed the recommendations in the ICS documents to set up a root
broker (RB) only server, and then separate Authentication Broker (AB) only
servers for Windows and UNIX.
NOM at 6.5.x allows you to install the VxAT as 'typical', which is
And after installing the 'dongle' you have to restart the library for it
to recognise it. Then it will say on the front panel that it is
web-enabled.
It is not the world's best GUI, and the annual license charge is very
steep considering there has never, as far as I've seen, been any
The L500 is I think a multi-LUN library; make certain your HBAs are
configured to enable multi-LUN support. Many SCSI adapters default to
having this disabled.
This would cause what you see.
William D L Brown
[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
19-Jul-2006 15:23
To
Well, you say that you have followed the Device Configuration manual, but
the message you show suggests that there is some step that has not worked.
The 'special procedure' is described in the manual, but what it says is
that this is done for you by sg.build and sg.install, which is why there
I'm fairly sure that the HCART vs HCART2 has no effect on density etc. The
settings for this are controlled in a couple of places, but relate to the
configuration of the drive at the OS level.
At the UNIX OS level there are several device files created, with
different characteristics - all
No, but there are technotes about how to restore from an NDMP tape to a
non-NDMP system. I seem to recall it involves skipping over the first
file on the tape. I think you would also need to think carefully about
the format the tape is written in. Celerra defaults to 'dump', if you
enable
We have thought a bit about this but the Decru units did not have the
connectivity needed, and are not cheap. We have dozens of tape drives,
and to insert a fibre-fibre encryption device would cost a huge amount.
We'd also have to pay to provide one at the (contracted) DR site.
So if you have
I have a Word template I created that uses the 3of9 barcode font. I use
it to make DLT barcodes, but only for test purposes. For production we
buy them; that way you get useful colour coding if you need it, and can be
sure they work. I have to cut mine up and fit them - as someone else
My understanding is that the very latest DataONTAP (7.1.1) and NetBackup 6
versions have finally addressed the problem of sharing the drive. However
I also thought I read that with NetBackup 6 the drive, if on the server,
no longer needs to be dedicated to NDMP anyhow - what was broken was the
We SUSPEND all tapes as they are ejected from the robot after the
overnight backups.
That way, if we have to bring them back on site for a restore, NetBackup
will not try to append further backups to them. We had many cases where a
tape was put back into the robot for a restore (physically
Yes, we found that any system that had VSP installed started trying to use
VSP, as the VSP_USE registry key is no longer used or honoured. The
toggle for Windows Open File Backup (WOFB) has moved to the client
database on the Master Server. Of course if VSP works it may not cause
an
I think if you turn on software compression this only applies to the data
in-flight, i.e. it is compressed by the client but decompressed by the
server, regardless of the type of storage unit. It is designed to be used
for slow links, where the CPU overhead at each end is more than
I've been looking at this and you have rather limited choices:
1. Install NBU while the server is still on Solaris 9; you can then
upgrade to Solaris 10 and push updates in the normal way.
2. Push the install from the Master Server using either the 'trusted
hosts' (r commands
The next step I want to accomplish is to install NBU on MEDIA_S1.
Questions related to this are:
1) I believe that I need to install the NBU media server software directly
on MEDIA_S1 from a CD distribution. Is this correct? I don't push
software from MASTER to MEDIA_S1
With the 5.1 Java Admin console you could not connect directly to a 4.5
server, though it said the console was compatible. It turned out that
what you can do is connect to a 5.1 server, and then change server to a
4.5 server. It requires the 5.1 server to be 'trusted' by the 4.5 server
by
You can get good test programs and test data file generation from the HP
web site.
http://h2.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=lpg50460&locale=en_US
Look for 'hptapeperf' and 'hpcreatedata' - the latter can produce specific
compressibility levels of the data.
That
Does anyone have an 'official' st.conf entry for IBM LTO-3? I can only
find ones for HP LTO-3.
William D L Brown
___
Veritas-bu maillist - Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
Try this:
http://seer.support.veritas.com/docs/205940.htm
William D L Brown
Carlos Britto [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
28-Mar-2006 20:24
Please respond to [EMAIL PROTECTED]
To
veritas-bu@mailman.eng.auburn.edu
cc
Subject
[Veritas-bu] Restore from the Tape
All,
we see 40-50MB/sec when doing disk to tape jobs.
That is at the minimum required for LTO3. Is anyone using LTO3 for
D2D2T? I'd like to understand what does and what does not work. There
is some pressure to start using LTO3 as the price gap narrows, but I doubt
many of our systems can
NetBackup 5.n was licenced by tape drives. NetBackup 6.n is licenced by
TB in the VTL. I don't know the detail, e.g. is that raw TB etc. I also
don't know if they changed the licencing for 5.n, or if it is still
needing drive licences.
William D L Brown
Gary Williams [EMAIL PROTECTED]
I think this is due to be changed in an upcoming MP for NBU 6 to allow
multiple streams off one DSSU. I'm sure I heard they were doing
something about that.
William D L Brown
[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
23-Mar-2006 15:59
To
Paul Keating [EMAIL PROTECTED]
cc
This is documented - either in a technote or in the troubleshooting guide.
Having to skip past a file by using the 'mt' command rings a bell.
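From memory, and hedged accordingly - the device path is an example, use
your own no-rewind device - the positioning looks something like this
(echoed as a dry run):

```shell
# Hedged sketch: skip past the first file on an NDMP-written tape with
# 'mt' before reading it on a non-NDMP host.  Example device path only.
TAPE=/dev/rmt/0cbn
echo "mt -f $TAPE rewind"
echo "mt -f $TAPE fsf 1"
```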
William D L Brown
Suhas BHIDE [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
03-Mar-2006 08:12
To
veritas-bu@mailman.eng.auburn.edu
cc
Subject
I have not used it with NetBackup, however I have seen it used by other
applications against tape libraries. It may in fact be the other way
round - the SCSI command set includes (for some but not all libraries) the
init element status (with range). Robtest, being a generic utility, is
just
Did you put much time into tuning the TCP/IP parameters for the Bandwidth
Delay Product over the link? I've been doing a lot of reading about TCP
tuning and a lot of it seems to relate to WAN link tuning for performance.
Basically keeping the pipe full by turning on window scaling and
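To illustrate the bandwidth-delay product with made-up numbers (not
measurements from this thread) - it is just link speed times round-trip
time, and it tells you the TCP window needed to keep the pipe full:

```shell
# Illustrative BDP calculation - example link speed and RTT, not advice.
MBIT=1000               # link speed, megabits/s
RTT_MS=50               # round-trip time, ms
# bytes = megabits/s * 1,000,000 bits * (ms/1000) s / 8 bits-per-byte
BDP=$((MBIT * 1000 * RTT_MS / 8))
echo "Need roughly a ${BDP} byte window for ${MBIT} Mbit/s at ${RTT_MS} ms RTT"
```

Without window scaling the classic 64KB limit falls far short of that, so
the link never fills.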
The following NetBackup licenses are available only for the NetBackup
Enterprise Server version.
• NetBackup Advanced Client
• NetBackup SAN Media Server
• NetBackup Shared Storage Option
• NetBackup StorageTek Virtualization Option
I'll send full detail off-list as it will be a binary file.
As Len said, it works fine in general with the caveat about DAR. How
much of a problem that is depends on the skill of your backup staff. It
is very easy when a call comes in to a helpdesk to 'restore folder xyz' to
just do it the same as any other non-NDMP restore. This is a bad idea,
as
We use them and there is some tuning. I'm not near a system with them,
but I do know we used the Sun documentation on these cards and set the TCP
high watermarks to 65535. Some changes that used to go in /etc/system
for 100Mb cards now go in a separate file for the ce cards.
These in
If you look back in the list archives you will see this explained.
HBAs and SCSI cards that use the windows miniport driver have a 64k
limitation set by the maximum_sg_list parameter - or rather its default
value as it is usually not defined. You can raise this in the registry
settings for
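By way of illustration - the driver key name and the value shown are
examples, not a recommendation; yours will differ by HBA:

```
; Illustrative registry location for the scatter-gather limit.
; <driver> is your HBA's service name (e.g. an Emulex or QLogic driver).
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\<driver>\Parameters\Device
    MaximumSGList  (REG_DWORD) = 0x41   ; example value only
```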
Version 6 can share drives between NDMP and non-NDMP, but you say the
drives are 'directly connected' to the NetApp. If they are SAN-attached
and using the NetApp HBA for tape, then you can also zone an HBA on e.g.
the Master server to the same drives...according to the theory. That
could
Yes, though there is almost nothing you can tune. Search on the internet
for the document entitled SAN Foundation Suite Tunables.
We have also found that our library vendor does not actually support using
the qlc driver with their libraries, as it does not comply with the
fibre-channel
Indeed in our test lab I've used the Qlogic card with the QLogic driver,
which is what ADIC recommend. No issues at all but I don't have enough
hardware to test SSO intensively. The latest Qlogic driver is
dynamically reconfigurable - it says - as long as you use their tools to
do so, and
You don't mention the server platform, but we've been warned (and found)
that our Solaris servers need to be up to date on Kernel patches to avoid
problems, just going to 5.1. I realise 6.0 is very different, but I
suggest that at least if you use Solaris it is likely to still be true.
Are you certain that you have the exact correct device strings?
I think if you do a
prtconf -vD | grep -A 1 inquiry-product-id
(Provided you have GNU grep)
You will see the product strings of all the devices, including the LTO-3.
Check that they match *exactly* what you have in st.conf.
I
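Not the official IBM entry that was asked for, but for anyone unsure of
the shape, an st.conf stanza looks like this - the inquiry string is the
IBM LTO-3 ID as I understand it, and every config value after it is a
placeholder to be replaced from vendor documentation:

```
# Illustrative st.conf shape only - config values are placeholders;
# take the real ones from your drive vendor, not from this sketch.
tape-config-list =
    "IBM     ULTRIUM-TD3", "IBM LTO Gen 3", "LTO3-data";
LTO3-data = 1,0x3B,0,0x18659,4,0x44,0x44,0x44,0x44,3;
```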
Only issue is when autodiscovering via the wizard: under limitations, the
LTO3 drives are showing up as 'Yes, see limitations', and limitations says
'unable to determine drive type'.
That is definitely the mappings, make sure you have the very latest
downloaded.
William D L Brown
Well at a guess I would re-apply the Emulex driver to the HBAs. Go into
device manager, right click on the HBA, and ask to Update Driver. If you
tell it to look in /windows/system32 for the driver it may find it,
without you needing the CD. Excuse me being a bit vague, I have not
tried
The problem will be in the setup of your library, not in NetBackup. You
are reporting the rightmost 6 characters of the barcode. Normally the
library will read the 'L1' 'L2' etc and identify the tape type - NetBackup
does not see those letters. Hunt in the library manuals for how to set
Definitely not, and I can't see it ever happening with SCSI-based tape
drives.
Your options really come down to:
1. Tune up the OS and NetBackup in great detail to make certain you
have the very best you can get from the media servers.
2. Look closely at how the data is coming to
Yes, the folder structure will be recreated 'above' the files. I guess
it may not be possible to restore an empty directory.
There is also a limit of 1024 files selected this way to restore, or NBU
again turns off DAR. You can break your restore up into lists of 1024
files, or you can
Did you request to restore files or folders? DAR is not supported for
directory restores - so it is *very* important that you pick files to
restore, and never folders. Otherwise it scans the whole tape, even if it
already found what you wanted.
William D L Brown
Paul Keating [EMAIL
Has anyone experienced problems with upgrading to 5.1MP3A? We have
recently upgraded a fairly large master/media server from 4.5FP8, and ran
into problems with the scheduler.
Some were fixed by making the (Solaris) kernel parameters much bigger (to
cope with 200+ jobs starting at once), but
We were upgrading in this case from 4.5FP8.
William D L Brown
Paul Keating [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
13-Dec-2005 16:00
To
veritas-bu@mailman.eng.auburn.edu
cc
Subject
RE: [Veritas-bu] Scheduler problems with 5.1MP3A?
What were you upgrading from?
There were 2
The DataONTAP manuals go into some detail about how to configure shared
drives, and the use of the scsi reservations. As I read it this is only
really designed if you are using the native 'dump' kind of backups.
In theory you may be able to do what you want if you follow the NetApp
When they did the webcast I asked - answer was Yes when MP1 ships, due
Jan 2006.
So, don't try too hard just yet
William D L Brown
I think you have got to tell us a bit more about the issue you are trying
to address.
Are you looking to get rid of tape altogether? Or are you looking at
alternatives as a primary backup, with a tape as secondary offsite copy?
Could you use the ability of FalconStor and NetApps to clone
Windows registry at a guess.
\HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\elx{something}\Parameters\Device
MaximumSGList is the parameter.
William D L Brown
[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
21-Nov-2005 08:49
To
veritas-bu@mailman.eng.auburn.edu
cc
Subject
The Windows SCSI drivers that use the miniport driver are often limited to
64K. The limit is in the SCSI driver, and can be raised by adjusting the
'maximum scatter-gather list' (MaximumSGList) size. This is in the
registry under the parameters for the specific SCSI driver, so step 1 is
to work out which