In my experience, if you do it correctly, you won't need the
"REQUIRED_INTERFACE" setting. It's caused me more problems then it has fixed.
I'm sure there are some specific areas were you'll need it, but I'd suspect
your overall environment could be designed better in those cases.
+---
Anyone have any experience with doing this, or other solutions I should look at
to accomplish it?
Thanks.
+--
|This was sent by spal...@spaldam.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+
If I were to have my Information Security team provide only one of the
passphrases when setting up the KMS database, which one would it be?
In other words, which one would be required to rebuild the entire database, and
without it I would not be able to re-create any of the keys used by the tape
I had the same issue with tapes not getting de-assigned after upgrading from
6.5.4 to 6.5.6.
The solution was to do some cleanup:
Stop NetBackup
Rename /usr/openv/netbackup/db/jobs/pempersist2 to
/usr/openv/netbackup/db/jobs/pempersist2.sav
Rename /usr/openv/netbackup/bin/bpsched.d/last_ti
To run multiple streams of bpduplicate from the command line, you need to run
multiple bpduplicate commands. You'll have to write a script that splits up
the images list into groupings and then runs a separate bpduplicate command for
each grouping.
Be careful with this however, if you are rea
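A sketch of the splitting script described above, written in Python for clarity; the image IDs and the `-backupid` flag usage are illustrative assumptions, so check the bpduplicate man page for the exact options your version supports:

```python
# Hypothetical sketch: round-robin a list of backup-image IDs into N groups
# and build one bpduplicate command line per group, so each can be launched
# as a separate process for parallel duplication. Image IDs are made up.

def make_duplicate_commands(image_ids, streams):
    """Split image_ids into `streams` groups, one command string per group."""
    groups = [image_ids[i::streams] for i in range(streams)]
    commands = []
    for group in groups:
        if not group:
            continue
        ids = " ".join("-backupid " + b for b in group)
        commands.append("bpduplicate " + ids)
    return commands

cmds = make_duplicate_commands(
    ["host1_123", "host2_124", "host3_125", "host4_126", "host5_127"], 2)
```

In practice each command would be launched in the background or via a process pool, and the grouping might instead split by source tape to avoid media contention.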
When backing up Hyper-V guests, I've seen throughput averaging around 11MB/s,
but with peaks as high as 40 MB/s. I would like to see better, and suspect it
would be, since the load on the servers was minimal during my testing.
+---
For the Hyper-V host, you probably only need a SAN media server, as you won't
be using it to back up anything else, right? That is, if the one node is able to
back up all the other nodes in the cluster without having to pull data across the
network.
+
mozje wrote:
> Just as a followup should people care. This is working perfectly in nbu, you
> only need to setup the cifs share correctly :) which was not the case at my
> first attempts.
How do you set it up correctly? (as opposed to incorrectly)
Thanks.
+---
I've used the method of swinging the SAN attached /usr/openv/ over to the new
server and it worked perfectly. TAR will have the same effect.
As others have mentioned: make sure your new system has NetBackup installed
beforehand, and test out the new server to make sure everything works on it
bef
Marion Hakanson wrote:
> spaldam via netbackup-forum < at > backupcentral.com said:
> Oh, we have in-line copy, we just don't have Advanced Disk and SLP's,
> which is what they want $50k for. As I said, Not Going to Happen.
> Our customers cannot afford any of
Martin, Jonathan wrote:
> How about dropping the No.Restrictions touch file into db\altnames\
Even under 6.5 it's not there by default. You have to create it.
+--
Marianne Van Den Berg wrote:
> I've been racking my brain to try and figure out where these orphaned entries
> are coming from - Seems these drives are still physically in the robot - a
> total of 19 drives. The 6 drives that show up in vmglob output with only a
> serial # can be seen by the 's
Marion Hakanson wrote:
> However, the Admin Console does not present one with the option of setting
> the retention -- only the storage unit and volume pool are available (also
> "if this copy fails" and "media owner").
It doesn't allow you to do it because you probably aren't licensed for it.
It's an NDMP backup in the sense that the source is using NDMP, but it's not a
full NDMP backup in that your target is not NDMP. It will have to go through a
media server to utilize OST, where most people using NDMP send it straight to an
NDMP enabled tape drive (bypassing the media server).
Maybe y
I've never done it, and don't recommend it for performance reasons, but I
believe you need to first mount the CIFS share as a drive on the local
NetBackup server, and then set up basic disk to use the drive letter (not the
network path).
+-
The question you really have to ask about PTT is if your media servers are
capable of handling the I/O or not (for the duplication jobs from the DXi7500
to tape), and it's really a question of whether you are pushing your tape drives
fast enough to fully utilize them. If you need help in this area,
I love the L700's, but after doing a lot of research last year I found that
Storage-Tek was falling behind on technology. The front runners for me were
SpectraLogic and Quantum/ADIC. We ended up with a Quantum i2000 with 10
LTO4's and 200 slots with plenty of room for future expansion by
I have the same problem with Hot catalog backups hanging with 6.5.3 on Solaris
10. It's supposedly fixed in 6.5.4; I'll know by the end of next week if it
truly is.
I work around it by waiting until all backups are done, and killing the hung
jobs, then doing an "nbrbutil -resetall". After tha
I've got the same problem with my catalog backup occasionally hanging with
6.5.3 running on Solaris. I worked around it by waiting until no jobs are
running and then doing an "nbrbutil -resetall". You may then have to use
robtest to manually unload any tapes left in drives.
This typically happens be
Make all your drives HCART, and all your tapes HCART; then they become
interchangeable. Of course you'll then have to find some other method of
tracking which tapes are of which type, such as volume pools and bar-code labels.
Dean wrote:
> I have this problem with 2 different capacity IBM 35
You can use bar-code rules to put different types of tapes into different
Volume Pools based on their labels, but you only get a single "scratch" pool.
If you want to use Scratch pools, you have to do it based on media type/density
settings.
However, to use the two different tape technologies
Thanks for everyone's response on DPM. I agree it looks like a great tool for
backing up remote offices, and possibly even other Microsoft specific
applications like SQL, SharePoint, and Exchange. My real problem though, is
that we are talking about using it for our entire data center, or at
So how do I convince my VP of IT that DPM doesn't even play in the same space
as NetBackup, when they keep hearing that DPM is "Enterprise Level" and think
that they can replace NetBackup with it?
I'd rather not have to actually set it up and end up with two backup
environments to manage.
+-
For NetBackup, I'd just use the bppllist command to list all of the policies,
and then run it again, once for each policy, to get the details. You can then
pull out the information you find relevant. I'd think you'd want to give a
listing of all systems being backed up, when they are backed up,
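The two-pass bppllist approach above can be sketched in Python; this assumes bare `bppllist` prints one policy name per line (true on the versions I've used), and the policy names here are made up:

```python
# Sketch: build one per-policy detail command from a `bppllist` run.
# First pass lists policy names; second pass would run `bppllist <name> -U`
# for each and scrape out clients, schedules, and file lists.

def policy_detail_commands(listing):
    """Turn raw `bppllist` output into one `bppllist <policy> -U` per policy."""
    policies = [line.strip() for line in listing.splitlines() if line.strip()]
    return ["bppllist %s -U" % p for p in policies]

# Live use would capture real output, e.g. subprocess.check_output(["bppllist"]).
sample = "PROD_ORACLE\nPROD_FS\nDMZ_WEB\n"
cmds = policy_detail_commands(sample)
```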
Sounds like either a networking issue with the client, or a timeout issue
caused by the client software not being able to read a file, or getting hung up
on a directory with a lot of little files in it.
Try defrag & checkdisk on the client and see if it hangs too.
+
Check your barcode rules, or use the "vmchange" command to force it to HCART.
+--
+--
Jack.Forester wrote:
> I seem to remember reading in the HA guide that going from a non-clustered
> server to a clustered server is not supported. Doesn't mean it's not
> possible, though. Marianne's suggestion of building the server as a cluster,
> then doing a bprecover sounds like it might
We've tested doing it in the order you described as well, but after the
recovery the cluster configuration within NetBackup gets messed up, as NetBackup
loses any knowledge of it being clustered. Once again it appears to be
because the EMM database gets overwritten; this time with the recovere
A few of your survey questions are poorly worded and make no sense, or seem
loaded. Yes, you can reduce costs related to backups for some of the scenarios,
but at potential costs to your overall budget, especially if a disaster occurs
and restores take too long.
+
We are using the clustering script that comes with NetBackup. The problem I
think is that the script drops the old EMM database and recreates a new one
with the new cluster configuration. It then says it's populating the new
database, but apparently it's not doing a very good job of putting a
I'm trying to replace the Master server with some new hardware (simple enough
to do, and I've done this successfully before) but I'd also like to cluster the
master server. I've done some testing on this using a catalog recovery that
looks perfectly fine at first, until it's brought into the
I want to use a VIP for my NOM install so that when I replace/move servers
around in the future I can keep the same IP and DNS name for the NOM connection
and not have to re-do firewall rules to all of our different NetBackup
environments. Is this even possible? Will using the bp.conf in the
I saw this problem myself, and adding another barcode rule fixed it.
+--
+---
To be safe, use ALL_LOCAL_DRIVES, then exclude all your database, temporary,
device, and other special files. There are plenty of other settings that you
can use to control other performance/thrashing concerns.
+--
I started to see problems with the Master server not allowing connections to
the EMM database shortly after setting up NOM. Apparently the EMM database has
a limit on the number of connections it will allow, and with NOM and a half
dozen Java Admin Consoles running, we started to see problems.
Sounds like what you want to do is a merging of masters, which officially
required Symantec consulting services; or you could make one of your masters
into a master of masters.
+--
We tried this, and came to the conclusion that it won't work because of the way
Exchange and the NetBackup extension for Exchange work together. It's
basically required that you use the DNS name for the Exchange server to access
the database properly in order to back it up.
+-
I've got a two man Engineering/Admin team, and between 1-3 operators on duty at
any given time to help manage our 1 large implementation and 4 other small
international implementations. Of course the Operators also have other
responsibilities as well, and one of the Engineers/Admins also spend
Sounds like you might be getting there, but it's not getting back. Make sure
your firewall to the DMZ is open both ways for the NetBackup ports. Try using
vnetd if you're pre-v6.x.
+--
I think it depends on your file list. If you specify a folder, it will remove
the folder (i.e. d:\archivefolder). If you specify specific files, it will
only remove the specific files (i.e. d:\archivefolder\*).
The best way to prove it is to test it...
+--
If you can only afford one master, I wouldn't cluster it across the WAN. If
your link goes down and both nodes start acting as the primary node, you'll
never get them back in sync without blowing away something valuable on one or
the other. Also, I don't think NetBackup masters are supported i
I've dealt with this exact issue many times before. It's a balancing act that
really boils down to this:
Only one restore job can access the same tape at a time, and each restore job
can only use one tape at a time (there is a new feature for doing parallel
restores that were multiplexed, but
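The constraint described above (one restore per tape at a time, one tape per restore at a time) is essentially a batching problem; here's a small sketch, where the job names and media IDs are made up:

```python
# Sketch: batch restore jobs so that no two jobs in the same batch need the
# same tape. Each batch can then run its jobs in parallel; batches run one
# after another. Greedy first-fit is enough to illustrate the idea.

def batch_restores(job_tapes):
    """Greedy batching: within a batch, every tape is used by at most one job."""
    batches = []
    for job, tapes in job_tapes.items():
        for batch in batches:
            if not (set(tapes) & batch["tapes"]):
                batch["jobs"].append(job)
                batch["tapes"].update(tapes)
                break
        else:
            batches.append({"jobs": [job], "tapes": set(tapes)})
    return batches

batches = batch_restores({
    "restore_a": ["T001", "T002"],
    "restore_b": ["T002"],   # shares T002 with restore_a -> pushed to next batch
    "restore_c": ["T003"],
})
```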
Sounds like your volume groups may not be consistent. Vault relies on them to
know where tapes are and which ones it's supposed to eject.
Another thing that it could be is if you are doing deferred ejects and
consolidated reports.
You need a separate Vault setup for each different type of med
ICS has its own version numbers separate from NBU.
+--
+--
First, you cannot run NetBackup servers in a Zone. The UNIX admin here already
tried that and the install script even said it wasn't supported.
Second, I wouldn't run master/media servers across a WAN; especially since NOM
has the ability to manage multiple Master servers from a centralized co
WEAVER, Simon (extern... wrote:
> Is there a good way to work out how many slots I could well need when going
> from 12 x LTO2 drives to a SL500 8 x LTO4 drives
What library are you currently using? I ask because you might be better off
keeping it. I'm currently dealing with a full rack si
I've always written scripts to do this for me. I pull information like the total
amount of data backed up each week, how many jobs are active within a given
time interval, etc.
You can also use the "tpclean -l" command to collect mount times for each tape
drive, and keep a log of how they change
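A sketch of logging mount times from that command; the column layout in `sample` below is a guess at the general shape of `tpclean -l` output (drive name, type, mount time, ...), so check what your version actually prints and adjust the field indexes to match:

```python
# Sketch: pull per-drive mount-hours out of a tpclean -l style listing so
# they can be appended to a log and compared over time.

def parse_mount_hours(listing):
    """Return {drive_name: mount_hours} from a tpclean -l style listing."""
    hours = {}
    for line in listing.splitlines():
        fields = line.split()
        if len(fields) < 3 or fields[0] == "Drive":   # skip header/blank lines
            continue
        try:
            hours[fields[0]] = float(fields[2])
        except ValueError:
            continue
    return hours

sample = """Drive Name   Type   Mount Time  Frequency
drive_lto4_0 hcart  812.5       100
drive_lto4_1 hcart  640.0       100
"""
hours = parse_mount_hours(sample)
```

Appending a timestamped line of these numbers each night gives the change-over-time view mentioned above.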
If you want a 100% foolproof verification, restore the data to /dev/null.
Even then, who knows what might happen to that tape the very next day...
That's why multiple copies are so important, or retention periods that overlap
by a couple of iterations.
+---
I've dealt with HIPAA, FDA, and other legal issues (despite what some people
seem to think, SOX has no say in this matter). Usually the retention is 7
years; though if it's a lawsuit, and you haven't destroyed the tapes, the
lawyers will make you re-inventory them and restore everything on th
What little experience I've had with Omniback tells me that it's not as fully
featured as NetBackup, but that it works really well for remote sites (one of
the reasons NetBackup came up with PureDisk - to better compete in that area).
I'd suggest reading up as much as you can on Omniback, so
I've seen this issue come up before: "Our backup environment doesn't work very
well!!", and for some reason they think software is the answer; ignoring the
fact that the new software also gets new hardware, a newly engineered strategy,
and a fresh new install to go with it.
NetBackup will al
You should only need to use SSH for new installs. Upgrades should be able to
use the NetBackup client itself to push the updates across.
Here's some of my notes:
/usr/openv/netbackup/bin/install_client_files ssh NEWHOST
NOTE: you must first have ssh keys for "root" in place for th
I'd do a bpmedialist -m MEDIAID on the tape you are trying to expire (or the
tape the image resides on) and see what server name it gives you.
+--
Has anyone had issues with their SAN catalog backups going slow, and have a
solution for it?
We do Hot Catalog Backups inline to tape (off-site - copy 1) and disk (on-site
- copy 2). We are now testing the restorability of the on-site SAN based copy,
and are having issues with it going slow (
ACS takes a lot of the control away from NetBackup, so NetBackup can't be
guaranteed it's getting exactly what it needs for complex operations like this.
Too many variables would be my guess, so they don't feel comfortable certifying
the combination.
+--
Quantum uses a variable block size for their dedup, which they claim gives them
a much higher de-duplication ratio. If you multiplex, the "natural boundaries"
that Quantum looks for get chopped up, and that lowers the effectiveness of the
de-duplication.
As for having problems with SSO and phy
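A toy model of the boundary-chopping effect described above; this is a simplified fixed-interleave illustration, not Quantum's actual chunking algorithm:

```python
# Two identical streams written back-to-back produce one long repeated run
# that a variable-block deduper can match. Interleave them in small blocks,
# as a multiplexed tape writer does, and the long repeat disappears.

def multiplex(streams, block):
    """Interleave streams in `block`-sized pieces, as a tape writer might."""
    out = []
    offsets = [0] * len(streams)
    while any(offsets[i] < len(s) for i, s in enumerate(streams)):
        for i, s in enumerate(streams):
            if offsets[i] < len(s):
                out.append(s[offsets[i]:offsets[i] + block])
                offsets[i] += block
    return "".join(out)

data = "ABCDEFGH" * 4      # same content in both streams
plain = data + data        # written back-to-back: one big repeated region
muxed = multiplex([data, data], block=4)
# All the bytes survive multiplexing, but the long repeat is diced up.
```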
We are converting to capacity based licensing, and SSO is included in the
Enterprise tier that we are getting, so no licensing concerns are involved.
The reason I'm looking at using SSO is because the VTL only supports up to 30
virtual drives or 30 streams of data at a time, and since it's also
Maybe I need to read up a little more on this media sharing. Any suggestions
on using it with Vault? We are using VTL's, and then using Vault to duplicate
and send off-site physical tapes.
+--
I've heard that it's not a good idea to use SSO with VTL's. I'm looking for
specifics as to why, and what kinds of problems it can cause. Any personal
experiences with specifics would be helpful. Thanks.
We are using a Quantum DXi5500 (emulating an ADIC i500 and DLT7000's per Quantum's
recomm
What do you mean by Vaulting? Are you duplicating, or just using it to eject
tapes?
If you are duplicating, I would suggest using an "alternate read server" and do
all your duplicating through a single server so the data gets condensed onto
fewer tapes.
I don't see why the alternate restore se
NetBackup shouldn't care what port SSH uses. All it cares about is that you
want to use SSH. Your SSH configuration and "services" configuration should
determine the rest.
+--
6.5.2 has been working wonderfully for me for the last month, running on
Solaris 9. The upgrade fixed a lot of issues we had after going to 6.5.0, and
we were on 5.1 MP5 before that with no known issues. (5.1 MP5 really was a very
solid version in my experience, and so far 6.5.2 seems to be working
The best thing I can say is: "Make Multiple Copies".
This is why I always make sure I have at least 2 copies of my next longest
retention before expiring the shorter retention(s).
For example:
If you do weekly full backups, keep your daily incremental backups for at
least 2 weeks.
In turn,
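The overlap rule above reduces to simple date arithmetic; a sketch, assuming weekly fulls:

```python
# If the newest full turns out to be unreadable, the previous good full can
# be up to two full-backup intervals old, and the incrementals taken since
# it must still exist to roll forward -- hence the "keep 2 weeks" example.
from datetime import timedelta

def min_incremental_retention(full_interval):
    """Shortest safe incremental retention for a given full-backup interval."""
    return 2 * full_interval

keep = min_incremental_retention(timedelta(weeks=1))
```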
bjgreenberg wrote:
> For all of you doing heavy reporting on NBU, I've discovered an inconsistency
> about how NBU reports job information between bpdbjobs and bpimagelist.
They should be different. bpdbjobs shows how much data was backed up based on
how it looks from a file system standpoin
If you can't trust your Backup administrator(s) - regardless of what backup
software you are using - then you are in big trouble. You can lock NetBackup
down so that only certain people can access the backup/restore functionality on
the master; but that also means they can't have "root" or "ad
If the filer doesn't have any actual NetBackup software behind it (using
NDMP?), it might not be possible to label a tape in a drive (virtual or not)
that is connected to a filer. It might be a good idea to have at least one
drive in the VTL show up on the master or another media server, so yo
You don't necessarily have to do the bplabel with drives connected to the
filer, but there should be a "-h" or "-host" switch you can use to tell bplabel
which media server to perform the label on.
+--
Apparently the problem was due to a custom report setup in NOM that had some
issues with it. Once I corrected the problem, I could see all my scheduled
reports just fine...
+--
Ok, now I can't seem to get any scheduled reports to work. If I set it up to
do a daily report, it will send the report immediately, but when I go to
manage my scheduled reports, it's blank. No scheduled reports. I've set up
half a dozen of them, but they just seem to disappear as soon as I
The "Catalog" section in the Admin GUI makes it extreamly easy to do for little
one-off's like this.
+--
+---
I know Symantec recommends NOM be on its own server, but is that really
essential?
If I have a big enough Master Server, and I'm not too concerned about security,
are there any other reasons why NOM (and all its pre-requisites) couldn't be
installed on the Master Server?
I've got 1 large envir
I see error 13's a lot on Windows servers, and usually a reboot will fix it.
I'm not sure if the same thing will work for NetWare, but I suspect it's
because of a file or group of files that didn't get file locks released
properly.
+-
There's a specific install package you have to use for Windows 2008 x64, and
yes it has to be a 6.5.2 base install (no patching allowed).
+--
The Admin Console is on the same CD as the server software, because it has a
lot of the same components as a full server install. It's essentially a server
that doesn't have any of the media or master server functionality on it.
+-
Sounds to me like some of your NetBackup services aren't running.
+--
+---
Apparently I had to install both authentication pieces, configure the
authentication, and then re-configure NOM once I had the authentication
services working. It's all up and running now.
What a confusing installation process. I had to read 5 different documents,
and essentially configur
I've seen this happen when the windows client isn't configured on the master as
a client. The No.Restrictions file will let you get away with browsing, but
when you initiate the restore, it'll hang because it's not a valid client.
kdeems wrote:
> I'm at Sungard testing DR.
>
> I just ran into
I can get the web-site to come up, but if I try to login it tells me:
"User authentication failed. Verify Symantec Authentication Service is running."
It's not running, if I try to start it nothing happens.
Here's the /var/VRTSat/vxatd.log file:
(858|1) Invalid AB configuration - cannot get co
selwyn wrote:
> After updating to 6.5.2 all my NDMP backup jobs were failing with status code
> 114.
>
> Looking at the job log I found this message path UNKNOWN. For some reason it
> looks like the ndmp agent is not sending the correct information about the
> backup path to Netbackup.
>
briandiven wrote:
> "At this time NetBackup does not support LDAP with AIX. As NetBackup is
> compiled to work with AIX 5 (5.1, 5.2, 5.3), it has to be built against
> the most common version. The AIX 5.1 (which NetBackup is compiled
> against) and AIX 5.2 did not (by default) contain the LDAP
I'm not sure if this is really the problem, but I don't think NOM is supported
in VM environments. Check the NetBackup performance and planning guide. NOM
likes a good sized box to run on.
+--