I usually don't use SSH to do that; I do it via FTP. Let's say your client is
Solaris 10. What I do is: svcadm enable ftp
Then on the NetBackup server: /usr/openv/netbackup/bin/install_client_files ftp
HOSTNAME client_username
Did you try that via FTP? Can you?
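As a sketch, the FTP-based push above comes down to two commands (HOSTNAME and client_username are placeholders for your environment; paths assume a default NetBackup install):

```shell
# On the Solaris 10 client: enable the FTP service via SMF.
svcadm enable ftp

# On the NetBackup server: push the client files over FTP.
# HOSTNAME and client_username are placeholders.
/usr/openv/netbackup/bin/install_client_files ftp HOSTNAME client_username
```

These invoke Solaris SMF and the NetBackup installer, so they only run on hosts with those products installed.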
About that, is it possible to do a hot backup and then write it to disk (so it
can be copied over the network to another system for backup purposes)?
thanks,
+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward
OK...I read through the documentation. Found out about these ports. We are
trying to develop a standard with our firewall team for:
1. Which ports need to be opened for a new master server setup.
2. In which direction communication needs to take place on each
requested port.
We have 152 master backup servers here. We back up just over 2.7 PB per month.
We have the following backup products:
Netbackup - 71
Networker - 52
BackEx - 28
Tivoli TSM - 1
Legato 7.4.2 is stable and the new GUI front-end is nice.
I've been doing data protection and disaster recovery for
Yes, the NetBackup hot catalog backups can be written to disk.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of toaster
Sent: Thursday, October 02, 2008 9:27 AM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: [Veritas-bu] NBU catalog backups hot vs cold
Disk pools are not supported. I think this is ridiculous, because if I want
NBU to handle putting it on disk, I have to create a normal DSSU and
segregate storage just for that.
Rusty Major, MCSE, BCFP, VCS ▪ Sr. Storage Engineer ▪ SunGard
Availability Services ▪ 757 N. Eldridge Suite 200,
See my answers below
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Dennis
Peacock
Sent: Thursday, October 02, 2008 8:38 AM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: [Veritas-bu] Netbackup 6.x firewall ports - Help!
OK...I read through the
I'm not sure about 6.5, but on previous versions you need to be in the
client directory for the OS you want to upgrade, don't you?
e.g. to install to a Solaris 10 client I would:
cd /usr/openv/netbackup/client/Solaris/Solaris10
./install_client_files client
Cheers
Klebba, Don wrote:
We’re currently
Has anyone run into this before?
http://seer.support.veritas.com/docs/258763.htm
The technote seems a bit vague to me. *Which* host name can't be
resolved? *What* storage unit configuration is involved?
How do I track down the host it's concerned with? Even if I run
bpexpdate with '-h
Hi Simon
I have a couple of questions for you.
What are you choosing for your backup selection when trying to do the Document
level restore backup? I believe it only works when you choose the web app site,
or the content database(s).
Secondly, is your SQL backend running on a cluster? I know
Has anyone had issues with their SAN catalog backups going slow, and have a
solution for it?
We do Hot Catalog Backups inline to tape (off-site - copy 1) and disk (on-site
- copy 2). We are now testing the restorability of the on-site SAN based copy,
and are having issues with it going slow
I'd do a bpmedialist -m MEDIAID on the tape you are trying to expire (or the
tape the image resides on) and see what server name it gives you.
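Concretely, that check might look like this (the media ID is a placeholder, the path assumes a default install, and the exact output layout varies by NetBackup version):

```shell
# Show the media record for the tape the image resides on, including
# which media server owns it. Replace A00001 with your media ID.
/usr/openv/netbackup/bin/admincmd/bpmedialist -m A00001
```

The server name in that output is the host the expiration request needs to resolve.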
You should only need to use SSH for new installs. Upgrades should be able to
use the NetBackup client itself to push the updates across.
Here's some of my notes:
/usr/openv/netbackup/bin/install_client_files ssh NEWHOST
NOTE: you must have SSH keys for root in place first for
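Getting the root SSH keys in place beforehand might look like this (a sketch only; NEWHOST is a placeholder, and key type and paths depend on your SSH build and policy):

```shell
# On the master server, as root: generate a key pair if one doesn't
# exist, then copy the public key to the new client so the push can
# log in without a password prompt.
test -f /root/.ssh/id_rsa || ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
ssh-copy-id root@NEWHOST

# Then push the client files over SSH:
/usr/openv/netbackup/bin/install_client_files ssh NEWHOST
```
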
I believe this technote is saying that one of the media servers is having
issues resolving its own hostname, or the master can't resolve the media
server hostname. Though if that was the issue, you'd be having more issues
than just bpexpdate failures.
Have you decommed any media servers lately?
I've seen this issue come up before: "Our backup environment doesn't work very
well!" For some reason they think software is the answer, ignoring the
fact that the new software also gets new hardware, a newly engineered strategy,
and a fresh new install to go with it.
NetBackup will also
What little experience I've had with Omniback tells me that it's not as fully
featured as NetBackup, but that it works really well for remote sites (one of
the reasons NetBackup came up with PureDisk - to better compete in that area).
I'd suggest reading up as much as you can on Omniback, so
Run from Omniback. Run fast and hard. NetBackup is your solution.
Shoot, I'd rather use tar and dd than Omniback.
Thank You,
Dennis Peacock
EBCA
Acxiom Corporation
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of spaldam
Sent: Thursday, October
I've dealt with HIPAA, FDA, and other legal issues (despite what some people
seem to think, SOX has no say in this matter). Usually the retention is 7
years; though if it's a lawsuit, and you haven't destroyed the tapes, the
lawyers will make you re-inventory them and restore everything on
On Thu, Oct 02, 2008 at 03:01:52PM -0500, [EMAIL PROTECTED] wrote:
I believe this technote is saying that one of the media servers is having
issues resolving its own hostname, or the master can't resolve the media
server hostname. Though if that was the issue, you'd be having more issues
If you want a 100% foolproof verification, restore the data to /dev/null.
Even then, who knows what might happen to that tape the very next day...
That's why multiple copies are so important, or retention periods that overlap
by a couple of iterations.
I've always written scripts to do this for me. I pull information like total
amount of data backed up each week. How many jobs are active within a given
time interval, etc.
You can also use the tpclean -l command to collect mount times for each tape
drive, and keep a log of how they change
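As a rough sketch of the kind of script I mean (the sample lines and the field position are fabricated for illustration; check your version's bpimagelist output layout before relying on it):

```shell
#!/bin/sh
# Sum a "kilobytes" column to get total data backed up in a window.
# Real input would come from something like:
#   bpimagelist -U -d 10/01/2008 -e 10/07/2008
# Treating field $4 as the KB column is an assumption about the layout.
sum_kb() {
  awk '{ total += $4 } END { print total }'
}

# Stand-in sample output (made up):
printf '%s\n' \
  '10/01/2008 02:10 1843 1048576' \
  '10/02/2008 02:12 1901 524288' | sum_kb
```

Running it on the sample lines prints 1572864; pointed at real bpimagelist output, the same pipeline gives a weekly total.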
On Thu, Oct 2, 2008 at 3:43 PM, Mark Steel [EMAIL PROTECTED] wrote:
It would be Solaris platform so could run functions (such as
the master server) in a zone for failover, and media server if
required as separate zones.
A media server in a non-global zone is not supported. I think that's
Hi
I need to design and build a NBU config for a 'virtual datacentre'
with two sites, both with production hosts.
I also need to keep systems to a minimum, and I need HA/DR.
I was thinking about cross-site clustered master/media server as the
start. It would be Solaris platform so could run
WEAVER, Simon \(extern... wrote:
Is there a good way to work out how many slots I might need when going
from 12 x LTO2 drives to an SL500 with 8 x LTO4 drives?
What library are you currently using? I ask because you might be better off
keeping it. I'm currently dealing with a full rack
ICS has its own version numbers separate from NBU.
I agree. Though it would be more expensive (for licenses), I also
recommend a Master at each site. If you do the remote method, you will
eventually lose the link to the Master and you won't be having much fun.
Rusty Major, MCSE, BCFP, VCS ▪ Sr. Storage Engineer ▪ SunGard
Availability Services
I've dealt with this exact issue many times before. It's a balancing act that
really boils down to this:
Only one restore job can access the same tape at a time, and each restore job
can only use one tape at a time (there is a new feature for doing parallel
restores that were multiplexed,
If you can only afford one master, I wouldn't cluster it across the WAN. If
your link goes down and both nodes start acting as the primary node, you'll
never get them back in sync without blowing away something valuable on one or
the other. Also, I don't think NetBackup masters are supported
I think it depends on your file list. If you specify a folder, it will remove
the folder (e.g. d:\archivefolder). If you specify specific files, it will
only remove those specific files (e.g. d:\archivefolder\*).
The best way to prove it is to test it...
Sounds like you might be getting there, but it's not getting back. Make sure
your firewall to the DMZ is open both ways for the NetBackup ports. Try using
vnetd if you're pre-v6.x.
Can you quantify this a bit? I agree in principle; now I need to sell
where NB crushes HP like the niche product it is.
-M
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Peacock
Dennis - dpeaco
Sent: Thursday, October 02, 2008 2:15 PM
To:
In my setup with windows clients and specifying a directory for the file list,
the directory and newly added files do not get removed by the archive job.
Only the files that existed at the time the archive started get removed.
On Thu, Oct 2, 2008 at 3:56 PM, spaldam
[EMAIL PROTECTED]wrote:
Second, I wouldn't run master/media servers across a WAN, especially since
NOM has the ability to manage multiple master servers from a centralized
console. If you lose the WAN, you're done and all your backups fail.
I disagree -