[Veritas-bu] NUMBER data buffers

2011-10-18 Thread Heathe Yeakley
So I've read the tuning guide, I've played around with different options for
SIZE and NUMBER of buffers and I understand the formula of SIZE * NUMBER *
drives *MPX as it relates to shared memory.

Here's my question. Of the four parameters:

MPX level

# of drives (I have 12 drives)

NUMBER of buffers

SIZE of buffers (must be a multiple of 1024 and can't exceed the block
size supported by your tape drive or HBA)

The NUMBER of buffers and the MPX level seem to be the two real
variables here. I have MPX set pretty low (2 or 3) and the NUMBER of
buffers set to either 16 or 32. When I multiply it all out, the hit on
my shared memory comes to less than a GB. My media servers are
dedicated Linux hosts that only function as media servers, and they
each have somewhere around 35 - 50 GB of memory.
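The multiplication above can be sanity-checked in a few lines (the values below are example numbers from my setup, not recommendations):

```python
# Rough shared-memory footprint of NetBackup tape buffers, using the
# tuning-guide formula: SIZE * NUMBER * drives * MPX.

def buffer_shared_memory(size_bytes, number, drives, mpx):
    """Total shared memory the data buffers can consume, in bytes."""
    return size_bytes * number * drives * mpx

# Example: 256 KB buffers, 32 per drive, 12 drives, MPX of 3.
usage = buffer_shared_memory(262144, 32, 12, 3)
print(f"{usage / 2**30:.2f} GiB")  # prints "0.28 GiB"
```

Even tripling NUMBER only scales the result linearly, which is why the total stays small relative to the memory in these boxes.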

With my current configuration, I'm not even scratching the surface of
the shared memory sitting idle in my systems while my backups run at
night. Is there any reason I *shouldn't* jack the NUMBER of data
buffers up to... say... 500? 1000? I've seen some people mention that
they have the number of buffers set to 64, but can we go higher?

I've searched around to see if there's a technote on the upper limit
of the NUMBER of buffers parameter. If there is such a technote, I
can't find it.

Any ideas?
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


[Veritas-bu] BMR Shared resource trees

2011-10-13 Thread Heathe Yeakley
I'm playing around with getting Bare Metal Restore set up in my environment.
I've:

* Read the guide (most of it)
* Set up BMR per the guide
* Set up a Windows boot server
* Configured a single shared resource tree
* Set up a vanilla Windows client
* Performed a BMR backup of the system
* Blown my test system away and BMR-restored it

I was actually amazed at how simple the whole thing was. Now that I have my
system back up, I have some questions about BMR.

#1) When my test Windows server came back up, it asked me to put in
the Windows 2008 license key. I didn't see anything in the BMR guide
about putting license keys in the SRT. Did I miss a step, or is this
by design? Do I need to develop a process to have my license keys on
standby in case we had an actual disaster and I had to BMR an entire
datacenter? (Aside from the license key, everything looks fine.)

#2) How granular do you folks get with Shared Resource Trees? Right now I
just have one 64-bit SRT, and I'm going to make a 32-bit one as well. Is
that how most of you do it? Or do I need to get more specific, like:

SRT #1) 64-bit development systems
SRT #2) 32-bit development systems
SRT #3) 64-bit production systems
SRT #4) 32-bit production systems
SRT #5) 64-bit test systems
SRT #6) 32-bit test systems

You get the idea. How does everyone else do it? Do you have 2-3 generic
SRTs, or do you get extremely specific based on your environment?

As always, thanks again.

- HKY


[Veritas-bu] NDMP in Practice

2011-10-04 Thread Heathe Yeakley
I have a procedural question, not a technical one.

I just switched over to capacity-based licensing, and I'm trying to
set up NDMP.

Master Server - RHEL 5
Media Server #1 - RHEL 5
Media Server #2 - RHEL 5
Storage Array - NetApp 6080

I've read the release notes, the NDMP guide, and the related guide mentioned
in the NDMP guide (the one that tells you how to enable NDMP on the
different filers out there).

I've:
* Run a fiber connection to the array
* Flagged that port as an initiator
* Zoned the array to see my library
* Set up authentication from my master server to the array
* Activated ndmpd on the array
* Set the SCSI reservation setting on the array
* Verified connectivity from my master server via tpautoconf -verify 'array name'
* Built a policy and tested a backup.

Everything seems to be working. I can back up and restore.

I went to set up my policy and noted that, according to the NDMP
guide, there is no equivalent of the ALL_LOCAL_DRIVES directive used
for server backups.

I asked my NetApp storage administrator if there is a command that
displays a list of exported NFS and CIFS shares. He directed me to the
command 'exportfs'. I'm using the output of that command as the basis
for what goes into the Backup Selections section of the policy.

Here's my question. I'm the backup administrator, but I am *NOT* the
SAN administrator. Since there is no NDMP equivalent of
ALL_LOCAL_DRIVES, I foresee a scenario where my SAN team deploys a new
share and doesn't tell me about it. Since I'm not aware of the new
share, I don't add it to the backup selection list, and it doesn't get
backed up.

I've been brainstorming a solution to this. For now, I've set up a
calendar reminder for the first week of every month to walk over to my
SAN team and ask if they've added any new shares that I need to be
backing up.

It works, but I'm wondering if there is a more sophisticated way to keep
track of the shares on an NDMP policy.
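The comparison I'm doing by hand each month could be sketched like this (a hypothetical helper, not a NetBackup command: it assumes you've already captured the filer's export list and the policy's backup selections as plain text, e.g. via ssh to the filer and the policy listing on the master):

```python
def unprotected_shares(exported, selections):
    """Return exported paths that appear in no backup selection."""
    covered = set(selections)
    return sorted(path for path in exported if path not in covered)

# Example: two shares are in the policy, a third was added later.
exports = ["/vol/vol0", "/vol/users", "/vol/newshare"]
policy = ["/vol/vol0", "/vol/users"]
print(unprotected_shares(exports, policy))  # prints ['/vol/newshare']
```

Run on a schedule, anything it prints is a share nobody is backing up.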

For those of you that utilize NDMP, how do you account for this?

Thanks.


[Veritas-bu] BMR

2011-09-28 Thread Heathe Yeakley
Hello, I am thinking of deploying NetBackup BMR, and I have some
questions about it.

I'm on page 18 (Chapter 2) of the administrator guide.

The first thing it wants me to do is set up tftp and dhcp. I
understand *why* it wants me to set up these services, but I'm
concerned about running afoul of policies enforced by the network and
infrastructure departments.

First of all, note that I have never set up, configured, or maintained
DHCP. I know that when I boot a DHCP-enabled system, it goes through a
process to connect to a server, which then leases it all the
information it needs to participate in the network. That's the extent
of my knowledge of DHCP.

With that said, the book wants me to set up a DHCP and tftp server on
my Linux BMR boot server. The enterprise LAN I'm on already has
corporate-level DHCP servers for all the clients on the network. Is
the step on page 18 having me set up a config file that will direct
incoming requests to the corporate DHCP servers that already exist on
my network, or am I setting up my own DHCP server? I'm just trying to
understand this so that I don't enable something that will result in
me getting shot by the network staff.
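For what it's worth, the shape of the decision can be sketched as an ISC dhcpd fragment (purely illustrative; the addresses, range, and boot filename are made up, and the network team may prefer to carry the equivalent next-server/filename options on the corporate DHCP servers rather than have a second dhcpd answering on the LAN):

```
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.150;
  next-server 10.0.0.5;     # the BMR boot (tftp) server
  filename "pxelinux.0";    # network boot loader served over tftp
}
```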

Please clarify.

Thank you.

Environment:

Master Server: Red Hat Enterprise 5 running NBU 7.1


[Veritas-bu] Backup Archive and Restore taking forever on DR server

2011-02-09 Thread Heathe Yeakley
I'm at a DR exercise. I've installed, patched, and configured NBU and
successfully imported a catalog. I'm in Backup, Archive, and Restore,
using the Directory Structure pane to drill down to the files I want
to restore. I'm restoring files to a server other than the one the
original files were backed up from. I've set the parameters for source
and destination, and everything looks good. When I pull up my list of
available backups, the list takes about 5 minutes to appear. I select
the policy I want to restore from, and it takes another 5 minutes to
put the directory tree in the Directory Structure pane. I select the
icon next to root to expand it, and it takes 5 more minutes to show me
the next layer.

You get the idea. On my master server back home, this process takes
seconds. Boom, boom, boom. I drill down select my file and go.

Here I expand root, then 5 minutes later I expand /dirA, then 5
minutes later I expand /dirA/dirB...

I've been digging around for about 30 minutes to see if there's some
type of a timeout setting or something that's causing this.

Have any of you seen this before?

- Heathe Kyle Yeakley


Re: [Veritas-bu] Backup Archive and Restore taking forever on DR server

2011-02-09 Thread Heathe Yeakley
My apologies, I should have added that.

In my live environment, I have two servers:
ziggurat - Master/Media RHEL4
obelisk - media RHEL4

For DNS I'm just relying on the /etc/hosts file, as I'm only trying
to recover 5 systems. The source system that I'm restoring from is not
here, nor have we built a replica of it. I'm trying to browse files
that were backed up on server-A in my live environment and restore
them to server-B here in the DR environment.

I can browse down the directory tree; it just takes 20 minutes to do
what I can do in 10 seconds in prod. I'm assuming it's trying to call
out to server-A. I don't see why it should have to do that, since the
metadata for the restored files should be in the catalog here on my
master server.

-Thanks.

- HKY
On Wed, Feb 9, 2011 at 3:38 PM, Infantino, Joseph jinfa...@harris.com wrote:
 What is the OS of the Master and Media server(s)?
 Is DNS working flawlessly?

 Thank you,

 Joseph A. Infantino II
 BackUp/Recovery Administrator
 HARRIS IT Services
 Assured Infrastructure Management
 Office: 321-724-3011 | Fax: 321-724-3392
 Email: joseph.infant...@harris.com




 -Original Message-
 From: veritas-bu-boun...@mailman.eng.auburn.edu 
 [mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Heathe Yeakley
 Sent: Wednesday, February 09, 2011 4:24 PM
 To: NetBackup Mailing List
 Subject: [Veritas-bu] Backup Archive and Restore taking forever on DR server

 I'm at a DR exercise. I've installed, patched, configured NBU and
 successfully imported a catalog. I'm in Backup, Archive Restore and
 using the Directory Structure pane to drill down to the files I want
 to restore. I'm restoring files to an alternate server than the one
 where the original files were taken from. I've set the parameters for
 source and destination and everything looks good. When I pull up my
 list of available backups, the list takes like 5 minutes to pull up. I
 select the policy I want to restore from and it takes like 5 minutes
 to put the directory tree in the Directory Structure pane. I select
 the icon next to root to expand root, and it takes like 5 minutes to
 show me the next layer.

 You get the idea. On my master server back home, this process takes
 seconds. Boom, boom, boom. I drill down select my file and go.

 Here I expand root, then 5 minutes later I expand /dirA, then 5
 minutes later I expand /dirA/dirB...

 I've been digging around for about 30 minutes to see if there's some
 type of a timeout setting or something that's causing this.

 Have any of you seen this before?

 - Heathe Kyle Yeakley
 ___
 Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
 http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu



[Veritas-bu] Admin app for Android, iPhone or iPad

2010-12-19 Thread Heathe Yeakley
Hey list,

With all the apps coming out for Android, iPhone, and iPad, are
there any apps that provide access to NBU information? I wouldn't
think such an app would need to do much. Maybe kick off backups and
restores and show the Activity Monitor.

I've done some Google searches and haven't seen anything, but I
thought I'd ask in case anyone has any info on the topic.

   Thanks.


- Heathe Kyle Yeakley


[Veritas-bu] VTL to FAS conversion

2010-10-26 Thread Heathe Yeakley
We purchased a NetApp VTL 1400 about two years ago. I integrated it
into my backup environment, but I've just never gotten it to do the
things I feel a VTL should be able to do. One of the problems is that
we don't do a lot of restores in our environment, so I'm not really
sure what the VTL bought us in terms of actual use versus just being
new and shiny.

I've seen comments here on the list, and articles in trade journals
and whatnot, indicating that the industry is trending away from VTL
and opting instead for:

* Disk to disk to tape
* Disk to a remote site via some type of replication technology
* Ditching tape altogether and relying on various disk technologies

Now, due to the nature of the data I'm backing up, I have to keep tape
in the picture. My VTL guide says that I can convert my VTL 1400 into
a FAS 3070 device. It got me thinking...

Since I've already purchased this VTL and the Front End Terabyte
license from Symantec, what if I:

* Converted my VTL to a small SAN
* Carved up the disks on the SAN and presented dozens of LUNs to my
media servers
* Turned those LUNs into advanced disk storage units via NetBackup
* Sent all my backups to the disk storage units
* Used Storage Lifecycle Policies to duplicate the images off to tape

In this scheme, when I came in in the morning, I'd have two copies of
everything: one copy sitting on my disk storage units (with, say, a
1-month retention) and a secondary copy on tape, which can then be
shipped to Iron Mountain.

This, I feel, makes the most of the equipment, licenses, and support
agreements I already have in place and optimizes my existing NetBackup
domain.

What are your thoughts on this proposed solution? Am I on the right
track? Is this a stupid idea?

One follow up question (assuming idea so far is good):

One thing the VTL had going for it is that I could build dozens of
virtual drives, so when my backups kick off at night, everyone gets a
drive. Almost no jobs queue up waiting for a storage unit, so the
amount of time it takes to actually perform the backups is greatly
reduced.

If I change my VTL into a SAN and present LUNs to my media servers to
act as disk storage units, should I make 1 or 2 monolithic 5 or 6 TB
LUNs per media server to hold all my backup images, or dozens of 500
GB LUNs so that each media server has multiple available storage units
and can run multiple backups concurrently?

Thanks for the information in advance.

- HKY


[Veritas-bu] Setting up FT on SAN attached /usr/openv

2010-10-06 Thread Heathe Yeakley
I'm trying to set up Fibre Transport for the first time. I'm running
NBU 7.0 with 1 master server and 2 media servers, all running RHEL
5.0. I've got the 7.0 SAN Client Guide open and I'm on page 34,
"Configuring an FT media server."

Small problem.

Step 1 says "Ensure that the HBAs are not connected to the SAN."

My OS is running on internal storage, but I built a LUN on the SAN,
presented it to each system, and mounted it as /usr/openv so I could
dynamically grow the disks if need be. If I disconnect all my HBAs
from the SAN, I lose access to the partition where NBU is installed,
which means I can't run the commands to set up FT.

Do I need to tear down my NBU installation, mount /usr/openv on
internal storage, reinstall NBU, and then rerun the FT setup commands?

- HKY


Re: [Veritas-bu] Setting up FT on SAN attached /usr/openv

2010-10-06 Thread Heathe Yeakley
My 3 servers each have 4 QLogic PCIe QLE 2462 HBAs.

On Wed, Oct 6, 2010 at 2:49 PM, Bahadir Kiziltan
bahadir.kizil...@gmail.com wrote:
 Hi Heathe,

 First, you need at least one supported QLogic HBA in order to
 configure an FT media server.
 You can't/won't use that one as an initiator, because it has to be
 marked as a target by NetBackup.
 During FT media server configuration, NetBackup needs to temporarily
 unload the qla2xxx module, which causes the server to lose SAN
 connectivity.

 So, it's not clear what HBA (brand/model) you have.

 Bahadir.

 On Wed, Oct 6, 2010 at 10:18 PM, Heathe Yeakley hkyeak...@gmail.com wrote:

 I'm trying to setup Fibre Transport for the first time. I'm running
 NBU 7.0 with 1 master server and 2 media servers, all running RHEL
 5.0. I've got the 7.0 SAN Client Guide up and I'm on page 34
 Configuring an FT media server.

 Small problem.

 Step 1 says Ensure that the HBAs are not connected to the SAN.

 My OS in running on internal storage, but I built a LUN on the SAN and
 presented it to each system and mounted it as /usr/openv so I could
 dynamically grow the disks if need be. If I disconnect all my HBAs
 from the SAN, I lose access to the partition where NBU is running
 which means I can't run the commands to setup FT.

 Do I need to tear down my NBU installation, mount /usr/openv on
 internal storage, reinstall NBU, and then rerun the FT setup commands?

 - HKY
 ___
 Veritas-bu maillist  -  veritas...@mailman.eng.auburn.edu
 http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu




[Veritas-bu] MSEO

2010-09-30 Thread Heathe Yeakley
I have a small NBU 6.5 environment where we are currently using a
Decru encryption appliance to secure our backups. We want to get away
from the Decru and have chosen to install and set up NetBackup Media
Server Encryption Option (MSEO). I've downloaded the software and
manuals, which I'm sitting down to read. Any tips, tricks, or pearls
of wisdom from those of you already running it?

Thanks.

- HKY


Re: [Veritas-bu] MSEO

2010-09-30 Thread Heathe Yeakley
Unfortunately, this particular NBU environment is running LTO2 drives.
I'm sure we'll upgrade the library eventually. I didn't know KMS was
free. I'll keep this in mind for future installations.

One follow-up question. The environment in which I'm setting up MSEO
is currently running 6.5. I plan on upgrading it to 7.0 in the next 6
months. Which would be better:

1) Upgrade to 7.0 first, then install and configure MSEO

2) Go ahead and install MSEO and get it running, then in a few months
upgrade to NBU 7 and deal with any MSEO upgrade issues at that point.

-HKY


On Thu, Sep 30, 2010 at 11:09 AM,  judy_hinchcli...@administaff.com wrote:
 My little pearl: if your tape drives are capable of hardware
 encryption (most are), use KMS instead.
 If you have the same manual as I have, it is chapter 6.

 1) Free - no license required
 2) Easy setup
 3) Overhead is on the tape drives and not on the client or media server
 4) Drawback: you can only have 2 encrypted pools on 6.5, but 20 on 7.0


 -Original Message-
 From: veritas-bu-boun...@mailman.eng.auburn.edu 
 [mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Dave
 Sent: Thursday, September 30, 2010 11:05 AM
 To: 'Heathe Yeakley'; 'NetBackup Mailing List'
 Subject: Re: [Veritas-bu] MSEO

 My only tip is to make sure your media server can handle the CPU cycles
 needed for MSEO. It can be a resource hog.

 Setup is actually very easy.

 Have fun

 -Original Message-
 From: veritas-bu-boun...@mailman.eng.auburn.edu
 [mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Heathe
 Yeakley
 Sent: Thursday, September 30, 2010 8:36 AM
 To: NetBackup Mailing List
 Subject: [Veritas-bu] MSEO

 I have a small NBU 6.5 environment where we are currently using a
 Decru encryption appliance to secure our backups. We want to get away
 from the Decru and have chosen to install and setup NetBackup Media
 Server Encryption Option. I've downloaded the software and manuals,
 which I'm sitting down to read. Any tips, tricks or pearls of wisdom
 from those of you out there already running it?

 Thanks.

 - HKY
 ___
 Veritas-bu maillist  -  veritas...@mailman.eng.auburn.edu
 http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


 ___
 Veritas-bu maillist  -  veritas...@mailman.eng.auburn.edu
 http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu





[Veritas-bu] Updating bp.conf on a *nix client

2010-09-28 Thread Heathe Yeakley
Greetings List,

--== WHAT I'M TRYING TO ACCOMPLISH ==--

I'm building a new NBU environment, and I want to add the names of my
new master and two media servers to the bp.conf file on my *nix hosts.
I picked a box at random, logged in, opened the bp.conf file in vi,
added 'SERVER = servername' entries, saved, and exited.

When I try to connect to the client from my new master server, I get a
status code 59: the new master isn't allowed to talk to the client. So
I assume I edited the bp.conf file incorrectly. I reopen it and delete
my three 'SERVER = servername' entries.

I want to be able to log into a client machine, edit the bp.conf file
in vi (or some text editor) and have my changes take effect without
having to get the master server involved.

--== WHAT I'VE TRIED SO FAR ==--

Now, if I:

* Log into my current master server
* Go to host properties
* Select the same client
* Go to the servers tab under client properties

and add the three new servers there, I can then switch over to the new
master server and connect to the client.

When I log back into my UNIX client and pull up the bp.conf file, I
see the same entries that I had typed in manually.

So my next thought is, "OK, so it isn't enough to just edit the
bp.conf file. Maybe after I edit the file I need to restart something
on the client side."

I looked for the netbackup [start | stop] script in
/usr/openv/netbackup/bin and /usr/openv/netbackup/bin/goodies and
didn't find one. I Googled how to restart the NBU client software on a
client and saw several different answers:

* restart xinetd
* run the netbackup [start | stop] script in /usr/openv/netbackup/bin
or /usr/openv/netbackup/bin/goodies
* run /etc/init.d/s[some number]netbackup [start | stop]

I tried all of these suggestions. Restarting xinetd after editing
bp.conf didn't put my changes into effect.

As I already stated, there is no netbackup [start | stop] script in
the aforementioned directories as there is on a master or media
server.

I didn't find a netbackup startup script in my /etc/init.d directory,
nor do I find any file at all named 'netbackup' even if I do:

find / -name netbackup

I browsed the HOWTOs on Symantec's site and noticed they mention
bpsetconfig and bpgetconfig. I made a text file with the new
'SERVER = servername' entries and tried using bpsetconfig. I
discovered that this overwrites the contents of bp.conf on the client,
as opposed to appending the new servers to the existing list.
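The append-instead-of-overwrite behavior I was after can be sketched like this (a hypothetical helper, not a NetBackup command; in practice you'd feed the merged text back through bpsetconfig rather than overwrite the live file by hand):

```python
def add_servers(bpconf_text, new_servers):
    """Append SERVER entries that aren't already in the bp.conf text."""
    lines = bpconf_text.rstrip("\n").splitlines()
    existing = {line.split("=", 1)[1].strip()
                for line in lines
                if line.strip().startswith("SERVER") and "=" in line}
    for name in new_servers:
        if name not in existing:            # skip duplicates
            lines.append(f"SERVER = {name}")
    return "\n".join(lines) + "\n"

current = "SERVER = oldmaster\nCLIENT_NAME = unixbox01\n"
print(add_servers(current, ["newmaster", "newmedia1", "newmedia2"]))
```

Running it twice with the same server names leaves the file unchanged, which is the property the raw bpsetconfig overwrite doesn't give you.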

My last idea was to launch jnbSA with the -l log.log -lc flags and try
to capture the commands that NBU uses when I update the client from
the master server using the GUI.

No dice. The master server makes the updates and logs no activity to
the command log file.

So... I'm stumped. Is it possible to make a simple edit to the bp.conf
file on a client and have the changes take effect?

- HKY


[Veritas-bu] Tips, Tricks, Hacks, and Best Practices

2010-09-28 Thread Heathe Yeakley
Once a month or so, I see a little pearl of wisdom pop up on the list
here. Something that makes me slap my head and say, "Why aren't I
doing that in my environment?"

For example, a message earlier this week mentioned excluding the /proc
and /dev filesystems on *nix clients. I'm not doing that, and I'm
facepalming myself for not thinking of it.

My question: I know Symantec publishes white papers on various best
practices involving their products, including NetBackup. Is there a
website, Symantec-owned or publicly operated, where a lot of these
tips, tricks, and hacks for managing an NBU environment are collected?
Or do I just need to go to the mailing list's website and start
reading through every thread, picking things out and taking notes?

Thanks.

- HKY


Re: [Veritas-bu] Device Recognition in RHEL5/NBU7

2010-09-24 Thread Heathe Yeakley
As far as zoning goes, I've gone over both environments with a
fine-tooth comb, and it looks like they're zoned exactly alike.

As far as multipath software, I have device mapper multipath
configured on both environments.

As far as letting the wizard finish: I decided to try that, and about
halfway through the wizard, NetBackup realized it's all the same
devices and presented me with 3 robots and 92 drives (the correct
number). I'm still confused why my 6.0 environment sees 3 robots and
92 drives after the first scan on the 2nd page of the hardware
configuration wizard, whereas my 7.0 environment has to get about
halfway through before it resolves everything down to 3 robots and 92
drives.
But hey, at least it's working now.

- HKY

On Fri, Sep 24, 2010 at 5:39 AM, Shekel Tal tal.she...@uk.fujitsu.com wrote:
 Did you by any chance zone the devices into multiple HBA ports in the
 new config but not the old?
 Perhaps you had PowerPath, DMP, or some kind of multipath software
 installed before?

 By the way - there shouldn't be any problems with seeing multiple
 devices; NB will only use the ones configured, and it shouldn't
 actually go and configure 16 robots and 400 drives. The device config
 wizard will know that some are the same devices - at least it does in
 my environment.

 Have you let the device config wizard finish - what does it actually
 configure after detecting all the devices?


 -Original Message-
 From: veritas-bu-boun...@mailman.eng.auburn.edu
 [mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Heathe
 Yeakley
 Sent: 23 September 2010 15:39
 To: NetBackup Mailing List
 Subject: Re: [Veritas-bu] Device Recognition in RHEL5/NBU7

 To clarify my question:

 I currently have a live NetBackup environment where the master server
 is a Dell 2950 and I have two media servers that are solaris boxes
 running RHEL 4. My department wants to decommission the older boxes
 and move NetBackup to newer boxes, and I wanted to upgrade to 7.0
 anyway, so I decided to kill two birds with one stone and build 1 new
 master and 2 new media, install NBU 7 on the new environment and then
 zone my existing libraries into the new environment and then point all
 the servers I'm backing up to the new environment.

 On my live NetBackup environment (the RHEL4 one), when I go into the
 NBU hardware configuration wizard and scan for devices, the wizard
 comes back and says it sees 3 libraries and 92 drives (which is the
 number I expect to see). When I shut down my live environment and
 bring up the NBU 7 environment and run the hardware wizard, it sees
 like 16 robots and 400 drives. I'm trying to figure out why my NBU
 6.0/RHEL4 environment is able to see one device file per robot/tape
 drive, but my NBU 7/RHEL5 environment thinks each path is 1 device.
 I'm not sure where to begin researching this issue. I'm in the process
 of skimming through the NBU 7 device configuration manual, the HBA
 documentation and Red Hat's storage documentation.

 Any light that can be shed on this will be greatly appreciated.

 - HKY
 ___
 Veritas-bu maillist  -  veritas...@mailman.eng.auburn.edu
 http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu



[Veritas-bu] Device Recognition in RHEL5/NBU7

2010-09-23 Thread Heathe Yeakley
I've just finished building a new NetBackup environment: NBU 7.0 on
Red Hat Enterprise 5. When I bring up the NBU interface to configure
the libraries, I see each of my libraries multiple times. I have 1
master and 2 media servers, each with 4 QLogic HBAs. I've zoned all 4
of the HBAs in the master and the first 2 in both media servers (I'm
going to use the other 2 HBAs for SAN clients).

I've double-checked my SAN zoning and multipathd config in Red Hat,
and everything looks fine to me. I have a feeling this is going to be
a 'doh!' moment when I finally figure out the issue.

Anyone else out there running Fibre Channel drives in RHEL? What do
you do to get the OS to see 1 device with multiple paths instead of
treating each path as 1 device?

Thanks.

- HKY


Re: [Veritas-bu] Device Recognition in RHEL5/NBU7

2010-09-23 Thread Heathe Yeakley
To clarify my question:

I currently have a live NetBackup environment where the master server
is a Dell 2950, plus two media servers that are Solaris boxes running
RHEL 4. My department wants to decommission the older boxes and move
NetBackup to newer ones, and I wanted to upgrade to 7.0 anyway, so I
decided to kill two birds with one stone: build 1 new master and 2 new
media servers, install NBU 7 on them, zone my existing libraries into
the new environment, and then point all the servers I'm backing up at
it.

On my live NetBackup environment (the RHEL 4 one), when I go into the
NBU hardware configuration wizard and scan for devices, the wizard
comes back and says it sees 3 libraries and 92 drives (the number I
expect to see). When I shut down my live environment, bring up the NBU
7 environment, and run the hardware wizard, it sees something like 16
robots and 400 drives. I'm trying to figure out why my NBU 6.0/RHEL 4
environment is able to see one device file per robot/tape drive, while
my NBU 7/RHEL 5 environment thinks each path is a separate device. I'm
not sure where to begin researching this issue. I'm in the process of
skimming the NBU 7 device configuration manual, the HBA documentation,
and Red Hat's storage documentation.

Any light that can be shed on this will be greatly appreciated.

- HKY


[Veritas-bu] Alternate Network Interfaces

2010-09-14 Thread Heathe Yeakley
My Fellow Admins,

I have a backup environment comprised of a single RHEL 5 master server
(which doubles as a media server) and a single dedicated media server,
also running RHEL 5. These 2 servers back up a small environment of
approximately 100 servers with a combined weekly backup volume of
about 20 - 30 terabytes.

On each server, only 1 of the 4 available NICs is configured. Each
morning I come in and a handful of my larger boxes are still running
with terrible throughput.

I've looked at CPU, memory, and other statistics, and it doesn't look
like I'm overloading my master and/or media server. I think it's
simply a case of the 1 NIC per box getting saturated during my backup
window, so each client isn't getting much throughput to whichever
machine is backing it up.

I've asked my network team to run a second cable to the second NIC on
each box, and I've brought both NICs online. I have selected 1 Solaris
10 client in my backup environment whose backup traffic I want to send
to the second interface on whichever box ends up backing it up at
night.

I've scanned through the NetBackup admin guide for UNIX/Linux and
browsed the knowledge base on Symantec's website for ideas on how to
do this. I've seen two or three ways to do it, and I'm trying to
figure out which one best suits what I'm trying to do.

This first KB article I read is:
http://seer.entsupport.symantec.com/docs/316357.htm

which suggests going into Global Attributes and setting "Use Specified
Network Interface" on the client. My question is, I have two different
interfaces I could potentially use: the 2nd NIC on my master or the
2nd NIC on my media server. The dialog implies I can only put in one
interface. Since it's a crapshoot which media server will get the task
of backing the machine up, I'm not sure this is the option for what I
want to do.

Then I found this article:
http://seer.entsupport.symantec.com/docs/269879.htm

which talks about the REQUIRED_INTERFACE directive in the
/usr/openv/volmgr/vm.conf file. But upon reading the technote, that
doesn't sound quite like what I'm trying to do.
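For reference, the vm.conf directive from that second technote looks
roughly like the fragment below. The interface hostname here is a
placeholder I made up, and this is a sketch from the technote's
description, not a verified config: REQUIRED_INTERFACE takes a
hostname associated with the interface you want Media Manager traffic
to use, so check the technote for your version's exact syntax.

# /usr/openv/volmgr/vm.conf on the media server
# (hypothetical hostname bound to the second NIC)
REQUIRED_INTERFACE = mediaserver-nic2

Note this affects the media server's own connections; it doesn't by
itself steer one client's backup traffic, which is why it may not fit
this use case.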

Any ideas?

- Heathe Kyle Yeakley


Re: [Veritas-bu] Need to find out which media server is backing up which client...

2010-08-12 Thread Heathe Yeakley
I don't know about a script to generate a list of which media server
backs up which clients, but can't you just pull up the Activity
Monitor, click File > Export, and export the Activity Monitor into a
text file? That could then be opened in Excel with all but two columns
deleted: Client and Media Server.

- HKY
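If you'd rather stay on the command line, the same client-to-media-server pairing can be pulled from bpdbjobs output. This is a hedged sketch: the field positions (7 = client, 8 = media server) are an assumption that varies by NBU version, so check the column headers of `bpdbjobs -report -most_columns` in your environment first. The demo below runs the awk step against a hypothetical comma-separated sample rather than live output:

```shell
# Real command (field numbers are an assumption -- verify first):
#   /usr/openv/netbackup/bin/admincmd/bpdbjobs -report -most_columns |
#     awk -F, '{print $7 "," $8}' | sort -u
# Offline demo with hypothetical sample output:
sample='101,0,3,0,prod,full,clientA,media1
102,0,3,0,prod,full,clientB,media2'
# Extract the assumed client and media-server fields, de-duplicate:
pairs=$(printf '%s\n' "$sample" | awk -F, '{print $7 "," $8}' | sort -u)
echo "$pairs"
```

The `sort -u` collapses repeated jobs for the same client/server pair into one line.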

On Thu, Aug 12, 2010 at 8:58 AM, Joseph Despres jdesp...@csc.com wrote:

 Other than using bpdbjobs

 What's a good way to find out which media server is backing up which client?

 Thanks

 Joe Despres
 Backup Engineer
 CSC

 3521 Ribcowski Ct  Raleigh, NC  27616
 GIS  |   (o): 1.919.266.1799   |   (c): 1.919.931.9674   |
 jdesp...@csc.com   |   www.csc.com
 https://c3.csc.com/groups/netback



Re: [Veritas-bu] Java Admin Console on Linux

2010-07-28 Thread Heathe Yeakley
It's been a while since I've installed NetBackup, but I thought the
Java files (if you want to run it as a remote X application) were
included in the base install. Log into your Linux master and check
whether you have the server-side Java console:

# file /usr/openv/java/jnbSA

I assume you might be talking about installing the Java console on a
workstation. If so, I forget which disk has those files, but I'll
look. I've used both methods for running the NetBackup console:
installing the Java console on my local machine and running jnbSA off
my master server via remote X windows. I actually prefer the latter
for two reasons:

1) If you upgrade NetBackup on your master server and you're relying
on the workstation Java Console, you have to upgrade the Java Console
on your local workstation also.

2) If you have multiple NetBackup environments running different
versions. I have 3 environments, two 6.0 and one 6.5. I tried
installing the 6.0 Java console and the 6.5 Java console on my local
workstation and had issues.

Now I use a remote X emulator (MobaXterm, love it) and I just log
in and run '/usr/openv/java/jnbSA ' to administer any environment. I
don't have to know what NBU level is currently running; it's the same
command every time.

Just my two cents. If you have a cooler way, I'd love to hear it!!

- HKY

On Wed, Jul 28, 2010 at 9:20 AM, Nate Sanders sande...@dmotorworks.com wrote:
 I can't for the life of me figure out where/how to install just the Java
 Admin Console on Linux for 6.5. I have 4 DVDs full of NBU software but
 all I see are clients and server installs. Searching Symantecs site
 usually just leads me to patches and updates, but not the base installer.

 Can anyone help clarify this mess?

 --
 Nate Sanders            Digital Motorworks
 System Administrator      (512) 692 - 1038






Re: [Veritas-bu] command to figure out size of all backups in a given period

2010-07-15 Thread Heathe Yeakley
How much data was backed up in the last week:
- /usr/openv/netbackup/bin/admincmd/bpimagelist -U -hoursago 168

How much data was backed up on April 26th:
- /usr/openv/netbackup/bin/admincmd/bpimagelist -U -d 04/26/02
00:00:00 -e 04/27/02 00:00:00

How much data was backed up on client XYZ in the last week:
- /usr/openv/netbackup/bin/admincmd/bpimagelist -U -client XYZ -hoursago 168

List the catalog images for a given client:
- /usr/openv/netbackup/bin/admincmd/bpcatlist -client xxx
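If you need an actual total rather than eyeballing the listing, you can sum the kilobytes column of `bpimagelist -U` output with awk. This is a hedged sketch: the assumption that KB is the 5th whitespace-separated field (after the two date/time fields, the expiry date, and the file count) depends on your NBU version's output format, so check the real headers before trusting it. The demo sums a hypothetical two-line sample instead of live output:

```shell
# Real command (field position is an assumption -- verify the headers):
#   /usr/openv/netbackup/bin/admincmd/bpimagelist -U -hoursago 168 |
#     awk '/^[0-9][0-9]\//{sum += $5} END{print sum " KB"}'
# Offline demo with hypothetical sample lines:
sample='04/26/2002 02:01  05/26/2002      1200   500000  N  Full Backup   prod
04/27/2002 02:01  05/27/2002       300   250000  N  Differential  prod'
# Match only data rows (they start with a MM/ date), sum the assumed KB field:
total_kb=$(printf '%s\n' "$sample" | awk '/^[0-9][0-9]\//{sum += $5} END{print sum}')
echo "Total KB: $total_kb"
```

Divide by 1048576 if you want the figure in gigabytes for the disk sizing request.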


On Thu, Jul 15, 2010 at 1:25 AM, ddobek
netbackup-fo...@backupcentral.com wrote:

 I am using NBU 6.0 on a Unix master/media server, and I need to find out the
 size of all backups in a given week. I need to request disk capacity to have
 backups written to, as we change from tape to disk for storage.
 I can use either the GUI or the command line; the CLI is my preference if
 there is a command to find this info.
 Thanks for the help.

 +--
 |This was sent by devon_do...@mentor.com via Backup Central.
 |Forward SPAM to ab...@backupcentral.com.
 +--




[Veritas-bu] Fibre Transport HBAs

2010-07-12 Thread Heathe Yeakley
I've purchased some Enterprise Client licenses and plan on using the
Fibre Transport feature to back them up. I'm in the market for the
QLogic HBAs that my media servers need in order to back up a SAN
client. From what I've read in the NetBackup 7 FT guide, as long as I
get a QLogic 234x I should be fine. Is there a preferred HBA that
those of you already running FT like to use? I want to make sure I buy
the right HBAs.

Thanks.

- Heathe Kyle Yeakley


[Veritas-bu] Backing up through NAT

2010-05-20 Thread Heathe Yeakley
I have a client that has a small cluster of systems (maybe 2 dozen)
behind a NAT firewall. I'm being asked to back all of them up. I've
done some Google searches on backing up across NAT, and I'm a little
confused about exactly how to do it.

Anyone have any tips?

- Heathe Kyle Yeakley


[Veritas-bu] Tru64 NetBackup Performance

2010-03-09 Thread Heathe Yeakley
--== Warning: Wall of text incoming ==--

I have a NetBackup environment consisting of:

-= Local Site =-
1 Red Hat Linux AS 4 Master running NBU 6.0 MP7
2 Red Hat Linux AS 4 Media Servers running NBU 6.0 MP7
3 Tru64 V5.1B (Rev. 2650) SAN Media running NBU 6.0 MP7 (Mix between
O/S patch kit 6 and 7)
1 Spectra Logic T380 with 12 IBM LTO4 drives running latest BlueScale
patches and drive firmware.
1 NetApp 1400 VTL running latest firmware.

-= DR Site =-
1 Red Hat Linux AS 4 Master running NBU 6.0 MP7
1 Tru64 V5.1B (Rev. 2650) SAN Media running NBU 6.0 MP7
1 Spectra Logic T200 with 12 IBM LTO4 drives running latest BlueScale
patches and drive firmware.

Last July we replaced our ADIC i2000 library (LTO2 drives) with a
Spectra Logic T380. Once we got the library deployed we noticed that
our Linux systems are able to write to the library at LTO4 speeds, and
the regular network clients even get decent throughput over a 1gb
ethernet network. But the 3 Tru64 SAN media servers absolutely crawl.
In spite of the fact that I have the SAN media server license
installed, I can only get about 10 - 20 MB/s on the policies using the
Tru64 storage units.

Our main production database sits on a GS1280 (30 CPUs, 114 GB
memory), and we have an ES80 attached to another Spectra Logic library
at our DR site. Every Sunday morning, I back up an RMAN backup to
tape, mail the tapes to my DR site, and restore the RMAN files using a
Spectra Logic T200 attached to the ES80, which also has the SAN Media
Server software installed. My GS1280 system takes 15-20 hours to back
up, but my DR system can restore the same files in 6-7 hours, running
at 80 - 110 MB/s. I'm completely baffled how the smaller system gets
such awesome throughput while my production box plods along at
sub-Ethernet speeds.

I've spent the past several months researching performance and tuning
suggestions and I've applied settings 1 at a time when I can get an
outage.

To speed up testing, we have another GS1280 with 1/2 the CPU and
memory as the production system, and it only runs test databases, so
it's easier to ask to reboot it if I want to try tuning a particular
kernel parm or what not. I installed the SAN media server software on
this second 1280 and I've been trying to tune it to NetBackup for the
last couple of months.

Within NetBackup, I've tuned the SIZE and NUMBER of data buffers, with
no visible effect.

I've used the hwmgr command to look at the driver and firmware level
of just about every piece of equipment on both systems, up to and
including the individual busses. The GS1280 has everything the ES80
does, it just has more of it.

I've verified HBA drivers on all boxes and all appear to be at the
latest firmware.

I've asked my SAN guys to double check the zoning, LUN masking,
configuration and firmware levels on the SAN switches here and at my
DR site to see if there's anything that might be preventing Tru64 from
writing to either of my libraries at SAN speeds. They have checked and
everything seems to be in order on both SAN environments. Furthermore,
I've asked them to look at port utilization on the SAN switches during
test backups from the 1280 and they tell me that the HBAs are hardly
being utilized.

We recently deployed a NetApp VTL, and I was curious whether the VTL
got better performance (which would indicate some type of
incompatibility between Tru64 and Spectra Logic). It doesn't: if I set
up a test policy to write to the VTL from my test GS1280 and let it
write to all 80 virtual drives, no single stream exceeds about 10 - 20
MB/s.

Next, I looked at the fragmentation level of the AdvFS domains on both
systems. While some are heavily fragmented, the I/O performance on
both systems is 100% for every file domain I've checked.

The fact that all my clients (Windows, Linux and the handful of
Solaris 10) work well with both libraries makes me think that this is
something in Tru64. If that's true, then I'm trying to figure out what
is set correctly on my DR ES 80 that's jacked up on my local 1280.

According to section 1.9 of the Tru64 tuning manual
(http://h30097.www3.hp.com/docs/base_doc/DOCUMENTATION/V51B_HTML/ARH9GCTE/TITLE.HTM)
the 5 most commonly tuned kernel subsystems are: vm, ipc, proc, inet,
and socket. Furthermore,
http://seer.entsupport.symantec.com/docs/235845.htm is a technote
advising Tru64 kernel changes for NetBackup. I have examined the
values across all my systems. In most cases, the values on both
systems meet or exceed tuning suggestions I have found in manufacturer
documentation. The two or three values I have tuned so far have had no
effect.
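For what it's worth, a sketch of how those ipc kernel values are
typically inspected and persisted on Tru64. This is from memory, not a
verified recipe, so check sysconfig(8) and your patch kit's docs
before applying; the values shown are illustrative placeholders:

# Query the running values:
#   sysconfig -q ipc sem_mni sem_msl
# Persist across reboots via a stanza in /etc/sysconfigtab:
ipc:
    sem_mni = 1000
    sem_msl = 1000

A reboot (or at least a subsystem reconfigure) is generally needed for
the sysconfigtab change to take effect.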

I also found this technote
(http://www.scribd.com/doc/19213788/Net-Backup-6) which recommends
setting the sem_mni and sem_msl values to 1,000. sem_msl is currently
set to 500 on my local 1280, and I think this is perhaps the only
kernel parm I have yet to tune. I'm going to ask for an outage this
week to increase this setting to 1,000. If that doesn't work, then I 

[Veritas-bu] General Backup Question: Offsite tape rotation

2009-10-25 Thread Heathe Yeakley
I'm deploying a NetApp Virtual Tape Library at work right now, and I
have an engineer from NetApp coming in to help me set it up. I was
explaining to him how we rotate tapes, and he seemed a little
bewildered at our rotation method. I've never done tape
backup/recovery anywhere else, so to me, our way is normal, in that
it's the only way I've ever been shown how to do this.

- - = = Cycle = = - -

About 99% of our customers are backed up via policies with the
following attributes:
* They get a Full backup 1 night a week and differential-incrementals
the other 6 nights.
* We have an on-site vault where tapes go for a week. After a week,
Iron Mountain comes and gets them.
* We ship Full AND Differential-Incrementals off-site for 90 days
(--- This is the bullet point that bewilders my VTL engineer)

In laying out the VTL, my NetApp engineer tells me that he wants to
make one virtual library for all Full backups and another for the
Diffs. I figured we'd just have one virtual library for everything. He
explained that since we want to write the Full backups out to physical
tape, we need a separate virtual library for the Full backups so that
we can enable the Direct Tape Creation feature on it. When I told him
I needed to write the Diffs to physical tape also, so that I could
send both offsite, he seemed to think that was really odd. He claims
that all the other VTLs he's deployed typically look like this:

* Fulls are written to VTL, then to tape (D2D2T). The physical tapes
are then sent offsite for whatever the retention period is.
* Differential-Incremental and Cumulative-Incrementals are written to
the VTL, but then they sit there for maybe 2-4 weeks. They are never
written to tape, and therefore never sent offsite.

On one hand, I kinda understand the logic here. If the definition of
Differential-Incremental and Cumulative-Incrementals is essentially
differing levels of backups since the last full, it wouldn't make
sense to write incrementals out to tape since next week's Full starts
the process over again.

However, in the SLA I have with my customers, I state that I can
recover data from any point within a 90 day window. While the chance
is slim, there's always that possibility that I get a restore request
to recover a file from 89 days ago. If I'm only sending full backups
off site, I'd be able to recover the full backup, but I wouldn't have
any incrementals to restore that file to the exact point in time my
customer needs.

So, I guess my question is:

How does everyone else handle incrementals? Do you send them offsite
with the Fulls, or do you just have Fulls go offsite and keep
incrementals onsite for X retention period?

Thank you.

- Heathe Kyle Yeakley