[Veritas-bu] NBU 6.0 MP5 - Jobs queued, not going active

2008-03-14 Thread Chris_Millet

Hah, OK, we tried MP6 on the environment that was having this problem.  Not 
good!  Within hours the server had become unresponsive with over 700 bpdbm 
processes running, and top said the load was 255!  We couldn't even get 
"netbackup stop" or bp.kill_all to work, so we had to reboot the master server 
to get it unstuck.  It did this three times before we backed out the patch.  
The catalog is a mess now, with tons of missing .f files we have to go clean up. 
 

Right now our situation is that about 1000 filesystem backups kick off against 
128 VTL drives across 6 media servers.  No mpx, so it's just 1 job per VTL 
drive.  We cannot seem to keep all of the drives busy, and sometimes it reaches 
the point where only a couple of jobs are active with hundreds still queued.  
Device Monitor shows all the drives as ACTIVE, but nothing is running on them.  
Most of the jobs just say "requesting resource XX" (which is a VTL tape) and 
sit there for hours.
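
The next thing I'm going to compare is what the resource broker thinks it has 
allocated versus what Device Monitor shows.  Paths assume a standard UNIX 
master, and I'm not certain every nbrbutil option exists in plain 6.0, so 
treat this as a sketch:

# drive status as Device Monitor sees it
/usr/openv/volmgr/bin/vmoprcmd -d
# what the resource broker thinks it has allocated
/usr/openv/netbackup/bin/admincmd/nbrbutil -dump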



[Veritas-bu] ndmp error 99 - block size

2008-03-11 Thread Chris_Millet

My understanding is that NUMBER_DATA_BUFFERS and SIZE_DATA_BUFFERS_NDMP only 
apply if you are doing a "Remote NDMP" 3-way backup through a Netbackup media 
server.  If the tape device is attached to the NDMP host itself, then those 
variables have to be set on the NDMP host.  See your vendor documentation.

You can read more details about those touch files in the Backup Planning and 
Performance Tuning Guide.
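
For a Remote NDMP backup through a UNIX media server, the touch files normally 
go under /usr/openv/netbackup/db/config on that media server.  Something along 
these lines, with example values only (check the tuning guide and your drive 
specs for what is actually supported):

# on the media server handling the Remote NDMP backup
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_NDMP
echo 64 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS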

As for that error, you may want to open a support case.  The bptm process 
performs a simple calculation after it finishes writing the backup, confirming 
that the number of bytes written is divisible by 512. If it is, it passes on an 
exit 0 (success) and ends the job. If the number is not divisible by 512, some 
bytes did not make it through, and the job fails.
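
If you want to sanity-check that math yourself, take the byte count from the 
job details and see whether it divides evenly by 512 (the count below is made up):

BYTES=1073741824     # made-up byte count from the job details
expr $BYTES % 512    # 0 means an even multiple of 512; anything else fails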

You can also get more detail in the ndmpagent log by doing:

vxlogview -i ndmpagent -d T,s,x,p | grep PID:10276



[Veritas-bu] bpexpdate Could not deassign .. could not deassign media

2008-03-11 Thread Chris_Millet

See this technote.  We see this most commonly in our environment due to the 
tape being assigned for a backup before it can be fully returned to Scratch.  
Just a timing thing.

http://seer.entsupport.symantec.com/docs/294805.htm
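
When we hit it, we just confirm the media really is still assigned and then 
retry the expire; something like this, with a made-up media ID:

/usr/openv/volmgr/bin/vmquery -m ABC123
/usr/openv/netbackup/bin/admincmd/bpexpdate -m ABC123 -d 0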



[Veritas-bu] 6.0 MP4: Number of jobs shown in activity monitor

2008-03-11 Thread Chris_Millet


echum wrote:
> By default, the GUI will only show the last 3 days of backup jobs. Use the 
> bpdbjobs command to change this value:
> 
> /usr/openv/netbackup/bin/admincmd/bpdbjobs -clean -keep_days <number_of_days>
> 
> See technote below for more info
> http://seer.entsupport.symantec.com/docs/291075.htm


I think alternatively you can put KEEP_JOBS_HOURS and KEEP_JOBS_SUCCESSFUL_HOURS 
in the registry (for Windows) or in bp.conf (for UNIX), since bpdbjobs -clean is 
busted on Windows in 6.0 MP4.

http://seer.entsupport.symantec.com/docs/296104.htm
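
On a UNIX master the bp.conf entries look like this (values are in hours; 
168 is just an example for one week, not a recommendation):

KEEP_JOBS_HOURS = 168
KEEP_JOBS_SUCCESSFUL_HOURS = 168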



[Veritas-bu] NBU 6.0 MP5 - Jobs queued, not going active

2008-03-11 Thread Chris_Millet

Marianne, could you share the resolution to this issue?  We also see this 
problem in a new environment where we are attempting to do just standard 
filesystem backups on several thousand clients.  The master server has plenty 
of horsepower (16GB, 8-core T2000), but I wonder if Netbackup keeps getting hung 
up juggling that many client jobs.  The environment where we are seeing this is 
also 6.0 MP5.



[Veritas-bu] "piece handles" in Oracle RMAN backups..what are t

2008-01-23 Thread Chris_Millet

OK, but if I build a list from the file lists in each job submitted to the 
master server during the RMAN backup, I already have a list of all of the 
actual data files PLUS a bunch of these piece handle files.  The total size is 
almost 2x the actual database, like it is backing up the entire thing twice.
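
Another way to cross-check the totals is to pull the per-image sizes straight 
out of the catalog instead of the job file lists; roughly like this, with a 
made-up client name:

/usr/openv/netbackup/bin/admincmd/bpimagelist -client oradb01 -hoursago 24 -U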



[Veritas-bu] "piece handles" in Oracle RMAN backups..what are they?

2008-01-22 Thread Chris_Millet

While doing some full hot backups via the Oracle DB agent, we've noticed that 
the total size of the jobs run through Netbackup can be 50-70% more than the 
database size.   Within those jobs, we see the actual data files as one part of 
the backup, and something called "piece handles" (from the RMAN log) also being 
sent to Netbackup that accounts for the size discrepancy.
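
To see how RMAN itself accounts for those pieces, something like this on the 
database host lists each backup set with its size and its piece handle names 
(assumes OS authentication to the target database):

rman target / <<EOF
LIST BACKUP;
EOF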

-Chris



[Veritas-bu] Getting a list of files from a failed backup

2008-01-08 Thread Chris_Millet

Yeah, as you mentioned, with 5.5 million inodes there is probably a significant 
amount of time before the client can start sending data to the media server.  
That blows past the default timeout value, which ends up looking like a network 
connection timeout.
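
Bumping the timeouts on the media server (and master) usually gets around it.  
In bp.conf, something like this (values are in seconds and only an example):

CLIENT_READ_TIMEOUT = 1800
CLIENT_CONNECT_TIMEOUT = 1800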



[Veritas-bu] 1 TB limit ?

2008-01-08 Thread Chris_Millet

Turn on verbose logging and check the client-side log (i.e. bpbkar).  See if 
it is hanging on a particular directory or file.  If it is, you may have 
corruption or some other problem in the client filesystem.
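
On a UNIX client that's just something like this (5 is maximum verbosity; 
drop it back down once you've caught the hang):

mkdir -p /usr/openv/netbackup/logs/bpbkar
echo "VERBOSE = 5" >> /usr/openv/netbackup/bp.conf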

As of v6.5, I don't know of any 1TB limit in most traditional filesystem 
backups in Netbackup.



[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2008-01-08 Thread Chris_Millet

OK, I figured it out.  Since we use Qlogic-branded adapters and the QLA driver 
instead of the QLC driver, there was a setting we needed to change in 
/kernel/drv/qla2300.conf.  It is the equivalent of the pci-max-read-request 
variable in qlc.conf.

# PCI-X Maximum Memory Read Byte Count.
#Range: 0, 128, 256, 512, 1024, 2048 or 4096 bytes
#Default: 0 = system default
hba0-pci-x-max-memory-read-byte-count=2048;

Anyway, that plus setting maxphys=0x80 in /etc/system seems to have removed 
all bottlenecks from the T2000, and we are able to get line-speed data transfer 
through the QLE2462.

We are now able to vault jobs at around 160MB/sec from VTL to LTO3.  I've only 
tried two concurrent vaults for x2 performance.



[Veritas-bu] while vault duplication I/O oddity

2007-12-14 Thread Chris_Millet

ah duh, that would be it.  We just turned on multiplexing last weekend.  Thanks 
for the help.



[Veritas-bu] while vault duplication I/O oddity

2007-12-14 Thread Chris_Millet

Ok, I'm watching iostat while vault is doing a duplication job.  For some 
reason, the read from the source tape is twice as fast as the write.  

For instance, I will see 60MB/sec of reads from source tape and maybe 30MB/sec 
of writes.  

What would cause this?  This is a new issue on this environment.  I typically 
see vaults match speed on reads/writes.



[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-12-07 Thread Chris_Millet


Marianu, Jonathan wrote:
> My recollection is that during duplication from VTL to tape , it uses the mpx 
> originally set in the policy unless it is throttled down by the vault policy 
> but you can’t increase it. The MPX is what is so interesting to examine in 
> truss because I observed that any multiplexing will impact the duplication 
> speed. 
> 
> This impact is more pronounced, though, when you are mixing slow and fast 
> clients together, because the fragments are not spread out evenly on the 
> virtual tape and bptm has to rewind the virtual tape and reread, causing a 
> delay to the physical library, which is torture on the physical tape drive. 
> 
> Then I observed that a higher MPX led to a slight improvement in duplication 
> speed. I would use 24-32 
> However there are other tradeoffs. In the end we decided to not use VTL for 
> network client backups for the new master server environment model.  
> A problem with VTL is that a slow client backup will hold onto the virtual 
> media, preventing me from using it for duplication. This impacts the RPO of 
> other clients' backups. DSU does not have that issue. 
> We use 64TB EVA file systems per network media server which give pretty even 
> performance and we use vxfs. 
> One outstanding issue with file systems that you don't have with VTL is 
> fragmentation. Another benefit of VTL is that I have a media server dedicated 
> to writing the backups and another media server dedicated to duplicating 
> backups. 
> 
> __
> Jonathan Marianu (mah ree ah' nu)
> AT&T Storage Planning and Design Architect
> (360) 597-6896
> Work Hours 0800-1800 PST M-F
> 
> Manager: David Anderson
> (314) 340-9296


Currently, when client backups are written to the VTL, we are using no 
multiplexing.  Regardless, our vault profile is set to de-multiplex backups 
when they are duplicated, for faster restores from tape.  Are you saying that 
increasing the MPX of client backups to the VTL will improve vault duplication 
performance?  24-32 MPX?

To avoid the issue with backups tying up virtual tapes for duplication jobs, we 
reduced the cartridge size in the VTL to 100GB to have more tapes.  We rarely 
see vault waiting for a tape because it is in-use.



[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-12-07 Thread Chris_Millet

Ok, I think I might have discovered something key to my poor performance..

I'm watching the duplication job that the vault spawns to copy the backup from 
VTL to LTO3.  For the bptm process that is doing the read from VTL:

<2> io_init: buffer size for read is 64512

Obviously, I want to use a high buffer size for reads, but I already have 
SIZE_DATA_BUFFERS and SIZE_DATA_BUFFERS_RESTORE set to 262144 so I'm not sure 
how to modify that particular buffer setting.
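
The one related touch file I haven't set yet is NUMBER_DATA_BUFFERS_RESTORE.  
I'm honestly not sure the read side of a duplication honors it, but it's cheap 
to test (256 is just a guess given the memory in this box):

echo 256 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_RESTORE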

To somebody else who asked, the connectivity between devices is a Qlogic 
9000-series switch and a pair of QLE2462s in the T2000.  4Gb all the way through 
from host to switch and switch to the VTL front-end port.  I think the LTO3 
drives are only 2Gb, but that should be plenty.  Using separate HBA ports for 
the VTL LUNs and the LTO3 drives.

/etc/system is at default.. no tuning done there since this is a Solaris 10 
system with 8GB of memory (so 2GB for shared memory).

Using 128 buffers, 262144 size.  Going to increase the number of buffers since 
the system has plenty of memory.

Also, in dd tests I'm able to achieve line speed on the HBA writing to the 
VTL, so I know it's up to the task.  I've got to think that the 64k buffer used 
for reading is killing the performance.
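
For reference, the dd test was along these lines (device path and counts are 
examples, and this writes test data, so only point it at a scratch virtual tape):

dd if=/dev/zero of=/dev/rmt/0cbn bs=256k count=40960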



[Veritas-bu] Suggestions for large files?

2007-11-26 Thread Chris_Millet

Is there some kind of switch or router between the client and backup server 
that might only be 100Mbps capable?  10MB/sec is usually pretty suspect, because 
that is about the number you'll get with a 100Mbps connection somewhere in the 
data path.
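
If either end happens to be Solaris 10, dladm shows what the NICs actually 
negotiated:

dladm show-dev

Anything reporting 100Mb or half duplex in that path is the first thing to fix.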

It sounds like you are getting full bandwidth to the backup server though if 
you can mpx and get better numbers.



[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-16 Thread Chris_Millet

No other activity on the server except for the read stream from the VTL and the 
write stream to LTO3.  There are a couple of GB allocated for shared memory.



[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-16 Thread Chris_Millet

Seems like that shouldn't be the case, but OK, I'll try it.  Our Qlogic rep 
states this adapter should be fully capable of 380MB/sec, full duplex, on both 
ports simultaneously.  "Best in industry."  And the T2000 is no slouch in the 
PCI-E department.



[Veritas-bu] poor backup speeds

2007-11-16 Thread Chris_Millet

We need more details too.  I'm guessing this is probably 420KB/sec (like what 
you would see in the job detail) which is dismal.  Even 4.2MB/sec is pretty 
bad.  42MB/sec would be OK, but I'd be guessing the bottleneck is the VM.

It sounds like you are saying you have one of the tape drives zoned directly to 
the VM and it writes directly to the drive so that data is not going over the 
network.  If so, you should see good performance.

Things to consider..
Make absolutely sure the data is being sent to the drive on the VM.  
Double-check your configs.  Have you confirmed the I/O is happening on the VM, 
going to the drive?  Maybe you're accidentally still sending it over the network 
to the master/media server.
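
One quick way to tell is to watch the interface counters on the VM while the 
backup runs; if they climb at backup speed, the data is still leaving over the 
network.  On a Solaris VM that's just:

netstat -i 5

(On Linux, something like sar -n DEV 5 from the sysstat package gives the same 
picture.)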

Consider the VM resources.. what is it doing during the backup?  Is the CPU 
pegged, memory util at 100% or what?

Data type.. file systems with millions of tiny files tend to back up slowly.  
You could try a raw device backup or FlashBackup.  Consider this a last resort.



[Veritas-bu] Suggestions for large files?

2007-11-16 Thread Chris_Millet

Have you tried the backup already?  If so, what are the policy settings and how 
fast are your drives streaming?

LTO3 is rated for 70MB/sec native, more if the data compresses well, so make 
sure your backup server has at least a 1000Mbps network connection, and the same 
goes for the client.  If your drives are already streaming that fast or better, 
you might be able to make the backup go faster by aggregating a second gigabit 
link on both servers to remove that bottleneck.  Then you could try to get two 
tape drives doing the backup by breaking the backup selection in the policy into 
two chunks using the NEW_STREAM directive.
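
With "Allow multiple data streams" enabled on the policy, the backup selection 
would end up looking something like this (paths are placeholders):

NEW_STREAM
/bigdata/part1
NEW_STREAM
/bigdata/part2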



[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-16 Thread Chris_Millet

I'm starting to experiment with using T2000s as media servers.  The backup 
server is a T2000, 8 cores, 18GB.  There is a Qlogic QLE2462 PCI-E dual-port 
4Gb adapter in the system that plugs into a Qlogic 5602 switch.  From there, 
one port is zoned to an EMC CDL 4400 (VTL) and a few HP LTO3 tape drives.  The 
connectivity is 4Gb from host to switch, and from switch to the VTL.  The tape 
drives are 2Gb.

So when using Netbackup Vault to copy a backup done to the VTL to a real tape 
drive, the backup performance tops out at about 90MB/sec.  If I spin up two 
jobs to two tape drives, they both go about 45MB/sec.   It seems I've hit a 
90MB/sec bottleneck somehow.  I have v240s performing better!

Write performance to the VTL from incoming client backups over the WAN exceeds 
the vault performance.

My next step is to zone the tape drives to one of the HBA ports and the VTL to 
the other port.

I'm using:
SIZE_DATA_BUFFERS = 262144
NUMBER_DATA_BUFFERS = 64

Any other suggestions?



[Veritas-bu] Re: Permissible values of SIZE_DATA_BUFFERS & NUMBER_DATA_BUFFERS

2007-06-15 Thread Chris_Millet

On 6/14/07, Khurram Tariq wrote:
>  By the way In my case I've found out that 2 x FC LTO3 drives with 
> multiplexing and multistreaming gets me the best performance (reached a max 
> of 170MB/s) and adding more drives and streams does not increase the total 
> speed. Any pointers here? 
> 
> Regards,
> Khurram
> 
> 
> 


We need a bigger picture of your environment.

What kind of Sun systems are these?  Maybe you've hit the limits of the system. 

Do you have enough Fibre Channel bandwidth to the device host and tape drives?  
Sounds like you might be hitting the upper limit of 2Gbit, especially if other 
traffic is going over that link.  Not sure if you are direct-attaching the tape 
drives to the host or if you have some kind of tape SAN switch infrastructure.
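
If these are Solaris 10 hosts on the native qlc/Leadville stack, fcinfo shows 
what each HBA port actually negotiated:

fcinfo hba-port | egrep 'Port WWN|Current Speed'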

If the data is coming from network client backups, maybe you have a network 
bandwidth issue on the backup host.  170MB/sec would already indicate more than 
a Gbit, though, so you'd have to be aggregating more than one link already if 
that is the case.  Add more.

Anyway, it could be lots of things.  You just need to take a hard look at the 
data path and everything along the way to find the weakest link.



[Veritas-bu] Re: SSO with NDMP in NBU 6.0 -- does it work

2007-06-15 Thread Chris_Millet


Adams, Dwayne wrote:
> Steve, 
> 
> I just started using it in a new deployment (6.0 MP4). I am only 4 weeks into 
> production, but it appears to run as advertised. I have 6 native fibre drives, 
> 2 filers and 3 media servers. All of the hosts are zoned to all 6 drives. Let 
> me know if you have any specific questions. 
> 
> Dwayne Adams 
> 


Dwayne, are you doing any 3-way NDMP backups from a filer to a media server w/ 
SSO drives?

Do you just have all the drives in a storage unit / storage unit group and your 
policies assigned to that?



[Veritas-bu] Re: Tuning Solaris 10 for NetBackup Media Server

2007-05-07 Thread Chris_Millet

How are you aggregating those 6 gigE links?  Sun trunking?

Heck, if there is that much power I'd consider adding more links to our T2000 
backup servers for more throughput.
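
If it's Solaris 10, dladm link aggregation would be the other obvious way 
besides Sun Trunking; interface names and the key are just examples, and the 
switch ports have to be configured to match:

dladm create-aggr -d e1000g0 -d e1000g1 -d e1000g2 1
dladm show-aggr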




