[Veritas-bu] Large SAP backup problem

2010-05-05 Thread luciano prata

Hello,

I have a problem with one backup. It is a SAP database backup that has now grown past 10 TB, and it takes more than 36 hours to finish. The throughput is not bad (it runs 4 jobs at 15 MB/s each, 60 MB/s in total), but my backup window is only 13 hours, from 06:00 PM to 07:00 AM, and I don't know what to do to make this backup faster. A description of my environment is below.

I have one NBU master server on Solaris and 3 media servers, with NBU 6.5.2 on all servers and clients.

One Solaris media server attached to an SL500 library with 7 drives
One Solaris media server attached to the same SL500 and to a VTL with 24 virtual drives
One AIX media server attached to a VTL with 48 virtual drives

The AIX media server is on a P595, which is the same machine as the server I need to back up. A Shared Ethernet Adapter controlled by a Virtual I/O Server shares the network cards of this P595 to create the backup network.

I tested this backup with all the media servers, both VTL and SL500: the same throughput.
I tested with 2 jobs, 4 jobs and 9 jobs: the same throughput.
I tested multiplexing: the same throughput.
Online and offline backups: the same throughput.

I use a SAP client for this backup, and I think the AIX media server is the best choice because it is the same machine. My question is: is there another way to make this backup faster?
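For what it's worth, the window arithmetic from the numbers in this post can be checked with a few lines of Python (rough back-of-the-envelope only, taking 1 TB as 1024^2 MB):

```python
# Rough window math for the figures in this post
# (10 TB of data, 13-hour window, 60 MB/s aggregate throughput).

TB = 1024 ** 2          # MB per TB
backup_size_mb = 10 * TB
window_hours = 13
current_mbps = 60       # 4 jobs x 15 MB/s

# Aggregate throughput needed to finish inside the window.
required_mbps = backup_size_mb / (window_hours * 3600)

# Time the full backup takes at the current aggregate rate.
hours_at_current = backup_size_mb / current_mbps / 3600

print(f"required: {required_mbps:.0f} MB/s")    # ~224 MB/s
print(f"at 60 MB/s: {hours_at_current:.1f} h")  # ~48.5 h
```

Note that 36 hours for 10 TB works out to roughly 81 MB/s on average, so the 60 MB/s and 36-hour figures quoted above are evidently both approximate.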

Thanks,

Best regards,
Luciano Prata
Analista de Backup
Service IT Solutions
(21) 2211-4473
RS-PR-SP-RJ-ARG
www.service.com.br

+--
|This was sent by luciano.pr...@light.com.br via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


[Veritas-bu] Netbackup Migration.

2010-05-05 Thread fozz

As someone who has done a number of these conversions, I can say they've gotten to be 
pretty trivial since 6.x.  Prior to 6.x they were a royal pain.

If you can preserve the same master server name, that will help.  

In my experience, it was always easier to delete all the tape devices and 
recreate them after the upgrade.   If you end up changing your master server 
name, you'll need to use the nbemmcmd -renamehost command to change all the 
references from the old master server name to the new.  

Try to make sure the same version of NBU is installed on both platforms.   

Post move, you'll also need to run nbdb_move to reflect the new location.  

Aside from that, you will not be able to transfer the job history.  Most of the 
files should be transferred using ASCII FTP, with the exception of the IMAGE 
files, which should be transferred using binary FTP.  
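The ASCII-versus-binary split above can be sketched with Python's standard-library ftplib. The path test is an assumption about the catalog layout (NetBackup image files living under .../netbackup/db/images), not something stated in this post; adjust it for your environment.

```python
# Sketch of the ASCII-vs-binary catalog transfer split described above,
# using Python's stdlib ftplib. The needs_binary() path test is an
# assumption about where NetBackup keeps its image database files.
from ftplib import FTP

def needs_binary(remote_path: str) -> bool:
    """IMAGE database files must go binary; the rest of the catalog goes ASCII."""
    return "/db/images/" in remote_path

def transfer(ftp: FTP, local_path: str, remote_path: str) -> None:
    # ftplib expects a binary-mode file object for both transfer modes.
    with open(local_path, "rb") as f:
        if needs_binary(remote_path):
            ftp.storbinary(f"STOR {remote_path}", f)   # binary transfer mode
        else:
            ftp.storlines(f"STOR {remote_path}", f)    # ASCII (line) transfer mode
```

Hostnames, logins, and the exact paths are left to the reader; the point is only that the two file classes take different FTP transfer modes.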

I would do a test migration first, as I am recalling all this from memory 
and it has been a few months since I last did a migration.

+--
|This was sent by f...@dumbegg.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Veritas-bu] Architectural question (staging)

2010-05-05 Thread Victor Engle
Just a follow-up question...

In my environment, I'm going to be given access to some SAN storage
for use as staging areas. I have a tape library with 4 drives and I'm
backing up 13TB per week. It is tough getting this done within the
backup window with only 4 tape drives, so I want the DSSU space to ease
the load a bit. It seems to me that having the DSSUs should
essentially extend my backup window, since all data written to DSSUs
can be de-staged outside of the backup window.

So my question is how best to configure the DSSUs with the goal of
optimized de-staging. I will have 6TB to configure as desired on the
backup server. If I understand correctly, the more concurrent streams
allowed to the DSSUs, the slower the de-staging because of interleaved
backup streams. So, for the 6TB of SAN storage, it would seem more
efficient to create several smaller volumes, each with its own
filesystem and mountpoint, rather than one big 6TB volume with one
filesystem. Am I correct to assume that the interleaving of blocks from
multiple streams occurs at the filesystem level? Since the SAN-provisioned
LUN is virtual and made up of several disks in a stripe configuration, I
expect disk reads to be reasonably quick.

Thanks,
Vic


On Mon, Apr 26, 2010 at 4:34 PM, Martin, Jonathan jmart...@intersil.com wrote:


 I started with a calculation like that, but I also incorporate destaging
 speed. I've never used anything higher than 16 concurrent writes, and that
 is for incremental backups. With incremental backups, I don’t care as much
 because even if the destaging speed is poor, the amount of data is so small
 that it will still be written to tape quickly.



 However for full backups, I’ve found that the more streams you have, the
 slower your destaging will be.  I created the graphic below to help explain
 it to management.  The issue isn’t fragmentation as much as it is sequential
 blocks of data. If you were to write one sequential stream to a raid group
 then reading it back would be a sequential operation.  And in my case, SATA
 disks read sequential data very quickly. But as soon as you multistream, the
 data “fragments” or becomes non-sequential (really, the file sizes are so
 large every file is fragmented).



 Image removed due to size limitation.



 So to find your “sweet” spot you will have to test.  I started by building
 my environment and then testing with a single stream, from client to disk,
 then disk to tape.  This was my best case scenario, and I was able to drive
 LTO3 with it.



 My configuration: Dell MD1000 w/ 12 500GB DATA disks in a Raid 5. PERC 5/e
 card with Adaptive Read Ahead and Write Back enabled. GPT Partition table w/
 64KB block sizes.



 The 64KB block size and adaptive read ahead settings were key to my
 performance. I was going to test the same hardware with a linux ext2 file
 system and 4KB block sizes for comparison, but I never got the chance.



 Once you have a baseline you can run more streams. You also need to consider
 job length.  For example, I back up an NFS server with 16 streams to a DSSU
 that allows all 16 streams, because 12 of those streams take less than an
 hour each.  I've also tested with 1TB SATA disks and 300GB 15K SAS disks.  I
 still rely on the 500GB SATA as my workhorse because I can drive LTO3 if I
 need to. Most of my DSSUs are 8 streams and push 70MB/sec to tape.



 -Jonathan







 -Original Message-
 From: Victor Engle [mailto:victor.en...@gmail.com]
 Sent: Monday, April 26, 2010 3:39 PM
 To: Martin, Jonathan
 Cc: veritas-bu@mailman.eng.auburn.edu
 Subject: Re: [Veritas-bu] Architectural question (staging)



 Thanks Martin. Good info. What criteria do you use to determine the
 number of concurrent jobs for a DSSU? Is it reasonable to determine
 the concurrent DSSU sessions based on the speed of the clients? For
 example, if I have a disk to which I can write 100MB/s and clients
 that can write 5MB/s, then I would use 20 concurrent sessions?

 Thanks all for your responses!

 Vic





 On Mon, Apr 26, 2010 at 10:14 AM, Martin, Jonathan jmart...@intersil.com wrote:

 We use Disk Stage Storage Units (DSSU) almost exclusively for our backups.
 As someone has already mentioned, you can stream many slow clients to a DSSU
 without impacting your speed significantly. To do this with tape you would
 have to use multiplexing, which is a real performance killer come restore
 time.  The DSSUs essentially allow you to “multiplex” to disk, then single
 stream to tape. Regarding Ed’s speed issue below, I’ve got data here that
 directly correlates the number of concurrent streams written to a DSSU with
 the performance to tape.  You’ll need to balance this when setting this on the
 DSSU, but we get away with 8-12 streams on 14x500GB SATA disks in a Raid-5
 and still drive LTO3.

 Here we were also able to purchase a much smaller tape library.  I replaced
 a library with 8 x SDLT220 drives with a library with 2 x LTO3 drives (and

[Veritas-bu] Media Master Server

2010-05-05 Thread gone_a_lyon

Hello,

Advantages of a separate master server:
- You can apply NetBackup patches without any impact on your applications 
(the applications are on the media servers only).
- Backup administrators may have root login on the master but not on the media 
servers, so they do not have to ask every day for root access on a media server.

Disadvantages of a separate master server:
- You will need one more server (the master).
- The master has knowledge of the IP addresses of all NetBackup clients. If your 
clients are in a DMZ, this may not be recommended.

Disadvantage of a combined master and media server:
- Your media server also runs an application. If you shut down the media server 
(because of application maintenance, a crash, ...), you stop all your backups, 
because the master stops too.

+--
|This was sent by damien.chail...@alyotech.fr via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--




Re: [Veritas-bu] Architectural question (staging)

2010-05-05 Thread Victor Engle
Thanks Ed, but I'm confused. If I say that NetBackup can send 15 streams to a
DSSU, and I then have 15 clients sending streams all at once to that DSSU,
wouldn't the data be more fragmented than if I sent only 1 stream? Could you
elaborate a bit on how that works?

Thanks,
Vic


On Wed, May 5, 2010 at 4:05 PM, Ed Wilts ewi...@ewilts.org wrote:

 On Wed, May 5, 2010 at 2:57 PM, Victor Engle victor.en...@gmail.com wrote:

 So my question is how best to configure the DSSUs with the goal of
 optimized de-staging. I will have 6TB to configure as desired on the
 backup server. If I understand correctly, the more concurrent streams
 allowed to the DSSUs, the slower the de-staging because of interleaved
 backup streams.


 The DSSU consists of a set of files with each file being a backup image and
 you define the maximum size of each file within an image.  There is no
 interleaving.  When you destage, one image at a time goes to tape.

 Watch your fragment sizes and watch your disk file system fragmentation...


.../Ed


 Ed Wilts, RHCE, BCFP, BCSD, SCSP, SCSE
 ewi...@ewilts.org
 Linkedin http://www.linkedin.com/in/ewilts



Re: [Veritas-bu] Architectural question (staging)

2010-05-05 Thread Bryan Bahnmiller
Agreed.

  Also, be aware that you will typically not be able to stream data to a 
disk array as fast as you can to tape drives (assuming LTO3- or LTO4-class 
performance), unless you have a pretty beefy disk array with your RAID 
configured for streaming. The nice part is that since it is disk, small 
backups and slow backups won't have the shoeshine problems you would get 
with tape.

   I like to set a high water mark to keep the disk at 85% full or lower. 
Generally, 85% full is the point where disk performance starts taking a 
serious hit, and fragmentation also starts hurting performance badly at 
that point.
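As a quick illustration, the 85% rule can be put against the 6 TB of SAN space mentioned earlier in this thread (the numbers are illustrative arithmetic, not a sizing recommendation):

```python
# Rough arithmetic for an 85% high-water mark on the 6 TB staging space
# discussed earlier in the thread (illustrative only).

dssu_tb = 6.0
high_water = 0.85

usable_tb = dssu_tb * high_water   # space usable before performance degrades
nightly_tb = 13.0 / 7.0            # ~13 TB/week spread over 7 nights

print(f"usable staging space: {usable_tb:.2f} TB")  # 5.10 TB
print(f"avg nightly load:     {nightly_tb:.2f} TB") # 1.86 TB
```

So on paper the staging space clears the high-water mark with room to spare, provided de-staging keeps draining it between windows.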

   I've yet to see de-staging perform well, no matter what disk array is 
used for the DSSU.

Bryan




Ed Wilts ewi...@ewilts.org wrote on 05/05/2010 03:05 PM
(to Victor Engle, cc veritas-bu@mailman.eng.auburn.edu):

On Wed, May 5, 2010 at 2:57 PM, Victor Engle victor.en...@gmail.com 
wrote:
So my question is how best to configure the DSSUs with the goal of
optimized de-staging. I will have 6TB to configure as desired on the
backup server. If I understand correctly, the more concurrent streams
allowed to the DSSUs, the slower the de-staging because of interleaved
backup streams. 

The DSSU consists of a set of files with each file being a backup image 
and you define the maximum size of each file within an image.  There is no 
interleaving.  When you destage, one image at a time goes to tape.

Watch your fragment sizes and watch your disk file system 
fragmentation...  

   .../Ed


Ed Wilts, RHCE, BCFP, BCSD, SCSP, SCSE 
ewi...@ewilts.org
Linkedin http://www.linkedin.com/in/ewilts





Re: [Veritas-bu] Architectural question (staging)

2010-05-05 Thread Martin, Jonathan
I'd hate not to disagree with someone as grumpy and disagreeable as Ed.
Personally, I wouldn't take advice on this matter from someone who
worked with disk staging units for at least a year and gave up.
(Also, I think Ed is a wet blanket.) I had this thing figured out 4
years ago when we first implemented DSSUs in production. I may not be
the biggest NBU shop on the planet, but I back up more than 50TB a week
using this method exclusively, so I can tell you that it does work.

 

As far as interleaving goes, there is most certainly interleaving at the file
system level when you run multiple streams to a DSSU. How Ed can say
there is no interleaving and then tell you to watch your disk
fragmentation is beyond me. Fragmentation = disk interleaving as far as
I am concerned. The point is that the files are non-contiguous.

 

Here's my proof.

 

 

 

This is a snippet of a utility called DiskView from Sysinternals /
Microsoft. The yellow bits are the actual 1K fragments of data on disk
for the image file above. The little red dots indicate the beginning
and end of file fragments. There are 64 little yellow dots between the
red dots, indicating my 64K clusters.

 

 

 

Here's that same section of disk, different image file. These two
streams ran simultaneously last night (along with 6 others) and I can
guarantee you that the top image wrote faster, and will destage to tape
faster than the image below. 

 

Why? Imagine you are bpduplicate.exe requesting the first file back to
write to tape. Compared to the second image, you are going to get a lot
more reading done and a lot less seeking as your head(s) cross the disk
to pick up fragments. Or so goes my theory. There is a utility
available from Dell that will show the amount of time spent reading /
writing versus seeking per disk, but I didn't have the time to acquire it
and test.

 

Now, I know there are variables here. As I stated before, one of the big
improvements to my speed was using a 64K cluster size. Last time I
checked this wasn't available in Unix/Linux. Then again, ext2/3 file
systems also like to leave space between their writes to account for
file growth, which may help (but I doubt it.) I intended to test this
several years back, but my management put the kibosh on Linux media
servers. The raid controller, simultaneous read/write, spindle count,
and disk type also add a lot of variability.

 

I haven't tested any of this on a SAN volume, only on direct attached. I
don't think there is much to be gained by taking a 6TB lun and
partitioning it at the OS or breaking it into multiple luns at the SAN.
After partitioning, the entire DSSU is still on the same raid group /
set, which ultimately controls your performance. If you could take your
6TB lun and break it into 3 x 2TB raid groups / luns then I think that
would help. I've actually considered breaking my 14 disk RAID5s into 14
single disks for performance testing (single stream each), but that's an
entirely different management nightmare (14 DSSUs per media server
etc...) A single SATA disk can drive LTO3, assuming the data is all
nicely lined up.  The minute that head has to go seeking, you are in a
world of hurt. 
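The seek-versus-read argument above can be put in a toy model. Every constant here is an assumption for illustration (not a measurement of NetBackup or any particular array), but it shows the direction of the effect:

```python
# Toy model of the argument above: interleaved streams leave each image in
# short contiguous runs, and reading an image back pays a seek per run.
# All constants are assumptions chosen for illustration, not measurements.

SEQ_RATE = 100.0           # MB/s, single-stream sequential read from the RAID set
BURST_MB = 1.0             # MB one stream writes before another stream cuts in
SEEK_BASE = 0.005          # s, minimum head movement between runs of one image
SEEK_PER_NEIGHBOR = 0.002  # s, extra seek per interleaved neighbor skipped over

def destage_rate(streams: int) -> float:
    """Effective MB/s reading back one image written alongside `streams - 1` others."""
    if streams <= 1:
        return SEQ_RATE  # a single stream lays its data contiguously: no seeks
    transfer = BURST_MB / SEQ_RATE
    seek = SEEK_BASE + (streams - 1) * SEEK_PER_NEIGHBOR
    return BURST_MB / (transfer + seek)

for n in (1, 2, 8, 16):
    print(f"{n:2d} streams -> ~{destage_rate(n):5.1f} MB/s destage read-back")
```

The curve only illustrates why more concurrent streams hurt read-back; real numbers depend on the allocator, RAID geometry, and read-ahead, which is why baselining with a single stream, as suggested below, is the right starting point.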

 

Again, I would start with a single stream to that 6TB DSSU and see what
you get both writing to the DSSU and destaging to tape. Whatever
performance you get out of that configuration is your best case
scenario. Multiple streams or creating multiple partitions will only
drag your numbers down. The crux of the issue (at least for me) is
balancing the number of streams I need to run to get my backups to DSSU
within my windows, versus the destaging speed I need to get that data
off to tape on time.

 

Good luck,

 

-Jonathan

 

 

From: veritas-bu-boun...@mailman.eng.auburn.edu
[mailto:veritas-bu-boun...@mailman.eng.auburn.edu] On Behalf Of Ed Wilts
Sent: Wednesday, May 05, 2010 4:06 PM
To: Victor Engle
Cc: veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Architectural question (staging)

 

On Wed, May 5, 2010 at 2:57 PM, Victor Engle victor.en...@gmail.com
wrote:

So my question is how best to configure the DSSUs with the goal of
optimized de-staging. I will have 6TB to configure as desired on the
backup server. If I understand correctly, the more concurrent streams
allowed to the DSSUs, the slower the de-staging because of interleaved
backup streams. 


The DSSU consists of a set of files with each file being a backup image
and you define the maximum size of each file within an image.  There is
no interleaving.  When you destage, one image at a time goes to tape.

Watch your fragment sizes and watch your disk file system
fragmentation...  

   .../Ed



Ed Wilts, RHCE, BCFP, BCSD, SCSP, SCSE 
ewi...@ewilts.org

 Linkedin http://www.linkedin.com/in/ewilts 



Re: [Veritas-bu] Architectural question (staging)

2010-05-05 Thread Ed Wilts
On Wed, May 5, 2010 at 4:18 PM, Martin, Jonathan jmart...@intersil.com wrote:

  I wouldn't take advice on this matter from someone who worked with disk
 staging units for at least a year and gave up.

We worked extensively with Symantec on this issue.  We were in regular
contact with the customer focus team.  They were onsite.  We had engineering
onsite.  We went to their engineering offices a few miles down the road from
us.  We met with product management.  Symantec was unable to solve the
performance issues after well over a year of trying.

Obviously your mileage will vary.  I'm glad it works for some people -
Symantec was unable to make it work for us.

   .../Ed


Re: [Veritas-bu] Large SAP backup problem

2010-05-05 Thread Pedro Moranga Gonçalves
Hi Luciano,

I think the way you are doing that backup is not the best one. The best option 
for a huge amount of data is to avoid using the network and use the SAN 
infrastructure instead.

To accomplish that, I'd suggest you make a split-mirror backup of your SAP 
database and mount the mirror copy on the media server.

In our environment with this architecture, the bottleneck is the LTO4 drive (120 
MB/s), using 2 MPX per drive.


Best regards,

Pedro




[Veritas-bu] performance on windows cluster

2010-05-05 Thread Kevin Corley
Anybody running a clustered 6.5.x or 7.0 master on Windows 2003 or 2008 with 
MSCS?

Looking at this option for a new 10,000+ job per night master.

Any comments are appreciated.




Re: [Veritas-bu] Large SAP backup problem

2010-05-05 Thread Harpreet SINGH
Hi,

As per your database size, I think the backup throughput is too slow.

I am doing Oracle database backups, i.e. BCV backups from FC disk, and I am
getting an average backup throughput of 80-95 MB/s.

My NetBackup client backup speed is also about 50 MB/s.

With Warm Regards
=-=-=-=-=-=-=-=-=-=-=-=-=-
Harpreet Singh Chana

Phone  : (O) 6895 - 4326
Fax   : (O) 6895 - 4991
=-=-=-=-=-=-=-=-=-=-=-=-=-






   


Re: [Veritas-bu] Large SAP backup problem

2010-05-05 Thread WEAVER, Simon (external)
From the description it's unclear if this SAP server is a LAN-based
client or an actual SAN media server.

A SAN media server (that is, the SAP system backing itself up), with
direct access to the tape library using SSO over the fabric, would help.

But again, this assumes the client is not already a SAN or media server itself.
Simon 



