[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2008-01-08 Thread Chris_Millet

Ok, I figured it out.  Since we use Qlogic-branded adapters with the QLA driver 
instead of the QLC driver, there was a setting we needed to change in 
/kernel/drv/qla2300.conf.  It is the equivalent of the pci-max-read-request 
variable in qlc.conf.

# PCI-X Maximum Memory Read Byte Count.
#Range: 0, 128, 256, 512, 1024, 2048 or 4096 bytes
#Default: 0 = system default
hba0-pci-x-max-memory-read-byte-count=2048;

Anyway, that plus setting maxphys=0x80 in /etc/system seems to have removed 
all bottlenecks from the T2000, and we are able to get line-speed data transfers 
through the QLE2462.
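
For reference, the two pieces end up looking something like this (the hba1 line and the mdb check are illustrative additions on my part, not confirmed on every box; instance numbers and values vary per system):

# /kernel/drv/qla2300.conf: one entry per qla instance (hba0, hba1, ...)
hba0-pci-x-max-memory-read-byte-count=2048;
hba1-pci-x-max-memory-read-byte-count=2048;

# after a reconfigure reboot, confirm the running kernel's maxphys value
echo "maxphys/D" | mdb -k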

We are now able to vault at around 160MB/sec from VTL to LTO3.  So far I've only 
tried two concurrent vaults, which scaled to roughly 2x.

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-12-08 Thread bob944
> I'm watching the duplication job that the vault spawns to 
> copy the backup from VTL to LTO3.  For the bptm process that 
> is doing the read from VTL:
> 
> <2> io_init: buffer size for read is 64512
> 
> Obviously, I want to use a high buffer size for reads, but I 
> already have SIZE_DATA_BUFFERS and SIZE_DATA_BUFFERS_RESTORE 
> set to 262144 so I'm not sure how to modify that particular 
> buffer setting.

What makes you think there is a SIZE_DATA_BUFFERS_RESTORE parameter?
There isn't one in the NetBackup Backup Planning and Performance Tuning
Guide for 6.0, at least.

Normal variable-length tape I/O processing has always been to do a read
into a buffer "large enough" and see how much data comes back.  So, in
NetBackup, AFAIK, the concept of setting a restore buffer size is
nonsensical; the buffer size will be whatever the tape was written at.

Sounds as if your VTL tape was written in that Solaris-ish 63KB size.
You can confirm by, for instance, reading a few blocks with dd.
Assuming that pans out, why did the VTL tape get written that way?
SIZE_DATA_BUFFERS incorrect, in the wrong place, on the wrong media
server, typo, ignored by the VTL driver, ...  Do you see the size and
number of buffers you specified showing up in the bptm logs when backing
up?
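
Something along these lines would do it (device path and bptm log name are placeholders; adjust for your drive and date):

# rewind and read a few blocks from the VTL virtual tape; dd's record
# count shows whether each read came back as 64512 or the full 262144
mt -f /dev/rmt/0cbn rewind
dd if=/dev/rmt/0cbn of=/dev/null bs=256k count=10

# then compare against what bptm said it used when the image was written
grep "io_init" /usr/openv/netbackup/logs/bptm/log.<mmddyy>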


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-12-07 Thread Chris_Millet


Marianu, Jonathan wrote:
> My recollection is that during duplication from VTL to tape , it uses the mpx 
> originally set in the policy unless it is throttled down by the vault policy 
> but you can’t increase it. The MPX is what is so interesting to examine in 
> truss because I observed that any multiplexing will impact the duplication 
> speed. 
> 
> This impact is more pronounced, though, when you are mixing slow and fast 
> clients together, because the fragments are not spread out evenly on the 
> virtual tape and bptm has to rewind the virtual tape and reread, causing a 
> delay to the physical library, which is torture on the physical tape drive. 
> 
> Then I observed that a higher MPX led to a slight improvement in duplication 
> speed. I would use 24-32 
> However, there are other tradeoffs. In the end we decided not to use VTL for 
> network client backups in the new master server environment model.  
> A problem with VTL is that a slow client backup will hold onto the virtual 
> media, preventing me from using it for duplication. This impacts the RPO of 
> other clients' backups. DSU does not have that issue. 
> We use 64TB EVA file systems per network media server, which give pretty even 
> performance, and we use vxfs. 
> One outstanding issue with file systems that you don't have with VTL is 
> fragmentation.  Another benefit of VTL is that I have a media server dedicated 
> to writing the backups and another media server dedicated to duplicating 
> backups. 
> 
> __
> Jonathan Marianu (mah ree ah' nu)
> AT&T Storage Planning and Design Architect
> (360) 597-6896
> Work Hours 0800-1800 PST M-F
> 
> Manager: David Anderson
> (314) 340-9296


Currently, when client backups are written to the VTL, we are using no 
multiplexing.  Regardless, our vault profile is set to de-multiplex backups 
when they are duplicated, for faster restores from tape.  Are you saying that 
increasing MPX for client backups to the VTL will improve vault duplication 
performance?  An MPX of 24-32?

To avoid the issue with backups tying up virtual tapes for duplication jobs, we 
reduced the cartridge size in the VTL to 100GB to have more tapes.  We rarely 
see vault waiting for a tape because it is in-use.

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-12-07 Thread Chris_Millet

Ok, I think I might have discovered something key to my poor performance.

I'm watching the duplication job that the vault spawns to copy the backup from 
VTL to LTO3.  For the bptm process that is doing the read from VTL:

<2> io_init: buffer size for read is 64512

Obviously, I want to use a high buffer size for reads, but I already have 
SIZE_DATA_BUFFERS and SIZE_DATA_BUFFERS_RESTORE set to 262144 so I'm not sure 
how to modify that particular buffer setting.

To somebody else who asked, the connectivity between devices is a Qlogic 9000 
series switch and a pair of QLE2462s in the T2000.  4Gb all the way through 
from host to switch and switch to the VTL front-end port.  I think the LTO3 
drives are only 2Gb, but that should be plenty.  Using separate HBA ports for 
the VTL LUNs and the LTO3 drive.

/etc/system is at its defaults; no tuning done there, since this is a Solaris 10 
system with 8GB of memory (so 2GB is available for shared memory by default).

Using 128 buffers of 262144 bytes.  Going to increase the number of buffers since 
the system has plenty of memory.
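
Back-of-the-envelope on the shared memory side (the doubling for a read drive plus a write drive is my assumption; multiplexed backups would multiply it again by MPX):

# one set of buffers: 262144 bytes x 128 buffers = 32 MB
echo "262144 * 128 / 1024 / 1024" | bc
# even doubled for a read drive plus a write drive that's only ~64 MB,
# far below the Solaris 10 default shm ceiling of 1/4 of RAM (2 GB here)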

Also, in dd tests I'm able to achieve line speed on the HBA writing to the 
VTL, so I know it's up to the task.  I have to think that the 64k buffer used for 
reading is what's killing the performance.
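
For anyone repeating the dd test, this is roughly the form of it (device paths are examples only, and the write destroys whatever is on that virtual tape, so use a scratch volume):

# 10 GB of zeroes to an idle VTL drive; MB/s = 10240 / elapsed seconds
time dd if=/dev/zero of=/dev/rmt/1cbn bs=256k count=40960

# matching read back from the same virtual tape, to compare the read path
mt -f /dev/rmt/1cbn rewind
time dd if=/dev/rmt/1cbn of=/dev/null bs=256k count=40960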

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-21 Thread Peters, Devon C
For our SAN media servers, we do see restore performance gains with this
setting.  The difference between the default setting and 512 has been
around 20% for us.  We haven't done a whole lot of tuning or analysis on
this - I just set it to match NUMBER_DATA_BUFFERS.  :)
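
For reference, it's just a touch file next to the others; roughly (the 512 values mirror what we run on the SAN media servers, and the right values are obviously site-specific):

# set the restore/duplication buffer count to match NUMBER_DATA_BUFFERS
echo 512 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
echo 512 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_RESTORE
cat /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS*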

For our less high-performance backups (i.e. anything going over the
network) I've never looked into it.

-devon

-Original Message-
From: Justin Piszcz [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 21, 2007 4:05 PM
To: Peters, Devon C
Cc: Mike Andres; VERITAS-BU@mailman.eng.auburn.edu
Subject: RE: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

Has anyone here done benchmarks to see what type of potential speed up
is 
gained with the NUMBER_DATA_BUFFERS_RESTORE directive?

On Wed, 21 Nov 2007, Peters, Devon C wrote:

> I just did a test, and it looks like the duplication process uses
> NUMBER_DATA_BUFFERS for both read and write drives.  I'm guessing that
> there's just a single set of buffers used by both read and write
> processes, rather than a separate set of buffers for each process...
>
> Config on the test system:
>
> # cat /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
> 256
> # cat /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_RESTORE
> 128
>
>
> Here's the bptm io_init info from the duplication - PID 22020 is the
> write process,  PID 22027 is the read process:
>
> 10:43:20.523 [22020] <2> io_init: using 262144 data buffer size
> 10:43:20.523 [22020] <2> io_init: CINDEX 0, sched Kbytes for monitoring = 2
> 10:43:20.524 [22020] <2> io_init: using 256 data buffers
> 10:43:20.524 [22020] <2> io_init: child delay = 20, parent delay = 30 (milliseconds)
> 10:43:20.524 [22020] <2> io_init: shm_size = 67115012, buffer address = 0xf39b8000, buf control = 0xf79b8000, ready ptr = 0xf79b9800
> 10:43:21.188 [22027] <2> io_init: using 256 data buffers
> 10:43:21.188 [22027] <2> io_init: buffer size for read is 262144
> 10:43:21.188 [22027] <2> io_init: child delay = 20, parent delay = 30 (milliseconds)
> 10:43:21.188 [22027] <2> io_init: shm_size = 67115060, buffer address = 0xf39b8000, buf control = 0xf79b8000, ready ptr = 0xf79b9800, res_cntl = 0xf79b9804
>
>
> Also, there are no lines in the bptm logfile showing
> "mpx_setup_restore_shm" for these PIDs...
>
> -devon
>
> ____________
>
> From: Mike Andres [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 21, 2007 9:49 AM
> To: Justin Piszcz
> Cc: Peters, Devon C; VERITAS-BU@mailman.eng.auburn.edu
> Subject: RE: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>
>
> Thanks.  I guess my question could be more specifically stated as
"does
> the duplication process utilize NUMBER_DATA_BUFFERS_RESTORE or
> NUMBER_DATA_BUFFERS."   I don't have a system in front of me to test.
>
> 
>
> From: Justin Piszcz [mailto:[EMAIL PROTECTED]
> Sent: Wed 11/21/2007 8:58 AM
> To: Mike Andres
> Cc: Peters, Devon C; VERITAS-BU@mailman.eng.auburn.edu
> Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>
>
>
> Buffers in memory to disk would be dependent on how much cache the
raid
> controller has yeah?
>
> Justin.
>
> On Wed, 21 Nov 2007, Mike Andres wrote:
>
>> I'm curious about NUMBER_DATA_BUFFERS_RESTORE and duplication
> performance as well.  Anybody know this definitively?
>>
>> 
>>
>> From: [EMAIL PROTECTED] on behalf of Peters,
> Devon C
>> Sent: Tue 11/20/2007 1:32 PM
>> To: VERITAS-BU@mailman.eng.auburn.edu
>> Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>>
>>
>>
>> Chris,
>>
>> To me it looks like there's a 1Gb bottleneck somewhere (90MB/s is
> about all we ever got out of 1Gb fibre back in the day).  Are there
any
> ISL's between your tape drive, your switch, and your server's HBA?
> Also, have you verified that your tape drives have negotiated onto the
> fabric as 2Gb and not 1Gb?
>>
>> When we had 2Gb LTO-3 drives on our T2000's, throughput to a single
> drive topped out around 160MB/s.  When we upgraded the drives to 4Gb
> LTO-3, throughput to a single drive went up to 260MB/s.  Our data is
> very compressible, and these numbers are what I assume to be the
> limitation of the IBM tape drives.
>>
>> Regarding buffer settings, my experience may not apply directly since
> we're doing disk (filesystems on fast storage) to tape backups, rather
> than VTL to tape.  With our setup we see the best performance with a
> buffer size of 1048576 and 512 buffers.

Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-21 Thread Justin Piszcz
Has anyone here done benchmarks to see what type of potential speed up is 
gained with the NUMBER_DATA_BUFFERS_RESTORE directive?

On Wed, 21 Nov 2007, Peters, Devon C wrote:

> I just did a test, and it looks like the duplication process uses
> NUMBER_DATA_BUFFERS for both read and write drives.  I'm guessing that
> there's just a single set of buffers used by both read and write
> processes, rather than a separate set of buffers for each process...
>
> Config on the test system:
>
> # cat /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
> 256
> # cat /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_RESTORE
> 128
>
>
> Here's the bptm io_init info from the duplication - PID 22020 is the
> write process,  PID 22027 is the read process:
>
> 10:43:20.523 [22020] <2> io_init: using 262144 data buffer size
> 10:43:20.523 [22020] <2> io_init: CINDEX 0, sched Kbytes for monitoring = 2
> 10:43:20.524 [22020] <2> io_init: using 256 data buffers
> 10:43:20.524 [22020] <2> io_init: child delay = 20, parent delay = 30 (milliseconds)
> 10:43:20.524 [22020] <2> io_init: shm_size = 67115012, buffer address = 0xf39b8000, buf control = 0xf79b8000, ready ptr = 0xf79b9800
> 10:43:21.188 [22027] <2> io_init: using 256 data buffers
> 10:43:21.188 [22027] <2> io_init: buffer size for read is 262144
> 10:43:21.188 [22027] <2> io_init: child delay = 20, parent delay = 30 (milliseconds)
> 10:43:21.188 [22027] <2> io_init: shm_size = 67115060, buffer address = 0xf39b8000, buf control = 0xf79b8000, ready ptr = 0xf79b9800, res_cntl = 0xf79b9804
>
>
> Also, there are no lines in the bptm logfile showing
> "mpx_setup_restore_shm" for these PIDs...
>
> -devon
>
> ________
>
> From: Mike Andres [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, November 21, 2007 9:49 AM
> To: Justin Piszcz
> Cc: Peters, Devon C; VERITAS-BU@mailman.eng.auburn.edu
> Subject: RE: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>
>
> Thanks.  I guess my question could be more specifically stated as "does
> the duplication process utilize NUMBER_DATA_BUFFERS_RESTORE or
> NUMBER_DATA_BUFFERS."   I don't have a system in front of me to test.
>
> ____
>
> From: Justin Piszcz [mailto:[EMAIL PROTECTED]
> Sent: Wed 11/21/2007 8:58 AM
> To: Mike Andres
> Cc: Peters, Devon C; VERITAS-BU@mailman.eng.auburn.edu
> Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>
>
>
> Buffers in memory to disk would be dependent on how much cache the raid
> controller has yeah?
>
> Justin.
>
> On Wed, 21 Nov 2007, Mike Andres wrote:
>
>> I'm curious about NUMBER_DATA_BUFFERS_RESTORE and duplication
> performance as well.  Anybody know this definitively?
>>
>> 
>>
>> From: [EMAIL PROTECTED] on behalf of Peters,
> Devon C
>> Sent: Tue 11/20/2007 1:32 PM
>> To: VERITAS-BU@mailman.eng.auburn.edu
>> Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>>
>>
>>
>> Chris,
>>
>> To me it looks like there's a 1Gb bottleneck somewhere (90MB/s is
> about all we ever got out of 1Gb fibre back in the day).  Are there any
> ISL's between your tape drive, your switch, and your server's HBA?
> Also, have you verified that your tape drives have negotiated onto the
> fabric as 2Gb and not 1Gb?
>>
>> When we had 2Gb LTO-3 drives on our T2000's, throughput to a single
> drive topped out around 160MB/s.  When we upgraded the drives to 4Gb
> LTO-3, throughput to a single drive went up to 260MB/s.  Our data is
> very compressible, and these numbers are what I assume to be the
> limitation of the IBM tape drives.
>>
>> Regarding buffer settings, my experience may not apply directly since
> we're doing disk (filesystems on fast storage) to tape backups, rather
> than VTL to tape.  With our setup we see the best performance with a
> buffer size of 1048576 and 512 buffers.  For us these buffer sizes are
> mostly related to the filesystem performance, since we get better disk
> throughput with 1MB I/O's than with smaller ones...
>>
>> I'm also curious if anyone knows whether the
> NUMBER_DATA_BUFFERS_RESTORE parameter is used when doing duplications?
> I would assume it is, but I don't know for sure.  If it is, then the
> bptm process reading from the VTL would be using the default 16 (?)
> buffers, and you might see better performance by using a larger number.
>>
>>
>> -devon
>>
>>
>> -

Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-21 Thread Marianu, Jonathan
My recollection is that during duplication from VTL to tape , it uses
the mpx originally set in the policy unless it is throttled down by the
vault policy but you can't increase it. The MPX is what is so
interesting to examine in truss because I observed that any multiplexing
will impact the duplication speed.

 

This impact is more pronounced, though, when you are mixing slow and fast
clients together, because the fragments are not spread out evenly on the
virtual tape and bptm has to rewind the virtual tape and reread, causing
a delay to the physical library, which is torture on the physical tape drive.

 

Then I observed that a higher MPX led to a slight improvement in
duplication speed. I would use 24-32

However, there are other tradeoffs. In the end we decided not to use VTL
for network client backups in the new master server environment model. 

A problem with VTL is that a slow client backup will hold onto the
virtual media, preventing me from using it for duplication. This impacts
the RPO of other clients' backups. DSU does not have that issue.

We use 64TB EVA file systems per network media server, which give pretty
even performance, and we use vxfs.

One outstanding issue with file systems that you don't have with VTL is
fragmentation.  Another benefit of VTL is that I have a media server
dedicated to writing the backups and another media server dedicated to
duplicating backups.

.


__
Jonathan Marianu (mah ree ah' nu)
AT&T Storage Planning and Design Architect
(360) 597-6896
Work Hours 0800-1800 PST M-F

Manager: David Anderson
(314) 340-9296

___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-21 Thread Peters, Devon C
I just did a test, and it looks like the duplication process uses
NUMBER_DATA_BUFFERS for both read and write drives.  I'm guessing that
there's just a single set of buffers used by both read and write
processes, rather than a separate set of buffers for each process...
 
Config on the test system:
 
# cat /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
256
# cat /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_RESTORE
128

 
Here's the bptm io_init info from the duplication - PID 22020 is the
write process,  PID 22027 is the read process:
 
10:43:20.523 [22020] <2> io_init: using 262144 data buffer size
10:43:20.523 [22020] <2> io_init: CINDEX 0, sched Kbytes for monitoring = 2
10:43:20.524 [22020] <2> io_init: using 256 data buffers
10:43:20.524 [22020] <2> io_init: child delay = 20, parent delay = 30 (milliseconds)
10:43:20.524 [22020] <2> io_init: shm_size = 67115012, buffer address = 0xf39b8000, buf control = 0xf79b8000, ready ptr = 0xf79b9800
10:43:21.188 [22027] <2> io_init: using 256 data buffers
10:43:21.188 [22027] <2> io_init: buffer size for read is 262144
10:43:21.188 [22027] <2> io_init: child delay = 20, parent delay = 30 (milliseconds)
10:43:21.188 [22027] <2> io_init: shm_size = 67115060, buffer address = 0xf39b8000, buf control = 0xf79b8000, ready ptr = 0xf79b9800, res_cntl = 0xf79b9804

 
Also, there are no lines in the bptm logfile showing
"mpx_setup_restore_shm" for these PIDs...
 
-devon



From: Mike Andres [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 21, 2007 9:49 AM
To: Justin Piszcz
Cc: Peters, Devon C; VERITAS-BU@mailman.eng.auburn.edu
Subject: RE: [Veritas-bu] T2000 vaulting performance with VTL/LTO3


Thanks.  I guess my question could be more specifically stated as "does
the duplication process utilize NUMBER_DATA_BUFFERS_RESTORE or
NUMBER_DATA_BUFFERS."   I don't have a system in front of me to test.



From: Justin Piszcz [mailto:[EMAIL PROTECTED]
Sent: Wed 11/21/2007 8:58 AM
To: Mike Andres
Cc: Peters, Devon C; VERITAS-BU@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3



Buffers in memory to disk would be dependent on how much cache the raid
controller has yeah?

Justin.

On Wed, 21 Nov 2007, Mike Andres wrote:

> I'm curious about NUMBER_DATA_BUFFERS_RESTORE and duplication
performance as well.  Anybody know this definitively?
>
> 
>
> From: [EMAIL PROTECTED] on behalf of Peters,
Devon C
> Sent: Tue 11/20/2007 1:32 PM
> To: VERITAS-BU@mailman.eng.auburn.edu
> Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>
>
>
> Chris,
>
> To me it looks like there's a 1Gb bottleneck somewhere (90MB/s is
about all we ever got out of 1Gb fibre back in the day).  Are there any
ISL's between your tape drive, your switch, and your server's HBA?
Also, have you verified that your tape drives have negotiated onto the
fabric as 2Gb and not 1Gb?
>
> When we had 2Gb LTO-3 drives on our T2000's, throughput to a single
drive topped out around 160MB/s.  When we upgraded the drives to 4Gb
LTO-3, throughput to a single drive went up to 260MB/s.  Our data is
very compressible, and these numbers are what I assume to be the
limitation of the IBM tape drives.
>
> Regarding buffer settings, my experience may not apply directly since
we're doing disk (filesystems on fast storage) to tape backups, rather
than VTL to tape.  With our setup we see the best performance with a
buffer size of 1048576 and 512 buffers.  For us these buffer sizes are
mostly related to the filesystem performance, since we get better disk
throughput with 1MB I/O's than with smaller ones...
>
> I'm also curious if anyone knows whether the
NUMBER_DATA_BUFFERS_RESTORE parameter is used when doing duplications?
I would assume it is, but I don't know for sure.  If it is, then the
bptm process reading from the VTL would be using the default 16 (?)
buffers, and you might see better performance by using a larger number.
>
>
> -devon
>
>
> -
> Date: Fri, 16 Nov 2007 10:00:18 -0800
> From: Chris_Millet <[EMAIL PROTECTED]>
> Subject: [Veritas-bu]  T2000 vaulting performance with VTL/LTO3
> To: VERITAS-BU@mailman.eng.auburn.edu
> Message-ID: <[EMAIL PROTECTED]>
>
>
> I'm starting to experiment with the use of T2000 for media servers.
The backup server is a T2000 8 core, 18GB system.  There is a Qlogic
QLE2462 PCI-E dual port 4Gb adapter in the system that plugs into a
Qlogic 5602 switch.  From there, one port is zoned to a EMC CDL 4400
(VTL) and a few HP LTO3 tape drives.  The connectivity is 4Gb from host
to switch, and from switch to the VTL.  The tape drive is 2Gb.
>
> So when 

Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-21 Thread Marianu, Jonathan

Something you wrote didn't sound quite right.
The bpbkar process writes to the child bptm over a TCP socket, which is a bottleneck.

The child bptm process or processes, depending on MPX, write to shared
memory.  The parent bptm reads from shared memory and writes it to the
tape.

I still use 5.1, so this may be different in 6.0.
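
If the socket hop is what you want to tune, that side has its own knob, separate from the shared-memory buffers; on my 5.1 systems it's the NET_BUFFER_SZ touch file on the media server (the 262144 below is only an example value, and behaviour may differ on 6.0):

# TCP-side buffer used between bpbkar and the child bptm, in bytes
echo 262144 > /usr/openv/netbackup/NET_BUFFER_SZ
cat /usr/openv/netbackup/NET_BUFFER_SZ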

__
Jonathan Marianu (mah ree ah' nu)
AT&T Storage Planning and Design Architect
(360) 597-6896
Work Hours 0800-1800 PST M-F

Manager: David Anderson
(314) 340-9296


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-21 Thread Peters, Devon C
Not sure if this question was directed at Mike or myself, but if it was
directed to me...

In our case, the memory buffers for the disk are the shared memory
buffers on the media server (T2000).  When a media server is backing up
itself, the bpbkar process reads from disk directly into the
shared-memory buffers - the same buffers that the bptm process is
writing to tape from.  So, for our filesystem backups, we see disk I/O's
of the same size as our tape buffers...  We're currently bottlenecked at
the front-end processors of our storage array, and doing fewer larger
I/O's provides a little more throughput from the array.
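
A quick way to see that difference outside of NetBackup is a pair of dd reads (the file paths are placeholders; use two different large files, or a raw device, so the second run isn't coming out of the filesystem cache):

# 10 GB sequential read at 256 KB vs 1 MB I/O sizes
time dd if=/backup_fs/largefile1 of=/dev/null bs=256k count=40960
time dd if=/backup_fs/largefile2 of=/dev/null bs=1024k count=10240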

The cache on the storage array is something I don't understand in much
detail.  I assume that for reads it mostly acts as buffer space for
readahead...

-devon

-Original Message-
From: Justin Piszcz [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 21, 2007 6:59 AM
To: Mike Andres
Cc: Peters, Devon C; VERITAS-BU@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

Buffers in memory to disk would be dependent on how much cache the raid 
controller has yeah?

Justin.

On Wed, 21 Nov 2007, Mike Andres wrote:

> I'm curious about NUMBER_DATA_BUFFERS_RESTORE and duplication
performance as well.  Anybody know this definitively?
>
> 
>
> From: [EMAIL PROTECTED] on behalf of Peters,
Devon C
> Sent: Tue 11/20/2007 1:32 PM
> To: VERITAS-BU@mailman.eng.auburn.edu
> Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>
>
>
> Chris,
>
> To me it looks like there's a 1Gb bottleneck somewhere (90MB/s is
about all we ever got out of 1Gb fibre back in the day).  Are there any
ISL's between your tape drive, your switch, and your server's HBA?
Also, have you verified that your tape drives have negotiated onto the
fabric as 2Gb and not 1Gb?
>
> When we had 2Gb LTO-3 drives on our T2000's, throughput to a single
drive topped out around 160MB/s.  When we upgraded the drives to 4Gb
LTO-3, throughput to a single drive went up to 260MB/s.  Our data is
very compressible, and these numbers are what I assume to be the
limitation of the IBM tape drives.
>
> Regarding buffer settings, my experience may not apply directly since
we're doing disk (filesystems on fast storage) to tape backups, rather
than VTL to tape.  With our setup we see the best performance with a
buffer size of 1048576 and 512 buffers.  For us these buffer sizes are
mostly related to the filesystem performance, since we get better disk
throughput with 1MB I/O's than with smaller ones...
>
> I'm also curious if anyone knows whether the
NUMBER_DATA_BUFFERS_RESTORE parameter is used when doing duplications?
I would assume it is, but I don't know for sure.  If it is, then the
bptm process reading from the VTL would be using the default 16 (?)
buffers, and you might see better performance by using a larger number.
>
>
> -devon
>
>
> -------------
> Date: Fri, 16 Nov 2007 10:00:18 -0800
> From: Chris_Millet <[EMAIL PROTECTED]>
> Subject: [Veritas-bu]  T2000 vaulting performance with VTL/LTO3
> To: VERITAS-BU@mailman.eng.auburn.edu
> Message-ID: <[EMAIL PROTECTED]>
>
>
> I'm starting to experiment with the use of T2000 for media servers.
The backup server is a T2000 8 core, 18GB system.  There is a Qlogic
QLE2462 PCI-E dual port 4Gb adapter in the system that plugs into a
Qlogic 5602 switch.  From there, one port is zoned to a EMC CDL 4400
(VTL) and a few HP LTO3 tape drives.  The connectivity is 4Gb from host
to switch, and from switch to the VTL.  The tape drive is 2Gb.
>
> So when using Netbackup Vault to copy a backup done to the VTL to a
real tape drive, the backup performance tops out at about 90MB/sec.  If
I spin up two jobs to two tape drives, they both go about 45MB/sec.   It
seems I've hit a 90MB/sec bottleneck somehow.  I have v240s performing
better!
>
> Write performance to the VTL from incoming client backups over the WAN
exceeds the vault performance.
>
> My next step is to zone the tape drives on one of the HBA ports, and
the VTL zoned on the other port.
>
> I'm using:
> SIZE_DATA_BUFFERS = 262144
> NUMBER_DATA_BUFFERS = 64
>
> Any other suggestions?
>
>

___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-21 Thread Mike Andres
Thanks.  I guess my question could be more specifically stated as "does the 
duplication process utilize NUMBER_DATA_BUFFERS_RESTORE or 
NUMBER_DATA_BUFFERS."   I don't have a system in front of me to test.



From: Justin Piszcz [mailto:[EMAIL PROTECTED]
Sent: Wed 11/21/2007 8:58 AM
To: Mike Andres
Cc: Peters, Devon C; VERITAS-BU@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3



Buffers in memory to disk would be dependent on how much cache the raid
controller has yeah?

Justin.

On Wed, 21 Nov 2007, Mike Andres wrote:

> I'm curious about NUMBER_DATA_BUFFERS_RESTORE and duplication performance as 
> well.  Anybody know this definitively?
>
> 
>
> From: [EMAIL PROTECTED] on behalf of Peters, Devon C
> Sent: Tue 11/20/2007 1:32 PM
> To: VERITAS-BU@mailman.eng.auburn.edu
> Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>
>
>
> Chris,
>
> To me it looks like there's a 1Gb bottleneck somewhere (90MB/s is about all 
> we ever got out of 1Gb fibre back in the day).  Are there any ISL's between 
> your tape drive, your switch, and your server's HBA?  Also, have you verified 
> that your tape drives have negotiated onto the fabric as 2Gb and not 1Gb?
>
> When we had 2Gb LTO-3 drives on our T2000's, throughput to a single drive 
> topped out around 160MB/s.  When we upgraded the drives to 4Gb LTO-3, 
> throughput to a single drive went up to 260MB/s.  Our data is very 
> compressible, and these numbers are what I assume to be the limitation of the 
> IBM tape drives.
>
> Regarding buffer settings, my experience may not apply directly since we're 
> doing disk (filesystems on fast storage) to tape backups, rather than VTL to 
> tape.  With our setup we see the best performance with a buffer size of 
> 1048576 and 512 buffers.  For us these buffer sizes are mostly related to the 
> filesystem performance, since we get better disk throughput with 1MB I/O's 
> than with smaller ones...
>
> I'm also curious if anyone knows whether the NUMBER_DATA_BUFFERS_RESTORE 
> parameter is used when doing duplications?  I would assume it is, but I don't 
> know for sure.  If it is, then the bptm process reading from the VTL would be 
> using the default 16 (?) buffers, and you might see better performance by 
> using a larger number.
>
>
> -devon
>
>
> -
> Date: Fri, 16 Nov 2007 10:00:18 -0800
> From: Chris_Millet <[EMAIL PROTECTED]>
> Subject: [Veritas-bu]  T2000 vaulting performance with VTL/LTO3
> To: VERITAS-BU@mailman.eng.auburn.edu
> Message-ID: <[EMAIL PROTECTED]>
>
>
> I'm starting to experiment with the use of T2000 for media servers.  The 
> backup server is a T2000 8 core, 18GB system.  There is a Qlogic QLE2462 
> PCI-E dual port 4Gb adapter in the system that plugs into a Qlogic 5602 
> switch.  From there, one port is zoned to a EMC CDL 4400 (VTL) and a few HP 
> LTO3 tape drives.  The connectivity is 4Gb from host to switch, and from 
> switch to the VTL.  The tape drive is 2Gb.
>
> So when using Netbackup Vault to copy a backup done to the VTL to a real tape 
> drive, the backup performance tops out at about 90MB/sec.  If I spin up two 
> jobs to two tape drives, they both go about 45MB/sec.   It seems I've hit a 
> 90MB/sec bottleneck somehow.  I have v240s performing better!
>
> Write performance to the VTL from incoming client backups over the WAN 
> exceeds the vault performance.
>
> My next step is to zone the tape drives on one of the HBA ports, and the VTL 
> zoned on the other port.
>
> I'm using:
> SIZE_DATA_BUFFERS = 262144
> NUMBER_DATA_BUFFERS = 64
>
> Any other suggestions?
>
>



___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-21 Thread Justin Piszcz
Buffers in memory to disk would be dependent on how much cache the raid 
controller has yeah?

Justin.

On Wed, 21 Nov 2007, Mike Andres wrote:

> I'm curious about NUMBER_DATA_BUFFERS_RESTORE and duplication performance as 
> well.  Anybody know this definitively?
>
> 
>
> From: [EMAIL PROTECTED] on behalf of Peters, Devon C
> Sent: Tue 11/20/2007 1:32 PM
> To: VERITAS-BU@mailman.eng.auburn.edu
> Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
>
>
>
> Chris,
>
> To me it looks like there's a 1Gb bottleneck somewhere (90MB/s is about all 
> we ever got out of 1Gb fibre back in the day).  Are there any ISL's between 
> your tape drive, your switch, and your server's HBA?  Also, have you verified 
> that your tape drives have negotiated onto the fabric as 2Gb and not 1Gb?
>
> When we had 2Gb LTO-3 drives on our T2000's, throughput to a single drive 
> topped out around 160MB/s.  When we upgraded the drives to 4Gb LTO-3, 
> throughput to a single drive went up to 260MB/s.  Our data is very 
> compressible, and these numbers are what I assume to be the limitation of the 
> IBM tape drives.
>
> Regarding buffer settings, my experience may not apply directly since we're 
> doing disk (filesystems on fast storage) to tape backups, rather than VTL to 
> tape.  With our setup we see the best performance with a buffer size of 
> 1048576 and 512 buffers.  For us these buffer sizes are mostly related to the 
> filesystem performance, since we get better disk throughput with 1MB I/O's 
> than with smaller ones...
>
> I'm also curious if anyone knows whether the NUMBER_DATA_BUFFERS_RESTORE 
> parameter is used when doing duplications?  I would assume it is, but I don't 
> know for sure.  If it is, then the bptm process reading from the VTL would be 
> using the default 16 (?) buffers, and you might see better performance by 
> using a larger number.
>
>
> -devon
>
>
> ---------
> Date: Fri, 16 Nov 2007 10:00:18 -0800
> From: Chris_Millet <[EMAIL PROTECTED]>
> Subject: [Veritas-bu]  T2000 vaulting performance with VTL/LTO3
> To: VERITAS-BU@mailman.eng.auburn.edu
> Message-ID: <[EMAIL PROTECTED]>
>
>
> I'm starting to experiment with the use of T2000 for media servers.  The 
> backup server is a T2000 8 core, 18GB system.  There is a Qlogic QLE2462 
> PCI-E dual port 4Gb adapter in the system that plugs into a Qlogic 5602 
> switch.  From there, one port is zoned to a EMC CDL 4400 (VTL) and a few HP 
> LTO3 tape drives.  The connectivity is 4Gb from host to switch, and from 
> switch to the VTL.  The tape drive is 2Gb.
>
> So when using Netbackup Vault to copy a backup done to the VTL to a real tape 
> drive, the backup performance tops out at about 90MB/sec.  If I spin up two 
> jobs to two tape drives, they both go about 45MB/sec.   It seems I've hit a 
> 90MB/sec bottleneck somehow.  I have v240s performing better!
>
> Write performance to the VTL from incoming client backups over the WAN 
> exceeds the vault performance.
>
> My next step is to zone the tape drives on one of the HBA ports, and the VTL 
> zoned on the other port.
>
> I'm using:
> SIZE_DATA_BUFFERS = 262144
> NUMBER_DATA_BUFFERS = 64
>
> Any other suggestions?
>
>
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-21 Thread Mike Andres
I'm curious about NUMBER_DATA_BUFFERS_RESTORE and duplication performance as 
well.  Anybody know this definitively?



From: [EMAIL PROTECTED] on behalf of Peters, Devon C
Sent: Tue 11/20/2007 1:32 PM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3



Chris, 

To me it looks like there's a 1Gb bottleneck somewhere (90MB/s is about all we 
ever got out of 1Gb fibre back in the day).  Are there any ISL's between your 
tape drive, your switch, and your server's HBA?  Also, have you verified that 
your tape drives have negotiated onto the fabric as 2Gb and not 1Gb?

When we had 2Gb LTO-3 drives on our T2000's, throughput to a single drive topped 
out around 160MB/s.  When we upgraded the drives to 4Gb LTO-3, throughput to a 
single drive went up to 260MB/s.  Our data is very compressible, and these 
numbers are what I assume to be the limitation of the IBM tape drives.

Regarding buffer settings, my experience may not apply directly since we're 
doing disk (filesystems on fast storage) to tape backups, rather than VTL to 
tape.  With our setup we see the best performance with a buffer size of 1048576 
and 512 buffers.  For us these buffer sizes are mostly related to the 
filesystem performance, since we get better disk throughput with 1MB I/O's than 
with smaller ones...

I'm also curious if anyone knows whether the NUMBER_DATA_BUFFERS_RESTORE 
parameter is used when doing duplications?  I would assume it is, but I don't 
know for sure.  If it is, then the bptm process reading from the VTL would be 
using the default 16 (?) buffers, and you might see better performance by using 
a larger number.


-devon 


- 
Date: Fri, 16 Nov 2007 10:00:18 -0800 
From: Chris_Millet <[EMAIL PROTECTED]> 
Subject: [Veritas-bu]  T2000 vaulting performance with VTL/LTO3 
To: VERITAS-BU@mailman.eng.auburn.edu 
Message-ID: <[EMAIL PROTECTED]> 


I'm starting to experiment with the use of T2000 for media servers.  The backup 
server is a T2000 8 core, 18GB system.  There is a Qlogic QLE2462 PCI-E dual 
port 4Gb adapter in the system that plugs into a Qlogic 5602 switch.  From 
there, one port is zoned to a EMC CDL 4400 (VTL) and a few HP LTO3 tape drives. 
 The connectivity is 4Gb from host to switch, and from switch to the VTL.  The 
tape drive is 2Gb.

So when using Netbackup Vault to copy a backup done to the VTL to a real tape 
drive, the backup performance tops out at about 90MB/sec.  If I spin up two 
jobs to two tape drives, they both go about 45MB/sec.   It seems I've hit a 
90MB/sec bottleneck somehow.  I have v240s performing better!

Write performance to the VTL from incoming client backups over the WAN exceeds 
the vault performance. 

My next step is to zone the tape drives on one of the HBA ports, and the VTL 
zoned on the other port. 

I'm using: 
SIZE_DATA_BUFFERS = 262144 
NUMBER_DATA_BUFFERS = 64 

Any other suggestions? 

___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-20 Thread Peters, Devon C
Chris,

To me it looks like there's a 1Gb bottleneck somewhere (90MB/s is about
all we ever got out of 1Gb fibre back in the day).  Are there any ISL's
between your tape drive, your switch, and your server's HBA?  Also, have
you verified that your tape drives have negotiated onto the fabric as
2Gb and not 1Gb?
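
One quick way to check the negotiated speed from the host side, assuming the Sun qlc/Leadville stack (QLA-driver HBAs may not show up here); the tape drive's own port speed is easiest to read off the switch:

# per-HBA-port negotiated link speed on Solaris 10
fcinfo hba-port | egrep "HBA Port WWN|Current Speed"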

When we had 2Gb LTO-3 drives on our T2000's, throughput to a single
drive topped out around 160MB/s.  When we upgraded the drives to 4Gb
LTO-3, throughput to a single drive went up to 260MB/s.  Our data is
very compressible, and these numbers are what I assume to be the
limitation of the IBM tape drives.

Regarding buffer settings, my experience may not apply directly since
we're doing disk (filesystems on fast storage) to tape backups, rather
than VTL to tape.  With our setup we see the best performance with a
buffer size of 1048576 and 512 buffers.  For us these buffer sizes are
mostly related to the filesystem performance, since we get better disk
throughput with 1MB I/O's than with smaller ones...

I'm also curious if anyone knows whether the NUMBER_DATA_BUFFERS_RESTORE
parameter is used when doing duplications?  I would assume it is, but I
don't know for sure.  If it is, then the bptm process reading from the
VTL would be using the default 16 (?) buffers, and you might see better
performance by using a larger number.


-devon


-
Date: Fri, 16 Nov 2007 10:00:18 -0800
From: Chris_Millet <[EMAIL PROTECTED]>
Subject: [Veritas-bu]  T2000 vaulting performance with VTL/LTO3
To: VERITAS-BU@mailman.eng.auburn.edu
Message-ID: <[EMAIL PROTECTED]>


I'm starting to experiment with the use of T2000 for media servers.  The
backup server is a T2000 8 core, 18GB system.  There is a Qlogic QLE2462
PCI-E dual port 4Gb adapter in the system that plugs into a Qlogic 5602
switch.  From there, one port is zoned to a EMC CDL 4400 (VTL) and a few
HP LTO3 tape drives.  The connectivity is 4Gb from host to switch, and
from switch to the VTL.  The tape drive is 2Gb.

So when using Netbackup Vault to copy a backup done to the VTL to a real
tape drive, the backup performance tops out at about 90MB/sec.  If I
spin up two jobs to two tape drives, they both go about 45MB/sec.   It
seems I've hit a 90MB/sec bottleneck somehow.  I have v240s performing
better!

Write performance to the VTL from incoming client backups over the WAN
exceeds the vault performance.

My next step is to zone the tape drives on one of the HBA ports, and the
VTL zoned on the other port.

I'm using:
SIZE_DATA_BUFFERS = 262144
NUMBER_DATA_BUFFERS = 64

Any other suggestions?
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-20 Thread Marianu, Jonathan
The MPX setting of the original backup can greatly influence the speed
of your duplications.

I'm not *telling* you to use a high mpx but it is something to consider
testing.

 

It is very interesting to open up two ssh sessions, put them side by
side and run a truss on both the reading and writing bptm processes of
the media server while a duplication is running and then rotate your
screen 90 degrees to the right.



That illustrates the effect MPX has much better than I could describe in
words.

 

 


__
Jonathan Marianu (mah ree ah' nu)
AT&T Storage Planning and Design Architect
(360) 597-6896
Work Hours 0800-1800 PST M-F

Manager: David Anderson
(314) 340-9296 

___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-19 Thread Dominik Pietrzykowski

If it's a T2000 it's Solaris 10, and the default for shared memory is 1/4 of
physical memory, which I can almost guarantee no one set theirs that big
back in Solaris 9. I would not put any of these entries in /etc/system on
Solaris 10; my V890 with Solaris 10 runs very well without any.
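
On Solaris 10 the old shmmax-style tunables moved to resource controls, so you can simply check the ceiling the backup processes actually run under (project name and the pid are examples; check which project your bptm really belongs to):

# show the shared memory cap for a project on Solaris 10
prctl -n project.max-shm-memory -i project default

# find the project a running bptm belongs to (pid is a placeholder)
ps -o project= -p <bptm_pid>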

Dom


-Original Message-
From: Andre Smith [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 17 November 2007 8:58 AM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

Depending on how many streams and write drives are running  
concurrently, check your shared memory settings in /etc/system.

On Nov 16, 2007, at 1:00 PM, Chris_Millet <[EMAIL PROTECTED]

 > wrote:

>
> I'm starting to experiment with the use of T2000 for media servers.   
> The backup server is a T2000 8 core, 18GB system.  There is a Qlogic  
> QLE2462 PCI-E dual port 4Gb adapter in the system that plugs into a  
> Qlogic 5602 switch.  From there, one port is zoned to a EMC CDL 4400  
> (VTL) and a few HP LTO3 tape drives.  The connectivity is 4Gb from  
> host to switch, and from switch to the VTL.  The tape drive is 2Gb.
>
> So when using Netbackup Vault to copy a backup done to the VTL to a  
> real tape drive, the backup performance tops out at about 90MB/sec.   
> If I spin up two jobs to two tape drives, they both go about 45MB/ 
> sec.   It seems I've hit a 90MB/sec bottleneck somehow.  I have  
> v240s performing better!
>
> Write performance to the VTL from incoming client backups over the  
> WAN exceeds the vault performance.
>
> My next step is to zone the tape drives on one of the HBA ports, and  
> the VTL zoned on the other port.
>
> I'm using:
> SIZE_DATA_BUFFERS = 262144
> NUMBER_DATA_BUFFERS = 64
>
> Any other suggestions?
>
> +-- 
> 
> |This was sent by [EMAIL PROTECTED] via Backup Central.
> |Forward SPAM to [EMAIL PROTECTED]
> +-- 
> 
>
>
> ___
> Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-19 Thread Gregory Demilde
Chris,

Which version of Solaris are you using?  You might have a look at the
following URL; there are some parameters to tune for the T2000.

http://www.sun.com/servers/coolthreads/tnb/parameters.jsp#2

Besides, what do the bptm logs say about the buffers?  According to your
settings, you only have about 50 ms worth of buffering.  I would increase the
number of buffers and see how it goes...
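
For what it's worth, the 50 ms figure falls out of the current settings like this (the ~320 MB/s aggregate is just an assumed combined read-plus-write rate for the example):

# 64 buffers x 262144 bytes = 16 MB of staging space
echo "64 * 262144 / 1024 / 1024" | bc
# 16 MB / ~320 MB/s of combined read+write traffic = ~0.05 s (50 ms)
echo "scale=3; 16 / 320" | bc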

Greg

On Nov 16, 2007 7:00 PM, Chris_Millet <[EMAIL PROTECTED]>
wrote:

>
> I'm starting to experiment with the use of T2000 for media servers.  The
> backup server is a T2000 8 core, 18GB system.  There is a Qlogic QLE2462
> PCI-E dual port 4Gb adapter in the system that plugs into a Qlogic 5602
> switch.  From there, one port is zoned to a EMC CDL 4400 (VTL) and a few HP
> LTO3 tape drives.  The connectivity is 4Gb from host to switch, and from
> switch to the VTL.  The tape drive is 2Gb.
>
> So when using Netbackup Vault to copy a backup done to the VTL to a real
> tape drive, the backup performance tops out at about 90MB/sec.  If I spin up
> two jobs to two tape drives, they both go about 45MB/sec.   It seems I've
> hit a 90MB/sec bottleneck somehow.  I have v240s performing better!
>
> Write performance to the VTL from incoming client backups over the WAN
> exceeds the vault performance.
>
> My next step is to zone the tape drives on one of the HBA ports, and the
> VTL zoned on the other port.
>
> I'm using:
> SIZE_DATA_BUFFERS = 262144
> NUMBER_DATA_BUFFERS = 64
>
> Any other suggestions?
>
> +--
> |This was sent by [EMAIL PROTECTED] via Backup Central.
> |Forward SPAM to [EMAIL PROTECTED]
> +--
>
>
> ___
> Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
>



-- 
Gregory DEMILDE
Email : [EMAIL PROTECTED]
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-17 Thread Matthew Stier
Same here, but I'm looking into replacing a UE450 with a T2 box (T5120
or T5220).


Conner, Neil wrote:

Let us know how it goes - I'm looking at doing the same thing with a T2000.

-Neil

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Chris_Millet
Sent: Friday, November 16, 2007 11:57 AM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: [Veritas-bu] T2000 vaulting performance with VTL/LTO3


Seems like that shouldn't be the case, but OK, I'll try it.  Our Qlogic rep states this 
adapter should be fully capable of 380MB/sec, full duplex, on both ports, simultaneously 
("best in industry"), and the T2000 is no slouch in the PCI-E department.

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu

___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
  

___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-16 Thread Chris_Millet

No other activity on the server except for the read stream from the VTL and the 
write stream to LTO3.  There are a couple of GB allocated for shared memory.

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-16 Thread Andre Smith
Depending on how many streams and write drives are running  
concurrently, check your shared memory settings in /etc/system.

On Nov 16, 2007, at 1:00 PM, Chris_Millet <[EMAIL PROTECTED] 
 > wrote:

>
> I'm starting to experiment with the use of T2000 for media servers.   
> The backup server is a T2000 8 core, 18GB system.  There is a Qlogic  
> QLE2462 PCI-E dual port 4Gb adapter in the system that plugs into a  
> Qlogic 5602 switch.  From there, one port is zoned to a EMC CDL 4400  
> (VTL) and a few HP LTO3 tape drives.  The connectivity is 4Gb from  
> host to switch, and from switch to the VTL.  The tape drive is 2Gb.
>
> So when using Netbackup Vault to copy a backup done to the VTL to a  
> real tape drive, the backup performance tops out at about 90MB/sec.   
> If I spin up two jobs to two tape drives, they both go about 45MB/ 
> sec.   It seems I've hit a 90MB/sec bottleneck somehow.  I have  
> v240s performing better!
>
> Write performance to the VTL from incoming client backups over the  
> WAN exceeds the vault performance.
>
> My next step is to zone the tape drives on one of the HBA ports, and  
> the VTL zoned on the other port.
>
> I'm using:
> SIZE_DATA_BUFFERS = 262144
> NUMBER_DATA_BUFFERS = 64
>
> Any other suggestions?
>
> +-- 
> 
> |This was sent by [EMAIL PROTECTED] via Backup Central.
> |Forward SPAM to [EMAIL PROTECTED]
> +-- 
> 
>
>
> ___
> Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-16 Thread Conner, Neil
Let us know how it goes - I'm looking at doing the same thing with a T2000.

-Neil

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Chris_Millet
Sent: Friday, November 16, 2007 11:57 AM
To: VERITAS-BU@mailman.eng.auburn.edu
Subject: [Veritas-bu] T2000 vaulting performance with VTL/LTO3


Seems like that shouldn't be the case, but OK, I'll try it.  Our Qlogic rep 
states this adapter should be fully capable of 380MB/sec, full duplex, on both 
ports, simultaneously ("best in industry"), and the T2000 is no slouch in the 
PCI-E department.

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu

___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-16 Thread Chris_Millet

Seems like that shouldn't be the case, but OK, I'll try it.  Our Qlogic rep 
states this adapter should be fully capable of 380MB/sec, full duplex, on both 
ports, simultaneously ("best in industry"), and the T2000 is no slouch in the 
PCI-E department.

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-16 Thread Paul Keating
You may see better performance by connecting the tapes and CDL to
different HBAs in different slots on separate busses.

Yes, there's more than enough "bandwidth", but the bus has to reverse
direction to send/receive, so if you have your cards laid out so one bus
is "sending" and the other "receiving" you will see better total
throughput.

Paul

-- 


> -Original Message-
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On Behalf 
> Of Chris_Millet
> Sent: November 16, 2007 1:00 PM
> To: VERITAS-BU@mailman.eng.auburn.edu
> Subject: [Veritas-bu] T2000 vaulting performance with VTL/LTO3
> 
> 
> 
> I'm starting to experiment with the use of T2000 for media 
> servers.  The backup server is a T2000 8 core, 18GB system.  
> There is a Qlogic QLE2462 PCI-E dual port 4Gb adapter in the 
> system that plugs into a Qlogic 5602 switch.  From there, one 
> port is zoned to a EMC CDL 4400 (VTL) and a few HP LTO3 tape 
> drives.  The connectivity is 4Gb from host to switch, and 
> from switch to the VTL.  The tape drive is 2Gb.
> 
> So when using Netbackup Vault to copy a backup done to the 
> VTL to a real tape drive, the backup performance tops out at 
> about 90MB/sec.  If I spin up two jobs to two tape drives, 
> they both go about 45MB/sec.   It seems I've hit a 90MB/sec 
> bottleneck somehow.  I have v240s performing better!
> 
> Write performance to the VTL from incoming client backups 
> over the WAN exceeds the vault performance.
> 
> My next step is to zone the tape drives on one of the HBA 
> ports, and the VTL zoned on the other port.
> 
> I'm using:
> SIZE_DATA_BUFFERS = 262144
> NUMBER_DATA_BUFFERS = 64
> 
> Any other suggestions?
> 
> +-
> -
> |This was sent by [EMAIL PROTECTED] via Backup Central.
> |Forward SPAM to [EMAIL PROTECTED]
> +-
> -
> 
> 
> ___
> Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
> http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu
> 



This email may contain privileged and/or confidential information, and the Bank of
Canada does not waive any related rights. Any distribution, use, or copying of this
email or the information it contains by other than the intended recipient is
unauthorized. If you received this email in error please delete it immediately from
your system and notify the sender promptly by email that you have done so.




___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


[Veritas-bu] T2000 vaulting performance with VTL/LTO3

2007-11-16 Thread Chris_Millet

I'm starting to experiment with the use of T2000 for media servers.  The backup 
server is a T2000 8 core, 18GB system.  There is a Qlogic QLE2462 PCI-E dual 
port 4Gb adapter in the system that plugs into a Qlogic 5602 switch.  From 
there, one port is zoned to an EMC CDL 4400 (VTL) and a few HP LTO3 tape drives. 
The connectivity is 4Gb from host to switch, and from switch to the VTL.  The 
tape drive is 2Gb.

So when using Netbackup Vault to copy a backup done to the VTL to a real tape 
drive, the backup performance tops out at about 90MB/sec.  If I spin up two 
jobs to two tape drives, they both go about 45MB/sec.   It seems I've hit a 
90MB/sec bottleneck somehow.  I have v240s performing better!

Write performance to the VTL from incoming client backups over the WAN exceeds 
the vault performance.

My next step is to zone the tape drives on one of the HBA ports, and the VTL 
zoned on the other port.

I'm using:
SIZE_DATA_BUFFERS = 262144
NUMBER_DATA_BUFFERS = 64

Any other suggestions?

+--
|This was sent by [EMAIL PROTECTED] via Backup Central.
|Forward SPAM to [EMAIL PROTECTED]
+--


___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu