Re: [Veritas-bu] NUMBER data buffers

2011-10-18 Thread Kevin Holtz
Look for io_init in the bptm log.  This will give you an idea of how many
buffers are being used and how much shared memory is being allocated.
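
If verbose bptm logging is enabled on the media server, something like
this will pull those lines out (a sketch; the path assumes a default
*nix install, and the logs/bptm directory must already exist or bptm
won't write a log at all):

    # Show the buffer count and shared-memory allocation bptm reports at job start.
    grep io_init /usr/openv/netbackup/logs/bptm/log.*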

Kevin

Sent from my mobile

On Oct 18, 2011, at 9:49 AM, Heathe Yeakley  wrote:

> So I've read the tuning guide, I've played around with different options for 
> SIZE and NUMBER of buffers and I understand the formula of SIZE * NUMBER * 
> drives * MPX as it relates to shared memory.
> 
> Here's my question. Of the four parameters:
> 
> MPX level
> 
> # of drives (I have 12 drives)
> 
> NUMBER of buffers
> 
> SIZE of buffers (must be a multiple of 1024 and can't exceed the block size 
> supported by your tape or HBA)
> The NUMBER of buffers and MPX level seem to be the two variables here. I have 
> MPX set pretty low (2 or 3) and NUMBER of buffers set to either 16 or 32. 
> When I multiply it all out, I get a hit on my shared memory of less than a 
> GB. My media servers are dedicated Linux hosts that only function as media 
> servers and that's it. Furthermore, they each have somewhere around 35-50 
> GB of memory apiece. 
> 
> With my current configuration, I'm not even scratching the surface of the 
> amount of shared memory that's sitting idle in my system while my backups run 
> at night. Is there any reason I shouldn't jack the NUMBER of data buffers up 
> to... say... 500? 1000? I've seen some people mention that they have the 
> number of buffers set to 64, but can we go higher?
> 
> I've searched around to see if there's a technote on the upper limit of the 
> NUMBER buffers parameter. If there is such a technote, I can't find it.
> 
> Any ideas?
> 
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu


Re: [Veritas-bu] NUMBER data buffers

2011-10-18 Thread bob944
> So I've read the tuning guide, I've played around with
> different options for SIZE and NUMBER of buffers and I
> understand the formula of SIZE * NUMBER * drives * MPX
> as it relates to shared memory.

You're going to get a lot of replies.  Everyone is a buffer tuning
expert.  :-)

If all you really need is "how many buffers, of what size, muxed how
many ways to how many drives can I possibly use," skip everything
after this paragraph.  It was only six years ago that most NBU
platforms were 32-bit and media servers might have only a few GB of
core plus other OS limits on shared memory, so
size*number*mpx*drives was a more pressing concern.  Even today, it
takes a serious amount of buffering and streams to use up 32 GB,
probably more than is useful.  With, say, MPX 3 and 32 256-KB buffers,
that's over 1,000 simultaneous streams.  It would take one killer
network|media-server|storage combo to keep up with that.
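
To make the arithmetic concrete, here is the formula with the original
post's numbers plugged in (the 256-KB SIZE is my assumption; the post
doesn't say what SIZE is in use):

    # size * number * mpx * drives: 256 KB x 32 buffers x MPX 3 x 12 drives
    echo "$((256 * 32 * 3 * 12)) KB"    # 294912 KB, about 288 MB: "less than a GB"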

> The NUMBER of buffers and MPX level seem to be the two
> variables

Makes more sense to me to think of SIZE and NUMBER as the two
variables for a backup stream.  Then think separately about MPX as
"how many of those tuned streams do I deliver to a tape drive to make
it stream?"  (Recognizing that you may mux different classes and
schedules in different ways for backup or restore performance
considerations.)

You didn't ask for tuning methodology, but since you're in the tuning
guide already...  One of the points you may have gleaned from it is to
look at the wait and delay counters, and whether you're measuring a
data supplier or a data consumer.  Understanding producer/consumer and
wait/delay together gives you a sound basis for making changes.  So
does having the numbers to see whether it's 300 seconds of delay on a
10-minute backup or 300 seconds on a two-hour backup.  That's plan A.
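
A rough sketch of pulling those counters on a *nix media server
(default paths assumed; the exact log wording varies by release, so
treat the pattern as illustrative):

    # bptm logs lines like "waited for full buffer N times, delayed M times".
    # On a backup, waits for FULL buffers mean the tape side outran the data
    # producer (client/network); waits for EMPTY buffers point the other way.
    grep -i "waited for" /usr/openv/netbackup/logs/bptm/log.*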

Plan B is empirical.  (My and others' methods will be in many posts
from the last ten years if you check the archives, so this'll be
relatively brief.)  Define which path, under what conditions, with
what data, you want to investigate/optimize.  Strongly recommend you
work initially with one server, one invariant chunk of data, one
stream, no conflicting activity to nail down the basic limits of that
client/network/STU combination.  Only after that is optimized would I
throw multiplexing into the game.

Make a test policy, say TUNE-win: full schedule, no scheduling
windows, specifying the STU, client and path.  Use enough
representative, unchanging data for 10-15 minutes of elapsed write
time, long enough to minimize variables yet short enough that the
testing doesn't take forever.  Record number and size of buffers, wait
and delay values, and write time.  Double only one of the parameters.
Retest/record.  Do that until there is no gain, and leave that value
alone.  Now repeat the cycle while changing the other parameter.
Retest until no gain.  Then go back to the first parameter and change
it up and down (one doubling and one halving) to see if it needs to be
revisited.
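
One iteration of that loop might look like this on the media server (a
sketch only: default install paths, the TUNE-win policy described
above, and bpbackup's -i/-w flags to start the policy immediately and
wait for completion):

    # Set the two tunables; bptm reads these touch files at job start.
    echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
    echo 64     > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
    # Run the test policy, time it, then record the wait/delay counters
    # from the new bptm log before doubling one parameter and retesting.
    time /usr/openv/netbackup/bin/bpbackup -i -p TUNE-win -w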

BTW, NUMBER_DATA_BUFFERS is per backup stream.  Just want to make sure
you are clear on that--not sure from your note.

You have a good start on setting size/number now.  Extra credit for
trying other clients/STUs and other variables, of course.  Use those
values and try controlled multiplexing (you've probably maxed out a
client, so generally this will be with multiple clients.  Net
bandwidth might also rule here, of course.)

Since NetBackup 6.x, you are unlikely to run out of buffers (status 89
if you do).  Regarding number versus size, there's no point in having
a huge number of small buffers or a small number of huge buffers in
any environment I've seen.  The methodology above usually shows you an
optimal, somewhat balanced combination: sooner or later, allocating a
ton of little spaces, or a few huge (contiguous) ones, leads you to a
reasonable balance.  I probably never saw speed improvements past 1024
buffers, and often far fewer did the job.

DON'T FORGET NET_BUFFER_SZ!  Hugely important on both Windows (in the
GUI) and *nix clients to get the data out of the client.
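
On a *nix client it's a one-line touch file (a sketch; 262144 is only
an illustration, not a recommendation, and the path assumes a default
install).  On Windows the equivalent is the communications buffer size
in the client host properties GUI:

    # NET_BUFFER_SZ is the TCP socket buffer size, in bytes, NetBackup requests.
    echo 262144 > /usr/openv/netbackup/NET_BUFFER_SZ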


Re: [Veritas-bu] NUMBER data buffers

2011-10-18 Thread David McMullin
If you do Windows backups, you also need to multiply by the number of
streams "all local drives" can start - essentially, how many 'child'
processes your parent job spawns. Figure out how many jobs are running
at once; each job gets the memory allocated to it.
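
As a hypothetical worked example (all figures invented for
illustration):

    # 20 concurrent streams x 64 buffers x 256-KB buffers, converted to MB:
    echo "$((20 * 64 * 262144 / 1048576)) MB"    # = 320 MB of shared memory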

Buffer settings?

It will depend on your drives and configuration - the speed of your drives,
your HBA, and your TAN infrastructure.


I had LTO2 drives, so my buffer size was limited. When I moved to LTO5, I
increased my buffer values.
I have done some testing - backing up the same file with different numbers
of buffers - and be aware it can be counterintuitive: sometimes a lower
number of buffers gets you more throughput.

64K - a larger number seems better.
128K - sweet spot at 64 buffers.
256K - sweet spot at 96 buffers.
512K - sweet spots at 16 and 64 buffers!

Best throughput at 64 X 128K buffers!

However - I write to a VTL and duplicate to tape - and the duplication
process is 'stuck' with the original buffer size, so duplicating backups
written with 64K buffers is painfully slow compared to 256K buffers.

I am still seeking a "professional opinion" on optimal buffer size for LTO5 
drives.

I have one media server I had to set to 8 X 64K buffers to slow it below
20 MB/second per channel, due to slow storage - I was crushing the
application.


Here is a spreadsheet I built; my speed is mostly limited by the source
storage disks and the fiber/switch TAN.


Number   Size      Total     KB per    Waited    Delayed   KB written
buffers  (bytes)   seconds   second

8        65536     605       20,904    0         0         12,246,528  (cancelled - too slow)
16       65536     604       46,801    0         0         27,494,656  (cancelled - too slow)
32       65536     605       74,268    0         0         54,272,032
64       65536     757       73,498    25K       25K       54,272,032
96       65536     360       160,233   106       243       54,272,032
128      65536     463       120,826   48        154       54,272,032

8        131072    953       45,135    0         0         42,022,400
16       131072    683       81,455    26K       26K       54,272,032
32       131072    422       135,666   7K        7K        54,272,032
64       131072    280       205,184   55        128       54,272,032
96       131072    296       193,573   28        110       54,272,032
128      131072    358       160,427   12        41        54,272,032

8        262144    695       80,014    26K       26K       54,272,032
16       262144    336       170,204   1K        1K        54,272,032
32       262144    293       196,601   62        206       54,272,032
64       262144    298       194,335   14        38        54,272,032
96       262144    295       199,338   10        33        54,272,032
128      262144    312       182,813   1         43        54,272,032

16       524288    286       204,707   43        93        54,272,032
32       524288    293       196,913   25        86        54,272,032
64       524288    287       204,287   0         0         54,272,032
96       524288    294       194,663   1         6         54,272,032

-Original Message-

From: Heathe Yeakley 
Date: Tue, 18 Oct 2011 08:49:19 -0500
To: NetBackup Mailing List 
Subject: [Veritas-bu] NUMBER data buffers

So I've read the tuning guide, I've played around with different options for
SIZE and NUMBER of buffers and I understand the formula of SIZE * NUMBER *
drives * MPX as it relates to shared memory.

Here's my question. Of the four parameters:

MPX level

# of drives (I have 12 drives)

NUMBER of buffers

SIZE of buffers (must be a multiple of 1024 and can't exceed the block size
supported by your tape or HBA)

The NUMBER of buffers and MPX level seem to be the two variables here. I
have MPX set pretty low (2 or 3) and NUMBER of buffers set to either 16 or
32. When I multiply it all out, I get a hit on my shared memory of less than
a GB. My media servers are dedicated Linux hosts that only function as media
servers and that's it. Furthermore, they each have somewhere around 35-50
GB of memory apiece.

With my current configuration, I'm not even scratching the surface of the
amount of shared memory that's sitting idle in my system while my backups
run at night. Is there any reason I *shouldn't* jack the NUMBER of data
buffers up to... say... 500? 1000? I've seen some people mention that they
have the number of buffers set to 64, but can we go higher?

I've searched around to see if there's a technote on the upper limit of the
NUMBER buffers parameter. If there is such a technote, I can't find it.

Any ideas?



[Veritas-bu] NUMBER data buffers

2011-10-18 Thread Heathe Yeakley
So I've read the tuning guide, I've played around with different options for
SIZE and NUMBER of buffers, and I understand the formula of SIZE * NUMBER *
drives * MPX as it relates to shared memory.

Here's my question. Of the four parameters:

MPX level

# of drives (I have 12 drives)

NUMBER of buffers

SIZE of buffers (must be a multiple of 1024 and can't exceed the block size
supported by your tape or HBA)

The NUMBER of buffers and MPX level seem to be the two variables here. I
have MPX set pretty low (2 or 3) and NUMBER of buffers set to either 16 or
32. When I multiply it all out, I get a hit on my shared memory of less than
a GB. My media servers are dedicated Linux hosts that only function as media
servers and that's it. Furthermore, they each have somewhere around 35-50
GB of memory apiece.

With my current configuration, I'm not even scratching the surface of the
amount of shared memory that's sitting idle in my system while my backups
run at night. Is there any reason I *shouldn't* jack the NUMBER of data
buffers up to... say... 500? 1000? I've seen some people mention that they
have the number of buffers set to 64, but can we go higher?

I've searched around to see if there's a technote on the upper limit of the
NUMBER buffers parameter. If there is such a technote, I can't find it.

Any ideas?
___
Veritas-bu maillist  -  Veritas-bu@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-bu