Multiple retentions per volume is a global parameter in 5.1 that is set only
on the master. It would be nice if it were a media server option, but in 5.1
it is global.
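
For what it's worth, here is a minimal sketch of checking for the setting on
the master. The bp.conf path and the entry name
ALLOW_MULTIPLE_RETENTIONS_PER_MEDIA are my assumptions about how the option
is spelled on disk, not something confirmed in this thread:

    # Minimal sketch: report whether multiple retentions per media appears
    # to be enabled in bp.conf on the master (path and entry name assumed).
    BP_CONF = "/usr/openv/netbackup/bp.conf"

    def multiple_retentions_enabled(path=BP_CONF):
        try:
            with open(path) as conf:
                return any(line.strip().upper().startswith(
                    "ALLOW_MULTIPLE_RETENTIONS_PER_MEDIA") for line in conf)
        except OSError:
            return False  # no bp.conf here, so nothing to report

    print("multiple retentions per media:",
          "enabled" if multiple_retentions_enabled() else "disabled")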

If it were a media server option, then both sides of the aisle would be
happy. 



Bobby Williams
2205 Peterson Drive
Chattanooga, Tennessee  37421
423-296-8200

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Martin,
Jonathan (Contractor)
Sent: Monday, February 26, 2007 1:28 PM
To: Justin Piszcz; chodhetz
Cc: veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Multiple retention in one pool

Whoa there, tiger.  Keeping different retentions on different volume pools is
NOT the way to go, IMO.  If you have 20 GB of data that must be kept offsite
forever, you are already losing one tape "forever," so why not put more data
on it?  Sure, you could keep that tape in the robot and keep writing to it,
but most shops I know have to remove tapes weekly.
So why not put more than 20 GB of data on that tape?  I have to keep email
backups (two tapes or so) for 5 years.  That's two tapes I'm losing for 5
years, so why not take the 80% free space on tape #2 and put some 1-year
retention data on it?  Further, extra volume pools are definitely a tape
loser, not a saver: media servers cannot share volumes in a pool, so using
more volume pools than you need actually reduces your efficiency.

Example:

3 media servers writing to 3 volume pools = minimum of 9 tapes if each
server writes to each pool (3 servers x 3 volume pools).

3 media servers writing to 1 volume pool (single retention per media) =
3 servers x 1 volume pool x number of retentions.

3 media servers writing to 1 volume pool (mixed retentions) = 3 media
(minimum).
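
As a quick sanity check on that arithmetic, here is a tiny
back-of-the-envelope sketch (nothing NetBackup-specific; the three-retention
count in the second case is just an illustration):

    # Minimum media counts for the layouts above. With one retention per
    # media, every (server, pool, retention) combination needs its own tape;
    # with mixed retentions, each server only needs one tape per pool.
    def min_tapes(servers, pools, retentions_per_pool, mixed_retentions):
        if mixed_retentions:
            return servers * pools
        return servers * pools * retentions_per_pool

    print(min_tapes(3, 3, 1, mixed_retentions=False))  # 9: three pools, one retention each
    print(min_tapes(3, 1, 3, mixed_retentions=False))  # 9: one pool, three retentions
    print(min_tapes(3, 1, 3, mixed_retentions=True))   # 3: one pool, mixed retentions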

I know this doesn't work for everyone, and when version 6.5 comes out the 3
media servers will be able to share media, which is a nice touch.  All I'm
saying is that if you aren't offsiting 100+ media a week, there's much tape
love to be gained by looking into tape efficiency and cramming as much data
onto media as possible.

-Jonathan

PS: "Allow multiple retentions per media" has to be turned on *per* media
server.



-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Justin
Piszcz
Sent: Monday, February 26, 2007 4:52 AM
To: chodhetz
Cc: veritas-bu@mailman.eng.auburn.edu
Subject: Re: [Veritas-bu] Multiple retention in one pool

There is an option (off by default) with which one tape can hold multiple
retentions, but I would not recommend it.

If you have a 20MB file at infinite retention and the rest of an LTO2/3 tape
at 1-month retention, it will become confusing, yes?

Keep different retentions in separate volume pools.

On Mon, 26 Feb 2007, chodhetz wrote:

> Dear all,
>
> I have a volume pool, test_pool, and 2 backup policies for 1 client:
> policy_test_A and policy_test_B. Both policies take tapes from test_pool,
> but policy_test_A has a retention of 1 week and policy_test_B has a
> retention of 1 month, so in the same volume pool I have 2 different
> retentions.
>
> From that case above, what is the difference if I set multiple retentions
> versus if I don't set multiple retentions? And what is the effect in the
> future?
>
> Please give me advice,
>
> Thanks
> Chodhetz