Not entirely true - you can start multiple restore sessions for a client.


It doesn't make sense for small clients with only one disk, but for your larger
NT clients with multiple disks, or your AIX clients with multiple file
systems, there is no reason at all not to open multiple TSM windows and
start multiple restores, up until you exceed your throughput capacity.

That's one of the reasons some people collocate by file space... you can
restore the file spaces in parallel.
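A minimal sketch of the parallel approach on a Unix client (the file space names and the tape pool name are examples, not from the thread; check your own with `dsmc query filespace`):

```shell
# Start one backup-archive client restore per file space, each in
# its own session, so each can drive its own tape mount.
# /home and /data are illustrative file spaces.
dsmc restore "/home/*" -subdir=yes &
dsmc restore "/data/*" -subdir=yes &
wait   # block until both restore sessions have finished

# On the server side, collocating the tape pool by file space
# (pool name TAPEPOOL is an example) keeps each file space's data
# on its own tapes, so the sessions don't contend for one volume:
#   update stgpool tapepool collocate=filespace
```

Each background `dsmc` process opens a separate session with the server, which is what lets the restores proceed in parallel up to the drive and network limits discussed above.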

-----Original Message-----
From: Maurice van 't Loo [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 15, 2001 5:52 PM
To: [EMAIL PROTECTED]
Subject: Re: Disaster Recovery - Question


Don't forget that restores are serial: for one client there is only one
restore session, so only one tape drive is needed.
You can save time by using collocation, but it takes more tapes; up
to 90 tapes in your case.

So, you can only save time when you need to restore more than 6 clients at
the same time.
Another option would be to use the money for the 14 extra drives to buy disks
and build a very large disk pool, so you can use caching. All data goes to
tape, but also stays in the disk pool; when you need a restore, this will save
A LOT of time, provided you have enough network bandwidth.
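A sketch of that cached-disk-pool setup on the TSM server; the pool name, volume path, and thresholds are illustrative, and exact syntax can vary between TSM versions:

```shell
# Define a large random-access disk pool with caching enabled,
# migrating to an existing tape pool (TAPEPOOL is an example name).
define stgpool bigdiskpool disk cache=yes nextstgpool=tapepool

# Tune migration so data drains to tape but cached copies linger.
update stgpool bigdiskpool highmig=90 lowmig=70

# With CACHE=YES, files migrated to tape remain cached on disk
# until the space is needed, so many restores can be served from
# disk instead of waiting for tape mounts.
```

The win comes from avoiding tape mounts entirely for recently backed-up data, which is why Maurice notes that network bandwidth then becomes the limiting factor.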

Good luck,
  Maurice van 't Loo
  The Netherlands

----- Original Message -----
From: "Pearson, Dave" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Wednesday, August 15, 2001 10:32 PM
Subject: Disaster Recovery - Question


> Hi,
>
> I have a couple of questions about your Disaster Recovery Plan.
> How much parallelism does TSM recovery have? How many tape drives do you
> use for this plan?
> We have 6 tape drives (3494 tape library with 3590 tape drive). We have
> about 90 clients on TSM (AIX, NT, SUN)
> Could we use 20 tape drives to recover all the clients in a shorter time
> than just having 6 tape drives and taking a looong time to do the recovery?
>
> Is anyone using LAN-free backup on a server with a fibre network? How is
> this working for you?
>
> Thanks for your help
>
> Dave Pearson
>
