synthetic fullbackup

2002-12-18 Thread Schwarz Werner
We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time. Disk as a primary
STGPool is too expensive.
Now we are looking for methods to 'cluster together' all active
backup versions per node without backing up the whole TSM client every night
(as VERITAS NetBackup does). Ideally the full backup should be built on the
TSM server (starting with an initial full backup, then combining that
full backup with the incrementals from the next run to build the next
synthetic full backup, and so on). We have already activated COLLOCATE.
Does anybody have good ideas?
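The server-side merge described above can be sketched in a few lines of Python (purely illustrative; the file names and version labels are made up, and this is the concept, not TSM behavior):

```python
def synthetic_full(previous_full, incremental):
    """Build a new full backup inventory by overlaying the incremental's
    newer versions on the previous full; unchanged files carry over."""
    merged = dict(previous_full)  # start from the old full
    merged.update(incremental)    # newer versions replace older ones
    return merged

# Illustrative inventories: file name -> backed-up version
full_day0 = {"a.dat": "v1", "b.dat": "v1", "c.dat": "v1"}
incr_day1 = {"b.dat": "v2"}  # only b.dat changed on day 1
full_day1 = synthetic_full(full_day0, incr_day1)
# full_day1 holds a.dat v1, b.dat v2, c.dat v1
```

No client data is re-sent: the new "full" is assembled entirely from data the server already holds.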
thanks,
werner



Re: synthetic fullbackup

2002-12-18 Thread Schwarz Werner
Hi,
thanks a lot for your tips. All the hints you mentioned point us in the
right direction, but I think there is still room to do better (VERITAS,
for example, has to de-multiplex its tapes after the backup process to
allow an acceptably fast restore).
Our environment:
- TSM server: 4.2.2.3 on z/OS 1.2 (we plan to migrate to TSM 5.1)
- STGPool(s): virtual tapes (Virtual Storage Manager from STK)
- we are backing up to tape (if all ACTIVE versions were on disk, the problem
of scattered versions might not be serious for restores (?))
Remark on MOVE NODEDATA: I think ALL versions are moved, so we would need
quite a lot of storage if we maintain many versions.
thanks very much
werner


-Original Message-
From: Halvorsen Geirr Gulbrand [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 18 December 2002 14:44
To: [EMAIL PROTECTED]
Subject: Re: synthetic fullbackup

Hi Werner,
we might need some clarification of your setup.
What is your server version?
Are you backing up to tape, or disk?

Generally I can say this:
If you are running TSM v5.x you have the option of using MOVE NODEDATA,
which moves the data for one node to another storage pool (from tape to
disk); you can then start your restore from the disk pool. It may sound
strange, because you move the data twice, but often there is a delay between
the time you decide to restore and the time you actually start the restore
(for example in a disaster recovery situation, where you have to get new
hardware and install the OS and TSM client software before you can start the
restore). In this interval you can already be moving data from tape to disk,
and the subsequent restore will be a lot faster.
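As a sketch of that staging step (node and pool names are hypothetical placeholders; run from a TSM administrative command line, and check HELP MOVE NODEDATA on your server level for the exact options):

```
/* Stage one node's backup data from the tape pool to a disk pool, */
/* then run the client restore against the disk copy.              */
move nodedata NODE1 fromstgpool=TAPEPOOL tostgpool=DISKPOOL
```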
The other possibility is to use collocation by filespace. Different
filespaces from the same server will be collocated on different tapes,
enabling you to start a restore for each filespace simultaneously. This
helps reduce restore times.
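A sketch of enabling that, with a hypothetical pool name (existing data is not rearranged until it is moved or reclaimed):

```
/* Collocate by filespace so each filespace's data lands on its own */
/* tapes, allowing parallel per-filespace restore sessions.         */
update stgpool TAPEPOOL collocate=filespace
```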
A third option is using backupsets, which can be created just for active
files. Then you will have all active files on one volume.
Others may also have an opinion on the best approach to solve this; I have
just pointed out some of TSM's features.
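The backupset option could look roughly like this (node, set prefix, and device class names are hypothetical; verify the parameters with HELP GENERATE BACKUPSET on your server version):

```
/* Consolidate all ACTIVE versions for NODE1 onto backupset media   */
/* of device class LTOCLASS, retained for 30 days.                  */
generate backupset NODE1 WEEKLYSET * devclass=LTOCLASS retention=30
```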

Rgds.
Geirr Halvorsen
-Original Message-
From: Schwarz Werner [mailto:[EMAIL PROTECTED]]
Sent: 18 December 2002 14:08
To: [EMAIL PROTECTED]
Subject: synthetic fullbackup


We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time. Disk as a primary
STGPool is too expensive.
Now we are looking for methods to 'cluster together' all active
backup versions per node without backing up the whole TSM client every night
(as VERITAS NetBackup does). Ideally the full backup should be built on the
TSM server (starting with an initial full backup, then combining that
full backup with the incrementals from the next run to build the next
synthetic full backup, and so on). We have already activated COLLOCATE.
Does anybody have good ideas?
thanks,
werner



Re: synthetic fullbackup

2002-12-18 Thread Schwarz Werner
Hi Richard,
thanks for your hints.
We are still hoping for a solution that makes us happy. The BackupSet idea
is theoretically the closest one. But I have heard from people with
experience that (i) it is not so easy to restore a client with the newest
backup versions, and (ii) we have only moved the time problem into the
GENERATE BACKUPSET process (collecting/consolidating all the newest
versions).
Regards,
werner

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 18 December 2002 15:21
To: [EMAIL PROTECTED]
Subject: Re: synthetic fullbackup

We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time...

Werner - As Geirr said, backupsets are the optimal solution but, like all
 solutions, will require extra processing time.
Another approach is to exploit the often-unexploited MIGDelay value on the
primary tape storage pool.  Try to match that value to your dominant
copy group retention value, migrate older files to a "next" tape storage
pool, and let reclamation naturally bring newer files closer together.
This should get inactive versions out of the way.
The cost is extra tape data movement.
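A sketch of that setup, with hypothetical pool names and a 30-day delay (tune MIGDelay to your copy group retention; see HELP UPDATE STGPOOL for the exact parameters on your level):

```
/* Migration skips files stored fewer than 30 days, so mostly older  */
/* (inactive) versions move to OLDTAPEPOOL; reclamation then packs   */
/* the remaining newer files closer together on fewer tapes.         */
update stgpool TAPEPOOL nextstgpool=OLDTAPEPOOL migdelay=30 migcontinue=no
```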

There is no ideal solution.  Opportune full backups (on weekends?) will
get you closest to what you want.

  Richard Sims, BU



Fwd: BackupSet: Is there an efficient method to generate backupsets

2002-12-16 Thread Schwarz Werner
-Original Message-
From: Schwarz Werner BI
Sent: Friday, 13 December 2002 14:58
To: '[EMAIL PROTECTED]'
Cc: Schwarz Werner BI
Subject: Re: BackupSet: Is there an efficient method to generate
backupsets

Hi Zlatko

thanks very much for the hints about backupsets. The actual problem we are
trying to solve is the following:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes
(primary tape stgpool). This was the main reason for an unacceptably long
restore time.
Now we are looking for methods to 'cluster together' all active backup
versions per node without backing up the whole TSM client every night (as
VERITAS NetBackup does). These 'clustered' active backup versions should be
the candidates during a normal restore. We have already activated COLLOCATE.
Do you have more good ideas?
thanks,
werner

-Original Message-
From: Zlatko Krastev/ACIT [mailto:[EMAIL PROTECTED]]
Sent: Friday, 13 December 2002 13:17
To: [EMAIL PROTECTED]
Subject: Re: BackupSet: Is there an efficient method to generate
backupsets

Werner,

on day_02 you will have 990 files still active from day_00 (bds010.1,
bds011.1, ..., bds999.1) plus 10 files from day_01 (bds000.2, bds001.2, ...,
bds009.2), and your assumption is completely correct.
What you want (as far as I understood it) is to mimic copy pool behavior
with backupsets, and that is not possible. If you are doing this for only
one node, go the backupset way (1 tape/day) and it might be fine for you. If
this is to be done for several nodes, a copy pool is the right answer (a few
tapes/day + 1 DB tape/day). A copy pool also allows you to recover from a
bad primary tape, which you cannot accomplish with backupsets.

Zlatko Krastev
IT Consultant






Schwarz Werner [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
12.12.2002 18:15
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject: BackupSet: Is there an efficient method to generate backupsets


I need help, please:
Can somebody tell me how to solve the problem described in the following
example?


 begin example
assumption: I keep 2 backup versions per file

time: day_00
The 1st time I create a backupset, all 1000 active backup_versions are
consolidated on tape_01 {bds000.1,bds001.1, ... ,bds999.1}.

time: day_01
Incremental backup creates 10 newer backupversions {bds000.2,bds001.2, ...
,bds009.2}.

time: day_02
I create a 2nd backupset, all 1000 active backup_versions are consolidated
on tape_02 {bds000.2,bds001.2, ... ,bds009.2,bds010.1,bds011.1, ...
bds999.1}.

Question_1:
I suppose that all 1000 versions on tape_02 are copied from the inventory
of incremental backup versions. Is this true?

Question_2:
Is it possible to do the following:
copy {bds010.1, bds011.1, ... bds999.1} from tape_01
copy {bds000.2, bds001.2, ... bds009.2} from the incremental backup versions
This would be more efficient in my environment.
 end example
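The day_02 active set in the example can be worked out with a short Python sketch (the names follow the example; this is an illustration of the arithmetic, not of TSM internals):

```python
def active_set(num_files, changed):
    """Return the active version of each file: version 1 from the
    day_00 full unless the file was backed up again on day_01."""
    versions = {f"bds{i:03d}": 1 for i in range(num_files)}  # day_00 full
    for i in changed:                 # day_01 incremental bumps these to .2
        versions[f"bds{i:03d}"] = 2
    return {f"{name}.{v}" for name, v in versions.items()}

day_02 = active_set(1000, range(10))
# 990 ".1" versions from day_00 plus the 10 ".2" versions from day_01
```

So tape_02 must hold exactly 1000 objects, 990 of which are unchanged since day_00, which is what makes Question_2's copy-from-tape_01 approach attractive.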


Thanks to everybody who can give me some useful comments.

kind regards,
werner



BackupSet: Is there an efficient method to generate backupsets

2002-12-12 Thread Schwarz Werner
I need help, please:
Can somebody tell me how to solve the problem described in the following
example?


 begin example
assumption: I keep 2 backup versions per file

time: day_00
The 1st time I create a backupset, all 1000 active backup_versions are
consolidated on tape_01 {bds000.1,bds001.1, ... ,bds999.1}.

time: day_01
Incremental backup creates 10 newer backupversions {bds000.2,bds001.2, ...
,bds009.2}.

time: day_02
I create a 2nd backupset, all 1000 active backup_versions are consolidated
on tape_02 {bds000.2,bds001.2, ... ,bds009.2,bds010.1,bds011.1, ...
bds999.1}.

Question_1:
I suppose that all 1000 versions on tape_02 are copied from the inventory
of incremental backup versions. Is this true?

Question_2:
Is it possible to do the following:
copy {bds010.1, bds011.1, ... bds999.1} from tape_01
copy {bds000.2, bds001.2, ... bds009.2} from the incremental backup versions
This would be more efficient in my environment.
 end example


Thanks to everybody who can give me some useful comments.

kind regards,
werner