Re: synthetic fullbackup

2002-12-27 Thread Halvorsen Geirr Gulbrand
Hi Again,

you would have to create a collocated storage pool (on tape) and migrate the
data from the disk pool to the new tape pool. Before you start the migration
(by setting the migration hi/lo thresholds for the disk pool), set the disk
pool's next storage pool to point to the right tape pool.
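
Roughly, that command sequence could look like this (the pool and device
class names here are only placeholders, not taken from your setup):

  /* define a collocated primary tape pool -- hypothetical names */
  define stgpool colltape lto_class maxscratch=50 collocate=yes
  /* point the disk pool at the new tape pool */
  update stgpool diskpool nextstgpool=colltape
  /* lower the thresholds to force migration of everything in the disk pool */
  update stgpool diskpool highmig=0 lowmig=0
  /* when migration has finished, raise the thresholds again */
  update stgpool diskpool highmig=90 lowmig=70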

I'm sorry to have kept you waiting, but I've been rather busy before
Christmas.

Since it's been a few days since you wrote this, I hope you've already
figured it out.

Rgds,
Geirr G. Halvorsen

-Original Message-
From: Ron Lochhead [mailto:[EMAIL PROTECTED]]
Sent: 20. december 2002 19:16
To: [EMAIL PROTECTED]
Subject: synthetic fullbackup


Hi Halvorsen:

I am trying to implement this same MOVE NODEDATA idea, but am running into a
problem.  My environment is a Win2k server running TSM server 5.1.5.2, and
the same on the clients.  My goal is to consolidate my tape pool data for
each node so that each node has its own tape.  We only have 25 nodes.  I
figured out how to move node data to our disk pool, but now how do I put the
node data from the disk pool back onto one tape pool tape?

The error I got said that I couldn't move node data from the disk pool back
to the tape pool because of sequential-access storage.  Any ideas?

Thanks,
Ron Lochhead




Halvorsen Geirr Gulbrand <gehal@WMDATA.COM>
Sent by: ADSM: Dist Stor Manager <[EMAIL PROTECTED]RIST.EDU>
12/18/2002 05:44 AM
Please respond to ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Subject: Re: synthetic fullbackup

Hi Werner,
we might need some clarification of your setup.
What is your server version?
Are you backing up to tape, or disk?

Generally I can say this:
If you are running TSM v5.x you have the possibility to use MOVE NODEDATA,
which moves the data for one node to another storage pool (from tape to
disk); you can then start your restore from the disk pool. It may sound
strange, because you move the data twice, but often there is a delay between
the time you decide to restore and the time you actually start the restore
(e.g. in a disaster recovery situation, where you have to get new hardware
and install the OS and TSM client software before you can start the restore).
In this interval, you can start moving the data from tape to disk, and the
subsequent restore will be a lot faster.
The other possibility is to use collocation by filespace. Different
filespaces from the same server will be collocated on different tapes,
enabling you to start a restore for each filespace simultaneously. This
helps reduce restore times.
The third option is to use backupsets, which can be created for active files
only. Then you will have all active files on one volume.
Others may also have an opinion on the best approach to solve this. I have
just pointed out some of TSM's features.

Rgds.
Geirr Halvorsen
-Original Message-
From: Schwarz Werner [mailto:[EMAIL PROTECTED]]
Sent: 18. december 2002 14:08
To: [EMAIL PROTECTED]
Subject: synthetic fullbackup


We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time. Disk as a primary
STGPool is too expensive.
Now we are looking for methods to 'cluster together' all active backup
versions per node without backing up the whole TSM client every night (as
VERITAS NetBackup does). Ideally the full backup should be done in the TSM
server (starting with an initial full backup, then combining that full
backup and the incrementals from the next run to build the next synthetic
full backup, and so on). We already have COLLOCATE activated. Does anybody
have good ideas?
thanks,
werner



Re: synthetic fullbackup

2002-12-24 Thread Zlatko Krastev
--- ... I don't know if this is necessary or not.

It is not necessary and will not help (you may even lose performance if
several drives are available). The TSM server is smart enough: even without
the collocation setting, each process will migrate data for a separate node
(to a separate tape). With collocation enabled, when only one node's data is
in the disk pool, you will have only one migration process regardless of the
disk pool settings.
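
If you want to check where a node's data ended up after migration, QUERY
NODEDATA should show it on a 5.x server (the node name below is just an
example):

  /* list the volumes holding this node's data */
  query nodedata ronpc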

Zlatko Krastev
IT Consultant






Ford, Phillip [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
20.12.2002 20:47
Please respond to ADSM: Dist Stor Manager


To: [EMAIL PROTECTED]
cc:
Subject: Re: synthetic fullbackup


Enable collocation on the tape pool and do a migration on the disk pool
(update stg diskpool hi=0 low=0).  It also helps to have the migration
processes setting at 1 for the disk pool.  I don't know if this is necessary
or not.



--
Phillip Ford
Senior Software Specialist
Corporate Computer Center
Schering-Plough Corp.
(901) 320-4462
(901) 320-4856 FAX
[EMAIL PROTECTED]



synthetic fullbackup

2002-12-20 Thread Ron Lochhead
Hi Halvorsen:

I am trying to implement this same MOVE NODEDATA idea, but am running into a
problem.  My environment is a Win2k server running TSM server 5.1.5.2, and
the same on the clients.  My goal is to consolidate my tape pool data for
each node so that each node has its own tape.  We only have 25 nodes.  I
figured out how to move node data to our disk pool, but now how do I put the
node data from the disk pool back onto one tape pool tape?

The error I got said that I couldn't move node data from the disk pool back
to the tape pool because of sequential-access storage.  Any ideas?

Thanks,
Ron Lochhead




Halvorsen Geirr Gulbrand <gehal@WMDATA.COM>
Sent by: ADSM: Dist Stor Manager <[EMAIL PROTECTED]RIST.EDU>
12/18/2002 05:44 AM
Please respond to ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Subject: Re: synthetic fullbackup

Hi Werner,
we might need some clarification of your setup.
What is your server version?
Are you backing up to tape, or disk?

Generally I can say this:
If you are running TSM v5.x you have the possibility to use MOVE NODEDATA,
which moves the data for one node to another storage pool (from tape to
disk); you can then start your restore from the disk pool. It may sound
strange, because you move the data twice, but often there is a delay between
the time you decide to restore and the time you actually start the restore
(e.g. in a disaster recovery situation, where you have to get new hardware
and install the OS and TSM client software before you can start the restore).
In this interval, you can start moving the data from tape to disk, and the
subsequent restore will be a lot faster.
The other possibility is to use collocation by filespace. Different
filespaces from the same server will be collocated on different tapes,
enabling you to start a restore for each filespace simultaneously. This
helps reduce restore times.
The third option is to use backupsets, which can be created for active files
only. Then you will have all active files on one volume.
Others may also have an opinion on the best approach to solve this. I have
just pointed out some of TSM's features.

Rgds.
Geirr Halvorsen
-Original Message-
From: Schwarz Werner [mailto:[EMAIL PROTECTED]]
Sent: 18. december 2002 14:08
To: [EMAIL PROTECTED]
Subject: synthetic fullbackup


We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time. Disk as a primary
STGPool is too expensive.
Now we are looking for methods to 'cluster together' all active backup
versions per node without backing up the whole TSM client every night (as
VERITAS NetBackup does). Ideally the full backup should be done in the TSM
server (starting with an initial full backup, then combining that full
backup and the incrementals from the next run to build the next synthetic
full backup, and so on). We already have COLLOCATE activated. Does anybody
have good ideas?
thanks,
werner



Re: synthetic fullbackup

2002-12-20 Thread Mark Stapleton
On Fri, 2002-12-20 at 12:16, Ron Lochhead wrote:
 I am trying to implement this same MOVE NODEDATA idea, but am running into a
 problem.  My environment is a Win2k server running TSM server 5.1.5.2, and
 the same on the clients.  My goal is to consolidate my tape pool data for
 each node so that each node has its own tape.  We only have 25 nodes.  I
 figured out how to move node data to our disk pool, but now how do I put the
 node data from the disk pool back onto one tape pool tape?

 The error I got said that I couldn't move node data from the disk pool back
 to the tape pool because of sequential-access storage.  Any ideas?

If you'll read the results of
help move nodedata
you'll notice that the FROMstgpool (the source) must be a
sequential-access pool.
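
In other words, MOVE NODEDATA cannot read from a random-access disk pool; to
empty the disk pool you migrate, and once the data sits on tape you can
consolidate it. A rough sketch, with placeholder node and pool names:

  /* disk -> tape: use migration, not MOVE NODEDATA */
  update stgpool diskpool highmig=0 lowmig=0
  /* tape -> tape (or within the same tape pool): MOVE NODEDATA is allowed */
  move nodedata ronpc fromstgpool=tapepool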

--
Mark Stapleton ([EMAIL PROTECTED])



Re: synthetic fullbackup

2002-12-20 Thread Ford, Phillip
Enable collocation on the tape pool and do a migration on the disk pool
(update stg diskpool hi=0 low=0).  It also helps to have the migration
processes setting at 1 for the disk pool.  I don't know if this is necessary
or not.
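
Spelled out a little more, and treating the pool names purely as examples,
the sequence might be:

  /* make sure the tape pool collocates by node */
  update stgpool tapepool collocate=yes
  /* run a single migration process so nodes are handled one at a time */
  update stgpool diskpool migprocess=1
  /* drive migration until the disk pool is empty */
  update stgpool diskpool highmig=0 lowmig=0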



--
Phillip Ford
Senior Software Specialist
Corporate Computer Center
Schering-Plough Corp.
(901) 320-4462
(901) 320-4856 FAX
[EMAIL PROTECTED]





-Original Message-
From: Ron Lochhead [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 20, 2002 12:16 PM
To: [EMAIL PROTECTED]
Subject: synthetic fullbackup


Hi Halvorsen:

I am trying to implement this same MOVE NODEDATA idea, but am running into a
problem.  My environment is a Win2k server running TSM server 5.1.5.2, and
the same on the clients.  My goal is to consolidate my tape pool data for
each node so that each node has its own tape.  We only have 25 nodes.  I
figured out how to move node data to our disk pool, but now how do I put the
node data from the disk pool back onto one tape pool tape?

The error I got said that I couldn't move node data from the disk pool back
to the tape pool because of sequential-access storage.  Any ideas?

Thanks,
Ron Lochhead




Halvorsen Geirr Gulbrand <gehal@WMDATA.COM>
Sent by: ADSM: Dist Stor Manager <[EMAIL PROTECTED]RIST.EDU>
12/18/2002 05:44 AM
Please respond to ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Subject: Re: synthetic fullbackup

Hi Werner,
we might need some clarification of your setup.
What is your server version?
Are you backing up to tape, or disk?

Generally I can say this:
If you are running TSM v5.x you have the possibility to use MOVE NODEDATA,
which moves the data for one node to another storage pool (from tape to
disk); you can then start your restore from the disk pool. It may sound
strange, because you move the data twice, but often there is a delay between
the time you decide to restore and the time you actually start the restore
(e.g. in a disaster recovery situation, where you have to get new hardware
and install the OS and TSM client software before you can start the restore).
In this interval, you can start moving the data from tape to disk, and the
subsequent restore will be a lot faster.
The other possibility is to use collocation by filespace. Different
filespaces from the same server will be collocated on different tapes,
enabling you to start a restore for each filespace simultaneously. This
helps reduce restore times.
The third option is to use backupsets, which can be created for active files
only. Then you will have all active files on one volume.
Others may also have an opinion on the best approach to solve this. I have
just pointed out some of TSM's features.

Rgds.
Geirr Halvorsen
-Original Message-
From: Schwarz Werner [mailto:[EMAIL PROTECTED]]
Sent: 18. december 2002 14:08
To: [EMAIL PROTECTED]
Subject: synthetic fullbackup


We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time. Disk as a primary
STGPool is too expensive.
Now we are looking for methods to 'cluster together' all active backup
versions per node without backing up the whole TSM client every night (as
VERITAS NetBackup does). Ideally the full backup should be done in the TSM
server (starting with an initial full backup, then combining that full
backup and the incrementals from the next run to build the next synthetic
full backup, and so on). We already have COLLOCATE activated. Does anybody
have good ideas? thanks, werner







Re: synthetic fullbackup

2002-12-19 Thread Halvorsen Geirr Gulbrand
Hi Werner,
I'm sorry about a minor error on my side. I said (believed) that MOVE
NODEDATA had parameters to move only active files, but the possibilities for
reducing the data to be moved are limited to
type=ANY|Backup|ARchive|SPacemanaged. In your case, Werner, that doesn't
help a lot. My mistake.
I don't have much experience with backupsets, but I think Richard's idea
about MIGDelay sounds very good.

As to the version questions in my first mail - MOVE NODEDATA is only
available in v5.

Rgds.
Geirr G. Halvorsen

-Original Message-
From: Schwarz Werner [mailto:[EMAIL PROTECTED]]
Sent: 18. december 2002 16:10
To: [EMAIL PROTECTED]
Subject: AW: synthetic fullbackup


Hi Richard
thanks for your hints.
We are still hoping for a solution that makes us happy. The backupset idea
is theoretically the closest one. But I have heard from people with
experience that (i) it's not so easy to restore a client with the newest
backup versions, and (ii) the time problem is just moved to the GENERATE
BACKUPSET process (collecting/consolidating all the newest versions).
Regards,
werner

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 18 December 2002 15:21
To: [EMAIL PROTECTED]
Subject: Re: synthetic fullbackup

We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time...

Werner - As Geirr said, Backupsets are the optimal solution but, like all
 solutions, will require extra processing time.
Another approach is to exploit the often-unexploited MIGDelay value on
the primary tape storage pool.  Try to match that value to your dominant
Copy Group retention value, migrate older files to a next tape
storage pool, and let reclamation naturally bring newer files
closer together.  This should get inactive versions out of the way.
The cost is extra tape data movement.

There is no ideal solution.  Opportune full backups (weekend?) will
get you closest to what you want.

  Richard Sims, BU



synthetic fullbackup

2002-12-18 Thread Schwarz Werner
We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time. Disk as a primary
STGPool is too expensive.
Now we are looking for methods to 'cluster together' all active backup
versions per node without backing up the whole TSM client every night (as
VERITAS NetBackup does). Ideally the full backup should be done in the TSM
server (starting with an initial full backup, then combining that full
backup and the incrementals from the next run to build the next synthetic
full backup, and so on). We already have COLLOCATE activated. Does anybody
have good ideas?
thanks,
werner



Re: synthetic fullbackup

2002-12-18 Thread Halvorsen Geirr Gulbrand
Hi Werner,
we might need some clarification of your setup.
What is your server version?
Are you backing up to tape, or disk?

Generally I can say this:
If you are running TSM v5.x you have the possibility to use MOVE NODEDATA,
which moves the data for one node to another storage pool (from tape to
disk); you can then start your restore from the disk pool. It may sound
strange, because you move the data twice, but often there is a delay between
the time you decide to restore and the time you actually start the restore
(e.g. in a disaster recovery situation, where you have to get new hardware
and install the OS and TSM client software before you can start the restore).
In this interval, you can start moving the data from tape to disk, and the
subsequent restore will be a lot faster.
The other possibility is to use collocation by filespace. Different
filespaces from the same server will be collocated on different tapes,
enabling you to start a restore for each filespace simultaneously. This
helps reduce restore times.
The third option is to use backupsets, which can be created for active files
only. Then you will have all active files on one volume.
Others may also have an opinion on the best approach to solve this. I have
just pointed out some of TSM's features.
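
For reference, the commands behind those three options would look roughly
like this (node, pool and device class names are placeholders only):

  /* 1. stage one node's data from tape to disk ahead of a large restore */
  move nodedata werner01 fromstgpool=tapepool tostgpool=diskpool
  /* 2. collocate the tape pool by filespace instead of by node */
  update stgpool tapepool collocate=filespace
  /* 3. put the node's active files into a backupset on its own volume(s) */
  generate backupset werner01 weeklyset devclass=lto_class retention=30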

Rgds.
Geirr Halvorsen
-Original Message-
From: Schwarz Werner [mailto:[EMAIL PROTECTED]]
Sent: 18. december 2002 14:08
To: [EMAIL PROTECTED]
Subject: synthetic fullbackup


We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time. Disk as a primary
STGPool is too expensive.
Now we are looking for methods to 'cluster together' all active backup
versions per node without backing up the whole TSM client every night (as
VERITAS NetBackup does). Ideally the full backup should be done in the TSM
server (starting with an initial full backup, then combining that full
backup and the incrementals from the next run to build the next synthetic
full backup, and so on). We already have COLLOCATE activated. Does anybody
have good ideas?
thanks,
werner



Re: synthetic fullbackup

2002-12-18 Thread Richard Sims
We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time...

Werner - As Geirr said, Backupsets are the optimal solution but, like all
 solutions, will require extra processing time.
Another approach is to exploit the often-unexploited MIGDelay value on
the primary tape storage pool.  Try to match that value to your dominant
Copy Group retention value, migrate older files to a next tape
storage pool, and let reclamation naturally bring newer files
closer together.  This should get inactive versions out of the way.
The cost is extra tape data movement.

There is no ideal solution.  Opportune full backups (weekend?) will
get you closest to what you want.
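
A sketch of the MIGDelay approach, with placeholder pool names and a 30-day
delay assumed purely for illustration:

  /* keep files in the primary tape pool until they are 30 days old */
  update stgpool tapepool migdelay=30 nextstgpool=oldtape
  /* lower the thresholds when you want the older files pushed down */
  update stgpool tapepool highmig=50 lowmig=20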

  Richard Sims, BU



AW: synthetic fullbackup

2002-12-18 Thread Schwarz Werner
Hi,
thanks a lot for your tips. With all the hints you mentioned we are heading
in the right direction. But I think there is still room for doing it better
(VERITAS, for example, has to de-duplex its tapes after the backup process
to allow an acceptably fast restore).
Our environment:
- TSM Server: 4.2.2.3 on z/OS 1.2 (we plan to migrate to TSM 5.1)
- STGPool(s): virtual tapes (Virtual Storage Manager from STK)
- we are backing up to tape (if all ACTIVE versions were on disk, the problem
of scattered versions might not be serious for restores (?))
Remark on MOVE NODEDATA: I think ALL versions are moved - we would need
quite a lot of storage if we maintain many versions.
thanks very much
werner


-Original Message-
From: Halvorsen Geirr Gulbrand [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 18 December 2002 14:44
To: [EMAIL PROTECTED]
Subject: Re: synthetic fullbackup

Hi Werner,
we might need some clarification of your setup.
What is your server version?
Are you backing up to tape, or disk?

Generally I can say this:
If you are running TSM v5.x you have the possibility to use MOVE NODEDATA,
which moves the data for one node to another storage pool (from tape to
disk); you can then start your restore from the disk pool. It may sound
strange, because you move the data twice, but often there is a delay between
the time you decide to restore and the time you actually start the restore
(e.g. in a disaster recovery situation, where you have to get new hardware
and install the OS and TSM client software before you can start the restore).
In this interval, you can start moving the data from tape to disk, and the
subsequent restore will be a lot faster.
The other possibility is to use collocation by filespace. Different
filespaces from the same server will be collocated on different tapes,
enabling you to start a restore for each filespace simultaneously. This
helps reduce restore times.
The third option is to use backupsets, which can be created for active files
only. Then you will have all active files on one volume.
Others may also have an opinion on the best approach to solve this. I have
just pointed out some of TSM's features.

Rgds.
Geirr Halvorsen
-Original Message-
From: Schwarz Werner [mailto:[EMAIL PROTECTED]]
Sent: 18. december 2002 14:08
To: [EMAIL PROTECTED]
Subject: synthetic fullbackup


We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time. Disk as a primary
STGPool is too expensive.
Now we are looking for methods to 'cluster together' all active backup
versions per node without backing up the whole TSM client every night (as
VERITAS NetBackup does). Ideally the full backup should be done in the TSM
server (starting with an initial full backup, then combining that full
backup and the incrementals from the next run to build the next synthetic
full backup, and so on). We already have COLLOCATE activated. Does anybody
have good ideas?
thanks,
werner



Re: synthetic fullbackup

2002-12-18 Thread David E Ehresman
Backupsets are the optimal solution

In TSM v5, I've replaced my backupsets with MOVE NODEDATA as my
preferred way of consolidating a node's backup data onto a minimum
number of tapes.
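
A minimal example of that use (the node and pool names are placeholders, not
David's): leaving out TOstgpool tells the server to move the node's data onto
other volumes within the same pool, which consolidates it onto fewer tapes.

  /* consolidate one node's data within its own (collocated) tape pool */
  move nodedata nodea fromstgpool=tapepool
  /* or move it into a separate collocated pool */
  move nodedata nodea fromstgpool=tapepool tostgpool=colltape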

David



AW: synthetic fullbackup

2002-12-18 Thread Schwarz Werner
Hi Richard
thanks for your hints.
We are still hoping for a solution that makes us happy. The backupset idea
is theoretically the closest one. But I have heard from people with
experience that (i) it's not so easy to restore a client with the newest
backup versions, and (ii) the time problem is just moved to the GENERATE
BACKUPSET process (collecting/consolidating all the newest versions).
Regards,
werner

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 18 December 2002 15:21
To: [EMAIL PROTECTED]
Subject: Re: synthetic fullbackup

We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time...

Werner - As Geirr said, Backupsets are the optimal solution but, like all
 solutions, will require extra processing time.
Another approach is to exploit the often-unexploited MIGDelay value on
the primary tape storage pool.  Try to match that value to your dominant
Copy Group retention value, migrate older files to a next tape
storage pool, and let reclamation naturally bring newer files
closer together.  This should get inactive versions out of the way.
The cost is extra tape data movement.

There is no ideal solution.  Opportune full backups (weekend?) will
get you closest to what you want.

  Richard Sims, BU



Re: AW: synthetic fullbackup

2002-12-18 Thread William Rosette
How about using the policy domain to schedule a full backup every so often?
We are thinking of doing this by changing the copy mode (in the backup copy
group) to Absolute, causing the backup to be a full; this should bring the
node data closer together for faster restores.
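
If anyone wants to try that, the policy change would be along these lines
(the domain, policy set and management class names are placeholders):

  /* make the backup copy group send every file, changed or not */
  update copygroup standard_dom standard_ps standard_mc standard type=backup mode=absolute
  /* the change only takes effect once the policy set is reactivated */
  activate policyset standard_dom standard_ps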

Thank You,
Bill Rosette
Data Center/IS/Papa Johns International
WWJD


   
 
Schwarz Werner <Werner.Schwarz@BEDAG.CH>
Sent by: ADSM: Dist Stor Manager <[EMAIL PROTECTED].EDU>
12/18/2002 10:09 AM
Please respond to ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Subject: AW: synthetic fullbackup

Hi Richard
thanks for your hints.
We are still hoping for a solution that makes us happy. The backupset idea
is theoretically the closest one. But I have heard from people with
experience that (i) it's not so easy to restore a client with the newest
backup versions, and (ii) the time problem is just moved to the GENERATE
BACKUPSET process (collecting/consolidating all the newest versions).
Regards,
werner

-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 18 December 2002 15:21
To: [EMAIL PROTECTED]
Subject: Re: synthetic fullbackup

We are looking for a solution for the following problem:
During a restore of a whole TSM client we found that the needed ACTIVE
backup versions were heavily scattered across our virtual tape volumes. This
was the main reason for an unacceptably long restore time...

Werner - As Geirr said, Backupsets are the optimal solution but, like all
 solutions, will require extra processing time.
Another approach is to exploit the often-unexploited MIGDelay value on
the primary tape storage pool.  Try to match that value to your dominant
Copy Group retention value, migrate older files to a next tape
storage pool, and let reclamation naturally bring newer files
closer together.  This should get inactive versions out of the way.
The cost is extra tape data movement.

There is no ideal solution.  Opportune full backups (weekend?) will
get you closest to what you want.

  Richard Sims, BU