DISASTER: How to do a LOT of restores?

2008-01-22 Thread Roger Deschner
We like to talk about disaster preparedness, and one just happened here
at UIC.

On Saturday morning, a fire damaged portions of the UIC College of
Pharmacy Building. It affected several laboratories and offices. The
Chicago Fire Department, wearing hazmat moon suits due to the highly
dangerous contents of the laboratories, put it out efficiently in about
15 minutes. The temperature was around 0F (-18C), which compounded the
problems - anything that took on water became a block of ice.
Fortunately nobody was hurt; only a few people were in the building on a
Saturday morning, and they all got out safely.

Now, both the good news and the bad news is that many of the damaged
computers were backed up to our large TSM system. The good news is that
their data can be restored.

The bad news is that their data can be restored. And so now it must be.

Our TSM system is currently an old-school tape-based setup from the ADSM
days. (Upgrades involving a lot more disk coming real soon!) Most of the
nodes affected are not collocated, so I have to plan to do a number of
full restores of nodes whose data is scattered across numerous tape
volumes each. There are only 8 tape drives, and they are kept busy since
this system is in a heavily-loaded, about-to-be-upgraded state. (Timing
couldn't be worse; Murphy's Law.)

TSM was recently upgraded to version 5.5.0.0. It runs on AIX 5.3 with a
SCSI library. Since it is a v5.5 server, there may be new facilities
available that I'm not aware of yet.

I have the luxury of a little bit of time in advance. The hazmat guys
aren't letting anyone in to assess damage yet, so we don't know which
client node computers are damaged or not. We should know in a day or
two, so in the meantime I'm running as much reclamation as possible.

Given that this is our situation, how can I best optimize these
restores? I'm looking for ideas to get the most restoration done for
this disaster, while still continuing normal client-backup, migration,
expiration, reclamation cycles, because somebody else unrelated to this
situation could also need to restore...

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]


Re: Upgrading TSM on an AIX server running multiple TSM instances

2008-01-22 Thread Loon, E.J. van - SPLXM
Hi Richard!
In fact it does. Thank you very much! 
Kindest regards,
Eric van Loon
KLM Royal Dutch Airlines

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: maandag 21 januari 2008 19:26
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Upgrading TSM on an AIX server running multiple TSM
instances

IBM Technote 1052631 is the best description.

Richard Sims


Re: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Richard Sims

Roger -

Topic "Restoral performance" in http://people.bu.edu/rbs/ADSM.QuickFacts
summarizes factors which will help.
In particular, minimize MOUNTRetention so that drives are ready for the
next mount asap.
If possible, see if your networking people can provide direct, high-speed
networking to that client, at least for this emergency.
Where fast recovery is needed, it may in some cases be realistic to start
the restoral alongside the server, as for example where you could have a
same-OS surrogate client with same-filesystem-type internal or external
drives where you could repopulate them via restoral and then give them
over to the disaster site to plug into their system as is.
An alternative is to start collocating via Move Data ahead of the pending
restoral effort, to help make that faster.  This would also deal with the
almost inevitable case of some long-unused tapes which are now difficult
to read, ahead of time.
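
A hedged sketch of those two server-side tweaks; the device class, volume,
and storage pool names here are invented and would need to match your
environment:

    UPDate DEVclass LTO_CLASS MOUNTRetention=0   /* free drives right after use */
    MOVe Data VOL1234 STGpool=COLLOC_POOL        /* pre-stage one scattered volume */

Note that MOVe Data operates on a single volume at a time.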

I'm sure others, who have had to deal with such a situation, will post
other ideas.

   Richard Sims, at Boston University (no fires yet)


Re: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Dominique Laflamme
I would use MOVE NODEDATA commands to move the data for the affected
nodes to a (?new?) collocated pool before they start trying to do their
restores. That lets you get a lot of the tape mounts and so forth out of
the way while the clients aren't ready yet to be restored. You can pace
how much this is done and when it is done by how many MOVE NODEDATAs you
have running and when you run them manually. It won't solve the fact
that you're running at capacity, but it will let you minimize restore
times for the victims of the fire. 
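
A minimal sketch of that approach, assuming a collocated target pool and
with placeholder names throughout:

    UPDate STGpool COLLOC_POOL COLlocate=NODe    /* make sure the target pool collocates */
    MOVe NODEData FIRE_NODE1 FROMstgpool=TAPE_POOL TOstgpool=COLLOC_POOL
    Query PRocess                                /* watch progress; pace as drives allow */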

Just a thought,
Nick



Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Nicholas Cassimatis
Roger,

If you know which nodes are to be restored, or at least have some that are
good suspects, you might want to run some move nodedata commands to try
to get their data more contiguous.  If you can get some of that DASD that's
coming real soon, even just to borrow it, that would help out
tremendously.

You say "tape" but never "library" - are you on manual drives?  (Please say
No, please say No...)  Try setting the mount retention high on them, and
kick off a few restores at once.  You may get lucky and already have the
needed tape mounted, saving you a few mounts.  If that's not working (it's
impossible to predict which way it will go), drop the mount retention to 0
so the tape ejects immediately, so the drive is ready for a new tape
sooner.  And if you are, try to recruit the people who haven't approved
spending for the upgrades to be the picker arm for you - I did that to an
account manager on a DR Test once, and we got the library approved the next
day.
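
For example (the device class name is invented), you can watch what is
mounted and flip the retention either way while you experiment:

    Query MOunt                                  /* see which volumes are mounted now */
    UPDate DEVclass LTO_CLASS MOUNTRetention=60  /* keep tapes mounted between restores */
    UPDate DEVclass LTO_CLASS MOUNTRetention=0   /* ...or eject immediately instead */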

The thoughts of your fellow TSMers are with you.

Nick Cassimatis


ANS1512E Scheduled event '1ST_NT_OFFSITE' failed. Return code = 12.

2008-01-22 Thread Hari, Krishnaprasath
Guys,
Any idea about RC=12 and why the backup shows as failed due to this?

01/21/2008 18:51:46 --- SCHEDULEREC STATUS BEGIN
01/21/2008 18:51:46 Total number of objects inspected:    28,844
01/21/2008 18:51:46 Total number of objects backed up:       150
01/21/2008 18:51:46 Total number of objects updated:           0
01/21/2008 18:51:46 Total number of objects rebound:           0
01/21/2008 18:51:46 Total number of objects deleted:           0
01/21/2008 18:51:46 Total number of objects expired:          30
01/21/2008 18:51:46 Total number of objects failed:           13
01/21/2008 18:51:46 Total number of subfile objects:           0
01/21/2008 18:51:46 Total number of bytes transferred:    176.91 MB
01/21/2008 18:51:46 Data transfer time:                     8.88 sec
01/21/2008 18:51:46 Network data transfer rate:        20,382.13 KB/sec
01/21/2008 18:51:46 Aggregate data transfer rate:       1,514.54 KB/sec
01/21/2008 18:51:46 Objects compressed by:                     0%
01/21/2008 18:51:46 Subfile objects reduced by:                0%
01/21/2008 18:51:46 Elapsed processing time:            00:01:59
01/21/2008 18:51:46 --- SCHEDULEREC STATUS END
01/21/2008 18:51:46 --- SCHEDULEREC OBJECT END 1ST_NT_OFFSITE 01/21/2008 18:00:00
01/21/2008 18:51:46 ANS1512E Scheduled event '1ST_NT_OFFSITE' failed.  Return code = 12.
01/21/2008 18:51:46 Sending results for scheduled event '1ST_NT_OFFSITE'.


Re: 2nd TSM Instance Question

2008-01-22 Thread Keith Arbogast

Could I ask a variation of this question?

What about the case where server 'S' at the local (to me) data center
is the source server, and server 'T' at a distant data center is the
target server for server-to-server backups from the local data
center; while server 'T' at the distant data center is the source
server, and server 'S' at the local data center is the target server
for server-to-server backups from the distant data center?  There
will be a TS3500 (3584) ATL with ALMS at each data center.

In this case, would the management be easier or harder with two TSM
server instances on both physical servers?  One for the source
functions and the other for the target functions.

With my thanks,
Keith Arbogast
Indiana University


Re: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Bill Dourado
Roger,

Create backupsets of the nodes that need restoring, maybe?

Bill
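
That would look something like this (node, set, and device class names are
placeholders; note that generating a backupset also needs a free drive to
write to):

    GENerate BACKUPSET FIRE_NODE1 FIRESET DEVclass=LTO_CLASS SCRatch=Yes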









SV: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Daniel Sparrman
Hi Roger

If you have enough disk, or can get hold of extra disk space relatively fast,
you can also choose to start moving data for the nodes you suspect may be in
need of restore to disk (MOVE NODEDATA). This would speed up restores a lot,
and wouldn't put you in a situation where you don't have enough tape drives
for the restores.

If you don't have enough disk space but can get extra disk temporarily, I would
start figuring out in which order the servers might be needed, so that you can
move the server(s) (depending on the amount of disk space you have) up to disk
so that those servers are restored quickly.

Creating backupsets will only work if you have spare tape drives, and as I
understood it, the system is already overbooked when it comes to resources.

Best Regards

Daniel Sparrman



SV: 2nd TSM Instance Question

2008-01-22 Thread Daniel Sparrman
Hi

I would recommend using only 2 active instances, one on each machine. For 
example, you have server 'S' at main site and server 'T' at off-site. 'S' backs 
up to 'T' and vice versa.

I would also recommend (to speed up recovery in the event of losing a TSM
server) that you prepare server 'S' to house server 'T' and vice versa. This
can easily be done by placing server 'T's configuration files in a separate
folder on server 'S' and the other way around. You should also make sure that
you have some sort of routine for copying the files between the two servers, so
that the files are always up to date.

A setup could look like this:

On each machine, create a separate folder for each server instance

/itsm/serverS - Holds server 'S's configuration files. Server 'S' is
configured to use ports 1500 and 1580.

/itsm/serverT - Holds server 'T's configuration files. Server 'T' is
configured to use ports 1600 and 1680.

This way, you can get the lost instance up and running again quickly on the
other server, without interrupting operation of the home instance.
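
For illustration, the two option files might then differ only in their ports.
This is a fragment, not a complete dsmserv.opt, and it assumes the second
port on each instance is the administrative port:

    /itsm/serverS/dsmserv.opt:
        TCPPort       1500
        TCPADMINPort  1580

    /itsm/serverT/dsmserv.opt:
        TCPPort       1600
        TCPADMINPort  1680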

Best Regards

Daniel Sparrman



Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Roger Deschner
MOVE NODEDATA looks like it is going to be the key. I will simply move
the affected nodes into a disk storage pool, or into our existing
collocated tape storage pool. I presume it should be possible to restart
MOVE NODEDATA, in case it has to be interrupted or if the server
crashes, because what it does is not very different from migration or
reclamation. This should be a big advantage over GENERATE BACKUPSET,
which is not even as restartable as a common client restore. A possible
strategy is to do the long, laborious, but restartable, MOVE NODEDATA
first, and then do a very quick, painless, regular client restore or
GENERATE BACKUPSET.

Thanks to all! Until now, I was not fully aware of MOVE NODEDATA.

B.T.W. It is an automatic tape library, Quantum P7000. We graduated from
manual tape mounting back in 1999.

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]



Re: ANS1512E Scheduled event '1ST_NT_OFFSITE' failed. Return code = 12.

2008-01-22 Thread Alexander Verkooijen
According to the client manual return code 12 means:

The operation completed with at least one error message (except for
error messages for skipped files). For scheduled events, the status will
be Failed. Review the dsmerror.log file (and dsmsched.log file for
scheduled events) to determine what error messages were issued and to
assess their impact on the operation. As a general rule, this return
code means that the error was severe enough to prevent the successful
completion of the operation. For example, an error that prevents an
entire file system from being processed yields return code 12. When a
file is not found the operation yields return code 12.
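
On the server side, one way to see the failure before drilling down is a
query like the following (the schedule name and date match the log above;
the per-file reasons still live in the client's dsmerror.log):

    Query EVent * 1ST_NT_OFFSITE BEGINDate=-1 Format=Detailed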

Alexander



Re: SV: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Maria Ilieva
You can create active data pools for all your backed up data.  Maybe
this will be a faster method than move data or collocating it.

Maria Ilieva


Re: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Ben Bullock
I agree with Nick. If you can find some disk space (even NFS mounted)
that you can put on the TSM server, you can do a move nodedata of the
nodes you will be restoring to this diskpool and get it all queued up in
advance so the restores will happen quicker.

Good luck.
 



Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Andrew Carlson
Do you have any spare disk storage at all?  If you do, you could start
staging some of the more important restores to disk using move nodedata.


--
Andy Carlson

Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread James R Owen

Roger,
You certainly want to get a best-guess list of likely priority #1 restores.
If your tapes really are mostly uncollocated, you will probably experience
lots of tape volume contention when you attempt to use MAXPRocess > 1 or to
run multiple simultaneous restore, move nodedata, or export node operations.

Use Query NODEData to see how many tapes might have to be read for each node
to be restored.

To minimize tape mounts, if you can wait for this operation to complete, I
believe you should try to move or export all of the nodes' data in a single
operation.

Here are possible disadvantages with using MOVe NODEData:
 - it does not enable you to select to move only the Active backups for these
   nodes [so you might have to move lots of extra inactive backups]
 - you probably can not effectively use MAXPROCess=N (>1) nor run multiple
   simultaneous MOVe NODEData commands, because of contention for your
   uncollocated volumes.

If you have or can set up another TSM server, you could do a Server-Server
EXPort:
    EXPort Node node1,node2,... FILEData=BACKUPActive TOServer=... [Preview=Yes]
moving only the nodes' active backups to a diskpool on the other TSM server.
Using this technique, you can move only the minimal necessary data.  I don't
see any way to multithread or run multiple simultaneous commands to read more
than one tape at a time, but given your drive constraints and uncollocated
volumes, you will probably discover that you can not effectively restore,
move, or export from more than one tape at a time, no matter which technique
you try.  Your Query NODEData output should show you which nodes, if any, do
*not* have backups on the same tapes.

Try running a preview EXPort Node command for single or multiple nodes to get
some idea of what tapes will be mounted and how much data you will need to
export.

Call me if you want to talk about any of this.
--
[EMAIL PROTECTED]   (w#203.432.6693, Verizon c#203.494.9201)
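
Concretely, the check and the preview described above might look like this
(node and server names are placeholders):

    Query NODEData FIRE_NODE1                    /* how many tapes hold this node's data? */
    EXPort Node FIRE_NODE1,FIRE_NODE2 FILEData=BACKUPActive TOServer=SPARE_SRV Preview=Yes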


Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Whitlock, Brett
Good Luck, Roger! 



Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Curtis Preston
James,

I like this idea a lot.  The disadvantage, of course, is that it
requires a separate server.  Is there a way to use this same (or
similar) idea to move just the active files into an active data pool (as
suggested by Maria), given that he's running 5.5 and has access to that
feature?

---
W. Curtis Preston
Backup Blog @ www.backupcentral.com
VP Data Protection, GlassHouse Technologies 


Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Maria Ilieva
The procedure for creating active-data pools (assuming you have TSM
version 5.4 or later) is the following; a command sketch follows the list:
1. Create a FILE-type disk pool or a sequential tape pool, specifying
POOLTYPE=ACTIVEDATA.
2. Update the node's domain(s), specifying ACTIVEDESTINATION=the created
active-data pool.
3. Issue COPY ACTIVEDATA primary_pool_name active_data_pool_name.
This process incrementally copies the nodes' active data, so it can be
restarted if needed. HSM-migrated and archived data is not copied into
the active-data pool!
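
A sketch of those three steps as commands (pool and domain names are
invented; the FILE device class is assumed to already exist):

    DEFine STGpool ACTIVE_POOL FILECLASS POoltype=ACTIVEdata MAXSCRatch=50
    UPDate DOmain STANDARD ACTIVEDESTination=ACTIVE_POOL
    COPy ACTIVEdata TAPE_POOL ACTIVE_POOL MAXPROCess=1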

Maria Ilieva

 ---
 W. Curtis Preston
 Backup Blog @ www.backupcentral.com
 VP Data Protection, GlassHouse Technologies

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
 James R Owen
 Sent: Tuesday, January 22, 2008 9:32 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] Fw: DISASTER: How to do a LOT of restores?


 Roger,
 You certainly want to get a best guess list of likely priority#1
 restores.
 If your tapes really are mostly uncollocated, you will probably
 experience lots of
 tape volume contention when you attempt to use MAXPRocess  1 or to run
 multiple
 simultaneous restore, move nodedata, or export node operations.

 Use Query NODEData to see how many tapes might have to be read for each
 node to be
 restored.

 To minimize tape mounts, if you can wait for this operation to complete,
 I believe
 you should try to move or export all of the nodes' data in a single
 operation.

 Here are possible disadvantages with using MOVe NODEData:
   - does not enable you to select to move only the Active backups for
 these nodes
 [so you might have to move lots of extra inactive backups]
   - you probably can not effectively use MAXPROC=N (1 nor run multiple
 simultaneous
 MOVe NODEData commands because of contention for your
 uncollocated volumes.

 If you have or can set up another TSM server, you could do a
 Server-Server EXPort:
 EXPort Node node1,node2,... FILEData=BACKUPActive TOServer=...
 [Preview=Yes]
 moving only the nodes' active backups to a diskpool on the other TSM
 server.  Using
 this technique, you can move only the minimal necessary data.  I don't
 see any way
 to multithread or run multiple simultaneous commands to read more than
 one tape at
 a time, but given your drive constraints and uncollocated volumes, you
 will probably
 discover that you can not effectively restore, move, or export from more
 than one tape
 at a time, no matter which technique you try.  Your Query NODEData
 output should show
 you which nodes, if any, do *not* have backups on the same tapes.

 Try running a preview EXPort Node command for single or multiple nodes
 to get some
 idea of what tapes will be mounted and how much data you will need to
 export.
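
 For instance, with placeholder node, pool, and server names:

    query nodedata node1 stgpool=tapepool
    export node node1,node2 filedata=backupactive toserver=drserver preview=yes

 The first shows which volumes hold each node's data; the preview export
 reports what would be mounted and moved without transferring anything.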

 Call me if you want to talk about any of this.
 --
 [EMAIL PROTECTED]   (w#203.432.6693, Verizon c#203.494.9201)

 Roger Deschner wrote:
  MOVE NODEDATA looks like it is going to be the key. I will simply move
  the affected nodes into a disk storage pool, or into our existing
  collocated tape storage pool. I presume it should be possible to
 restart
  MOVE NODEDATA, in case it has to be interrupted or if the server
  crashes, because what it does is not very different from migration or
  reclamation. This should be a big advantage over GENERATE BACKUPSET,
  which is not even as restartable as a common client restore. A
 possible
  strategy is to do the long, laborious, but restartable, MOVE NODEDATA
  first, and then do a very quick, painless, regular client restore or
  GENERATE BACKUPSET.
 
  Thanks to all! Until now, I was not fully aware of MOVE NODEDATA.
 
  B.T.W. It is an automatic tape library, Quantum P7000. We graduated
 from
  manual tape mounting back in 1999.
 
  Roger Deschner  University of Illinois at Chicago
 [EMAIL PROTECTED]
 
 

Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Curtis Preston
Are files that are no longer active automatically expired from the
activedata pool when you perform the latest COPY ACTIVEDATA?  This would
mean that, at some point, you would need to do reclamation on this pool,
right?

It would seem to me that this would be a much better answer to the OP's
question.  Instead of doing a MOVE NODEDATA (which requires moving ALL of
the node's files), or doing an EXPORT NODE (which requires a separate
server), he can just create an ACTIVEDATA pool, then perform a COPY
ACTIVEDATA into it while he's preparing for the restore.  Putting said
pool on disk would be even better, of course.

I was just discussing this with another one of our TSM experts, and he's
not as bullish on it as I am.  (It was an off-list convo, so I'll let
him go nameless unless he wants to speak up.)  He doesn't like that you
can't use a DISK type device class (disk has to be listed as FILE type).

He also has issues with the resources needed to create this 3rd copy
of the data.  He said, "Most customers have trouble getting backups
complete and creating their offsite copies in a 24 hour period and would
not be able to complete a third copy of the data.  Add to that the
possibility of doing reclamation on this pool and you've got even more
work to do."

He's more of a fan of group collocation and the multisession restore
feature.  I think this has more value if you're restoring fewer clients
than you have tape drives.  Because if you collocate all your active
files, then you'll only be using one tape drive per client.  If you've
got 40 clients to restore and 20 tape drives, I don't see this slowing
you down.  But if you've got one client to restore, and 20 tape drives,
then the multisession restore would probably be faster than a collocated
restore.

I still think it's a strong feature whose value should be investigated
and discussed -- even if you only use it for the purpose we're
discussing here.  If you know you're in a DR scenario and you're going
to be restoring multiple systems, why wouldn't you create an
ACTIVEDATA pool and do a COPY ACTIVEDATA instead of a MOVE NODEDATA?

OK, here's another question.  Is it assumed that the ACTIVEDATA pool
has node-level collocation on?  Can you use group collocation instead?
Then maybe my friend and I could both get what we want?

Just throwing thoughts out there.

---
W. Curtis Preston
Backup Blog @ www.backupcentral.com
VP Data Protection, GlassHouse Technologies 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Maria Ilieva
Sent: Tuesday, January 22, 2008 10:22 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Fw: DISASTER: How to do a LOT of restores?

The procedure for creating active-data pools (assuming you have TSM
version 5.4 or later) is the following:
1. Create a FILE type disk pool or a sequential TAPE pool, specifying
pooltype=ACTIVEDATA
2. Update the node's domain(s), specifying ACTIVEDESTINATION=<the created
active-data pool>
3. Issue COPY ACTIVEDATA node_name
This process incrementally copies the node's active data, so it can be
restarted if needed. HSM-migrated and archived data is not copied into
the active-data pool!

Maria Ilieva


TSM admin password expired

2008-01-22 Thread Haberstroh, Debbie (IT)
Hi all,

I have a server that was recently updated to 5.4 from an old TSM server.  One 
of the admin accounts, called OPERATOR, had a password expire over the weekend 
so this morning the administrator updated the password.  Now we are getting the 
following messages:

1/22/08 2:00:00 PM CST ANR0407I Session 17320 started for administrator 
OPERATOR (AIX) (Tcp/Ip aixdb(64763)). (SESSION: 17320) 
1/22/08 2:00:00 PM CST ANR2177I OPERATOR has 1 invalid sign-on attempts. The 
limit is 3. (SESSION: 17320) 
1/22/08 2:00:00 PM CST ANR0418W Session 17320 for administrator OPERATOR (AIX) 
is refused because an incorrect password was submitted. (SESSION: 17320) 

How do we find out what jobs are trying to run with this account so we can 
reset it?  This would have been set up a long time ago by an administrator 
who is no longer here.  Any help would be appreciated, thanks.

Debbie Haberstroh
Server Administration
Northrop Grumman Information Technology 
Commercial, State & Local (CSL)


Re: TSM admin password expired

2008-01-22 Thread Richard Sims

The precise, on-the-hour timestamp suggests a cron job.

   Richard Sims


Re: TSM admin password expired

2008-01-22 Thread Schneider, John
Well, the activity log won't tell you exactly where the commands are
coming from, just the account.  But my guess would be, since this is an
AIX TSM server, that somebody has scripts running from cron that use the
Operator account and have the password hardcoded in.

I would get somebody with root privileges on the server to help you dig
through the crontab entries (it may run from somewhere other than root's
crontab) and then grep through scripts with likely-sounding names,
looking for "dsmadmc" or "operator".
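
For example, something along these lines on the AIX box (the script
directories are guesses - adjust to wherever your scripts actually live):

   # which user's crontab mentions the admin CLI?
   grep -il dsmadmc /var/spool/cron/crontabs/*
   # then search likely script locations for the account name
   grep -il operator /usr/local/bin/* /usr/local/scripts/* 2>/dev/null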

Best of luck to you.  Just a tip, but for these sorts of scripts we set
up a special privileged account whose password doesn't expire, and whose
password almost nobody knows.

Best Regards,

John D. Schneider
Lead Systems Administrator - Storage
Sisters of Mercy Health Systems
Email:  [EMAIL PROTECTED]




Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Henrik Vahlstedt
Hi,

So far so good: move data, backup sets, active-data copy pools. You've got
plenty to work with. I just wanted to add an example restore command:

dsmc restore e:\?* e:\ -subdir=y   (or equivalent)

In tests I did with 600k files, I reduced restore times from 4h17min to
52min. Processing time without the '?' in the restore syntax took forever,
if 3 hours counts as forever.


//Henrik


---
The information contained in this message may be CONFIDENTIAL and is
intended for the addressee only. Any unauthorised use, dissemination of the
information or copying of this message is prohibited. If you are not the
addressee, please notify the sender immediately by return e-mail and delete
this message.
Thank you.


Re: TSM admin password expired

2008-01-22 Thread Ben Bullock
You have received a secure message from Ben Bullock entitled "RE: [ADSM-L] 
TSM admin password expired".

You may view the message (before 07/30/2008) at the following web address:
  https://mail.bcidaho.com:443/messenger/msg?x=d-111406-cxGLqnLw1FY0


-
Blue Cross of Idaho, 3000 E. Pine Ave., Meridian, ID 83642.  1-208-331-7421


Re: TSM admin password expired

2008-01-22 Thread Haberstroh, Debbie (IT)
Thanks, the AIX administrator found scripts running in cron.  



Another archive/expire query

2008-01-22 Thread Angus Macdonald
I am busy helping an external supplier understand his own Tivoli system, which 
has become full.

We plan to archive certain drive letters attached to a file server which 
contain data that will never change. The archive process is fine but what is 
the best way to remove references to the archived data from the main TSM 
database? Should I run a 

dsmc expire j:\*.* 

at the client? Will that expire the data immediately? How would subdirectories 
be handled? There appears to be no -subdir=yes option for the expire command. 
Alternatively, would

dsmc delete backup j:\* -deltype=all

get rid of the file references from the TSM database? I take it the deletion 
would get rid of the entries immediately.

Sorry for the probably newbie queries. I haven't done any archiving before and 
it isn't my data ;-)
Angus

Gallai'r e-bost yma gynnwys gwybodaeth gyfrinachol a/neu ddeunydd hawlfraint.  
Os ydych chi'n meddwl eich bod wedi derbyn yr e-bost yma drwy gamgymeriad rydym 
yn ymddiheuro am hyn; peidiwch os gwelwch yn dda â datgelu, anfon ymlaen, 
printio, copïo na dosbarthu gwybodaeth yn yr e-bost yma na gweithredu mewn 
unrhyw fodd drwy ddibynnu ar ei gynnwys: gwaherddir gwneud hynny'n gyfan gwbl a 
gallai fod yn anghyfreithlon. Rhowch wybod i'r anfonwr fod y neges yma wedi mynd 
ar goll cyn ei dileu.

Mae unrhyw safbwynt neu farn a gyflwynir yn eiddo i'r awdur ac nid ydynt o 
anghenraid yn cynrychioli safbwynt neu farn Ymddiriedolaeth GIG Gogledd 
Orllewin Cymru.

Gallai cynnwys yr e-bost yma gael ei ddatgelu i'r cyhoedd o dan Ddeddf Rhyddid 
Gwybodaeth 2000.  Nid oes modd gwarantu cyfrinachedd y neges ac unrhyw ateb.

Bydd y neges yma ac unrhyw ffeiliau cysylltiedig wedi cael eu gwirio gan 
feddalwedd canfod firws cyn eu trosglwyddo.  Ond rhaid i'r sawl sy'n derbyn 
wirio rhag firws ei hun cyn agor unrhyw ymgysylltiad.  Nid yw'r Ymddiriedolaeth 
yn derbyn unrhyw gyfrifoldeb am unrhyw golled neu niwed a allai gael ei achosi 
gan firws meddalwedd.


This e-mail may contain confidential information and/or copyright material.  If 
you believe that you have received this e-mail in error please accept our 
apologies; please do not disclose, forward, print, copy or distribute 
information in this e-mail or take any action in reliance on its contents: to 
do so is strictly prohibited and may be unlawful.  Please inform the sender 
that this message has gone astray before deleting it.

Any views or opinions presented are to be understood as those of the author and 
do not necessarily represent those of the North West Wales NHS Trust.

The contents of this e-mail may be subject to public disclosure under the 
Freedom of Information Act 2000. The confidentiality of the message and any 
reply cannot be guaranteed.

This message and any attached files will have been checked with virus detection 
software before transmission.  However, recipients must carry out their own 
virus checks before opening any attachment.  The Trust accepts no liability for 
any loss or damage, which may be caused by software viruses.


Re: Another archive/expire query

2008-01-22 Thread Steven Harris
Angus,

Congrats to your organization for the longest disclaimer in the history of
the internet :-)

If it's a whole drive,

DEL FILESPACE node_name filespace_name TYPE=ARCHIVE

on the server might be the way to go.  As it's Windows, you may have to use
the FSID number and nametype=fsid to indicate the drive you want.
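
For example, with a placeholder node name (look up the FSID first, then
delete only the archive objects):

   query filespace nodename format=detailed
   delete filespace nodename 2 nametype=fsid type=archive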

Regards

Steve

Steven Harris
TSM Admin, Sydney Australia




Re: Another archive/expire query

2008-01-22 Thread Angus Macdonald
Thanks. You should have seen my efforts to get it cut down from the original 
version ;-)

I got the impression from the BAClient redbook that delete filespace would 
remove backup AND archived copies. Is that incorrect?



Re: Another archive/expire query

2008-01-22 Thread Steven Harris
That's what the TYPE=ARCHIVE is for.  The default is to delete all types.





Re: Another archive/expire query

2008-01-22 Thread Richard Sims

Use 'dsmc Delete ARchive' to remove previously archived files.

The client manual explains how to manage archived files.
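
For example, from the client (a sketch; the node needs archive delete
authority, i.e. ARCHDELETE=YES on its node definition):

   dsmc delete archive j:\* -subdir=yes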

  Richard Sims


Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Nicholas Cassimatis
For this scenario, the problem with Active Storagepools is it's a
pool-to-pool relationship.  So ALL active data in a storagepool would be
copied to the Active Pool.  Not knowing what percentage of the nodes on the
TSM Server will be restored, but assuming they're all in one storage pool,
you'd probably want to "move nodedata" them to another pool, then do the
"copy activedata".  Two steps, and it needs more resources.  Just doing
"move nodedata" within the same pool will semi-collocate the data (see Note
below).  Obviously, a DASD pool, for this circumstance, would be best, if
it's available, but even cycling the data within the existing pool will
have benefits.

Note:  Semi-collocated, as each process will make all of the named node's
data contiguous, even if it ends up on the same media with another node's
data.  Turning on collocation before starting the jobs, and marking all
filling volumes read-only, will give you separate volumes for each node,
but requires a decent scratch pool to try.
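
A sketch of that sequence, with placeholder pool and node names:

   update stgpool tapepool collocate=node
   update volume * access=readonly wherestgpool=tapepool wherestatus=filling
   move nodedata node1 fromstgpool=tapepool

With no TOSTGPOOL, MOVE NODEDATA rewrites the data within the same pool,
onto fresh scratch volumes now that the filling ones are read-only.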

Nick Cassimatis


Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Roger Deschner
Thanks Curtis, and everyone else too. It's great to have this much
assembled expertise out there.

Preliminary indications from a quick look at the building are that we're
going to be dealing with a count of nodes in the dozens, not hundreds.

The next step is for the IT guy of the affected department (thank
goodness not me!) to put on a moon suit and go into the burned area with
a clipboard to take inventory, escorted by the Chicago Fire Department's
hazmat unit. That's supposed to happen sometime today. He'll be
gathering a preliminary list of obviously damaged machines, and then
we'll prioritize them.

My next step then will be to calculate the ratio of active/inactive data
for them, which will help to determine if there will be much of a
benefit to EXPORT NODE ACTIVEDATA or COPY ACTIVEDATA, as opposed to MOVE
NODEDATA. If I can get a quick read as to the active/inactive data
ratio, by comparing the output of Q FILESPACE to Q OCC, then these
decisions can be made intelligently. (In the future, I may keep a spare
server image around just for DR.)
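
A quick way to eyeball that ratio, with a placeholder node name (Q OCC
shows everything stored - active plus inactive - while Q FILESPACE shows
the capacity and utilization the client last reported):

   query occupancy nodeX
   query filespace nodeX format=detailed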

The less inactive data there is, the better MOVE NODEDATA sounds. My
strategy would be to move the data into an existing collocated
hierarchy, into its primary disk stgpool. Then I'd let normal migration
move it once per day as the MOVE NODEDATAs progressed, onto truly
collocated tapes. Once there, normal client restore or GENERATE
BACKUPSET operations will go quickly.
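
In command terms, that plan is roughly (placeholder pool names):

   move nodedata nodeX fromstgpool=oldtapepool tostgpool=diskpool

after which normal migration pushes the data onto the collocated tapes.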

A distinct disadvantage to the EXPORT strategy is that we're already in
a DR situation, so setting up a new server would take some time.

I have already reduced the mount retention time to 0.

I also may have more NFS space available than I had first thought. A
hundred gigs here and there adds up to a lot of space. Since a type
FILE stgpool can span multiple Unix filesystems as of TSM 5.3, I can
probably cobble together significant space if I need it.

Roger Deschner   ACCC Basic Systems Group [EMAIL PROTECTED]




Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Curtis Preston
AhHAH!  So this would only really work if he has a storage pool with
clients that should be copied in this manner.  That makes sense.

What about the expiration of inactive files the next time you do a copy
activedata?  It doesn't say in the manual that this is what it does, but
you would think it does it that way.  Am I right?

---
W. Curtis Preston
Backup Blog @ www.backupcentral.com
VP Data Protection, GlassHouse Technologies 


Re: Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Steven Harris
Nick

I may well have a flawed understanding here, but:

1. Set up an active-data pool
2. Clone the domain containing the nodes requiring recovery
3. Set the ACTIVEDATAPOOL parameter on the cloned domain
4. Move the nodes requiring recovery to the new domain
5. Run COPY ACTIVEDATA on the primary tape pool

Since only the nodes we want are in the domain with the ACTIVEDATAPOOL
parameter specified, won't only the data from those nodes be copied?

Regards

Steve

Steven Harris
TSM Admin, Sydney Australia


Re: Fw: DISASTER: How to do a LOT of restores? [like Steve H said, but...]

2008-01-22 Thread James R Owen

DR strategy using an ACTIVEdata STGpool is like Steve H said, but
with minor additions and a major (but temporary) caveat:

COPY ACTIVEdata is not quite ready for this DR strategy yet:

See APAR PK59507:  COPy ACTIVEdata performance can be significantly degraded
(until TSM 5.4.3/5.5.1) unless *all* nodes are enabled for the ACTIVEdata 
STGpool.

http://www-1.ibm.com/support/docview.wss?rs=663&context=SSGSG7&dc=DB550&uid=swg1PK59507&loc=en_US&cs=UTF-8&lang=en&rss=ct663tivoli

Here's a slightly improved description of how it should work:

DEFine STGpool actvpool ... POoltype=ACTIVEdata -
COLlocate=[No/GRoup/NODe/FIlespace] ...
COPy DOmain old... new...
UPDate DOmain new... ACTIVEDESTination=actvpool
ACTivate POlicy new... somePolicy
Query SCHedule old... * NOde=node1,...,nodeN   [note old... sched.assoc's]

UPDate NOde nodeX DOmain=new...   [for each node[1-N]]
DEFine ASSOCiation new... [someSched] nodeX [as previously associated]
COpy ACTIVEdata oldstgpool actvpool [for each oldstgpool w/active backups]

[If no other DOmain except new... has ACTIVEDESTination=actvpool,
the COpy ACTIVEdata command(s) will copy the Active backups from specified
nodes node[1-N] into the ACTIVEdata STGpool actvpool to expedite DR for...]

[But, not recommended until TSM 5.4.3/5.5.1 fixes APAR PK59507!]
--
[EMAIL PROTECTED]   (203.432.6693)


Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Nicholas Cassimatis
Obviously, being in Chicago this week has frozen my brain (or maybe I'm
downwind from UIC...).  Yes, you're correct - it is Domain and Storagepool
combined.

Nick Cassimatis


Fw: DISASTER: How to do a LOT of restores?

2008-01-22 Thread Nicholas Cassimatis
Curtis,

Suffering Frozen Brain Syndrome - it's Domain and Storagepool, combined,
that make data eligible to be put in the Activedata pool.

Nick Cassimatis

- Forwarded by Nicholas Cassimatis/Raleigh/IBM on 01/22/2008 09:45 PM
-

ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 01/22/2008
08:21:12 PM:

 AhHAH!  So this would only really work if he has a storage pool with
 clients that should be copied in this manner.  That makes sense.

 What about the expiration of inactive files the next time you do a copy
 activedata?  It doesn't say in the manual that this is what it does, but
 you would think it does it that way.  Am I right?

 ---
 W. Curtis Preston
 Backup Blog @ www.backupcentral.com
 VP Data Protection, GlassHouse Technologies

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
 Nicholas Cassimatis
 Sent: Tuesday, January 22, 2008 4:38 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] Fw: DISASTER: How to do a LOT of restores?

 For this scenario, the problem with Active Storagepools is it's a
 pool-to-pool relationship.  So ALL active data in a storagepool would be
 copied to the Active Pool.  Not knowing what percentage of the nodes on
 the
 TSM Server will be restored, but assuming they're all in one storage
 pool,
 you'd probably want to move nodedata them to another pool, then do the
 copy activedata.  Two steps, and needs more resources.  Just doing
 move
 nodedata within the same pool will semi-collocate the data (See Note
 below).  Obviously, a DASD pool, for this circumstance, would be best,
 if
 it's available, but even cycling the data within the existing pool will
 have benefits.

 Note:  Semi-collocated, as each process will make all of the named nodes
 data contiguous, even if it ends up on the same media with another nodes
 data.  Turning on collocation before starting the jobs, and marking all
 filling volumes read-only, will give you separate volumes for each node,
 but requires a decent scratch pool to try.

 Nick Cassimatis

 - Forwarded by Nicholas Cassimatis/Raleigh/IBM on 01/22/2008 07:25
 PM
 -

 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 01/22/2008
 01:58:11 PM:

  Are files that are no longer active automatically expired from the
  activedata pool when you perform the latest COPY ACTIVEDATA?  This
 would
  mean that, at some point, you would need to do reclamation on this
 pool,
  right?
 
  It would seem to me that this would be a much better answer to TOP's
  question.  Instead of doing a MOVE NODE (which requires moving ALL of
  the node's files), or doing an EXPORT NODE (which requires a separate
  server), he can just create an ACTIVEDATA pool, then perform a COPY
  ACTIVEDATA into it while he's preparing for the restore.  Putting said
  pool on disk would be even better, of course.
 
  I was just discussing this with another one of our TSM experts, and
 he's
  not as bullish on it as I am.  (It was an off-list convo, so I'll let
  him go nameless unless he wants to speak up.)  He doesn't like that
 you
  can't use a DISK type device class (disk has to be listed as FILE
 type).
 
  He also has issues with the resources needed to create this 3rd copy
  of the data.  He said, "Most customers have trouble getting backups
  complete and creating their offsite copies in a 24 hour period and would
  not be able to complete a third copy of the data.  Add to that the
  possibility of doing reclamation on this pool and you've got even more
  work to do."
 
  He's more of a fan of group collocation and the multisession restore
  feature.  I think this has more value if you're restoring fewer clients
  than you have tape drives.  Because if you collocate all your active
  files, then you'll only be using one tape drive per client.  If you've
  got 40 clients to restore and 20 tape drives, I don't see this slowing
  you down.  But if you've got one client to restore, and 20 tape drives,
  then the multisession restore would probably be faster than a
  collocated restore.
 
  I still think it's a strong feature whose value should be investigated
  and discussed -- even if you only use it for the purpose we're
  discussing here.  If you know you're in a DR scenario and you're going
  to be restoring multiple systems, why wouldn't you create an
  ACTIVEDATA pool and do a COPY ACTIVEDATA instead of a MOVE NODEDATA?
 
  OK, here's another question.  Is it assumed that the ACTIVEDATA pool
  has node-level collocation on?  Can you use group collocation instead?
  Then maybe my friend and I could both get what we want?
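
  For what it's worth, the DEFine STGpool syntax for ACTIVEdata pools
  accepts the same COLlocate values as other sequential pools, so group
  collocation looks possible.  A hypothetical sketch (the group and node
  names are invented):

    update stgpool actvpool collocate=group
    define collocgroup restoregrp
    define collocmember restoregrp pharm_node1,pharm_node2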
 
  Just throwing thoughts out there.
 
  ---
  W. Curtis Preston
  Backup Blog @ www.backupcentral.com
  VP Data Protection, GlassHouse Technologies
 

Re: Status report on the forum - mailing list link

2008-01-22 Thread Allen S. Rout
 On Fri, 18 Jan 2008 14:14:46 -0500, Curtis Preston <[EMAIL PROTECTED]> said:


 It's now been just over a year since I created the link between the
 Backup Central forums and this mailing list.  When I first started it,
 people asked me to keep a close eye on it and then see how things went
 after a year.  It's been a year, and here's what I have to report.

[...]


I was an especially vociferous critic of the link, so I figured I'd
chime in.  While I have had more than a few "Gee, that was a dumb
question; look for source; oh yeah, from -there-" moments, I have to
agree that the wave of drek I foretold absolutely did not happen.  So
I'll sit the heck back down on this topic and mutter to myself. :)

Thanks for your efforts to keep the message stream clean.  Looks like
it's working.


- Allen S. Rout


Re: Status report on the forum - mailing list link

2008-01-22 Thread Curtis Preston
 I was an especially vociferous critic of the link, so I figured I'd
 chime in.  While I have had more than a few "Gee, that was a dumb
 question; look for source; oh yeah, from -there-" moments, I have to
 agree that the wave of drek I foretold absolutely did not happen.  So
 I'll sit the heck back down on this topic and mutter to myself. :)

I knew that would happen, and I appreciate all those who responded to
such questions, even if you gave them the RTFM answer. ;) I would say
I've seen a few of those questions on every list I'm on, regardless of
source: "Hey, I've just installed this thing.  I couldn't afford
professional services or training, and can't be bothered to read the
manual or the FAQ.  How does it work?"

 Thanks for your efforts to keep the message stream clean.  Looks like
 it's working.

It requires one or two points and clicks per day, so it's not completely
onerous.  It was obvious from the response to one spam that I either do
that or disconnect the link. ;)


Re: Fw: DISASTER: How to do a LOT of restores? [like Steve H said, but...]

2008-01-22 Thread Curtis Preston
Bummer. :( But when it's fixed, I sure think it sounds like a better
solution to this situation than the traditional answers -- even if only
used on demand.

---
W. Curtis Preston
Backup Blog @ www.backupcentral.com
VP Data Protection, GlassHouse Technologies 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
James R Owen
Sent: Tuesday, January 22, 2008 6:37 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Fw: DISASTER: How to do a LOT of restores? [like
Steve H said, but...]

DR strategy using an ACTIVEdata STGpool is like Steve H said, but
with minor additions and a major (but temporary) caveat:

COPY ACTIVEdata is not quite ready for this DR strategy yet:

See APAR PK59507:  COPy ACTIVEdata performance can be significantly
degraded
(until TSM 5.4.3/5.5.1) unless *all* nodes are enabled for the
ACTIVEdata STGpool.

http://www-1.ibm.com/support/docview.wss?rs=663&context=SSGSG7&dc=DB550&uid=swg1PK59507&loc=en_US&cs=UTF-8&lang=en&rss=ct663tivoli

Here's a slightly improved description of how it should work:

DEFine STGpool actvpool ... POoltype=ACTIVEdata -
   COLlocate=[No/GRoup/NODe/FIlespace] ...
COPy DOmain old... new...
UPDate DOmain new... ACTIVEDESTination=actvpool
ACTivate POlicyset new... somePolicy
Query SCHedule old... * NOde=node1,...,nodeN    [note old... sched. assoc's]
UPDate NOde nodeX DOmain=new...                 [for each node 1-N]
DEFine ASSOCiation new... someSched nodeX       [as previously associated]
COPy ACTIVEdata oldstgpool actvpool             [for each oldstgpool w/active backups]

[If no other DOmain except new... has ACTIVEDESTination=actvpool,
 the COpy ACTIVEdata command(s) will copy the Active backups from the
 specified nodes node[1-N] into the ACTIVEdata STGpool actvpool to
 expedite DR for...]

[But, not recommended until TSM 5.4.3/5.5.1 fixes APAR PK59507!]
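
Once the copy finishes, a quick sanity check of what actually landed in
the pool before starting restores (names hypothetical):

  query stgpool actvpool f=d
  query nodedata pharm_node1 stgpool=actvpool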
--
[EMAIL PROTECTED]   (203.432.6693)

Steven Harris wrote:
 Nick

 I may well have a flawed understanding here, but:

 Set up an active-data pool
 clone the domain containing the servers requiring recovery
 set the ACTIVEDATAPOOL parameter on the cloned domain
 move the servers requiring recovery to the new domain,
 Run COPY ACTIVEDATA on the primary tape pool

 Since only the nodes we want are in the domain that has the ACTIVEDATAPOOL
 parameter specified, won't only the data from those nodes be copied?

 Regards

 Steve

 Steven Harris
 TSM Admin, Sydney, Australia

 "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 23/01/2008
 11:38:17 AM:

 For this scenario, the problem with Active Storagepools is that it's a
 pool-to-pool relationship, so ALL active data in a storagepool would be
 copied to the Active Pool.  Not knowing what percentage of the nodes on
 the TSM Server will be restored, but assuming they're all in one storage
 pool, you'd probably want to "move nodedata" them to another pool, then
 do the "copy activedata".  Two steps, and it needs more resources.  Just
 doing "move nodedata" within the same pool will semi-collocate the data
 (see Note below).  Obviously a DASD pool would be best for this
 circumstance, if it's available, but even cycling the data within the
 existing pool will have benefits.

 Note:  Semi-collocated, as each process will make all of the named
 node's data contiguous, even if it ends up on the same media with
 another node's data.  Turning on collocation before starting the jobs,
 and marking all filling volumes read-only, will give you separate
 volumes for each node, but it requires a decent scratch pool to try.

 Nick Cassimatis

 ----- Forwarded by Nicholas Cassimatis/Raleigh/IBM on 01/22/2008 07:25 PM -----

 "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 01/22/2008
 01:58:11 PM:

 Are files that are no longer active automatically expired from the
 activedata pool when you perform the latest COPY ACTIVEDATA?  This
 would mean that, at some point, you would need to do reclamation on
 this pool, right?

 It would seem to me that this would be a much better answer to the
 OP's question.  Instead of doing a MOVE NODEDATA (which requires moving
 ALL of the node's files), or doing an EXPORT NODE (which requires a
 separate server), he can just create an ACTIVEDATA pool, then perform
 a COPY ACTIVEDATA into it while he's preparing for the restore.
 Putting said pool on disk would be even better, of course.

 I was just discussing this with another one of our TSM experts, and
 he's not as bullish on it as I am.  (It was an off-list convo, so I'll
 let him go nameless unless he wants to speak up.)  He doesn't like that
 you can't use a DISK type device class (the disk has to be defined
 with a FILE-type device class).

 He also has issues with the resources needed to create this 3rd copy
 of the data.  He said, "Most customers have trouble getting backups
 complete and creating their offsite copies in a 24 hour period and
 would not be able to complete a third copy of the data.  Add to that
 the possibility of doing reclamation on this pool and you've got even
 more work to do."

Re: TDP for SQL question

2008-01-22 Thread Paul Dudley
OK - I have done this and it is working as far as backing up the SQL
database goes.

Now what do I do if I need to restore from this new node name back onto
the client?

Do I need to do anything with the dsm.opt and dsmarch.opt files before
starting the restore?
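
(A sketch, not gospel; check the Data Protection for SQL manual at your
level.  Assuming the archive backups were taken under the node name in
dsmarch.opt, the opt files themselves shouldn't need changes; you just
point the restore at the archive options file.  The database and server
names below are invented.)

  tdpsqlc query tsm * /tsmoptfile=dsmarch.opt
  tdpsqlc restore proddb full /tsmoptfile=dsmarch.opt /sqlserver=sqlsrv23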

Regards
Paul Dudley

Senior IT Systems Administrator
ANL IT Operations Dept.
ANL Container Line
[EMAIL PROTECTED]
03-9257-0603
http://www.anl.com.au



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On
 Behalf Of Del Hoobler
 Sent: Saturday, 4 August 2007 2:37 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] TDP for SQL question

 Paul,

 Typically...  I see that people will name their node
 the same as their primary SQL node with an extension.
 Something like:
 Primary:   SQLSRV23_SQL
 Archive:   SQLSRV23_SQL_ARCH

 And they will have a separate DSM.OPT file, something like
 DSMARCH.OPT that has the archive nodename.
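
 A hypothetical minimal DSMARCH.OPT along those lines (the server
 address is a placeholder):

   * options file for the long-term archive node
   NODename           SQLSRV23_SQL_ARCH
   PASSWORDAccess     generate
   COMMMethod         tcpip
   TCPServeraddress   tsm.example.edu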

 Thanks,

 Del

 

 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on
 08/03/2007
 12:53:23 AM:

  I have been told that if I want to create an archive backup of an SQL
  database via TDP for SQL, then I should create a separate node name in
  TSM (such as "SQL_Archive") and then back up using that node name once
  a month (for example) and make sure to bind those backups to a
  management class that has the long-term settings that meet our
  requirements.

  What else is involved in setting this up? On the client do I have to
  create another dsm.opt file with the new node name to match what I set
  up on the TSM server?
 
  Regards
  Paul Dudley





ANL DISCLAIMER
This e-mail and any file attached is confidential, and intended solely to the 
named addressees. Any unauthorised dissemination or use is strictly prohibited. 
If you received this e-mail in error, please immediately notify the sender by 
return e-mail from your system. Please do not copy, use or make reference to it 
for any purpose, or disclose its contents to any person.