Migration question

2007-10-01 Thread Dollens, Bruce
I have a question that I feel a little stupid in asking.

What is the difference between migration and backup primary (disk) to
copy (tape)? 

I am working on changing up my scheduling, and the recommended order of
steps I was given is:

Backup clients
Migration
Backup primary to copy
Backup db
Expiration
Reclamation
Start all over again

Thanks!


migration question

2004-09-09 Thread Levi, Ralph
I am running TSM 5.1.9 on AIX 4.3.3.  My primary disk pool is 490GB
with himig=95, lowmig=90 and migprocess=1.  Occasionally, I hit the
95% mark before the normal scheduled migration takes place.  When that
happens the automigration starts, and I would expect it to stop once it
gets below the 90% mark.  It typically does not.  Instead, it will
continue until it reaches the mid-70% level.  My maxsize threshold is
20GB, but with only 1 migprocess running that wouldn't account for such a
big migration. Does anyone have any ideas?

Thanks,
Ralph 


Migration question

2001-08-17 Thread Alan Davenport

Hello fellow *SMers! I'm thinking of redoing how our backup pool
migration/tape copies process is done to increase efficiency. We are
currently processing this way:

Backuppool (disk) migration to primary onsite tape pool (3490).
Tapepool (onsite) copy backed up to vault (offsite) tape copy pool.

What I would like to do is this:

Backup disk pool to vault tape (offsite) copy.
Migrate disk to tape (primary onsite) copy.
Backup primary tape pool to offsite pool to catch any migration etc. that
may have occurred during the last 24 hours.

My concern is this. I do not want to back up the data TWICE from the disk
pool to the offsite pool. (Once from disk to offsite pool and again from
onsite tape pool to offsite tape pool.) That is, is *SM smart enough to know
that the data that was backed up to the vault (offsite) pool prior to
migration has already been backed up and will not back it up again from the
onsite tape pool to the offsite pool when the "BACKUP STG TAPEPOOL COPYPOOL"
command is executed?

My environment is TSM 4.1.0.0 on OS390 with 24 3490 tape drives.

Thanks for listening!

Alan Davenport
Selective Insurance
[EMAIL PROTECTED]



Migration Question

2003-08-15 Thread Krishnan, Vyju
Hello folks,

Has anyone here done a migration to TSM from other backup software
products like StorageTek REEL Librarian, Legato NetWorker, CA BrightStor,
etc.? I am currently using STK REEL products and looking for the best
compatible replacement. Any help in this regard would be highly
appreciated.

Thanks,
Vyju Krishnan


Migration question

2006-07-26 Thread David Browne
I know I can set the migration delay to more than 0 days to delay
migration.

I would like  to keep all of my active versions of backup on disk and
migrate inactive versions only to tape.

Is there a way to specify only inactive versions of files are eligible for
migration?



Migration Question

2002-05-07 Thread Roy Lake

Hi Guys,

I have a quick question for you

I am about to delete all of the primary disk storage pools for a
resizing and re-placement exercise.

IBM have given me their recommendations as:-

1. Backup stgpool.
2. Update lo mig and Hi mig to 0 to migrate everything to tape.

Now, am I right in saying that option 2 is unnecessary, as this data
should already reside on tape anyway (it would have been backed up
at some point when a "backup stgpool" was taken)?

Kind Regards,

Roy Lake
RS/6000 & TSM Systems Administration Team
Tibbett & Britten European I.T.
Judd House
Ripple Road
Barking
Essex IG11 0TU
0208 526 8853
[EMAIL PROTECTED]




Re: Migration question

2007-10-01 Thread Larry Clark

Migration involves moving data between primary storage pools.
The conventional situation (before VTLs) was that a site would have a hierarchy
of storage: clients would back up to disk, to permit a large number of
concurrent backups, and then those backups to disk would be migrated to tape.
Both are primary storage pools.

Copypools are copies of data in primary storage pools. Their purpose is to
provide for the loss of data in the primary pools, and they are populated using
backup stg to the copypool. For example, if a tape volume goes bad in your
primary storage pool, you can recover it from the copypool.

Copypools are also used at some sites for vaulting: the copies of the
primary data created in the copypools are sent offsite.
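The move-vs-copy distinction can be sketched with a toy model (plain Python
with invented pool and file names; this illustrates the concept only, it is
not TSM code):

```python
# Toy model of TSM storage pools: MIGRATION *moves* data between
# primary pools, while BACKUP STGPOOL *copies* primary data into a
# copy pool. All names are invented for illustration.

diskpool = {"fileA", "fileB"}   # primary disk pool
tapepool = set()                # next primary pool (tape)
copypool = set()                # copy storage pool (e.g. offsite vault)

def migrate(src, dst):
    """Migration: data leaves the source primary pool."""
    dst |= src
    src.clear()

def backup_stg(primary, copy):
    """Backup stgpool: data is copied; the primary pool keeps it."""
    copy |= primary

backup_stg(diskpool, copypool)  # two copies of the data now exist
migrate(diskpool, tapepool)     # primary copy now lives on tape only

print(sorted(tapepool))  # ['fileA', 'fileB']
print(sorted(copypool))  # ['fileA', 'fileB']
print(diskpool)          # set() -- migration moved, did not copy
```

Real TSM does the move via MIGRATE STGPOOL (or the migration thresholds) and
the copy via BACKUP STGPOOL; the toy only mirrors the move-vs-copy semantics.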



Re: Migration question

2007-10-01 Thread Remeta, Mark
Migration moves the data from one primary pool to another. A storage pool
backup is copying the data from a primary storage pool to a copy storage
pool.



Mark Remeta




Re: Migration question

2007-10-01 Thread Larry Clark

And about your sequence list:
- Expiration will trigger the reclamations.
- You might want to schedule your backup primary to copy prior to your
migration, assuming you are backing up to disk and then migrating to tape.
That way, you avoid the overhead of tape mounts when you are creating
copypool volumes from primary storage pool volumes (that have been
migrated from disk to tape).


Re: Migration question

2007-10-01 Thread Gabriel Gombik
hi Bruce,

migration *MOVES* your clients' data from a PRIMARY storage pool
to a NEXT PRIMARY storage pool, e.g. from DISK to a primary TAPE pool.

backup stgpool *COPIES* clients' data stored on a PRIMARY pool
to a COPY pool (which has to be SEQUENTIAL, e.g. a tape pool).

after a successful backup of a primary storage pool, you will have
2 copies of the data on the TSM server, and you can send the 2nd copy to an
off-site location (vault).

see:
  help migrate stg
  help backup stg

hope it helps.

regards
Gabriel Peter



Re: Migration question

2007-10-01 Thread Stuart Lamble

On 02/10/2007, at 2:35 AM, Dollens, Bruce wrote:


I have a question that I feel a little stupid in asking.

What is the difference between migration and backup primary (disk) to
copy (tape)?


Other people have already answered, so I won't bother. :)


I am working on changing my scheduling up and the recommended order of
steps I was given is:

Backup clients
Migration
Backup primary to copy
Backup db
Expiration
Reclamation
Start all over again


I'd change that order slightly to:

Backup clients
Backup disk pool(s)
Migration
Backup tape pool(s)
Backup db
Expiration
Reclamation
Repeat ad nauseam.

Better to back up the storage pool data while it's still on disk,
rather than flush it out to tape and then back it up - that saves on
the amount of data read from tape. Some data may well still flush
down to tape before the first backup stgpool runs, which is why you
want to back up the tape pool as well after migration is complete.
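The tape-read savings can be sketched with a toy night run (Python; file
names and counts are invented, and the model treats one file copied from
tape as one tape read - this is an illustration, not TSM accounting):

```python
# Sketch of why backing up the disk pool *before* migration saves
# tape reads: data copied to the copy pool while still on disk is
# read from disk, while data that has already migrated must be read
# back from tape for the copy.

def run_night(backup_disk_first):
    disk, tape, copy = {"f1", "f2", "f3"}, set(), set()
    tape_reads = 0
    if backup_disk_first:
        copy |= disk            # backup stg from disk: no tape reads
    tape |= disk
    disk.clear()                # migration flushes disk to tape
    new = tape - copy           # incremental backup stg of tape pool
    tape_reads += len(new)      # only not-yet-copied files hit tape
    copy |= new
    return tape_reads

print(run_night(True))   # 0 tape reads
print(run_night(False))  # 3 tape reads
```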


Re: Migration question

2007-10-02 Thread CAYE PIERRE
May I suggest a sequence ?

1- backups nodes
2- backup primary disk pools to tape
3- backup primary tape pool to tape
4- migrate primary disk pool
5- db backup
6- expiration
7- reclaim primary tape pool
8- reclaim copy pool 

Pierre



Re: Migration question

2007-10-02 Thread Larry Clark

1- backups nodes
2- backup primary disk pools to tape (copypool)
3- migrate primary disk pool to tape
4- backup primary tape pool to tape (copypool) (this picks up any files that
step 2 had not yet copied to the copypool before step 3 was triggered)

5- db backup
6- expiration
7- reclaim primary tape pool
8- reclaim copy pool




Re: Migration question

2007-10-02 Thread CAYE PIERRE
Well done ! 

> -Message d'origine-
> De : ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] De 
> la part de Larry Clark
> Envoyé : mardi 2 octobre 2007 13:03
> À : ADSM-L@VM.MARIST.EDU
> Objet : Re: [ADSM-L] Migration question
> 
> 1- backups nodes
> 2- backup primary disk pools to tape (copypool)
> 3- migrate primary disk pool to tape
> 4- backup primary tape pool to tape (copypool) ( this allows 
> picking up files to copypool not completed in 2 before 3 was 
> triggered)
> 5- db backup
> 6- expiration
> 7- reclaim primary tape pool
> 8- reclaim copy pool
> 
> - Original Message -
> From: "CAYE PIERRE" <[EMAIL PROTECTED]>
> To: 
> Sent: Tuesday, October 02, 2007 3:25 AM
> Subject: Re: [ADSM-L] Migration question
> 
> 
> May I suggest a sequence ?
> 
> 1- backups nodes
> 2- backup primary disk pools to tape
> 3- backup primary tape pool to tape
> 4- migrate primary disk pool
> 5- db backup
> 6- expiration
> 7- reclaim primary tape pool
> 8- reclaim copy pool
> 
> Pierre
> 
> > -Message d'origine-
> > De : ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] De
> > la part de Dollens, Bruce
> > Envoyé : lundi 1 octobre 2007 18:36
> > À : ADSM-L@VM.MARIST.EDU
> > Objet : [ADSM-L] Migration question
> >
> > I have a question that I feel a little stupid in asking.
> >
> > What is the difference between migration and backup primary
> > (disk) to copy (tape)?
> >
> > I am working on changing my scheduling up and the recommended
> > order of steps I was given is:
> >
> > Backup clients
> > Migration
> > Backup primary to copy
> > Backup db
> > Expiration
> > Reclamation
> > Start all over again
> >
> > Thanks!
> >
> 


Re: Migration question

2007-10-02 Thread Timothy Hughes

Is there really a recommended sequence every TSM environment should follow?
Or does it depend on your individual environment's needs?

Tim




Larry Clark wrote:

1- backups nodes
2- backup primary disk pools to tape (copypool)
3- migrate primary disk pool to tape
4- backup primary tape pool to tape (copypool) ( this allows picking 
up files to copypool not completed in 2 before 3 was triggered)

5- db backup
6- expiration
7- reclaim primary tape pool
8- reclaim copy pool



Re: Migration question

2007-10-02 Thread Ron Welsh
The TSM redbook has a "wheel of life" diagram with the recommended operational procedure:

http://www.redbooks.ibm.com/redbooks/SG245416/wwhelp/wwhimpl/common/html/wwhelp.htm?context=SG245416&file=21-02.htm


Thanks,
Ron


Ron Welsh
Systems Administrator
Systems & Technology 
Open Solutions Inc. 
2091 Springdale Road
Cherry Hill, NJ 08003

phone: 856-874-4121
email: [EMAIL PROTECTED]
website: http://www.opensolutions.com

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Timothy 
Hughes
Sent: Tuesday, October 02, 2007 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Migration question

Is there really recomended sequence every TSM Environment should follow?
Or does it depend on your Individual environmental needs?

Tim




Larry Clark wrote:
> 1- backups nodes
> 2- backup primary disk pools to tape (copypool)
> 3- migrate primary disk pool to tape
> 4- backup primary tape pool to tape (copypool) ( this allows picking 
> up files to copypool not completed in 2 before 3 was triggered)
> 5- db backup
> 6- expiration
> 7- reclaim primary tape pool
> 8- reclaim copy pool


Re: Migration question

2007-10-02 Thread Larry Clark
I believe this is the general recommended sequence in most contexts where 
the initial incrementals are directed to disk to be later migrated to tape. 
That covers most sites.


For those sites with VTLs the situation changes. You are not migrating from
a disk-based primary storage pool to a tape pool. The whole issue of
migration disappears, and you simply back up your VTL data to
copypools.


For those sites that have both VTLs and high speed tape, things might be more
granular: directing their large database backups, exports, and mksysbs (large
files) to high speed tape directly, and directing the remaining backups to the
VTL. Both are then copied to copypools.


So, yes, every solution is situation dependent: it comes down to what hardware
you have to meet your needs and impress others with...:-).


But it sounded from your initial post that you were in a primary storage 
pool on disk situation.


- Original Message - 
From: "Timothy Hughes" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 02, 2007 11:34 AM
Subject: Re: [ADSM-L] Migration question


Is there really recomended sequence every TSM Environment should follow?
Or does it depend on your Individual environmental needs?

Tim







Re: migration question

2004-09-09 Thread Bill Kelly
Ralph,

The migration process will pick the node that is currently occupying the
greatest amount of space in the disk pool, and will migrate all of that
node's data (not just the largest individual files belonging to that node)
to tape before it checks again to see if we're below the low migration
threshold yet.  So if that node has enough data on disk, it could easily
account for this behavior.
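Bill's explanation of the overshoot can be sketched as a toy simulation
(Python; node names, sizes, and the 490GB capacity are invented to match
Ralph's numbers - this models the node-at-a-time granularity only, not
TSM's actual migration logic):

```python
# Migration picks the node occupying the most space and moves *all*
# of that node's data before re-checking the low-migration threshold,
# so one large node can drive the pool well below LOWMIG.

def migrate_pool(node_gb, capacity_gb, himig=95, lowmig=90):
    pct = lambda: 100 * sum(node_gb.values()) / capacity_gb
    if pct() < himig:
        return pct()            # high threshold not reached; no migration
    while pct() >= lowmig and node_gb:
        biggest = max(node_gb, key=node_gb.get)
        del node_gb[biggest]    # whole node migrated at once
    return pct()

# 490 GB pool at ~96% full; the biggest node holds 100 GB
nodes = {"bignode": 100, "n1": 90, "n2": 95, "n3": 92, "n4": 95}
final = migrate_pool(nodes, 490)
print(round(final, 1))  # 75.9 -- well below the 90% LOWMIG target
```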

-Bill

Bill Kelly
Auburn University OIT
334-844-9917



Re: Migration question

2001-08-17 Thread Prather, Wanda

'Course TSM is smart!

We do what you describe every night - saves a lot of tape mounts, especially
since our onsitetape is collocated:

backup stgpool diskpool offsitecopypool
migrate to onsitetape
backup stgpool onsitetape offsitecopypool


BACKUP STGPOOL is always INCREMENTAL - TSM checks the DB and only copies
files that aren't in the destination copy pool already.

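A minimal sketch of that "always incremental" bookkeeping (Python; file
names invented - this models the no-double-copy behavior, not TSM's actual
database):

```python
# BACKUP STGPOOL only copies files that are not already in the
# destination copy pool, so data sent offsite from disk is not copied
# a second time after it migrates to the onsite tape pool.

def backup_stg(primary, copypool):
    """Copy only files missing from the copy pool; return that set."""
    new = primary - copypool
    copypool |= new
    return new

disk = {"a", "b", "c"}
onsitetape, offsitecopy = set(), set()

backup_stg(disk, offsitecopy)        # disk -> offsite copy pool
onsitetape |= disk                   # migration: disk -> onsite tape
disk = set()
copied_again = backup_stg(onsitetape, offsitecopy)

print(copied_again)  # set() -- nothing is backed up twice
```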




Re: Migration question

2001-08-17 Thread David Longo

Yes, it is smart enough that it won't backup the data twice.  It keeps
track of all that stuff.

David Longo







Re: Migration question

2001-08-17 Thread Alan Davenport

Thank you, and all the others who have responded. I inherited *SM, and those
who set it up before me were under the impression that it WOULD back the
data up twice. I believed otherwise but gave them the benefit of the doubt
and asked. Again, thank you very much for responding to my question. Looks
like I'll be able to shave 2 hours off of my morning processing. (That tape-
to-tape copy AFTER migration was a killer, especially with 3490 tapes!)




Re: Migration question

2001-08-18 Thread Kelly J. Lipp

I'd change it to:

backup stg disk copy
backup stg tape copy (for anything that might have been migrated during the
night.  Do it this way since the copy pool tapes are already mounted and
ready to go)
migrate disk tape

Kelly J. Lipp
Storage Solutions Specialists, Inc.
PO Box 51313
Colorado Springs CO 80949-1313
(719) 531-5926
Fax: (240) 539-7175
Email: [EMAIL PROTECTED] or [EMAIL PROTECTED]
www.storsol.com
www.storserver.com





Re: Migration question

2001-08-20 Thread Alan Davenport

Thanks, but that will not work here. I have mount retention set at 1 minute.
When I inherited the system, mount retention was set at 1 HOUR, and with
collocation and lots of nodes *SM had almost all 24 tape drives tied up,
preventing other batch processes and HSM from getting any drives! It's
behaving a little more sanely now. (:

 Al
+
+ I'd change it to:
+
+ backup stg disk copy
+ backup stg tape copy (for anything that might have been migrated
+ during the
+ night.  Do it this way since the copy pool tapes are already mounted and
+ ready to go)
+ migrate disk tape
+



Re: Migration Question

2003-08-18 Thread Christian Svensson
Hi Vyju!
We are developing a tool that can migrate data from your old backup system to TSM.
Today we support only Backup Exec and NetBackup as sources, but we have just
received a request to migrate from ARCserve 9 to TSM, and later we also plan
to support Legato NetWorker.
If you are interested, send me a private email.

/Christian Svensson


-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Krishnan,
Vyju
Sent: 15 August 2003 15:46
To: [EMAIL PROTECTED]
Subject: Migration Question


Hello folks,

Any one here who had done migration to TSM from other backup software
products like StorageTek REEL Librarian, Legato NetWorker, CA BrightStor,
etc. I am currently using STK REEL Products and looking for a best
compatible replacement for this.  Any help in this regard would be highly
appreciated.

Thanks,
Vyju Krishnan


TSM Migration Question

2006-08-22 Thread Weaver, Gerry P.
Good Day To All.

 

Currently, we are running 5.2.3.3 on a z/OS platform and we
are preparing to migrate to 5.3.

 

Couple of questions:

 

1) Has anyone in the z/OS world gone to 5.3.3 yet?

2) From 5.2.3.3, can we migrate directly to 5.3.3? Or do we
migrate from 5.2.3.3 ... to 5.3 ... to 5.3.1 ... to 5.3.3?

 

Gerry P. Weaver


Re: Migration question

2002-09-13 Thread Steve Schaub

No, each storage pool has a setting that controls the number of migration processes
used during migration. Do a Q STG <poolname> F=D to show the setting, and UPD STG
<poolname> MIGPR=<#> to change it. For example, to use 3 drives during the migration
of a stgpool named DISKPOOL, enter UPD STG DISKPOOL MIGPR=3.
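As a concrete sketch (DISKPOOL as in the example above):

```
q stg diskpool f=d          /* show the current Migration Processes value */
upd stg diskpool migpr=3    /* allow up to 3 drives on the next migration */
```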

Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]
616-393-1457 Desk
616-836-6962 Cell
Siempre Hay Esperanza

>>> [EMAIL PROTECTED] 09/13 3:43 AM >>>
Good morning to all of you.

I have a quick question. Does migration automatically use all available
tape drives?

Thanks in advance

Farren Minns - TSM and Solaris System Admin - John Wiley and Sons Ltd

Our Chichester based offices have amalgamated and relocated to a new
address

John Wiley & Sons Ltd
The Atrium
Southern Gate
Chichester
West Sussex
PO19 8SQ

Main phone and fax numbers remain the same:
Phone +44 (0)1243 779777
Fax   +44 (0)1243 775878
Direct dial numbers are unchanged

Address, phone and fax nos. for all other Wiley UK locations are unchanged
**



Re: Migration question

2002-09-13 Thread Farren Minns

Thanks for that

I see now that we have only ever been using one drive! I'll use two from
now on.

Thanks again

Farren






Steve Schaub <[EMAIL PROTECTED]> on 13/09/2002
11:56:28

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To: [EMAIL PROTECTED]
cc:
Subject: Re: Migration question


No, each storage pool has a setting that controls the number of migration
processes used during migration. Do a Q STG <poolname> F=D to show the
setting, and UPD STG <poolname> MIGPR=<#> to change it. For example, to use
3 drives during the migration of a stgpool named DISKPOOL, enter UPD STG
DISKPOOL MIGPR=3.

Steve Schaub
Systems Engineer
Haworth, Inc
[EMAIL PROTECTED]
616-393-1457 Desk
616-836-6962 Cell
Siempre Hay Esperanza

>>> [EMAIL PROTECTED] 09/13 3:43 AM >>>
Good morning to all of you.

I have a quick question. Does migration automatically use all available
tape drives?

Thanks in advance

Farren Minns - TSM and Solaris System Admin - John Wiley and Sons Ltd







Re: Migration question

2002-09-13 Thread Cook, Dwight E

Nope!
Migration uses a number of drives equal to the smallest of:
A) the maximum number of migration processes set for the stgpool being
migrated
B) the number of tape drives in the devclass of the stgpool being
migrated to
C) the number of unique nodes with data in the stgpool being migrated

So if you have 6 drives and migration processes set to 4, but only one
node's data in the storage pool, you will only see a single migration process.

I really hate this when a single node dumps 800 GB to disk :-(
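Dwight's three limits, annotated against his example (a sketch; the numbers come from his scenario above):

```
/* effective processes = min(A, B, C)                    */
/* A) MIGPROCESS setting for the stgpool .......: 4      */
/* B) tape drives in the target devclass .......: 6      */
/* C) unique nodes with data in the stgpool ....: 1      */
/* => min(4, 6, 1) = 1 migration process                 */
```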

Dwight



-Original Message-
From: Farren Minns [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 13, 2002 2:44 AM
To: [EMAIL PROTECTED]
Subject: Migration question


Good morning to all of you.

I have a quick question. Does migration automatically use all available
tape drives?

Thanks in advance

Farren Minns - TSM and Solaris System Admin - John Wiley and Sons Ltd




Re: Migration Question

2002-05-07 Thread Alan Davenport

I would say NO, that would cause you problems. When you restart the server
it will expect the files to still be in the disc pool if you skip item #2.
In this case you would have to do a restore stgpool to make the server
happy. You will not be saving any time by skipping item #2 and most likely,
it will take you longer.

Do it this way. If you migrate and backup the disc pool to the same tape
pool, skip item #1 you've described below. If you backup the disc pool to a
different pool than you migrate to, do both items 1 and 2 below.

Take care,
Al

Alan Davenport
Senior Storage Administrator
Selective Insurance
[EMAIL PROTECTED]
(973) 948-1306

-Original Message-
From: Roy Lake [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, May 07, 2002 8:07 AM
To: [EMAIL PROTECTED]
Subject: Migration Question


Hi Guys,

I have a quick question for you

I am about to delete all of the primary disk storage pools for a
resizing and replacement exercise.

IBM have given me their recommendations as:-

1. Backup stgpool.
2. Update lo mig and Hi mig to 0 to migrate everything to tape.

Now, am I right in saying that option 2 is unnecessary, as this data
should already reside on tape anyway (as it would have been backed up
at some point when a "backup stgpool" was taken)?

Kind Regards,

Roy Lake
RS/6000 & TSM Systems Administration Team
Tibbett & Britten European I.T.
Judd House
Ripple Road
Barking
Essex IG11 0TU
0208 526 8853
[EMAIL PROTECTED]

This e-mail message has been scanned using Sophos Sweep
http://www.sophos.com

**
---  IMPORTANT INFORMATION -
This message is intended only for the use of the Person(s) ('the
intended recipient') to whom it was addressed. It may contain
information which is privileged & confidential within the meaning
of applicable law. Accordingly any dissemination, distribution,
copying, or other use of this message, or any of its contents,
by any person other than the intended recipient may constitute
a breach of civil or criminal law and is strictly prohibited.

If you are NOT the intended recipient please contact the sender
and dispose of this email as soon as possible. If in doubt please
contact European IT on 0870 607 6777 (+44 20 85 26 88 88).
This message has been sent via the Public Internet.
**



Re: Migration Question

2002-05-07 Thread Burak Demircan

Hi, 
I think IBM wants you to have at least 2 copies of backups on tape media. 
So, I would also recommend that. 
Regards, 
Burak 





[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]

07.05.2002 16:13
Please respond to ADSM-L

To: [EMAIL PROTECTED]
cc:
Subject: Migration Question

Hi Guys,
 
I have a quick question for you
 
I am about to delete all of the primary disk storage pools for a
resizing and replacement exercise.
 
IBM have given me their recommendations as:-
 
1. Backup stgpool.
2. Update lo mig and Hi mig to 0 to migrate everything to tape.
 
Now, am I right in saying that option 2 is unnecessary, as this data
should already reside on tape anyway (as it would have been backed up
at some point when a "backup stgpool" was taken)?
 
Kind Regards,
 
Roy Lake
RS/6000 & TSM Systems Administration Team
Tibbett & Britten European I.T.
Judd House
Ripple Road
Barking
Essex IG11 0TU
0208 526 8853
[EMAIL PROTECTED]
 
 




Re: Migration Question

2002-05-07 Thread Bill Smoldt

Roy,

You don't have to do the backup of the diskpool first, but why wouldn't you?
That is what you should do every day anyway.  If all the data in diskpool
has already been backed up to your copypool in your daily processing, it
will only take a few seconds and you'll have peace of mind when you hit
return on the del volume command.

Before migrating the data to tapepool, however, do an

update stg diskpool cache=no

That will purge all the cached files in the diskpool during migration so you
shouldn't have to add the discarddata=yes when you delete the diskpool
volumes.
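Bill's steps, collected into one hedged sequence (pool names are from the thread; the volume name and the WAIT=YES usage are placeholders/assumptions):

```
backup stgpool diskpool copypool wait=yes  /* quick if daily copies are current  */
upd stg diskpool cache=no                  /* purge cached files during migration */
upd stg diskpool highmig=0 lowmig=0        /* drain everything to tapepool        */
/* once Pct Util shows 0, the volumes can go: */
del vol /tsm/diskvol01.dsm                 /* placeholder volume name             */
```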

Bill Smoldt
STORServer, Inc.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Roy Lake
Sent: Tuesday, May 07, 2002 6:07 AM
To: [EMAIL PROTECTED]
Subject: Migration Question


Hi Guys,

I have a quick question for you

I am about to delete all of the primary disk storage pools for a
resizing and replacement exercise.

IBM have given me their recommendations as:-

1. Backup stgpool.
2. Update lo mig and Hi mig to 0 to migrate everything to tape.

Now, am I right in saying that option 2 is unnecessary, as this data
should already reside on tape anyway (as it would have been backed up
at some point when a "backup stgpool" was taken)?

Kind Regards,

Roy Lake
RS/6000 & TSM Systems Administration Team
Tibbett & Britten European I.T.
Judd House
Ripple Road
Barking
Essex IG11 0TU
0208 526 8853
[EMAIL PROTECTED]




TSM Migration Question

2016-09-21 Thread Plair, Ricky
Within TSM I am migrating an old storage pool on a DD4200 to a new storage pool 
on a DD4500.

First of all, it worked fine yesterday.

The nextpool is correct and migration is hi=0 lo=0 with 25 migration
processes, but I had to stop it.

Now when I restart the migration process, it migrates to the old storage
volumes instead of the new storage volumes. Basically it's just migrating from
one disk volume inside the ddstgpool to another disk volume in the ddstgpool.

It is not using the next pool parameter. Has anyone seen this problem before?

I appreciate the help.


Ricky M. Plair
Storage Engineer
HealthPlan Services
Office: 813 289 1000 Ext 2273
Mobile: 813 357 9673



_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
CONFIDENTIALITY NOTICE: This email message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and/or Protected Health Information (PHI) subject to 
protection under the law, including the Health Insurance Portability and 
Accountability Act of 1996, as amended (HIPAA). If you are not the intended 
recipient or the person responsible for delivering the email to the intended 
recipient, be advised that you have received this email in error and that any 
use, disclosure, distribution, forwarding, printing, or copying of this email 
is strictly prohibited. If you have received this email in error, please notify 
the sender immediately and destroy all copies of the original message.


Re: TSM Migration Question

2006-08-22 Thread Kevin Kinder
Gerry,

We migrated from 5.1.7.0 to 5.3.0.0, and then to 5.3.3.3 on z/OS.

If you have any questions, feel free to contact me directly.

Kevin Kinder
admin(at)wvadmin.gov

>>> [EMAIL PROTECTED] 8/22/06 1:57 PM >>>
Good Day To All.

 

Currently, we are running 5.2.3.3 on a Z/os platform and we
are preparing to migrate to 5.3 

 

Couple of questions:

 

1)   Has anyone in the Z/os world gone to 5.3.3 yet? 

2)   From 5.2.3.3 can we migrate directly to 5.3.3?  Or do we
migrate from 5.2.3.3 ... to 5.3 ... to 5.3.1 ... to 5.3.3

 

Gerry P. Weaver


Tsm v6 migration question

2011-03-23 Thread Lee, Gary D.
We are preparing to migrate our TSM v5.5.4 system to v6.2.2.


Are there any rules of thumb for relating db size in v5.5.4 with what we will 
need under v6?

I assume it will be somewhat larger, being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

Re: TSM Migration Question

2016-09-21 Thread Skylar Thompson
Can you post the output of "Q STG F=D" for each of those pools?

On Wed, Sep 21, 2016 at 02:33:42PM +, Plair, Ricky wrote:
> Within TSM I am migrating an old storage pool on a DD4200 to a new storage 
> pool on a DD4500.
>
> First of all, it worked fine yesterday.
>
> The nextpool is correct and migration is hi=0 lo=0 and using 25 migration 
> process, but I had to stop it.
>
> Now when I restart it the migration process it is migrating to the old 
> storage volumes instead of the new storage volumes. Basically it's just 
> migrating from one disk volume inside the ddstgpool to another disk volume in 
> the ddstgpool.
>
> It is not using the next pool parameter,  has anyone seen this problem before?
>
> I appreciate the help.

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
OLD STORAGE POOL

tsm: PROD-TSM01-VM>q stg ddstgpool f=d

Storage Pool Name: DDSTGPOOL
Storage Pool Type: Primary
Device Class Name: DDFILE
   Estimated Capacity: 402,224 G
   Space Trigger Util: 69.4
 Pct Util: 70.4
 Pct Migr: 70.4
  Pct Logical: 95.9
 High Mig Pct: 100
  Low Mig Pct: 95
  Migration Delay: 0
   Migration Continue: Yes
  Migration Processes: 26
Reclamation Processes: 10
Next Storage Pool: DDSTGPOOL4500
 Reclaim Storage Pool:
   Maximum Size Threshold: No Limit
   Access: Read/Write
  Description:
Overflow Location:
Cache Migrated Files?:
   Collocate?: No
Reclamation Threshold: 70
Offsite Reclamation Limit:
  Maximum Scratch Volumes Allowed: 3,000
   Number of Scratch Volumes Used: 2,947
Delay Period for Volume Reuse: 0 Day(s)
   Migration in Progress?: No
 Amount Migrated (MB): 0.00
 Elapsed Migration Time (seconds): 4,560
 Reclamation in Progress?: Yes
   Last Update by (administrator): RPLAIR
Last Update Date/Time: 09/21/2016 09:05:51
 Storage Pool Data Format: Native
 Copy Storage Pool(s):
  Active Data Pool(s):
  Continue Copy on Error?: Yes
 CRC Data: No
 Reclamation Type: Threshold
  Overwrite Data when Deleted:
Deduplicate Data?: No
 Processes For Identifying Duplicates:
Duplicate Data Not Stored:
   Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No



NEW STORAGE POOL

tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d

Storage Pool Name: DDSTGPOOL4500
Storage Pool Type: Primary
Device Class Name: DDFILE1
   Estimated Capacity: 437,159 G
   Space Trigger Util: 21.4
 Pct Util: 6.7
 Pct Migr: 6.7
  Pct Logical: 100.0
 High Mig Pct: 90
  Low Mig Pct: 70
  Migration Delay: 0
   Migration Continue: Yes
  Migration Processes: 1
Reclamation Processes: 1
Next Storage Pool: TAPEPOOL
 Reclaim Storage Pool:
   Maximum Size Threshold: No Limit
   Access: Read/Write
  Description:
Overflow Location:
Cache Migrated Files?:
   Collocate?: No
Reclamation Threshold: 70
Offsite Reclamation Limit:
  Maximum Scratch Volumes Allowed: 3,000
   Number of Scratch Volumes Used: 0
Delay Period for Volume Reuse: 0 Day(s)
   Migration in Progress?: No
 Amount Migrated (MB): 0.00
 Elapsed Migration Time (seconds): 0
 Reclamation in Progress?: No
   Last Update by (administrator): RPLAIR
Last Update Date/Time: 09/21/2016 08:38:58
 Storage Pool Data Format: Native
 Copy Storage Pool(s):
  Active Data Pool(s):
  Continue Copy on Error?: Yes
 CRC Data: No
 Reclamation Type: Threshold
  Overwrite Data when Deleted:
Deduplicate Data?: No
 Processes For Identifying Duplicates:
Duplicate Data Not Stored:
   Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

Can you post the output of "Q STG F=D" for each of those pools?

On Wed, Sep 21, 2016 at 02:33:42PM +, Plair, Ricky wrote:
> Within TSM I am migrating an old storage pool on a DD4200 to a new storage 
> pool on a DD4500.
>
> First of all, it worked fine yesterday.
>
> The nextpool is correct and migration is hi=0 lo=0 and using 25 migration 
> process, but I had to stop it.
>
> Now when I restart it the migration process it is migrating to the old 
> storage volumes instead of the new storage volumes. Basically it's just 
> migrating from one disk volume inside the ddstgpool to another disk volume in 
> the ddstgpool.
>
> It is n

Re: TSM Migration Question

2016-09-21 Thread Skylar Thompson
The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
>  Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 0
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 0
>  Reclamation in Progress?: No
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 08:38:58
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
>  Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>    Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.ED

Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
Nothing out of the normal. The below is kind of odd, but I'm not sure it has
anything to do with my problem.

Now here is something, I have another disk pool that is supposed to migrate to 
the tapepool and now it's doing the same thing. Migrating to itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No  Processes For Identifying 
> Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client Contains Data 
> Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 0
> Delay Period for 

Re: TSM Migration Question

2016-09-21 Thread Gee, Norman
Is it migrating, or reclaiming at the 70% threshold as defined?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, 
Ricky
Sent: Wednesday, September 21, 2016 8:50 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Migration Question

Nothing out of the normal, the below is kind of odd but I'm not sure it has 
anything to do with my problem. 

Now here is something, I have another disk pool that is supposed to migrate to 
the tapepool and now it's doing the same thing. Migrating to itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No  Processes For Identifying 
> Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client Contains Data 
> Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated F

Re: TSM Migration Question

2016-09-21 Thread Ron Delaware
Ricky,

Could you please send the output of the following commands:
1. Q MOUNT
2. q act begint=-12 se=Migration

Also, the only way the stgpool would migrate back to itself would be
if there were a loop, meaning your disk pool points to the tape pool as the
next stgpool, and your tape pool points back to the disk pool as the next
stgpool. If your tape pool were to hit the high migration mark, it would
start a migration to the disk pool.
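A sketch of those checks, plus one query to look for a NEXTSTGPOOL loop (the STGPOOLS table and its NEXTSTGPOOL column are assumed to be available at this server level):

```
q mount                                         /* volumes mounted right now          */
q act begint=-12 se=Migration                   /* migration messages, last 12 hours  */
select stgpool_name, nextstgpool from stgpools  /* look for a cycle: disk->tape->disk */
```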



_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert
IBM Corporation | System Lab Services
IBM Certified Solutions Advisor - Spectrum Storage
IBM Certified Spectrum Scale 4.1 & Spectrum Protect 7.1.3
IBM Certified Cloud Object Storage (Cleversafe) 
Open Group IT Certified - Masters
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   "Plair, Ricky" 
To: ADSM-L@VM.MARIST.EDU
Date:   09/21/2016 08:57 AM
Subject:Re: [ADSM-L] TSM Migration Question
Sent by:"ADSM: Dist Stor Manager" 



Nothing out of the normal, the below is kind of odd but I'm not sure it 
has anything to do with my problem. 

Now here is something, I have another disk pool that is supposed to 
migrate to the tapepool and now it's doing the same thing. Migrating to 
itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Skylar Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No  Processes For Identifying 
> Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client Contains Data 
> Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>

Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
The problem was corrected, but I have no idea how.

I stopped the migration and replication kicked off. I let the replication run 
for an hour or so while I was at lunch. When I got back I stopped replication 
and restarted the migration. I have no idea how but it used the correct volumes 
this time. 

If this makes any sense by all means please explain it to me.

I appreciate all the help.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Ron 
Delaware
Sent: Wednesday, September 21, 2016 12:55 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

Ricky,

could you please send the output of the following commands:
1.   Q MOUNT
2. q act begint=-12 se=Migration

Also, the only way that the stgpool would migrate back to itself would be if 
there was a loop, meaning your disk pool points to the tape pool as the next 
stgpool, and your tape pool points to the disk pool as the next stgpool. If 
your tape pool was to hit the high migration mark, it would start a migration 
to the disk pool



_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert IBM Corporation | System Lab Services 
IBM Certified Solutions Advisor - Spectrum Storage IBM Certified Spectrum Scale 
4.1 & Spectrum Protect 7.1.3 IBM Certified Cloud Object Storage (Cleversafe) 
Open Group IT Certified - Masters
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   "Plair, Ricky" 
To: ADSM-L@VM.MARIST.EDU
Date:   09/21/2016 08:57 AM
Subject:Re: [ADSM-L] TSM Migration Question
Sent by:"ADSM: Dist Stor Manager" 



Nothing out of the normal, the below is kind of odd but I'm not sure it 
has anything to do with my problem. 

Now here is something, I have another disk pool that is supposed to 
migrate to the tapepool and now it's doing the same thing. Migrating to 
itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Skylar Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  

Tape reclaim & stgpool migration question/

2003-06-19 Thread PINNI, BALANAND (SBCSI)
All-

I recently created a reclaim storage pool with the high migration threshold set to 80% and the low migration threshold set to 0%.

But when the disk storage pool (CFILE device class) reaches the 80% migration level, tape reclamation does not stop; it continues. What might be missing?

Thanks in advance

Balanand Pinni


LTO1 to LTO4 migration question

2008-06-05 Thread TSM User
I'm trying to find the best way to migrate a 3584 library from LTO1 to LTO4
media.  For a period of time, both drive types will be installed.  I know
the library can be partitioned, but capacity is already pretty tight, and it
would almost certainly require continuous "redrawing the borders" as the
relative proportion of media types shifted, in order to make everything
fit.  Would there be any gotchas if I defined both device classes to the
same library, and then assigned the LTO4 scratch media to the appropriate
storage pools immediately on checkin?  Or am I required to do some kind of
library partitioning to make this work?  I've done some searching online,
but can't find anything explicit one way or the other.

Thanks,

Kathy


Server to Server migration question

2005-08-10 Thread Robert Ouzen
 
Hi to all
 
I tried to make a server to server migration here the steps I made:

  On source Server named SERVER1
  1. I define a server SERVER2 (target server)
  2. I define a devclass SERVERCLASS with device type SERVER and servername 
SERVER2
  3. I create a stg SERVER1_POOL with devclass SERVERCLASS
 
  On target Server named SERVER2
1.  I define a server SERVER1 (source server) with nodename POSTBACK
2.  I create a nodename of type SERVER named POSTBACK
3.  I add this nodename to the STANDARD policy with a copypool destination 
named I2000_SERVER1 (tape storagepool)
4.  Activate the STANDARD policy
 
I updated my primary storage pool on SERVER1 (source server), named PRIMARYPOOL, to use SERVER1_POOL as its next storage pool.
 
The migration starts, and I see activity in the actlog on both the source and target servers, but I always get this error message:
 
08/10/2005 20:05:23  ANR0520W Transaction failed for session 23881 for node
  POSTBACK (Windows) - storage pool DISKPOOL is not
  defined. (SESSION: 23881)
 
Why is it trying to store to DISKPOOL? I double-checked my STANDARD policy, and my copy group destination is I2000_SERVER1 (I activated the STANDARD policy several times and checked the active policy to confirm that I2000_SERVER1 is indeed correct).
My TSM version on the source server is 5.2.4.0 Windows  O.S
My TSM version on the target server  is 5.2.4.2 AIX-RS/6000  O.S
 
Any suggestion ?
 
Regards
Robert Ouzen
e-mail: [EMAIL PROTECTED]


Library Manager and Migration question

2008-10-15 Thread Haberstroh, Debbie (IT)
Hi All,

We currently have 1 TSM server ver 5.3.3 on AIX 5.3, with a large database, 
228GB.  We have finally received approval to move this to a new server, upgrade 
to 5.5 and split the clients up to reduce the DB size.  We would like to have 4 
LPAR's created on the new server, 1 for a Library manager and the other 3 for 
client databases. Our library is a TS3584 with 12 LTO2 drives.  We are adding 
another expansion cabinet and adding 4 LTO3 drives but are staying with LTO2 
tapes for now.  Our current library configuration has the library shared but it 
is not a library manager since we only had one server.   

Due to the size and complexity of our environment, we contracted with a vendor 
to do the setup and start the process.  Now that the contract has been signed 
and we have started our planning sessions, the vendor is having a problem with 
the library manager/client configuration.  At first he said that there was no 
such thing.  I sent him documentation and they also found an IBM redbook on the 
subject but now says that it is too complicated since our server is so large 
and that I will need to partition the library, with some data loss!!  I am 
strongly against this setup and would like to know if I am wrong.  How 
difficult would it be to create a library manager and client setup for an 
existing environment?  I currently have over 1000 tapes onsite and as many 
offsite and naturally do not want to lose any data.  Thanks for the feedback.

Debbie Haberstroh
Server Administration


Re: Tsm v6 migration question

2011-03-23 Thread J. Pohlmann
Gary, please take a look at

http://www-01.ibm.com/support/docview.wss?uid=swg21421060&myns=swgtiv&mynp=O
CSSGSG7&mync=E

I found that the actual amount of space needed for the DB has been about 2x. I
generally recommend that a file system of 3x the v5 database size be set aside.

Regards,

Joerg Pohlmann

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Lee, Gary D.
Sent: Wednesday, March 23, 2011 10:44
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tsm v6 migration question

We are preparing to migrate our tsm v5,5,4 system to 6.2.2.


Are there any rules of thumb for relating db size in v5.5.4 with what we
will need under v6?

I assume it will be somewhat larger being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310


Re: Tsm v6 migration question

2011-03-23 Thread Anders Räntilä
My experience from the latest upgrades is that the database will grow by 10% when upgrading from 5.5 to 6.2.2.2.

 

Two days ago I upgraded a 5.5 server to 6.2.2.2, and this is what happened to the database size:

 

 

**

*** ---> Q DB F=D    from TSM 5.5

**

 

 

Available    Assigned    Pct    Max.
    Space    Capacity   Util   Util
     (MB)        (MB)
---------    --------   ----   ----
  120,000     120,000   74.2   74.2
 

 

This is 120GB*74.2% = 89GB used

 

**

*** After the upgrade to 6.2.2.2 

**

 Database Name: TSMDB1

Total Size of File System (MB): 737,233

Space Used by Database(MB): 97,216

 

 

All the upgrades I've done during the last 6 months show the same pattern.
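The ~10% figure checks out against the numbers above (89 GB used at 5.5, 97,216 MB used after the upgrade):

```python
used_v55_mb = 120_000 * 0.742  # 89,040 MB used before the upgrade (120 GB at 74.2%)
used_v62_mb = 97_216           # "Space Used by Database(MB)" after the upgrade
growth = used_v62_mb / used_v55_mb - 1
print(f"{growth:.1%}")         # roughly 9% growth, consistent with the 10% estimate
```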

 

When TSM 6.1 was released IBM said the DB would grow approx. 3x (300%), 

but that doesn't seem to be the case today.

 

Of course if you start adding features like dedup the DB will grow. I 

have one installation that was 40GB in 5.5 and today is around 800GB.

This growth is mostly caused by dedup (add 1TB of DB disk and save 75TB 

of stgpool disk)

 

 

-Anders Räntilä

 

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

RÄNKAB - Räntilä Konsult AB  

Klippingvägen 23

SE-196 32 Kungsängen

Sweden

 

Email: and...@rantila.com  

Phone: +46 701 489 431

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

 

 

 

 


Re: Tsm v6 migration question

2011-03-24 Thread molin gregory
Hello Gary,

Effectively, the database grows between 10 and 20% (more if new features, like dedup, are installed).
But the most important thing with 6.x is the layout of the database files:
you have to reserve one disk for the database, one for the active log, and another
for the archive log.
Putting these disks on the same spindle may result in poor performance.

Cordialement,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Message d'origine-
De : ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] De la part de Lee, 
Gary D.
Envoyé : mercredi 23 mars 2011 18:44
À : ADSM-L@VM.MARIST.EDU
Objet : [ADSM-L] Tsm v6 migration question

We are preparing to migrate our tsm v5,5,4 system to 6.2.2.


Are there any rules of thumb for relating db size in v5.5.4 with what we will 
need under v6?

I assume it will be somewhat larger being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

"ATTENTION.

This message and any attachments are confidential and intended to be received 
only by the addressee. If you are not the intended recipient, please notify 
immediately the sender by reply and delete the message and any attachments from 
your system. "


Re: Tape reclaim & stgpool migration question/

2003-06-19 Thread Jin Bae Chi
You may have a little confusion between Reclamation and Migration.

Do 'q stg xxxpool f=d' and find 'Reclamation Threshold', which governs how reclamation 
is done.

Highmig=xx and lowmig=xx are parameters that apply only to Migration of data from one 
pool to another. When a storage pool reaches the highmig percentage, migration kicks off 
and runs until the pool reaches the lowmig percentage.
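The highmig/lowmig behaviour described here is simple hysteresis; a sketch of the decision logic (illustrative only, not server code):

```python
def migration_state(pct_migr, highmig, lowmig, running):
    """Return whether migration should be running after a threshold check."""
    if not running and pct_migr >= highmig:
        return True   # crossed the high threshold: start migrating
    if running and pct_migr <= lowmig:
        return False  # drained down to the low threshold: stop
    return running    # between thresholds: keep the current state

# With himig=95/lowmig=90: starts at 95%, keeps running at 92%, stops at 90%.
```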

Hope this helps.


Gus





>>> [EMAIL PROTECTED] 06/19/03 11:25AM >>>
All-

I recently created reclaim stgpool with upper mig to 80% and lower mig to
0%.

But the CFILE devclass stgpool that is disk when gets to 80 % migrate  level
, tapes do not stop to reclaim  but it continues .What might have been
missing

Thanks in advance

Balanand Pinni


Re: Tape reclaim & stgpool migration question/

2003-06-19 Thread PINNI, BALANAND (SBCSI)
The reclamation threshold is set to 50 on the tape pool (Reclamation Threshold: 50).

Migration is set with High Mig Pct: 80 on the sequential-access disk storage pool with
device class CFILE, and Low Mig Pct: 0.

Migration takes place, but only after several retries. What I see is that after the disk
storage pool reaches 95%, reclamation retries with "no more space in storage pool" errors
against the disk pool to which the tapes are being reclaimed.

My question is: why is it not sensing 80% on the sequential-access disk storage pool?

Thanks
Balanand Pinni



-Original Message-
From: Jin Bae Chi [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 19, 2003 10:51 AM
To: [EMAIL PROTECTED]
Subject: Re: Tape reclaim & stgpool migration question/


You may have a little confusion about Reclamation and Migration.

Do 'q stg xxxpool f=d' and find 'Reclamation Threshold' which governs how
reclamation will be done.

Highmig=xx and lowmig=xx are parameters that works only for Migration of
data from one pool to another. When a stg reaches highmig #, it kicks
migration until reaches lowmig#.

Hope this helps.


Gus





>>> [EMAIL PROTECTED] 06/19/03 11:25AM >>>
All-

I recently created reclaim stgpool with upper mig to 80% and lower mig to
0%.

But the CFILE devclass stgpool that is disk when gets to 80 % migrate  level
, tapes do not stop to reclaim  but it continues .What might have been
missing

Thanks in advance

Balanand Pinni


Re: Tape reclaim & stgpool migration question/

2003-06-29 Thread Zlatko Krastev
It might be related to size of reclamation stgpool and transaction size.
If reclamation have filled the reclamation pool to 75% and next set of
files in inter-server transaction is worth 20% of reclamation pool size,
you will have the pool filled to 75+20=95% before migration threshold is
passed.
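Zlatko's point, that the threshold is only evaluated between transactions so a large transaction can overshoot it, can be illustrated numerically (the 20%-of-pool transaction size is taken from his example):

```python
def fill_until_threshold(start_pct, txn_pct, highmig):
    """Add whole transactions; the threshold is only checked between them."""
    pct = start_pct
    while pct < highmig:   # the check happens before each transaction...
        pct += txn_pct     # ...so a whole transaction lands before the re-check
    return pct

print(fill_until_threshold(75, 20, 80))  # 95: overshoots the 80% mark by 15 points
```

With a small transaction size the overshoot disappears, which is why the effect only shows up with big inter-server transactions.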

Zlatko Krastev
IT Consultant






"PINNI, BALANAND (SBCSI)" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
19.06.2003 19:08
Please respond to "ADSM: Dist Stor Manager"


To: [EMAIL PROTECTED]
cc:
Subject:    Re: Tape reclaim & stgpool migration question/


Reclamation % is set to 50 on tapepool.Reclamation Threshold: 50

Migration set as High Mig Pct: 80 on SQUEnTIal  ACCESS Stgpool disk with
devclass cfile.
Low Mig Pct: 0

Migration takes place but after certain retries . What I see is after dsik
stgpool reach 95% it retries with No more space in stgpool of disk pool to
which tapes are getting reclaimed.

My question is why it is not sensing 80% on sequential access disk
stgpool.

Thanks
Balanand Pinni



-Original Message-
From: Jin Bae Chi [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 19, 2003 10:51 AM
To: [EMAIL PROTECTED]
Subject: Re: Tape reclaim & stgpool migration question/


You may have a little confusion about Reclamation and Migration.

Do 'q stg xxxpool f=d' and find 'Reclamation Threshold' which governs how
reclamation will be done.

Highmig=xx and lowmig=xx are parameters that works only for Migration of
data from one pool to another. When a stg reaches highmig #, it kicks
migration until reaches lowmig#.

Hope this helps.


Gus





>>> [EMAIL PROTECTED] 06/19/03 11:25AM >>>
All-

I recently created reclaim stgpool with upper mig to 80% and lower mig to
0%.

But the CFILE devclass stgpool that is disk when gets to 80 % migrate
level
, tapes do not stop to reclaim  but it continues .What might have been
missing

Thanks in advance

Balanand Pinni


Re: LTO1 to LTO4 migration question

2008-06-05 Thread David Longo
You do not have to partition the library, mixed LTO devices
can exist in same logical library now.

Install and configure the LTO4 drives.  Then label/checkin LTO4
tapes and create new Tape Stg Pools.  Then take an existing Mgmnt
Class and point it to new tape pool.  Data will then go there and gradually
expire on old tape pools.  Depending on how much space you have in
3584, you may then want to Migrate data from LTO1 pools to LTO4 pools
this will get rid of old tapes quicker.  Don't forget offsite pools
also.

One main thing is to balance number of LTO1/4 drives to make this work
well.  Do you have a choice of replacing some of the LTO1 drives now
and the rest later?  That would help.  I would not try to do all pools at once,
do 1 or 2 to start and then others as you have resources and see how this 
is working.

There are a few other items, but that covers the basics.

David Longo

>>> TSM User <[EMAIL PROTECTED]> 6/5/2008 9:34 AM >>>
I'm trying to find the best way to migrate a 3584 library from LTO1 to LTO4
media.  For a period of time, both drive types will be installed.  I know
the library can be partitioned, but capacity is already pretty tight, and it
would almost certainly require continuous "redrawing the borders" as the
relative proportion of media types shifted, in order to make everything
fit.  Would there be any gotchas if I defined both device classes to the
same library, and then assigned the LTO4 scratch media to the appropriate
storage pools immediately on checkin?  Or am I required to do some kind of
library partitioning to make this work?  I've done some searching online,
but can't find anything explicit one way or the other.

Thanks,

Kathy



#
This message is for the named person's use only.  It may 
contain private, proprietary, or legally privileged information.  
No privilege is waived or lost by any mistransmission.  If you 
receive this message in error, please immediately delete it and 
all copies of it from your system, destroy any hard copies of it, 
and notify the sender.  You must not, directly or indirectly, use, 
disclose, distribute, print, or copy any part of this message if you 
are not the intended recipient.  Health First reserves the right to 
monitor all e-mail communications through its networks.  Any views 
or opinions expressed in this message are solely those of the 
individual sender, except (1) where the message states such views 
or opinions are on behalf of a particular entity;  and (2) the sender 
is authorized by the entity to give such views or opinions.
#


Re: LTO1 to LTO4 migration question

2008-06-05 Thread Anil Maurya
I have been running LTO2 and LTO4 drives under the same library manager for the last 4
months with no problems so far. We did not partition the library.
Thanks
 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
David Longo
Sent: Thursday, June 05, 2008 10:26 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] LTO1 to LTO4 migration question

You do not have to partition the library, mixed LTO devices can exist in
same logical library now.

Install and configure the LTO4 drives.  Then label/checkin LTO4 tapes
and create new Tape Stg Pools.  Then take an existing Mgmnt Class and
point it to new tape pool.  Data will then go there and gradually expire
on old tape pools.  Depending on how much space you have in 3584, you
may then want to Migrate data from LTO1 pools to LTO4 pools this will
get rid of old tapes quicker.  Don't forget offsite pools also.

One main thing is to balance number of LTO1/4 drives to make this work
well.  Do you have a choice of replacing some of the LTO1 drives now and
the rest later?  That would help.  I would not try to do all pools at
once, do 1 or 2 to start and then others as you have resources and see
how this is working.

There are a few other items, but that covers the basics.

David Longo

>>> TSM User <[EMAIL PROTECTED]> 6/5/2008 9:34 AM >>>
I'm trying to find the best way to migrate a 3584 library from LTO1 to
LTO4 media.  For a period of time, both drive types will be installed.
I know the library can be partitioned, but capacity is already pretty
tight, and it would almost certainly require continuous "redrawing the
borders" as the relative proportion of media types shifted, in order to
make everything fit.  Would there be any gotchas if I defined both
device classes to the same library, and then assigned the LTO4 scratch
media to the appropriate storage pools immediately on checkin?  Or am I
required to do some kind of library partitioning to make this work?
I've done some searching online, but can't find anything explicit one
way or the other.

Thanks,

Kathy



#
This message is for the named person's use only.  It may contain
private, proprietary, or legally privileged information.  
No privilege is waived or lost by any mistransmission.  If you receive
this message in error, please immediately delete it and all copies of it
from your system, destroy any hard copies of it, and notify the sender.
You must not, directly or indirectly, use, disclose, distribute, print,
or copy any part of this message if you are not the intended recipient.
Health First reserves the right to monitor all e-mail communications
through its networks.  Any views or opinions expressed in this message
are solely those of the individual sender, except (1) where the message
states such views or opinions are on behalf of a particular entity;  and
(2) the sender is authorized by the entity to give such views or
opinions.
#


Re: LTO1 to LTO4 migration question

2008-06-05 Thread TSM User
Thanks for the info--this is what I was hoping to read.  We will be
retaining enough LTO1 drives in the environment for long enough to migrate,
and we do expect to actively migrate data as well as force all new data to
the new media.  Mostly, I wanted to be certain that the various components
would have the sense to use the correct tape drives depending on the tape
(or storage pool), since there's no compatibility between the LTO1 and
LTO4.  This isn't a situation where I can really afford to be wrong or
misinterpret something, so I'm verifying everything.


On Thu, Jun 5, 2008 at 10:25 AM, David Longo <[EMAIL PROTECTED]>
wrote:

> You do not have to partition the library, mixed LTO devices
> can exist in same logical library now.
>
> Install and configure the LTO4 drives.  Then label/checkin LTO4
> tapes and create new Tape Stg Pools.  Then take an existing Mgmnt
> Class and point it to new tape pool.  Data will then go there and gradually
> expire on old tape pools.  Depending on how much space you have in
> 3584, you may then want to Migrate data from LTO1 pools to LTO4 pools
> this will get rid of old tapes quicker.  Don't forget offsite pools
> also.
>
> One main thing is to balance number of LTO1/4 drives to make this work
> well.  Do you have a choice of replacing some of the LTO1 drives now
> and the rest later?  That would help.  I would not try to do all pools at
> once,
> do 1 or 2 to start and then others as you have resources and see how this
> is working.
>
> There are a few other items, but that covers the basics.
>
> David Longo
>
> >>> TSM User <[EMAIL PROTECTED]> 6/5/2008 9:34 AM >>>
> I'm trying to find the best way to migrate a 3584 library from LTO1 to LTO4
> media.  For a period of time, both drive types will be installed.  I know
> the library can be partitioned, but capacity is already pretty tight, and
> it
> would almost certainly require continuous "redrawing the borders" as the
> relative proportion of media types shifted, in order to make everything
> fit.  Would there be any gotchas if I defined both device classes to the
> same library, and then assigned the LTO4 scratch media to the appropriate
> storage pools immediately on checkin?  Or am I required to do some kind of
> library partitioning to make this work?  I've done some searching online,
> but can't find anything explicit one way or the other.
>
> Thanks,
>
> Kathy
>
>
>
> #
> This message is for the named person's use only.  It may
> contain private, proprietary, or legally privileged information.
> No privilege is waived or lost by any mistransmission.  If you
> receive this message in error, please immediately delete it and
> all copies of it from your system, destroy any hard copies of it,
> and notify the sender.  You must not, directly or indirectly, use,
> disclose, distribute, print, or copy any part of this message if you
> are not the intended recipient.  Health First reserves the right to
> monitor all e-mail communications through its networks.  Any views
> or opinions expressed in this message are solely those of the
> individual sender, except (1) where the message states such views
> or opinions are on behalf of a particular entity;  and (2) the sender
> is authorized by the entity to give such views or opinions.
> #
>


Re: LTO1 to LTO4 migration question

2008-06-05 Thread David Longo
It should not put tapes in wrong drive.  LTO4 drive can read/write
LTO3 tape and can read LTO2 but not LTO1.
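The compatibility rule David states (an LTO drive of this era writes its own and the previous generation, and reads back two generations) can be encoded as a quick check; a sketch under that assumption, not vendor documentation:

```python
def lto_can_read(drive_gen, tape_gen):
    # A drive reads its own generation and the two before it.
    return 0 <= drive_gen - tape_gen <= 2

def lto_can_write(drive_gen, tape_gen):
    # A drive writes its own generation and the one before it.
    return 0 <= drive_gen - tape_gen <= 1

# LTO4 vs. the media discussed in this thread:
assert lto_can_write(4, 3)      # LTO4 drive writes LTO3 tape
assert lto_can_read(4, 2)       # ...reads LTO2
assert not lto_can_read(4, 1)   # ...but cannot read LTO1
```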

David Longo

>>> TSM User <[EMAIL PROTECTED]> 6/5/2008 11:01 AM >>>
Thanks for the info--this is what I was hoping to read.  We will be
retaining enough LTO1 drives in the environment for long enough to migrate,
and we do expect to actively migrate data as well as force all new data to
the new media.  Mostly, I wanted to be certain that the various components
would have the sense to use the correct tape drives depending on the tape
(or storage pool), since there's no compatibility between the LTO1 and
LTO4.  This isn't a situation where I can really afford to be wrong or
misinterpret something, so I'm verifying everything.


On Thu, Jun 5, 2008 at 10:25 AM, David Longo <[EMAIL PROTECTED]>
wrote:

> You do not have to partition the library, mixed LTO devices
> can exist in same logical library now.
>
> Install and configure the LTO4 drives.  Then label/checkin LTO4
> tapes and create new Tape Stg Pools.  Then take an existing Mgmnt
> Class and point it to new tape pool.  Data will then go there and gradually
> expire on old tape pools.  Depending on how much space you have in
> 3584, you may then want to Migrate data from LTO1 pools to LTO4 pools
> this will get rid of old tapes quicker.  Don't forget offsite pools
> also.
>
> One main thing is to balance number of LTO1/4 drives to make this work
> well.  Do you have a choice of replacing some of the LTO1 drives now
> and the rest later?  That would help.  I would not try to do all pools at
> once,
> do 1 or 2 to start and then others as you have resources and see how this
> is working.
>
> There are a few other items, but that covers the basics.
>
> David Longo
>
> >>> TSM User <[EMAIL PROTECTED]> 6/5/2008 9:34 AM >>>
> I'm trying to find the best way to migrate a 3584 library from LTO1 to LTO4
> media.  For a period of time, both drive types will be installed.  I know
> the library can be partitioned, but capacity is already pretty tight, and
> it
> would almost certainly require continuous "redrawing the borders" as the
> relative proportion of media types shifted, in order to make everything
> fit.  Would there be any gotchas if I defined both device classes to the
> same library, and then assigned the LTO4 scratch media to the appropriate
> storage pools immediately on checkin?  Or am I required to do some kind of
> library partitioning to make this work?  I've done some searching online,
> but can't find anything explicit one way or the other.
>
> Thanks,
>
> Kathy
>
>
>
> #
> This message is for the named person's use only.  It may
> contain private, proprietary, or legally privileged information.
> No privilege is waived or lost by any mistransmission.  If you
> receive this message in error, please immediately delete it and
> all copies of it from your system, destroy any hard copies of it,
> and notify the sender.  You must not, directly or indirectly, use,
> disclose, distribute, print, or copy any part of this message if you
> are not the intended recipient.  Health First reserves the right to
> monitor all e-mail communications through its networks.  Any views
> or opinions expressed in this message are solely those of the
> individual sender, except (1) where the message states such views
> or opinions are on behalf of a particular entity;  and (2) the sender
> is authorized by the entity to give such views or opinions.
> #
>


Re: LTO1 to LTO4 migration question

2008-06-05 Thread Kauffman, Tom
You have two options:

Partition the library, with LTO1 drives and media as one logical library and 
LTO4 drives and media as the second. If you have ALMS, you can over-commit 
slots in both libraries. If not, I'd check out as many LTO1 as possible before 
partitioning and swap tapes as you migrate off the old ones.

Or - mark all LTO1 tapes read-only and check them out as you move the data off 
of them. This will still be problematic, as TSM *may* shove an LTO1 tape into 
an LTO4 drive in this case. This is because TSM is managing the tapes and 
drives at the device class level, and you'll only have one device class in the 
library. If this does happen, the tape will most likely get marked 
'unavailable' because it could not be read.

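Roughly, that second option would look like the following from an administrative client (the pool, library, and volume names here are placeholders, not anything from your environment):

update volume * access=readonly wherestgpool=LTO1POOL
move data LTO1_VOL001
checkout libvolume 3584LIB LTO1_VOL001 remove=bulk

MOVE DATA drains one volume at a time; once a tape is empty it can be checked out and pulled from the library.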
Tom Kauffman
NIBCO, Inc

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of TSM User
Sent: Thursday, June 05, 2008 9:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: LTO1 to LTO4 migration question

I'm trying to find the best way to migrate a 3584 library from LTO1 to LTO4
media.  For a period of time, both drive types will be installed.  I know
the library can be partitioned, but capacity is already pretty tight, and it
would almost certainly require continuous "redrawing the borders" as the
relative proportion of media types shifted, in order to make everything
fit.  Would there be any gotchas if I defined both device classes to the
same library, and then assigned the LTO4 scratch media to the appropriate
storage pools immediately on checkin?  Or am I required to do some kind of
library partitioning to make this work?  I've done some searching online,
but can't find anything explicit one way or the other.

Thanks,

Kathy



Re: Server to Server migration question

2005-08-10 Thread Mark D. Rodriguez

Robert,

When you are using virtual volumes as you are here, the virtual volume
on the target server is actually managed as an archive object.
Therefore, I would bet that the archive copygroup for that management
class is pointing at the diskpool.  Also, it is good practice to put
server nodes for virtual volumes into a separate domain so as not to get
them confused with other nodes' policies.  Remember that although it is
using an archive copygroup, the only attribute it cares about is the
destination storage pool.  Retention time is irrelevant, since it is
managed by the source server.

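For example, the target-server policy for the server node would be set up along these lines (the SERVERS domain name is just illustrative; the destination pool is yours):

define domain servers
define policyset servers standard
define mgmtclass servers standard standard
define copygroup servers standard standard standard type=archive destination=I2000_SERVER1 retver=nolimit
assign defmgmtclass servers standard standard
activate policyset servers standard

The RETVER value does not really matter here, since the source server controls retention of the virtual volumes.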
Good luck and let us know how you made out.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===



Robert Ouzen wrote:



Hi to all

I tried to set up a server-to-server migration; here are the steps I took:

 On source Server named SERVER1
 1. I define a server SERVER2 (target server)
 2. I define a devclass SERVERCLASS with device type SERVER and servername 
SERVER2
 3. I create a stg SERVER1_POOL with devclass SERVERCLASS

 On target Server named SERVER2
1.  I define a server SERVER1 (source server) with nodename POSTBACK
2.  I create a nodename of type SERVER named POSTBACK
3.  I add this nodename to the STANDARD policy with a copypool destination 
named I2000_SERVER1 (tape storagepool)
4.  Activate the STANDARD policy

I updated my primary stg on SERVER1 (source server), named primarypool, with next 
stg SERVER1_POOL
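
In command form, the source-server side of this is roughly (password and addresses below are placeholders):

define server server2 serverpassword=xxx hladdress=server2.example.com lladdress=1500
define devclass serverclass devtype=server servername=server2
define stgpool server1_pool serverclass maxscratch=10
update stgpool primarypool nextstgpool=server1_pool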

The migration starts, and I see activity in the actlog on both the source and 
target servers, but I always get this error message:

08/10/2005 20:05:23  ANR0520W Transaction failed for session 23881 for node
 POSTBACK (Windows) - storage pool DISKPOOL is not
 defined. (SESSION: 23881)

Why is it trying to store to DISKPOOL?  I double-checked my STANDARD policy, 
and my copygroup destination is I2000_SERVER1 (I activated the STANDARD policy 
several times and checked the active policy to confirm that I2000_SERVER1 is 
indeed correct).

My TSM version on the source server is 5.2.4.0 Windows  O.S
My TSM version on the target server  is 5.2.4.2 AIX-RS/6000  O.S

Any suggestions?

Regards
Robert Ouzen
e-mail: [EMAIL PROTECTED]





Re: Server to Server migration question

2005-08-10 Thread fred johanson

Check to see if DISKPOOL is defined.  It may be the default for DIRMC,
assuming this is for WinNT.
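
If DIRMC is involved, the client option file would carry a line like the one below (the management class name is hypothetical), and that class's destination pool must exist on the server:

DIRMC DIRMGMTCLASS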



Fred Johanson
ITSM Administrator
University of Chicago
773-702-8464


Re: Server to Server migration question

2005-08-10 Thread Robert Ouzen
Hi Mark

First, thanks for the response.

I didn't mention it before, but I thought of that too, and I made an archive copy 
group with destination I2000_SERVER1; the active archive policy is correct as well.

It makes no sense.

Earlier I did an export node from the source server to the target server with 
domain=standard and toserver=server2, and it wrote correctly to a scratch tape 
under I2000_SERVER1.

Regards

Robert



Re: Server to Server migration question

2005-08-10 Thread Mark D. Rodriguez

Robert,

Could you please post the output of the following commands:

q mg standard active
q co standard active t=a f=d

Then I can see a little more clearly what might be going on.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===




Re: Server to Server migration question

2005-08-10 Thread Robert Ouzen
Hi Mark,

Here is the output, attached. Look at the destination: it is the correct storage pool.

Regards, Robert


Re: Server to Server migration question

2005-08-10 Thread Mark D. Rodriguez

Robert,

You are right it looks like the destination is correct.  I am assuming
the error message you got was on the target server, correct?  What is
the output of:

q stg f=d

I am wondering if I2000_server1 has its "next storage pool" set to
migrate to diskpool.  If so, and I2000_server1 has no space or has a
"maximum size threshold" set, then it might be trying to send the data
directly to the "next storage pool" of diskpool.  Another thing to look at is:

q node postback t=s f=d

In particular, let's look at the "Maximum Mount Points Allowed" and make
sure it is 1 or more.

This is a little strange but I am sure we can figure it out.
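
If either of those turns out to be the culprit, the fixes would be along these lines (clearing the next pool with "" and raising the mount points):

update stgpool i2000_server1 nextstgpool=""
update node postback maxnummp=2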

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===




Re: Server to Server migration question

2005-08-10 Thread Robert Ouzen
Hi Mark,

When I do a q vol stg=adsmpool_u on my source server (my new stg with a devclass 
of type server), I get:

ADSM.BFS.123703429   ADSMPOOL_U   SERVERCLASS   13.2   100.0   Full

I did a query content and I can see all the files.

By the way, where is it stored now in the database?

Regards Robert


Re: Server to Server migration question

2005-08-10 Thread Robert Ouzen
Mark, here are the outputs:

tsm: ADSM>q stg i2000_server1 f=d

   Storage Pool Name: I2000_SERVER1
   Storage Pool Type: Primary
   Device Class Name: I2000CLASS
 Estimated Capacity (MB): 3,815 G
Pct Util: 0.0
Pct Migr: 10.0
 Pct Logical: 100.0
High Mig Pct: 100
 Low Mig Pct: 99
 Migration Delay: 0
  Migration Continue: Yes
 Migration Processes:
   Next Storage Pool:
Reclaim Storage Pool:
  Maximum Size Threshold: No Limit
  Access: Read/Write
 Description: Storage for Postback server
   Overflow Location:
   Cache Migrated Files?:
  Collocate?: No
   Reclamation Threshold: 100
 Maximum Scratch Volumes Allowed: 10
   Delay Period for Volume Reuse: 0 Day(s)
  Migration in Progress?: No
Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
Reclamation in Progress?: No
 Volume Being Migrated/Reclaimed:
  Last Update by (administrator): CCC
   Last Update Date/Time: 08/02/2005 12:42:13
Storage Pool Data Format: Native
Copy Storage Pool(s):
 Continue Copy on Error?:
CRC Data: No


tsm: ADSM>q node postback t=s f=d

 Node Name: POSTBACK
  Platform: Windows
   Client OS Level: (?)
Client Version: (?)
Policy Domain Name: STANDARD
 Last Access Date/Time: 08/10/2005 21:52:17
Days Since Last Access: <1
Password Set Date/Time: 08/02/2005 11:23:17
   Days Since Password Set: 8
 Invalid Sign-on Count: 0
   Locked?: No
   Contact:
   Compression: Client
   Archive Delete Allowed?: Yes
Backup Delete Allowed?: Yes
Registration Date/Time: 08/02/2005 11:19:08
 Registering Administrator: CCC
Last Communication Method Used:
   Bytes Received Last Session: 0
   Bytes Sent Last Session: 0
  Duration of Last Session: 0.00
   Pct. Idle Wait Last Session: 0.00
more...   ( to continue, 'C' to cancel)

  Pct. Comm. Wait Last Session: 0.00
  Pct. Media Wait Last Session: 0.00
 Optionset:
   URL: http://client.host.name:1581
 Node Type: Server
Password Expiration Period:
 Keep Mount Point?: No
  Maximum Mount Points Allowed: 2
Auto Filespace Rename : Yes
 Validate Protocol: No
   TCP/IP Name:
TCP/IP Address:
Globally Unique ID:
 Transaction Group Max: 0
   Data Write Path: ANY
Data Read Path: ANY
Session Initiation: ClientOrServer
High-level Address:
 Low-level Address: 


Re: Server to Server migration question

2005-08-10 Thread Mark D. Rodriguez

Robert,

You have a naming problem.  Your storage pool name is I2000_server1, but
your copy destination is sending it to a storage pool called I2000_postback!

Update your copydestination in the archive copygroup of the default
management class for the standard domain and I think that will fix it.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===



Robert Ouzen wrote:


Mark here the output's

tsm: ADSM>q stg i2000_server1 f=d

  Storage Pool Name: I2000_SERVER1
  Storage Pool Type: Primary
  Device Class Name: I2000CLASS
Estimated Capacity (MB): 3,815 G
   Pct Util: 0.0
   Pct Migr: 10.0
Pct Logical: 100.0
   High Mig Pct: 100
Low Mig Pct: 99
Migration Delay: 0
 Migration Continue: Yes
Migration Processes:
  Next Storage Pool:
   Reclaim Storage Pool:
 Maximum Size Threshold: No Limit
 Access: Read/Write
Description: Storage for Postback server
  Overflow Location:
  Cache Migrated Files?:
 Collocate?: No
  Reclamation Threshold: 100
Maximum Scratch Volumes Allowed: 10
  Delay Period for Volume Reuse: 0 Day(s)
 Migration in Progress?: No
   Amount Migrated (MB): 0.00
Elapsed Migration Time (seconds): 0
   Reclamation in Progress?: No
Volume Being Migrated/Reclaimed:
 Last Update by (administrator): CCC
  Last Update Date/Time: 08/02/2005 12:42:13
   Storage Pool Data Format: Native
   Copy Storage Pool(s):
Continue Copy on Error?:
   CRC Data: No


tsm: ADSM>q node postback t=s f=d

Node Name: POSTBACK
 Platform: Windows
  Client OS Level: (?)
   Client Version: (?)
   Policy Domain Name: STANDARD
Last Access Date/Time: 08/10/2005 21:52:17
   Days Since Last Access: <1
   Password Set Date/Time: 08/02/2005 11:23:17
  Days Since Password Set: 8
Invalid Sign-on Count: 0
  Locked?: No
  Contact:
  Compression: Client
  Archive Delete Allowed?: Yes
   Backup Delete Allowed?: Yes
   Registration Date/Time: 08/02/2005 11:19:08
Registering Administrator: CCC
Last Communication Method Used:
  Bytes Received Last Session: 0
  Bytes Sent Last Session: 0
 Duration of Last Session: 0.00
  Pct. Idle Wait Last Session: 0.00
more...   ( to continue, 'C' to cancel)

 Pct. Comm. Wait Last Session: 0.00
 Pct. Media Wait Last Session: 0.00
Optionset:
  URL: http://client.host.name:1581
Node Type: Server
   Password Expiration Period:
Keep Mount Point?: No
 Maximum Mount Points Allowed: 2
   Auto Filespace Rename : Yes
Validate Protocol: No
  TCP/IP Name:
   TCP/IP Address:
   Globally Unique ID:
Transaction Group Max: 0
  Data Write Path: ANY
   Data Read Path: ANY
   Session Initiation: ClientOrServer
   High-level Address:
Low-level Address:

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Mark D. 
Rodriguez
Sent: Wednesday, August 10, 2005 9:58 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Server to Server migration question

Robert,

You are right it looks like the destination is correct.  I am assuming the 
error message you got was on the target server, correct?  What is the output of:

q stg f=d

I am wondering if I2000_server1 has its "next storage pool" set to migrate to diskpool.  If so, and 
if I2000_server1 has no space or has a "maximum size threshold" set, then it might be trying to send 
data directly to the "next storage pool" of diskpool.  Another thing to look at is:

q node postback t=s f=d

In particular, let's look at the "Maximum Mount Points Allowed" and make sure it 
is 1 or more.
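
If you prefer to pull just those fields, the server's SQL interface can do it in one shot (a sketch; column names as I recall them from the TSM 5.x tables, so verify with "select * from columns" if they don't match):

   select NEXTSTGPOOL, MAXSIZE from stgpools where STGPOOL_NAME='I2000_SERVER1'
   select MAX_MP_ALLOWED from nodes where NODE_NAME='POSTBACK'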

This is a little strange but I am sure we can figure it out.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education

Re: Server to Server migration question

2005-08-10 Thread Robert Ouzen
No, Mark, it's OK. I made some changes while trying to figure out the mistake and sent you the 
wrong output. The stg is i2000_server1, and the archive copypool is also i2000_server1.

robert 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Mark D. 
Rodriguez
Sent: Wednesday, August 10, 2005 10:32 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Server to Server migration question

Robert,

You have a naming problem.  Your storage pool name is I2000_server1, but your 
copy destination is sending it to a storage pool called I2000_postback!

Update your copydestination in the archive copygroup of the default management 
class for the standard domain and I think that will fix it.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education IBM Certified Advanced 
Technical Expert, CATE AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM 
and Linux Red Hat Certified Engineer, RHCE 
===




Re: Server to Server migration question

2005-08-10 Thread Mark D. Rodriguez

Robert,

The only other thing I can think of right now is to have you start a
migration and let it run until it fails.  Then send me a copy of the
activity log for both the source and the target servers that cover that
time period.  I will look through it and see if I can see anything in it.

I will try to get back to you today on it, but it might be tomorrow
before I can respond.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===



Robert Ouzen wrote:


No, Mark, it's OK. I made some changes while trying to figure out the mistake and sent you the 
wrong output. The stg is i2000_server1, and the archive copypool is also i2000_server1.

robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Mark D. 
Rodriguez
Sent: Wednesday, August 10, 2005 10:32 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Server to Server migration question

Robert,

You have a naming problem.  Your storage pool name is I2000_server1, but your 
copy destination is sending it to a storage pool called I2000_postback!

Update your copydestination in the archive copygroup of the default management 
class for the standard domain and I think that will fix it.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education IBM Certified Advanced 
Technical Expert, CATE AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM 
and Linux Red Hat Certified Engineer, RHCE 
===




Re: Server to Server migration question

2005-08-11 Thread Robert Ouzen
Hi Mark

Eureka! I found the mystery. I had another policy domain, STANDARD2, with a policy set STANDARD 
that had no archive copy group. After adding the correct stg, I2000_SERVER1, it works like a dream.
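
For the record, the piece that was missing from the second domain can be supplied with something like this (the management-class name STANDARD in the second and third positions is an assumption):

   define copygroup standard2 standard standard type=archive destination=I2000_SERVER1
   activate policyset standard2 standard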

THANK you very much for your support

Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Mark D. 
Rodriguez
Sent: Thursday, August 11, 2005 12:08 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Server to Server migration question

Robert,

The only other thing I can think of right now is to have you start a migration 
and let it run until it fails.  Then send me a copy of the activity log for 
both the source and the target servers that cover that time period.  I will 
look through it and see if I can see anything in it.

I will try to get back to you today on it, but it might be tomorrow before I 
can respond.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education IBM Certified Advanced 
Technical Expert, CATE AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM 
and Linux Red Hat Certified Engineer, RHCE 
===



Robert Ouzen wrote:

>No, Mark, it's OK. I made some changes while trying to figure out the 
>mistake and sent you the wrong output; the stg is i2000_server1 and the 
>archive copypool is also i2000_server1
>
>robert
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf 
>Of Mark D. Rodriguez
>Sent: Wednesday, August 10, 2005 10:32 PM
>To: ADSM-L@VM.MARIST.EDU
>Subject: Re: Server to Server migration question
>
>Robert,
>
>You have a naming problem.  Your storage pool name is I2000_server1, but your 
>copy destination is sending it to a storage pool called I2000_postback!
>
>Update your copydestination in the archive copygroup of the default management 
>class for the standard domain and I think that will fix it.
>
>--
>Regards,
>Mark D. Rodriguez
>President MDR Consulting, Inc.
>
>===
>
>MDR Consulting
>The very best in Technical Training and Consulting.
>IBM Advanced Business Partner
>SAIR Linux and GNU Authorized Center for Education IBM Certified 
>Advanced Technical Expert, CATE AIX Support and Performance Tuning, 
>RS6000 SP, TSM/ADSM and Linux Red Hat Certified Engineer, RHCE 
>===
>
>
>
>

Re: Library Manager and Migration question

2008-10-15 Thread Mark Stapleton
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
> Due to the size and complexity of our environment, we contracted with
a
> vendor to do the setup and start the process.  Now that the contract
> has been signed and we have started our planning sessions, the vendor
> is having a problem with the library manager/client configuration.  At
> first he said that there was no such thing.  I sent him documentation
> and they also found an IBM redbook on the subject but now says that it
> is too complicated since our server is so large and that I will need
to
> partition the library, with some data loss!!  I am strongly against
> this setup and would like to know if I am wrong.  How difficult would
> it be to create a library manager and client setup for an existing
> environment?  I currently have over 1000 tapes onsite and as many
> offsite and naturally do not want to lose any data.  Thanks for the
> feedback.

It takes about 10 minutes to set up (not counting non-TSM activities such
as SAN zoning).

I'd either get another engineer from the consulting firm (one who knows
what he's talking about), or get another consulting firm. You're dealing
with someone who knows a lot less than you do.
 
--
Mark Stapleton ([EMAIL PROTECTED])
CDW Berbee
System engineer
7145 Boone Avenue North, Suite 140
Brooklyn Park MN 55428-1511
763-592-5963
www.berbee.com
 


Re: Library Manager and Migration question

2008-10-15 Thread Howard Coles
Sounds like you signed with the wrong group; you know more than they do.
I've been in this same boat with TSM and other apps.  Drives you up the
wall.  We had one vendor put in a TSM system at a former workplace.  He
told us we only had to run reclamation on the onsite pools and the
offsite stuff would just "take care of itself".  

Anyway, all you really need, from my understanding, is to set up a new
TSM server LPAR and make it the library manager, and make the current
TSM instance a library client.  Then set up your other TSM instances as
you desire, and perform node exports.  I would upgrade to TSM 5.5.1
first, so that you can do restartable exports and imports, etc.  From my
perspective and what I've seen, no data loss should be expected, and you
can do a good DB backup ahead of time to give yourself a point to go
back to.  But in all honesty, a TSM consultant should not be
(noticeably) overwhelmed by any environment.
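
As a rough sketch of the library manager / library client definitions (the server names, password, and addresses below are made-up placeholders, not your values):

   Cross-define the servers so they can talk to each other:
   define server LIBMGR serverpassword=secret hladdress=10.0.0.1 lladdress=1500   (run on the client)
   define server LIBCL1 serverpassword=secret hladdress=10.0.0.2 lladdress=1500   (run on the manager)

   On the library manager instance:
   define library lib3584 libtype=scsi shared=yes

   On each library client instance:
   define library lib3584 libtype=shared primarylibmanager=LIBMGR

The manager still needs its paths and drives defined as usual; the clients then point their device classes at the shared library.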

See Ya'
Howard


> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf
> Of Haberstroh, Debbie (IT)
> Sent: Wednesday, October 15, 2008 12:52 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Library Manager and Migration question
> 
> Hi All,
> 
> We currently have 1 TSM server ver 5.3.3 on AIX 5.3, with a large
> database, 228GB.  We have finally received approval to move this to a
> new server, upgrade to 5.5 and split the clients up to reduce the DB
> size.  We would like to have 4 LPAR's created on the new server, 1 for
> a Library manager and the other 3 for client databases. Our library is
> a TS3584 with 12 LTO2 drives.  We are adding another expansion cabinet
> and adding 4 LTO3 drives but are staying with LTO2 tapes for now.  Our
> current library configuration has the library shared but it is not a
> library manager since we only had one server.
> 
> Due to the size and complexity of our environment, we contracted with
a
> vendor to do the setup and start the process.  Now that the contract
> has been signed and we have started our planning sessions, the vendor
> is having a problem with the library manager/client configuration.  At
> first he said that there was no such thing.  I sent him documentation
> and they also found an IBM redbook on the subject but now says that it
> is too complicated since our server is so large and that I will need
to
> partition the library, with some data loss!!  I am strongly against
> this setup and would like to know if I am wrong.  How difficult would
> it be to create a library manager and client setup for an existing
> environment?  I currently have over 1000 tapes onsite and as many
> offsite and naturally do not want to lose any data.  Thanks for the
> feedback.
> 
> Debbie Haberstroh
> Server Administration


Re: Library Manager and Migration question

2008-10-15 Thread Jim Neal
Hi Debbie,

There is absolutely no reason why you can't separate your current
configuration into a Library manager and library client without losing data.
IMHO the vendor that you hired does not really seem to understand how TSM
works.  I recently had to do something similar here at UC Berkeley with our
3494 Library; and while it can be complex and time consuming, it can be done
without any data loss.

Jim Neal
Sr. TSM Administrator
IS&T Storage and Backup
[EMAIL PROTECTED]




Re: Library Manager and Migration question

2008-10-15 Thread Haberstroh, Debbie (IT)
Thanks, hadn't thought about the restartable restores, good idea.



Re: Library Manager and Migration question

2008-10-15 Thread Mark Scott
Morning, can you point me to the library manager Redbook please?

Regards 






Re: Library Manager and Migration question

2008-10-16 Thread Richard Rhodes
We originally had 2 big TSM instances (+150GB db each) at different
datacenters, each with their own library.  In preparation for adding new
instances we converted to standalone library manager instances (one at each
site), then brought on new instances.  It all worked very well.  There are
a couple of docs about how to convert a standalone TSM instance to a library
manager and library client setup.  I can think of no reason you could not
do this.

http://www.redbooks.ibm.com/abstracts/redp0140.html?Open
http://www.tsmexpert.org/2006/03/how-to-setup-library-manager-instance.html

We originally had the library manager instances on the same server as the
main TSM instances.  We have since moved the library manager instances to
tiny standalone AIX LPARs.  This allows us to upgrade them first without
having to upgrade the main TSM instances at the same time.

Rick








From: "Haberstroh, Debbie (IT)" <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Library Manager and Migration question
Date: 10/15/2008 01:53 PM






Re: Library Manager and Migration question

2008-10-16 Thread Wanda Prather
-Nothing you do with TSM, I mean NOTHING, will result in data loss, if done
correctly.

-The whole POINT of TSM library sharing is so that you don't have to
partition the library; if you partition the library you can't share the
drives dynamically between the TSM instances.

-Splitting the clients between different servers can be tedious; as noted by
others, you may need to do exports if the client data is long-lived.  (Look
at letting the client backup data expire where practical, and just moving
archive data - that can reduce the work a lot.)  BUT, setting up library
sharing is very easy.

-The problem is clearly too complicated for your contractor, rather than too
complicated for your TSM!





Re: Library Manager and Migration question

2008-10-16 Thread Richard Rhodes
some more comments . . .

> > 228GB.  We have finally received approval to move this to a new server,
> > upgrade to 5.5 and split the clients up to reduce the DB size.

We created new instances, then started assigning all new clients to the new
instances.  Then we went back and cherry-picked the handful of worst-offending
clients and moved them to the new instances (some chosen by occupancy, some by
number of files).  To move them, some we exported/imported, while for others we
just created new nodes on the new instance and retired the node on the
original TSM server.

> > Our current library configuration
> > has the library shared but it is not a library manager since we only
had one
> > server.

So, did you have multiple tsm instances on the server, each with their own
3584 partition and dedicated drives?

> >  I sent him documentation and they also found
> > an IBM redbook on the subject but now says that it is too complicated
since
> > our server is so large and that I will need to partition the library,
with
> > some data loss!!  I am strongly against this setup and would like to
know if
> > I am wrong.

No, you are NOT wrong! (or is that "yes, you are NOT wrong!")
A library manager is the way to go.


> > How difficult would it be to create a library manager and
> > client setup for an existing environment?

Not very hard.  The procedure CAN seem complicated, but when
you work through the steps it's not bad.  The only trick
in the procedure is making sure the library manager assigns
the volume ownerships to the clients correctly.  This is all
in the procedures.

First, make sure you understand how library sharing works
(see the Admin Guide), then read the redbook on converting to it.
I spent a fair amount of time working through the examples.
I then wrote a detailed procedure for the conversion
(sorry, I looked for it but can't find it), then got the
outage and performed the conversion.  Understand, plan/plan/plan,
ask questions, then do it.
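For reference, the core definitions involved are few. A minimal sketch of the administrative commands (server names, passwords, addresses, and device paths here are all hypothetical; check the Administrator's Reference for your level):

```
/* On the library manager instance: */
define library 3584go libtype=scsi shared=yes
define path libmgr1 3584go srctype=server desttype=library device=/dev/smc0
define drive 3584go drv01
define path libmgr1 drv01 srctype=server desttype=drive library=3584go device=/dev/rmt0

/* Server-to-server communication, defined on both sides: */
define server tsm1 serverpassword=secret hladdress=tsm1.example.com lladdress=1500

/* On each library client instance: */
define server libmgr1 serverpassword=secret hladdress=libmgr1.example.com lladdress=1500
define library 3584go libtype=shared primarylibmanager=libmgr1
```

The tricky part mentioned above (volume ownership) comes after these definitions, when existing volumes are checked in and assigned to their owning client instances.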

If you can find a way to test a small library sharing environment first,
do it.  We were able to, since we had new 3584 libraries onsite that were
waiting for library sharing to go in before being used.  We used them to
set up a small library sharing environment to play with.

> > I currently have over 1000 tapes
> > onsite and as many offsite and naturally do not want to lose any data.

You won't!  We had 6-8 thousand tapes in two 3494 libraries, plus two 3584's
with 300-400 tapes.  We didn't lose a single tape when we converted!

The library manager environment has its own issues you will need to
get your arms around.  We spent some time balancing the batch schedules
between instances so as not to have one instance hog the drives.  It
also becomes fun trying to get a big picture of what all the
instances are doing.  I finally wrote a script to give an overall
picture of tape drive usage across the instances, and what
processes/sessions are using them (see below).


Here is the output of my status script . . .

rsfebkup1p.fenetwork.com:/tsmdata/tsm_scripts==>./q_tape_drive_status.ksh
==  200810161214
==  library  drive  online paths(total/online/offline)  state  allocated_to
volume_name  process/session
 3584GO 3584GO-F01R01 YES 6/6/0 EMPTY
 3584GO 3584GO-F01R02 YES 6/6/0 EMPTY
 3584GO 3584GO-F01R03 YES 6/6/0 EMPTY
 3584GO 3584GO-F01R04 YES 6/6/0 LOADED TSM1 J04183PROCESS= 3921
Migration
 3584GO 3584GO-F01R05 YES 6/6/0 EMPTY
 3584GO 3584GO-F01R06 YES 6/6/0 LOADED TSM4 J04227PROCESS= 1920 Backup
Storage Pool
 3584GO 3584GO-F01R07 YES 6/6/0 EMPTY
 3584GO 3584GO-F01R08 YES 6/6/0 EMPTY
 3584GO 3584GO-F01R09 YES 6/6/0 EMPTY
 3584GO 3584GO-F01R0A YES 6/6/0 EMPTY
 3584GO 3584GO-F01R0B YES 6/6/0 LOADED TSM1 J04030PROCESS= 3918
Migration
 3584GO 3584GO-F01R0C YES 6/6/0 LOADED TSM1 J02729PROCESS= 3917
Migration
 3584GO 3584GO-F03R01 YES 22/22/0 EMPTY
 3584GO 3584GO-F03R02 YES 22/22/0 EMPTY
 3584GO 3584GO-F03R03 YES 22/22/0 EMPTY
 3584GO 3584GO-F03R04 YES 22/22/0 EMPTY
 3584GO 3584GO-F03R05 YES 22/22/0 LOADED STASAPEI1D1 J04507SESSION=
38572  SAPEI1D1_DB
 3584GO 3584GO-F03R06 YES 22/22/0 EMPTY
 3584GO 3584GO-F03R07 YES 22/22/0 EMPTY
 3584GO 3584GO-F03R08 YES 22/22/0 EMPTY
 3584GO 3584GO-F03R09 YES 22/22/0 EMPTY
 3584GO 3584GO-F03R0A YES 22/22/0 EMPTY
 3584GO 3584GO-F03R0B YES 22/22/0 EMPTY
 3584GO 3584GO-F03R0C YES 22/22/0 EMPTY
 3584GO 3584GO-F04R01 YES 22/22/0 EMPTY
 3584GO 3584GO-F05R01 YES 6/6/0 EMPTY
 3584GO 3584GO-F05R02 YES 6/6/0 LOADED TSM1 J02733PROCESS= 3919
Migration
 3584GO 3584GO-F05R03 YES 6/6/0 LOADED TSM3 J04195PROCESS= 2552
Migration
 3584GO 3584GO-F05R04 YES 6/6/0 LOADED TSM4 J04410
 3584GO 3584GO-F05R05 YES 6/6/0 EMPTY
 3584GO 3584GO-F05R06 YES 6/6/0 LOADED TSM1 J04078PROCESS= 3920
Migration
 3584GO 3584GO-F05R07 YES 6/6/0 EMPTY
 3584GO 3584GO-F05R08 YES 6/6/0 EMPTY
 3584GO 3584GO-F05R09 YES 6/6/0 EMPTY
 3584GO 3584GO-F05R0A YES 6/6/0 LOADED TSM3 J04525PROCESS= 2550
Migration
 3584GO 3584GO-F05R0B YES 6/6/0 EMPTY
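A stripped-down version of such a cross-instance status script might look like the following (the instance names and credential handling are hypothetical; the select against the DRIVES table is the part doing the work):

```
#!/bin/ksh
# Show drive state and ownership on each TSM instance in turn.
for inst in TSM1 TSM2 TSM3 TSM4; do
  echo "== $inst"
  dsmadmc -se=$inst -id=admin -pa=xxxxx -dataonly=yes \
    "select library_name, drive_name, online, drive_state, allocated_to from drives"
done
```

Joining this against process/session output per instance (as the report above does) takes more scripting, but the DRIVES table carries the essential state.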

Re: Library Manager and Migration question

2008-10-17 Thread Haberstroh, Debbie (IT)
Thanks to everyone for all of the advice.  My vendor has now assigned a new TSM
technician who knows about the Library Manager.  Now another issue has come up.

I have 6 LAN-free clients, one of them is running on AIX in an HA environment.  
I am assuming that all of the drives will need to be reconfigured once the 
Library Manager is online and the server to server communication is redone on 
the Manager and Clients.  For now, the LAN-free clients will belong to the 
original server and the library communication will of course be to the Library 
Manager.  Any gotchas in changing these clients, especially the HA client?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Richard Rhodes
Sent: Thursday, October 16, 2008 11:31 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Library Manager and Migration question



Ang: Re: Tsm v6 migration question

2011-03-24 Thread Daniel Sparrman
That's not really something new to TSM v6.  For performance reasons, it has
always been recommended to separate the database and logs on different spindles.

As with v6, at least for larger implementations, the same rule of thumb
applies: try spreading your DB paths over at least 4 different arrays/spindles,
preferably 8 or more.  Too many will not increase performance, but rather
reduce it.  Put your active log / log mirror on spindles of its own, as well
as your archive log / failover paths.

Watch out for sharing the archive log with anything else (like storage pools),
since the database backup trigger looks at the % used on the filesystem.
Putting storage pools and your archive log in the same filesystem probably
isn't the best of ideas.
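A hypothetical sketch of that separation at server-format time (the directory names and log size are made up; check the dsmserv format parameters against the Installation Guide for your level):

```
dsmserv format dbdir=/tsmdb01,/tsmdb02,/tsmdb03,/tsmdb04 \
        activelogsize=32768 activelogdirectory=/tsmactlog \
        mirrorlogdirectory=/tsmlogmirror \
        archlogdirectory=/tsmarchlog \
        archfailoverlogdirectory=/tsmarchfail
```

Additional database directories can be added later with EXTEND DBSPACE if the DB outgrows the initial arrays.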

Best Regards

Daniel Sparrman
-"ADSM: Dist Stor Manager" wrote: -

To: ADSM-L@VM.MARIST.EDU
From: molin gregory 
Sent by: "ADSM: Dist Stor Manager" 
Date: 03/24/2011 11:06
Subject: Re: Tsm v6 migration question

Hello Gary,

In practice, the database grows by 10 to 20% (more if new features,
such as dedup, are installed).
But the most important thing with 6.x is the layout of the database files:
reserve one disk for the database, one for the active log, and another for
the archive log.  Putting these disks on the same spindles may result in
poor performance.
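As a back-of-the-envelope illustration of that 10-20% rule of thumb (the starting size below is hypothetical, not anyone's actual database):

```python
# Rough v5 -> v6 database sizing from the 10-20% growth rule of thumb.
v5_db_gb = 200.0                # hypothetical v5 database size in GB
low = v5_db_gb * 1.10           # +10% growth
high = v5_db_gb * 1.20          # +20% growth
print(f"plan for roughly {low:.0f}-{high:.0f} GB")  # -> plan for roughly 220-240 GB
```

Leave headroom beyond that if features such as dedup are planned.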

Cordialement,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Lee,
Gary D.
Sent: Wednesday, 23 March 2011 18:44
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Tsm v6 migration question

We are preparing to migrate our TSM v5.5.4 system to 6.2.2.

Are there any rules of thumb relating DB size in v5.5.4 to what we will
need under v6?

I assume it will be somewhat larger, being a truly relational structure.

Thanks for any help.


Gary Lee
Senior System Programmer
Ball State University
phone: 765-285-1310

 

"ATTENTION.

Ce message et les pièces jointes sont confidentiels et établis à l'attention 
exclusive de leur destinataire (aux adresses spécifiques auxquelles il a été 
adressé). Si vous n'êtes pas le destinataire de ce message, vous devez 
immédiatement en avertir l'expéditeur et supprimer ce message et les pièces 
jointes de votre système. 

This message and any attachments are confidential and intended to be received 
only by the addressee. If you are not the intended recipient, please notify 
immediately the sender by reply and delete the message and any attachments from 
your system. "


Re: Ang: Re: Tsm v6 migration question

2011-03-25 Thread molin gregory
Hello Daniel,

You say: "That's not really something new to TSM v6."

DB2 is now the DBMS for TSM, and while DB2 improves the database, it has
system requirements that were not the same in previous versions.

My experience:
The installer who set up the solution at our site did not implement the
recommendations related to DB2.
TSM crashed after two months.


Best Regards
Cordialement,
Grégory Molin
Tel : 0141628162
gregory.mo...@afnor.org

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Daniel
Sparrman
Sent: Thursday, 24 March 2011 10:17
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Ang: Re: Tsm v6 migration question



Ang: Re: Ang: Re: Tsm v6 migration question

2011-03-25 Thread Daniel Sparrman
Hi Gregory


My comment was in response to this earlier remark:

"But the most important thing with 6.x is the layout of the database files:
reserve one disk for the database, one for the active log, and another for
the archive log.  Putting these disks on the same spindles may result in
poor performance."

What I meant was that it has never been recommended to share the same array /
spindle / filesystem between logs and database.

If you go back to the TSM v5.5 performance guide, you will notice the
recommendation to keep log and DB on separate spindles / filesystems /
arrays.

As for DB2: yes, DB2 is new for v6, but the performance setup isn't.

My guess is that if your setup crashed after 2 months, whoever implemented
your solution probably shared filesystem space between the database and some
other part of TSM, such as the archive log or active log.  Since a) the
database can now grow "on its own", I wouldn't place it with anything else,
as you might run out of space, and b) I wouldn't share archive log space with
anything else either, since it also "grows on its own".

I can't imagine that your TSM implementation crashed just because the person
who implemented it placed the database on a single filesystem.  That is
technically possible, but not something I would recommend, due to performance
issues.

Best Regards

Daniel Sparrman

-"ADSM: Dist Stor Manager" wrote: -

To: ADSM-L@VM.MARIST.EDU
From: molin gregory 
Sent by: "ADSM: Dist Stor Manager" 
Date: 03/25/2011 14:47
Subject: Re: Ang: Re: Tsm v6 migration question

Ce message et les pièces jointes sont confidentiels et établis à l'atte